CN113723841B - On-line detection method for tool missing in assembled prefabricated part - Google Patents

On-line detection method for tool missing in assembled prefabricated part

Info

Publication number
CN113723841B
Authority
CN
China
Prior art keywords
target
prefabricated
detection
tool
prefabricated part
Prior art date
Legal status
Active
Application number
CN202111030371.0A
Other languages
Chinese (zh)
Other versions
CN113723841A (en)
Inventor
李学俊
谢佳员
琚川徽
Current Assignee
Green Industry Innovation Research Institute of Anhui University
Original Assignee
Green Industry Innovation Research Institute of Anhui University
Priority date
Filing date
Publication date
Application filed by Green Industry Innovation Research Institute of Anhui University
Priority to CN202111030371.0A
Publication of CN113723841A
Application granted
Publication of CN113723841B
Legal status: Active

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06Q INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q10/00 Administration; Management
    • G06Q10/06 Resources, workflows, human or project management; Enterprise or organisation planning; Enterprise or organisation modelling
    • G06Q10/063 Operations research, analysis or management
    • G06Q10/0639 Performance analysis of employees; Performance analysis of enterprise or organisation operations
    • G06Q10/06395 Quality analysis or management
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition
    • G06F18/20 Analysing
    • G06F18/21 Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F18/214 Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition
    • G06F18/20 Analysing
    • G06F18/24 Classification techniques
    • G06F18/241 Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06K GRAPHICAL DATA READING; PRESENTATION OF DATA; RECORD CARRIERS; HANDLING RECORD CARRIERS
    • G06K17/00 Methods or arrangements for effecting co-operative working between equipments covered by two or more of main groups G06K1/00 - G06K15/00, e.g. automatic card files incorporating conveying and reading operations
    • G06K17/0022 Methods or arrangements for effecting co-operative working between equipments covered by two or more of main groups G06K1/00 - G06K15/00, e.g. automatic card files incorporating conveying and reading operations, arrangements or provisions for transferring data to distant stations, e.g. from a sensing device
    • G06K17/0029 Methods or arrangements for effecting co-operative working between equipments covered by two or more of main groups G06K1/00 - G06K15/00, e.g. automatic card files incorporating conveying and reading operations, arrangements or provisions for transferring data to distant stations, e.g. from a sensing device, the arrangement being specially adapted for wireless interrogation of grouped or bundled articles tagged with wireless record carriers
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06Q INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q50/00 Information and communication technology [ICT] specially adapted for implementation of business processes of specific business sectors, e.g. utilities or tourism
    • G06Q50/08 Construction
    • Y GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02 TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02P CLIMATE CHANGE MITIGATION TECHNOLOGIES IN THE PRODUCTION OR PROCESSING OF GOODS
    • Y02P90/00 Enabling technologies with a potential contribution to greenhouse gas [GHG] emissions mitigation
    • Y02P90/30 Computing systems specially adapted for manufacturing

Abstract

The invention relates to the field of the building industry, in particular to an on-line detection method for tool missing in assembled prefabricated parts. The method comprises the following steps. S1: construct a tool detection network based on real-time video. S2: acquire real-time video of the moving die table carrying the prefabricated parts. S3: set a rectangular target sensing area for performing on-line detection. S4: perform target detection and target tracking on the real-time video with the tool detection network, sequentially acquiring the type information of each prefabricated component and the quantity information and position information of the tools in it. S5: query a cloud database according to the type information to obtain reference values for the quantity information and position information of the tooling in that type of prefabricated part, and compare the measured values with the reference values to judge whether the product is qualified. The invention solves the problems that manual quality inspection of prefabricated parts is inefficient and that automated detection methods have accuracy and real-time performance too poor to meet requirements.

Description

On-line detection method for tool missing in assembled prefabricated part
Technical Field
The invention relates to the field of the building industry, in particular to an on-line detection method for tool missing in assembled prefabricated parts.
Background
The fabricated building is an important development direction of the building industry: various types of building components are processed in advance in a factory, transported to the construction site, and assembled through reliable connections. Compared with conventional cast-in-situ structures, fabricated buildings offer large-scale production, high construction speed and low construction cost.
Quality control of building prefabricated parts is the core of guaranteeing the quality of a fabricated building. A defect in any one prefabricated part has an unavoidable impact on the final building quality and can lead to immeasurable losses for the whole construction project. A large number of tools of various kinds, reserved for assembly and installation, are embedded in each prefabricated part, and parameter detection of these tools is a key part of prefabricated-part quality inspection. If the number or positions of the tools on a prefabricated part are inconsistent with the design, high maintenance costs arise and the part may even be scrapped outright, causing large losses for enterprises.
In a prefabricated-component production plant, many prefabricated components of different types are often produced according to order requirements. To reduce cost, existing manufacturers generally run mixed production of the various components on the same production line. In production, different products span many technical categories, their operating processes differ widely, and the index systems are complex. Prefabricated components are also large, diverse and structurally complex, which makes it very difficult to design automated, intelligent schemes for their quality inspection. Existing automatic detection methods struggle to keep pace with the line when handling tool detection in building prefabricated parts, the accuracy of their results rarely meets requirements, and the results often still need manual review.
For the above reasons, a large number of enterprises still rely on manual inspection to check the quality of the tools in prefabricated parts; manual inspection is inefficient and also slows the production process. Existing research based on computer vision targets surface-appearance defects of prefabricated components, and research based on three-dimensional laser scanning targets dimensional deviations; few studies address the structural defects of prefabricated parts, and quality control of structural defects still depends largely on human experience. Meanwhile, if quality problems found after the fact cannot be fed back quickly to the front-end production process, large numbers of scrapped products result, bringing heavy losses to enterprises. Only by designing an online detection method that forms a feedback loop with the production process can enterprises raise their capacity for large-scale production; the various existing detection methods cannot meet the relevant technical indexes.
In addition, some common pure machine-vision defect-detection methods have gradually been trialled for real-time online detection of tool defects. However, their recognition accuracy is still insufficient and their real-time performance poor; the volume of data to be processed is extremely large, and the hardware requirements are high.
Disclosure of Invention
Based on the above, it is necessary to solve the problems in the prior art that manual quality inspection of prefabricated members is inefficient, and that automatic detection methods deliver insufficient detection performance with poor accuracy and real-time behaviour, making it difficult to meet online detection requirements; the invention therefore provides an on-line detection method for tool missing in assembled prefabricated parts.
The invention discloses an on-line detection method for tool missing in assembled prefabricated parts, which detects in real time, on a die table of a production line, whether the tooling in the prefabricated parts processed in the dies meets requirements. The real-time online detection method of the invention comprises the following steps:
s1: constructing a real-time video-based tool detection network; the tool detection network comprises a target detection network and a target tracking network, and the target detection network and the target tracking network are respectively used for carrying out target detection and target tracking processing on the shot real-time video of the prefabricated component to obtain the quantity information and the position information of the tools in the prefabricated component.
S2: acquiring real-time video of the motion state of a die table carrying the prefabricated part, which is shot along a fixed downward-inclined view angle; the side of each prefabricated part to which the corresponding movement is directed is defined as the front side, and the side to which the corresponding movement is directed is defined as the rear side.
S3: setting a rectangular target sensing area for performing on-line detection, wherein the length of the target sensing area is equal to the length of a video frame of a shot real-time video, and the width of the target sensing area is equal to the pixel value W of the shot real-time video a The following formula is adopted for calculation:
in the above, W v For looking atWidth pixel value of frequency frame, T max F, for the maximum time the tool stays in the video max For average frame rate during real-time video processing, F min The minimum number of frames for the tool center point to stay in the induction zone.
The side of the target induction zone facing the movement direction of the die table is defined as the front edge of the target induction zone; the opposite side, facing away from the movement direction, is its trailing edge.
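To make the sizing rule concrete, the following Python sketch computes W_a from the quantities defined above. It is a minimal illustration of one consistent reading of the formula; the function name and the example values are assumptions for demonstration, not values given by the invention.

```python
def sensing_zone_width(w_v: int, t_max: float, f_max: float, f_min: int) -> int:
    """Width pixel value W_a of the target sensing area.

    w_v   -- width pixel value of the video frame (W_v)
    t_max -- maximum time in seconds a tool stays in the video (T_max)
    f_max -- average frame rate (fps) during real-time processing (F_max)
    f_min -- minimum number of frames the tool center point must stay
             inside the sensing zone (F_min)
    """
    # The tool crosses the frame width W_v in at most T_max seconds, i.e. it
    # moves at roughly W_v / T_max pixels per second; keeping its center in
    # the zone for F_min frames (= F_min / F_max seconds) then needs a width:
    return round(w_v * f_min / (t_max * f_max))

print(sensing_zone_width(w_v=1080, t_max=9.0, f_max=30.0, f_min=10))  # -> 40
```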
S4: performing target detection and target tracking on the shot real-time video by adopting a tool detection network, and sequentially acquiring type information in each prefabricated component appearing in the real-time video, and quantity signals and position information of tools in the prefabricated components; the acquisition method specifically comprises the following steps:
s41: judging whether the front side of the prefabricated component to be arrived in the real-time video coincides with the front edge of the target induction zone, if so, acquiring the type information of the prefabricated component currently entering the target induction zone, and entering the next step; otherwise, continuing to wait.
S42: and sequentially carrying out target detection on the part of the corresponding target induction zone in each frame of the current real-time video through a target detection network, extracting all the tools appearing in the target induction zone, and recording the position information of each tool.
S43: performing target tracking on each tool extracted by the target detection network in each frame through the target tracking network, so as to allocate an identity code with global uniqueness to each newly added tool, and calculate the position information of the identity code; and then returning the target information with the identification code and the position information to the target detection network.
S44: judging whether the rear side of the prefabricated component currently executing target detection and target tracking coincides with the front edge of the target induction zone; if yes, counting the quantity information and the position information of all the tools in the current prefabricated part, and returning to the step S41 to wait for executing the target detection and target tracking process of the next prefabricated part; otherwise, returning to step S42, the target detection and target tracking process of the current prefabricated component is continued.
S5: inquiring a cloud database according to the acquired type information of the current prefabricated component, and acquiring the reference values of the quantity information and the position information of the tools in the type prefabricated component stored in the cloud database in advance; comparing the reference value with the actual measurement value of the quantity information and the position information of the tooling in the current prefabricated component obtained in the previous step, judging whether the quantity information and the position information completely coincide with each other, and if so, judging that the tooling of the current prefabricated component is complete; otherwise, judging that the tooling of the current prefabricated part is missing.
As a further improvement of the present invention, in step S1 the target detection network selects a network model based on YOLO V5; the target detection network uses pictures of prefabricated components shot at the same angle as the video acquired in step S2 to complete, in sequence, the training, testing and verification of the network model. The target tracking network selects a network model based on the SORT algorithm, and its target-tracking life-cycle parameter is adjusted to a value of 1 to 5.
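As a minimal configuration sketch: the target-tracking life-cycle value of 1 to 5 maps naturally onto the max_age argument of the reference SORT implementation (https://github.com/abewley/sort). This mapping, and the other argument values shown, are our assumptions for illustration; the invention does not name the parameter this way.

```python
from sort import Sort  # reference SORT implementation, assumed installed

tracker = Sort(max_age=3,          # life-cycle: drop a lost track after 1-5 missed frames
               min_hits=1,         # confirm new tracks quickly on a moving line
               iou_threshold=0.3)  # detection-to-track association threshold
```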
As a further improvement of the invention, the training, validation and testing process of the object detection network is specifically as follows:
(1) Acquiring original images, meeting the shooting-angle requirement, of various types of prefabricated parts carrying tools, and preprocessing the original images; all clear images that preserve the complete structure of the prefabricated part are retained, and these clear images form the original data set.
(2) The images in the original data set are manually annotated; the annotated objects are the prefabricated components and the tools on their surfaces, and the annotation information comprises: the type information of the prefabricated part, and the quantity information and position information of the tools in it. The images and their corresponding annotations are saved to form a new data set, which is randomly divided into a training set, a validation set and a test set in the ratio 8:1:1.
(3) Training the constructed target detection network for multiple rounds with the training set; after each round of training, the target detection network is verified with the validation set, yielding the loss values of the network in the training stage and the validation stage respectively. Training is stopped once the loss value on the training set keeps decreasing from round to round while the loss value on the validation set begins to increase; the five network models with the lowest loss values obtained in the training stage are saved.
(4) Testing the five saved network models with the test set, and taking the network model with the highest mAP value in the test results as the final target detection network. A condensed sketch of steps (3) and (4) follows below.
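In the sketch, train_one_epoch, evaluate_loss and evaluate_map are assumed helper functions standing in for the YOLO V5 training internals, so this is an outline of the procedure rather than a definitive implementation.

```python
import heapq

def train_and_select(model, train_set, val_set, test_set, max_epochs=100):
    saved = []                            # (train_loss, epoch, weights)
    prev_val_loss = float("inf")
    for epoch in range(max_epochs):
        train_loss = train_one_epoch(model, train_set)
        val_loss = evaluate_loss(model, val_set)
        saved.append((train_loss, epoch, model.state_dict()))
        # Step (3): stop at the first round where validation loss rises;
        # per step (3) this coincides with training loss still falling,
        # i.e. the classic overfitting signal.
        if val_loss > prev_val_loss:
            break
        prev_val_loss = val_loss
    top5 = heapq.nsmallest(5, saved)      # five lowest training-stage losses
    # Step (4): of the five saved models, keep the one with the highest mAP.
    return max(top5, key=lambda s: evaluate_map(s[2], test_set))
```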
As a further improvement of the present invention, in step S2 the captured live video satisfies a depression angle of less than 90°, the depression angle being the angle between the viewing direction of the camera and the horizontal direction. The shooting equipment may be mounted directly in front of, or on either side of, the movement direction of the die table; in actual capture, images may be taken from the front of the mold table or from its two sides.
As a further improvement of the invention, the target induction area set in step S3 is a virtual area; it corresponds to a real defect detection zone through which the die table passes on the actual inspection site, and the area shot from the fixed view angle in the real-time video is the area of the defect detection zone; the extent of the defect detection zone includes at least the entire area of the target sensing zone.
As a further improvement of the present invention, in step S41 and step S44, the method of judging whether the front side or the rear side of the prefabricated member coincides with the front edge of the target induction zone is as follows:
(1) A group of photoelectric sensors is arranged in the defect detection zone, and the detection direction of each photoelectric sensor coincides with the straight line on which the front edge of the corresponding target induction zone lies. The installation position of the photoelectric sensors meets the following condition: when any prefabricated part on the die table passes a photoelectric sensor, the sensor is blocked and its state signal changes. The state signal when the photoelectric sensor is not blocked is defined as "1", and the state signal when it is blocked is defined as "0".
(2) Before any prefabricated part reaches the defect detection zone, the state signal of the photoelectric sensor is 1. When a prefabricated part enters the defect detection zone, its front side is the first part of it to coincide with the front edge of the target induction zone; at that moment the photoelectric sensor is just blocked by the prefabricated member and its state signal switches from 1 to 0, and it is judged that the front side of the prefabricated part coincides with the front edge of the target induction zone.
(3) Before the prefabricated part leaves the defect detection zone, the state signal of the photoelectric sensor remains 0. When the prefabricated part completely leaves the defect detection zone, its rear side is the last part of it to coincide with the front edge of the target induction zone; at that moment the photoelectric sensor just returns to the unblocked state and its state signal switches from 0 to 1, and it is judged that the rear side of the prefabricated part coincides with the front edge of the target induction zone. A minimal sketch of this edge logic is given below.
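In the sketch, the signal source and the two callbacks are placeholder names assumed for illustration.

```python
def watch_sensor(read_signal, on_front_edge, on_rear_edge):
    # "1" = unblocked, "0" = blocked. A 1 -> 0 transition means a component's
    # front side has just reached the zone's front edge (used by step S41);
    # a 0 -> 1 transition means its rear side has just left it (step S44).
    prev = read_signal()
    while True:
        cur = read_signal()
        if prev == 1 and cur == 0:
            on_front_edge()
        elif prev == 0 and cur == 1:
            on_rear_edge()
        prev = cur
```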
As a further improvement of the present invention, in step S41, the method for acquiring the type information of the prefabricated part currently entering the target induction area is as follows:
(1) A radio-frequency identification chip, pre-storing the type information of the prefabricated component in the mold, is arranged on the side face of each mold on the mold table.
(2) A radio-frequency identification card reader for reading the data stored in the radio-frequency identification chip is arranged in the defect detection zone. The installation position of the radio-frequency identification card reader meets the following condition: when the front side of the prefabricated part coincides with the front edge of the target induction zone, the radio-frequency identification chip on the side face of the prefabricated part is close enough to the radio-frequency identification card reader for the reader to read the information inside the chip. A short sketch of this lookup follows below.
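In the sketch, rfid_reader.read_tag() and cloud_db.query() are hypothetical interfaces assumed purely for illustration; the invention does not specify a software API.

```python
def get_component_type(rfid_reader, cloud_db):
    tag = rfid_reader.read_tag()    # becomes readable once the chip is in
                                    # range, i.e. when the component's front
                                    # side reaches the zone's front edge
    component_type = tag.payload    # type info pre-stored in the chip
    reference = cloud_db.query(component_type)  # reference tool parameters
    return component_type, reference
```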
As a further improvement of the present invention, in step S4, while target detection and target tracking are performed on the relevant video frames of each prefabricated member, the images in the corresponding video frames are classified and archived as data files according to the pipeline number of each prefabricated member; the archived data are the image frames of the video stream in which the tooling overlaps the target sensing area.
As a further improvement of the present invention, in step S5 the cloud database stores the feature information extracted from the BIM models of all types of prefabricated parts to be produced on the production line. When the production line starts production and detection, the acquired quantity information and position information of the tools in the first article of each type of prefabricated component are compared with the ideal parameters in the corresponding BIM model; when the first article meets the error requirements of every parameter, the quantity information and position information of the tools detected in that first article are taken as the reference values for subsequent tool-missing detection. When the produced first article does not meet the error requirements, the current prefabricated part is scrapped, and a new first article is produced and qualified.
As a further improvement of the present invention, the on-line detection method further includes:
when the absence of tooling in the current prefabricated part is detected, the movement of the die table is stopped and an alarm signal is sent to the front end of the production line; a technician then makes the corresponding decisions on problem review and scrapping of the defective product, according to the acquired quantity information and position information of the tooling in the current prefabricated part and the archived data.
The on-line detection method for the tool missing in the assembled prefabricated part has the following beneficial effects:
1. The on-line detection method for tool missing disclosed by the invention fuses machine vision with sensor technology. By arranging the photoelectric sensor, the RFID chip and the RFID card reader at suitable positions around the target induction zone, the exact position of each prefabricated part is obtained in time without relying on image processing, and the type information of each prefabricated part is obtained in a targeted manner; this reduces the data-processing load on the hardware and guarantees the real-time performance of the tool-detection process. Because the method places relatively low image-processing demands on the hardware, the real-time performance of the system is good; the quality of the prefabricated-part tooling can therefore be inspected online, the method can be fitted to existing production lines, and both production and detection can run without stopping.
2. The online detection method for tool missing provided by the invention adopts a YOLO V5-based network model as the target detection network and a SORT network model as the target tracking network, realizing feature extraction and target tracking on frame-by-frame images and thereby acquiring more accurate quantity and position information of the tooling in each prefabricated component. In addition, the invention reasonably installs a light-supplementing lamp and adjusts the mounting position of the camera, so that the tooling stands out from the background in the acquired images, further improving the recognition accuracy of the tooling.
3. The invention sets a target induction area in the image-processing pipeline as the region of interest in each frame, which improves the accuracy of the system on the tool-extraction problem and overcomes its insufficient generalization on high-resolution images, while also reducing the computing load of the processing unit and ensuring the real-time performance of the system.
4. When a tooling defect is detected in a prefabricated-component product on the production line, the invention can raise an alarm in time and stop the line, reducing the product defect rate and the production losses of enterprises.
Drawings
Fig. 1 is a step flowchart of an on-line detection method for tool missing in an assembled prefabricated part provided in embodiment 1 of the present invention;
FIG. 2 is a flow chart of the training, verification and testing process performed by the object detection network in embodiment 1 of the present invention;
FIG. 3 is a flowchart of a process of extracting tools by the target detection network in embodiment 1 of the present invention;
FIG. 4 is a logic block diagram of the process of detecting a tool defect in embodiment 1 of the present invention;
fig. 5 is a picture example of manual labeling of tools in a dataset image in a performance test of embodiment 1 of the present invention;
FIG. 6 is a basic architecture diagram of the YOLO V5 network model of example 1 of the present invention;
FIG. 7 shows the detection result of the tool targets in test sample 1 in embodiment 1 of the present invention;
FIG. 8 shows the detection result of the tool targets in test sample 2 in embodiment 1 of the present invention;
FIG. 9 shows the detection result of the tool targets in test sample 3 in embodiment 1 of the present invention;
FIG. 10 shows the detection result of the tool targets in test sample 4 in embodiment 1 of the present invention;
FIG. 11 shows the detection result of the tool targets in test sample 5 in embodiment 1 of the present invention;
FIG. 12 shows the detection result of the tool targets in test sample 6 in embodiment 1 of the present invention;
FIG. 13 is a schematic diagram of a defect inspection system for assembled prefabricated parts in accordance with embodiment 1 of the present invention;
FIG. 14 is a system topology of a tooling defect detection system for an assembled preform in accordance with embodiment 1 of the present invention;
FIG. 15 is a schematic block diagram of a process module according to embodiment 1 of the present invention;
fig. 16 is a block diagram of a feature extraction unit according to embodiment 1 of the present invention;
marked in the figure as: 1. a video acquisition component; 2. a photoelectric sensor; 3. a type information identification component; 4. a processing module; 5. a die table; 6. a prefabricated member; 7. an alarm; 11. a mounting frame; 12. a camera; 13. a light supplementing lamp; 21. a laser emitter; 22. a laser receiver; 31. an RFID chip; 32. an RFID card reader; 41. a position acquisition unit; 42. a standard parameter acquisition unit; 43. a video processing unit; 44. a feature extraction unit; 45. a feature comparison unit; 51. a mold; 61. a tool; 441. a target detection subunit; 442. a target tracking subunit.
Detailed Description
The following description of the embodiments of the present invention will be made clearly and completely with reference to the accompanying drawings, in which it is apparent that the embodiments described are only some embodiments of the present invention, but not all embodiments. All other embodiments, which can be made by those skilled in the art based on the embodiments of the invention without making any inventive effort, are intended to be within the scope of the invention.
Unless defined otherwise, all technical and scientific terms used herein have the same meaning as commonly understood by one of ordinary skill in the art to which this invention belongs. The terminology used herein in the description of the invention is for the purpose of describing particular embodiments only and is not intended to be limiting of the invention. The term "or/and" as used herein includes any and all combinations of one or more of the associated listed items.
Example 1
The embodiment provides an on-line detection method for the absence of tools 61 in assembled prefabricated parts 6, which detects in real time, on a die table 5 of a production line, whether the tooling 61 in the prefabricated parts 6 processed in the dies 51 meets requirements. As shown in fig. 1, the real-time online detection method includes the following steps:
s1: constructing a real-time video-based tool 61 detection network; the tool 61 detection network comprises a target detection network and a target tracking network, which are respectively used for performing target detection and target tracking processing on the shot real-time video of the prefabricated component 6 to obtain the quantity information and the position information of the tool 61 in the prefabricated component 6.
This embodiment aims to solve the problem of inspecting the prefabricated parts 6 on the moving die table 5 of a running production line. It is therefore necessary to build a network model that can extract, from video stream data, the features of the tooling 61 in the prefabricated components 6 passing on the mould table 5. The embodiment divides the problem of extracting the tooling 61 in the prefabricated part 6 into two parts: the first part extracts all the tools 61 appearing in each video frame; the second part tracks the tools 61 across frames, distinguishing whether a tool 61 in the current frame is identical to one in the previous frame and, if not, allocating a globally unique identity code to the newly added tool 61. From this, the exact number and positions of all the tools 61 in one prefabricated part 6 are counted. These two parts are realized by a target detection network and a target tracking network respectively.
Specifically, in the present embodiment the object detection network selects a YOLO V5-based network model, and the target tracking network selects a network model based on the SORT algorithm; SORT is a multi-object tracking algorithm, used in this embodiment as the network model for tracking the individual tools 61 in the prefabricated component 6. Considering the match between the size of the target sensing area and the movement speed of the die table 5, the target-tracking life-cycle parameter of the tracking network is adjusted to a value of 1 to 5 in this embodiment.
The YOLO V5 network model adopted in this embodiment is a classic computer-vision target detection network; the basic architecture of the target detection network for identifying each tool 61 in the prefabricated member 6 is built on it, and the training, testing and verification of the network model are completed with real images of the same shooting angle and quality as the actual video stream data.
In this embodiment, as shown in fig. 2, the training, verification and testing process of the target detection network is specifically as follows:
(1) Acquiring original images, meeting the shooting-angle requirement, of various types of prefabricated parts 6 carrying tools 61, and preprocessing the original images; all clear images that preserve the complete structure of the prefabricated element 6 are retained, and these clear images constitute the original dataset.
The requirements for the images in the original dataset are as follows:
a. The acquired images should have the same shooting angle as the video stream data of the pipeline acquired later. This ensures consistency between the training-set data and the data actually processed, and thereby the training effect of the network model.
b. The shooting view angle of the images in the acquired real-time video or the original data set satisfies a depression angle of less than 90°, and the shooting equipment is mounted directly in front of, or on either side of, the movement direction of the die table 5. In this embodiment each tool 61 in the prefabricated parts 6 must be extracted; the tools 61 are mainly the metal bolt-type members among the metal parts protruding from the upper surface of the prefabricated part 6, which are clearly distinguishable at an obliquely downward angle but become indistinguishable at a vertically downward angle, so the shooting depression angle in this embodiment is preferably 30-60°. The image-acquisition equipment is preferably mounted at a position where, at this angle, the individual tools 61 in the preform overlap as little as possible.
c. The images should remain clear and complete. Images that are blurred, severely noisy, ghosted, poorly lit or overexposed are removed from the acquired images. The images are also cropped to preserve as much as possible the complete structure of the prefabricated element 6 and to remove the background outside it, reducing the interference of irrelevant objects with the training of the network model.
d. Each image of a preform 6 counts as one sample, and the sample distribution should match the frequency with which each preform 6 occurs on the actual production line: the more often a certain model of prefabricated element 6 is produced, the more samples of that prefabricated element 6 the original data set should contain.
(2) The images in the original data set are manually annotated; the annotated objects are the prefabricated components 6 and the tools 61 on their surfaces, and the annotation information comprises: the type information of the prefabricated part 6, and the quantity information and position information of the tools 61 in it. The images and their corresponding annotations are saved to form a new data set, which is randomly divided into a training set, a validation set and a test set in the ratio 8:1:1.
(3) Training the constructed target detection network for multiple rounds with the training set; after each round of training, the target detection network is verified with the validation set, yielding the loss values of the network in the training stage and the validation stage respectively. Training is stopped once the loss value on the training set keeps decreasing from round to round while the loss value on the validation set begins to increase; the five network models with the lowest loss values obtained in the training stage are saved.
(4) Testing the five saved network models with the test set, and taking the network model with the highest mAP value in the test results as the final target detection network.
S2: acquiring real-time video of the motion state of the mold table 5 carrying the prefabricated part 6 photographed at a fixed angle of view obliquely downward; the side of each prefabricated part 6 to which the corresponding movement is directed is defined as the front side, and the side to which the corresponding movement is directed is defined as the rear side. The shooting angle of the acquired real-time video is consistent with the shooting angle of the sample images in the training set, and the shooting angle of the acquired real-time video and the shooting angle of the sample images in the training set can be different real prefabricated part 6 images acquired in the test production stage and the actual production stage. Such images and videos are typically acquired by high resolution, high frame rate industrial cameras.
S3: setting a rectangular target induction area for performing on-line detection; defining one side of the target induction zone, which corresponds to the movement direction of the die table 5, as the front edge of the target induction zone; one side corresponding to the movement direction of the die table 5 is the trailing edge of the target sensing area.
In general, the sample data used for object detection and object tracking is shot by an industrial camera with a wide field of view, which may simultaneously capture molds 51 containing several different prefabricated members 6 on the mold table 5. This makes processing difficult for the detection and tracking network models: the models may fail to distinguish the different prefabricated members 6 accurately, or the scale of the processed data may greatly affect the accuracy and real-time behaviour of the feature-extraction results. To solve this problem, the present embodiment introduces the concept of a target sensing area when acquiring real-time video.
The target induction zone is a virtual zone; it corresponds to a real defect detection zone through which the die table 5 passes on the actual inspection site, and the partial area shot from the fixed view angle in the real-time video is an area of the defect detection zone, whose extent includes at least the entire target sensing zone. As the mould table 5 carrying the individual prefabricated elements 6 moves forward, the prefabricated elements 6 pass through the target sensing area in sequence, and the samples fed to the target detection network and the target tracking network are only the partial images of the target sensing area in the individual video frames.
From the above, the setting of the target sensing area strongly affects the performance of the on-line detection method. Too small a target sensing area may let a tool 61 in the prefabricated part 6 pass uncaptured because of its speed, causing missed inspections; too large a target sensing area causes mutual interference among too many targets, making it difficult to distinguish each object effectively. Meanwhile, the larger the target induction area, the larger the data volume to be processed; this pressures the hardware's data processing and affects the real-time performance of tool 61 detection in the prefabricated part 6.
In view of the above factors, the size of the target sensing area is set as follows in this embodiment: the length of the target sensing area is equal to the length of a video frame of the shot real-time video, and the width pixel value W_a of the target sensing area is calculated by the following formula:

W_a = (W_v × F_min) / (T_max × F_max)

In the above, W_v is the width pixel value of a video frame, T_max is the maximum time the tool 61 stays in the video, F_max is the average frame rate during real-time video processing, and F_min is the minimum number of frames for the center point of the tooling 61 to stay in the sensing area.
In practice, the length of the target sensing zone is typically greater than the width of the preform 6 or the die table 5; that is, the video should include not only the mould table 5 or the prefabricated part 6 but also the areas on both sides of it, to ensure that every region of the prefabricated part 6 is covered and inspected. The width of the target sensing area is typically smaller than the length of a single prefabricated element 6; this ensures that each prefabricated member 6 takes long enough to pass through the target sensing area, while effectively reducing the image size per frame, further lowering the data-processing load and guaranteeing the real-time performance of the network model.
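As a concrete illustration (the numbers are assumptions for demonstration, not values from this embodiment): with W_v = 1080 pixels, T_max = 9 s, F_max = 30 fps and F_min = 10 frames, the formula gives W_a = 1080 × 10 / (9 × 30) = 40 pixels, i.e. a narrow strip across the frame that is indeed far shorter than the image of a single prefabricated part 6, consistent with the constraints just discussed.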
S4: performing target detection and target tracking on the shot real-time video by adopting a tool 61 detection network, and sequentially acquiring type information in each prefabricated part 6 appearing in the real-time video, and quantity signals and position information of the tools 61 in the prefabricated parts 6; as shown in fig. 3, the method for obtaining tool 61 information of the tool 61 detection network specifically includes:
S41: judging whether the front side of the prefabricated part 6 to be arrived in the real-time video coincides with the front edge of the target induction zone, if so, acquiring the type information of the prefabricated part 6 currently entering the target induction zone, and entering the next step; otherwise, continuing to wait.
S42: and sequentially carrying out target detection on the part of the corresponding target induction zone in each frame of the current real-time video through a target detection network, extracting all the tools 61 appearing in the target induction zone, and recording the position information of each tool 61.
S43: performing target tracking on each tool 61 extracted from each frame by the target detection network through the target tracking network, so as to allocate an identity code with global uniqueness to each newly added tool 61; and returning the target information with the identification code to the target detection network.
S44: judging whether the rear side of the prefabricated part 6 currently performing target detection and target tracking coincides with the front edge of the target sensing area; if yes, counting the quantity information and the position information of all the tools 61 in the current prefabricated part 6, and returning to the step S41 to wait for executing the target detection and target tracking process of the next prefabricated part 6; otherwise, the process returns to step S42 to continue the target detection and target tracking process of the current preform 6.
The method for judging whether the front side or the rear side of the prefabricated member 6 coincides with the front edge of the target induction zone is as follows:
(1) A group of photoelectric sensors 2 is arranged in the defect detection zone, and the detection direction of each photoelectric sensor 2 coincides with the straight line on which the front edge of the corresponding target induction zone lies. The installation position of the photoelectric sensors meets the following condition: when any prefabricated part 6 on the die table 5 passes a photoelectric sensor 2, the sensor is blocked and its state signal changes. The state signal when the photoelectric sensor 2 is not blocked is defined as "1", and the state signal when it is blocked is defined as "0".
(2) Before any prefabricated part 6 reaches the defect detection zone, the state signal of the photoelectric sensor 2 is 1. When a prefabricated part 6 enters the defect detection zone, its front side is the first part of it to coincide with the front edge of the target induction zone; at that moment the photoelectric sensor 2 is just blocked by the prefabricated member 6 and its state signal switches from 1 to 0, and it is judged that the front side of the prefabricated part 6 coincides with the front edge of the target induction zone.
(3) Before the prefabricated part 6 leaves the defect detection zone, the state signal of the photoelectric sensor 2 remains 0. When the prefabricated part 6 completely leaves the defect detection zone, its rear side is the last part of it to coincide with the front edge of the target induction zone; at that moment the photoelectric sensor 2 just returns to the unblocked state and its state signal switches from 0 to 1, and it is judged that the rear side of the prefabricated part 6 coincides with the front edge of the target induction zone.
The photoelectric sensor 2 in this embodiment makes it convenient to distinguish the individual prefabricated parts 6 in the video. In this embodiment tool 61 extraction and tool 61 tracking are completed by the tool 61 detection network; in practice, however, a single mold table 5 may simultaneously carry several molds 51 for different types of prefabricated parts 6. During feature recognition and extraction, the embodiment must effectively distinguish the different prefabricated components 6 to avoid counting the tools 61 of two prefabricated components 6 as belonging to the same one. Solving this usually requires adding yet another network model, but machine-vision techniques suffer from insufficient detection accuracy and a strong dependence on image quality; this embodiment instead solves the problem of distinguishing the prefabricated parts 6 simply by adopting the photoelectric sensor.
This embodiment exploits the facts that the prefabricated parts 6 on the die table 5 are generally higher than the surface of the die table 5, and that a gap generally exists between adjacent prefabricated parts 6. A group of photoelectric sensors is therefore installed at a suitable height. As the mould table 5 moves, the photoelectric sensor is blocked whenever a prefabricated part 6 passes, generating one state signal; when a gap between prefabricated elements 6 passes, the sensor is unblocked, generating the other state signal. By identifying the two state signals, it can be determined whether a preform 6 is passing. To match the operation of the tool 61 detection network, this embodiment constrains the installation position of the photoelectric sensor so that its detection direction coincides with the straight line on which the front edge of the corresponding target sensing area lies. Namely: as soon as a prefabricated part 6 reaches the photoelectric sensor, it is judged to have reached the target induction area, and the corresponding frames of the monitoring video are extracted for tool 61 detection and tracking; as soon as the prefabricated part 6 completely leaves the photoelectric sensor, it is judged to have left the target sensing area, and the detection result of the tools 61 in that prefabricated part 6, accumulated during detection and tracking, is output.
In addition to extracting the tooling 61 information in the preform 6, this embodiment also needs to acquire the type of each preform 6. The number and positions of the tools 61 differ between different types of prefabricated members 6, so the embodiment acquires the type of each prefabricated member 6 in order to determine the theoretical quantity and position information of its tools 61, which are then compared with the actual detection results.
Specifically, in the present embodiment, the method for acquiring the type information of the prefabricated part 6 currently entering the target induction zone is as follows:
(1) A radio frequency identification chip pre-storing type information of the prefabricated part 6 in the mold 51 is provided at the side of each mold 51 in the mold table 5.
(2) A radio-frequency identification card reader for reading the data stored in the radio-frequency identification chip is arranged in the defect detection zone. The installation position of the radio-frequency identification card reader meets the following condition: when the front side of the prefabricated part 6 coincides with the front edge of the target induction zone, the radio-frequency identification chip on the side face of the prefabricated part 6 is close enough to the radio-frequency identification card reader for the reader to read the information inside the chip.
S5: inquiring a cloud database according to the acquired type information of the current prefabricated part 6, and acquiring the reference values of the quantity information and the position information of the tooling 61 in the type prefabricated part 6 stored in the cloud database in advance; comparing the reference value with the actual measurement value of the quantity information and the position information of the tooling 61 in the current prefabricated part 6 obtained in the previous step, judging whether the quantity information and the position information completely coincide with each other, and if so, judging that the tooling 61 of the current prefabricated part 6 is complete; otherwise, the absence of the tooling 61 of the current prefabricated part 6 is judged. In the detection process, the horizontal deviation and the vertical deviation allowed by the position information are both size, and the size is equal to the width pixel value W of the sensing area of the tool 61 a Equal.
In this embodiment, as shown in fig. 4, the overall logic of online detection involves four objects: a data acquisition end, a data processing end, a cloud server and a defect alarm module. Between these four objects the on-line detection method runs as follows:
at a data acquisition end, two types of data are acquired: (1) video stream data associated with each prefabricated element 6. (2) Type information of each prefabricated member 6 acquired by the RFID module.
At the data processing end, the working content comprises three parts: 1. standard parameter information (mainly parameter information of the tool 61, including quantity information and position information) of each of the different types of prefabricated members 6 is acquired from the cloud server according to the acquired type information of each of the prefabricated members 6. 2. And carrying out target detection and target tracking according to video stream data of each prefabricated part 6, and determining the actual number and position information of the tools 61 in each prefabricated part 6. 3. Comparing the tool 61 information obtained through video stream processing with the standard parameter information of the tool 61 obtained from the cloud storage module, judging whether the tool 61 information and the standard parameter information are consistent, and further determining whether the tool 61 in the prefabricated part 6 has defects.
Meanwhile, the detection result of the data processing end is also sent to a defect alarm module for carrying out fault prompt on the production and quality detection processes of products on the production line.
In this embodiment, the cloud server stores the BIM models of all types of prefabricated members 6 to be produced on the production line. When the production line starts production and detection, the acquired quantity information and position information of the tools 61 in the first article of each type of prefabricated member 6 are compared with the ideal parameters in the corresponding BIM model; when the first article meets the error requirements of every parameter, the quantity information and position information of the tools 61 detected in that first article are taken as the reference values for subsequent tool 61 missing detection. When the produced first article does not meet the error requirements, the current prefabricated component 6 is scrapped, and a new first article is produced and qualified.
In this embodiment, although the BIM model is stored in the cloud server, the tool 61 missing-detection method does not directly compare the BIM model's parameters with the detection results of the tools 61 in the prefabricated members 6; instead, a first article determined to be a qualified product serves as the standard for subsequent inspection of tool 61 production. That is, subsequent products are compared not with the BIM model but with the qualified first article. This yields data that better matches the actual state of the production line, and avoids situations where the idealized BIM model does not suit the specific production state, which would cause frequent false alarms during production and detection.
In addition, in the process of performing target detection and target tracking on the relevant video frames of each prefabricated member 6, according to the pipeline number of each prefabricated member 6, the images in the corresponding video frames are classified and stored as data files; the archived data is either frame-by-frame images of the video stream or sampled images taken at a particular sampling frequency.
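A minimal sketch of this archiving step; the directory layout, file naming and the OpenCV dependency are illustrative assumptions.

```python
import os
import cv2  # OpenCV, assumed available for image writing

def archive_frame(frame, pipeline_no, frame_idx, root="archive"):
    # File each archived frame under the component's pipeline number so a
    # technician can later retrieve the evidence for one specific part.
    folder = os.path.join(root, str(pipeline_no))
    os.makedirs(folder, exist_ok=True)
    cv2.imwrite(os.path.join(folder, f"{frame_idx:06d}.jpg"), frame)
```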
Meanwhile, when a missing tool 61 is detected in the current prefabricated part 6, the movement of the mold table 5 is stopped and an alarm signal is sent to the front end of the production line; a technician then makes the corresponding decisions of problem review and scrapping of the defective product, according to the acquired quantity information and position information of the tools 61 in the current prefabricated part 6 and the archived data.
In order to verify the effectiveness of the method provided in this embodiment, the performance of the target detection network is verified by testing. The specific test process comprises: data acquisition, data preprocessing, target detection model construction, model training, and model testing and analysis.
1. Data acquisition
The data required for the experiment were acquired by manual shooting and comprise 800 original images with a resolution of 3024 x 4032; these images form the original data set of this embodiment.
2. Data preprocessing
(1) Data cleansing
Blurred, ghosted and otherwise poor-quality images in the original data set are removed; 736 images remain after removal.
(2) Data set partitioning
The original data set is divided into a training set, a validation set and a test set at a ratio of 8:1:1. The training set contains 590 images, the validation set contains 73 images, and the test set contains 73 images.
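One way to reproduce such a split is sketched below; the shuffle seed is an arbitrary assumption, and rounding conventions may differ slightly from the 590/73/73 counts reported above.

    import random

    # Illustrative 8:1:1 split of the cleaned image list.
    def split_dataset(paths, seed=0):
        random.Random(seed).shuffle(paths)
        n_train = int(0.8 * len(paths))
        n_val = int(0.1 * len(paths))
        train = paths[:n_train]
        val = paths[n_train:n_train + n_val]
        test = paths[n_train + n_val:]
        return train, val, test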
(3) Image compression
The resolution of the original images is very high; they occupy too much storage space and contain too much noise, which is unfavorable for model training. The original images are therefore compressed, and the resolution of the compressed images is 416 x 416.
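The compression step amounts to a resize to the network input size, for example as below (the use of OpenCV is an assumption; any equivalent library would do):

    import cv2

    # Compress a 3024 x 4032 original image to the 416 x 416 network input size;
    # INTER_AREA is a reasonable interpolation choice when downscaling.
    img = cv2.imread("original.jpg")
    small = cv2.resize(img, (416, 416), interpolation=cv2.INTER_AREA)
    cv2.imwrite("compressed.jpg", small)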
(4) Manual annotation of the data set
As shown in fig. 5, the images in the data set processed in the previous step are manually annotated. The annotated objects are the tools 61 appearing in the images, which are divided into two classes, pillar and fixedpillar: the former refers to each columnar connecting tool 61 appearing in the image of the prefabricated part, and the latter refers to the two handle-shaped fixing tools 61 with larger structures in the prefabricated part. In the data set of this embodiment, the total number of the two kinds of targets is 5183, of which 4630 are pillar and 553 are fixedpillar.
3. Target detection model construction
In this embodiment, a YOLO V5 target detection model is built, and a network architecture diagram thereof is shown in fig. 6.
4. Model training
A pre-trained model based on the COCO data set is used to accelerate the training process, and the prepared training set is loaded for training; the number of epochs in the training process is set to 50.
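Assuming the ultralytics/yolov5 repository, whose train.py exposes a run() helper in recent versions, such a training run could be launched as sketched below; the dataset YAML name, the weight file and the batch size are assumptions.

    # Hedged sketch of launching training; run from the yolov5 repository root.
    import train  # train.py of the ultralytics/yolov5 repository

    train.run(
        imgsz=416,             # compressed input resolution used above
        epochs=50,             # epoch count set in this embodiment
        data="tooling.yaml",   # hypothetical dataset description file
        weights="yolov5s.pt",  # COCO-pretrained weights to accelerate training
        batch_size=16,         # assumed batch size
    )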
5. Model testing and analysis
In this embodiment, six test samples are set for testing. The image resolution of test sample 1 is 776 x 734, and the prefabricated part 6 in the image contains 14 tools 61. The image resolution of test sample 2 is 855 x 844, with 16 tools 61. The image resolution of test sample 3 is 1743 x 363, with 17 tools 61. The image resolution of test sample 4 is 990 x 550, with 13 tools 61. The image resolution of test sample 5 is 1033 x 349, with 17 tools 61. The image resolution of test sample 6 is 1647 x 460. Test sample 4 is a partial image of test sample 3.
The recognition results of each test sample by the network model in this embodiment are shown in figs. 7-12. Analysis of these test results shows the following:
(1) Test samples 1 and 2 have a good detection effect, and all the tools 61 they contain are completely identified. This shows that the target detection model in this embodiment performs well.
(2) Test sample 3 shows missed detections, while its cropped partial area (i.e. test sample 4) has a better detection effect. This phenomenon occurs because, when the resolution of an image is too high, the model preprocesses the test image and adjusts its resolution to 416 x 416, which causes image distortion; that is, the target detection model provided in this embodiment generalizes poorly to high-resolution images. This also shows that setting the "target sensing area" as the processing region of the actual target detection model is a sound choice in the method provided in this embodiment: the full image does not need to be cropped and processed piecewise, which would increase the data volume handled by the network model and reduce its real-time performance. The chosen processing scheme therefore improves the processing speed and real-time performance of the detection method, and also improves its detection precision to a certain extent.
(3) Test sample 5 shows a false detection: a light-reflecting part of the surface of the prefabricated part 6 is wrongly recognized as a target tool 61. This means that the method provided in this embodiment still depends to a certain degree on image quality, so the quality of the acquired video or images of the prefabricated part 6 should be improved as much as possible, specifically by using a higher-performance industrial camera and providing better illumination of its viewing area, for example by placing light supplementing lamps 13 at several angles around the defect detection area to reduce local reflections and shadows.
(4) Test sample 6 shows a missed detection: of the target tools 61 with a high degree of overlap, only one is detected. This result reflects that the detection effect of the network model provided in this embodiment on highly overlapping targets still needs improvement. In this embodiment or other embodiments, the mounting position of the camera may be changed so that, as far as possible, no serious overlap occurs between the tools 61 in the acquired images.
In the verification test, the detection accuracy of the target detection model based on the YOLO V5 network is counted for the pillar and fixedpillar targets, and the mean average precision mAP is calculated as follows:
table 1: detection accuracy of the target detection model in the present embodiment
Type(s) AP(Pillar) AP(fixedpillar) mAP
Results 99.3% 98.3% 98.8%
As can be seen from the table above, the network model in the method provided by this embodiment achieves a detection accuracy of more than 98% on both types of targets, so the method has high practical value and can be popularized and applied.
In addition, where the performance of the hardware devices allows, in this embodiment or other embodiments several groups of cameras may be set up for viewing; the images from the different angles are then recognized, and the recognition results from the different angles are weighted and fused to obtain more accurate quantity information of the tools 61 and to eliminate the influence of overlapping tools 61 on the detection precision.
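The weighted fusion of per-camera results can be sketched as follows; the weights (for example, reflecting each camera's viewing quality) are assumptions and should sum to 1.

    # Illustrative weighted fusion of tool counts from several cameras.
    def fuse_counts(counts, weights):
        """counts: tool 61 count reported by each camera; returns the fused count."""
        return round(sum(c * w for c, w in zip(counts, weights)))

    # e.g. fuse_counts([17, 16, 17], [0.5, 0.2, 0.3]) -> 17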
Embodiment 2
This embodiment provides a deployment scheme for equipment applying the online detection method of embodiment 1; the tool 61 defect detection system for the prefabricated parts 6 provided by this scheme can fully implement the online detection method of embodiment 1. Specifically, as shown in figs. 13 and 14, the tool 61 defect detection system for the prefabricated parts 6 provided in this embodiment comprises: a video acquisition component 1, a photoelectric sensor 2, a type information identification component 3 and a processing module 4.
The tool 61 defect detection system is mainly used to detect whether the tools 61 of the prefabricated parts 6 in each mold 51 on a mold table 5 passing through a defect detection area are defective. The mold table 5 in this embodiment is a platform with a motion assembly at the bottom, whose motion state is controlled by a motion controller; the molds 51 are mounted on the upper surface of the platform, with gaps between adjacent molds 51, and the molds 51 differ in size and height. The motion assembly drives the platform and the molds 51 on it along the production line; as the mold table 5 moves, the prefabricated parts 6 produced in the molds 51 pass through the defect detection area, where the tool 61 defect detection process is completed.
In this embodiment, the video acquisition component 1 comprises a mounting frame 11, a camera 12 and a light supplementing lamp 13. The mounting frame 11 is located at the front side of the defect detection area; the camera 12 and the light supplementing lamp 13 are fixed on the mounting frame 11 above the defect detection area, and the depression angle of the viewing direction of the camera 12 is less than 90°. The mounting frame 11 in this embodiment adopts a portal-frame structure, with the camera 12 and the light supplementing lamp 13 fixed on the cross bar at its top; the camera 12 frames the scene in a downward-tilted view. As the mold table 5 moves, it passes under the mounting frame 11 and leaves the area where the camera 12 and the light supplementing lamp 13 are mounted, and the camera 12 completes the image capture of the whole mold table 5. When the camera 12 shoots the mold table 5 passing through the mounting frame 11, shooting may be completed from directly in front of the mold table 5, from its two sides, or at other angles, as long as two conditions are satisfied: (1) the view is taken at an inclined angle rather than vertically downwards, which ensures that the structures of the tools 61 and the surface of the prefabricated part 6 can be clearly distinguished in the acquired images; (2) with respect to the surrounding angles in the horizontal direction of the mold table 5, the camera 12 is adjusted to the optimal viewing angle that minimizes overlap between the tools 61 in each prefabricated part 6.
The light supplementing lamp 13 in this embodiment is mainly used to overcome the problem that insufficient light in the defect detection area degrades image quality and thereby reduces the detection accuracy of the tools 61. In other embodiments, in order to further improve the lighting effect, a greater number of light supplementing lamps 13 may be arranged in areas other than below the cross bar of the mounting frame 11, so that the brightness of every area on the mold table 5 remains consistent during image capture and no local reflection occurs.
The video acquisition component 1 is used to acquire real-time video stream data of the objects (i.e. the mold table 5) passing through the defect detection area. The viewing angle of the video stream data acquired by the video acquisition component 1 is tilted downwards, and the viewing area of the video acquisition component 1 contains a target sensing area, which is the area used for feature extraction.
It should be noted that the target sensing area is not a physical area but a virtual area corresponding to the video stream; within this virtual area, the feature information of a specific object (i.e. a tool 61) in each frame of the video stream can be extracted by the network model. Although the target sensing area is not physical, its position in the video stream data is usually relatively fixed and corresponds to a real location within the defect detection area. Namely: when an object moves to that specific position in the defect detection area, it also enters the target sensing area in the real-time video stream data.
The photoelectric sensor 2 is installed in the defect detection area and is used to acquire the position of each mold 51 during the movement of the mold table 5. In this embodiment, the photoelectric sensor 2 comprises a laser emitter 21 and a laser receiver 22, which are installed on the two sides of the movement path of the mold table 5; the line connecting them is perpendicular to the movement direction of the mold table 5 and coincides with the front edge of the target sensing area. The installation position of the photoelectric sensor 2 also satisfies: when the position of a mold 51 on the mold table 5 coincides with that of the photoelectric sensor 2, the photoelectric sensor 2 is blocked; when a position on the mold table 5 where no mold 51 is mounted coincides with the position of the photoelectric sensor 2, the photoelectric sensor 2 is not blocked. The state signal when the photoelectric sensor 2 is blocked is defined as "0", and the state signal when it is not blocked is defined as "1".
The photoelectric sensor 2 in this embodiment is a set consisting of a laser emitter 21 and a receiver that sense each other in the normal state, i.e. the state signal is then 1. The installation height of the photoelectric sensor 2 is set higher than the upper surface of the mold table 5 and lower than the upper surface of the lowest mold 51. In this case, the mold table 5 itself never blocks the photoelectric sensor 2, while every mold 51, whatever its height, will block it.
Under the above conditions, after the mold table 5 moves into the defect detection area, the moment the front side of the first mold 51 blocks the photoelectric sensor 2 it is judged that this mold 51 has arrived. The blocked state continues while the first mold 51 passes through the defect detection area, until the rear side of the first mold 51 coincides with the position of the photoelectric sensor 2 and the blocked state ends, at which point it is judged that the first mold 51 has completely left the defect detection area. The second and every subsequent mold 51 on the mold table 5 go through the same process during the movement, and their actual positions can be determined by the same method.
In order to facilitate the processing of each frame of the acquired video stream data in the subsequent steps, this embodiment installs the photoelectric sensor 2 at the position corresponding to the front edge of the target sensing area. Since the front edge of the target sensing area coincides with the line connecting the photoelectric sensor 2 installed in the defect detection area, the moment at which a mold 51 enters or leaves the target sensing area is determined in this embodiment as follows: (1) when the state signal of the photoelectric sensor 2 changes from 1 to 0, it is judged that the current mold 51 is entering the target sensing area; (2) when the state signal of the photoelectric sensor 2 changes from 0 to 1, it is judged that the current mold 51 is leaving the target sensing area. Accordingly, the start frame and end frame of the frame-by-frame images input to the network model for feature extraction can be determined from the moments at which each mold 51 enters and leaves the target sensing area.
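The enter/leave decision reduces to edge detection on the state signal, as sketched below (the (timestamp, state) stream interface is an illustrative assumption):

    # Derive mold 51 enter/leave events from the photoelectric sensor signal
    # (1 = not blocked, 0 = blocked), as described above.
    def mold_events(signal_stream):
        """Yield ('enter', t) on a 1->0 transition and ('leave', t) on 0->1."""
        prev = 1  # the sensor is unblocked before the first mold arrives
        for t, state in signal_stream:
            if prev == 1 and state == 0:
                yield ("enter", t)   # front side reaches the zone's front edge
            elif prev == 0 and state == 1:
                yield ("leave", t)   # rear side clears the zone's front edge
            prev = state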
In this embodiment, when judging whether the tools 61 in a prefabricated part 6 are defective, the type information of each prefabricated part 6 and its standard parameters must also be acquired. This work is done by the type information identification component 3, which is installed in the defect detection area and is used to acquire the type information of the prefabricated parts 6 produced in each mold 51 reaching the target detection area. The type information identification component 3 comprises an RFID chip 31 and an RFID reader 32. An RFID chip 31 is mounted on the outer surface of each mold 51, on the side corresponding to the movement path of the mold table 5, and stores the type information of the prefabricated part 6 produced in the corresponding mold 51. The RFID reader 32 is installed in the defect detection area such that, as the mold table 5 passes through the defect detection area, at some moment the RFID chip 31 and the RFID reader 32 are close enough to each other for data reading.
In order that the RFID reader 32 can acquire the data in the RFID chip 31 as soon as a mold 51 enters the target sensing area, this embodiment adjusts the positions of the RFID chip 31 and the RFID reader 32 so that, when the front side of a mold 51 coincides with the front edge of the target sensing area (i.e. is detected by the photoelectric sensor 2), the RFID chip 31 and the RFID reader 32 are also in a sensing state in which data transmission is possible. The reference data for judging whether a prefabricated part 6 is qualified can thus be acquired immediately, laying the foundation for the later data comparison and improving the real-time performance of the system in handling the tool 61 defect detection problem.
In this embodiment, a slot for installing the RFID chip 31 is formed in the side surface of the mold 51, closed by an openable cover plate made of a resin material. The slot and the cover plate protect the RFID chip 31, preventing it from failing through physical contact with external objects during use. The openable design also makes it convenient to replace the chip, while using a resin cover plate avoids interference with the RFID communication process.
As shown in fig. 15, the processing module 4 includes a position acquisition unit 41, a standard parameter acquisition unit 42, a video processing unit 43, a feature extraction unit 44, and a feature comparison unit 45.
The position acquisition unit 41 is configured to acquire the state signal of the photoelectric sensor 2 and thereby determine the moment at which any mold 51 enters or leaves the target sensing area.
The standard parameter acquisition unit 42 is configured, when any mold 51 reaches the target sensing area, to acquire the type information of the mold 51 identified by the type information identification component 3, and then to query a server for the standard parameters corresponding to the current prefabricated part 6 according to that type information. In this embodiment, the cloud server stores in advance the BIM models of all types of prefabricated parts 6 to be produced on the production line. When the production line starts trial production and quality detection, the cloud server judges whether each type of prefabricated part 6 received is a first piece; if so, it returns the standard parameters of that type of prefabricated part 6 and requests the measured values of its parameters once the piece meets the error requirements. After the measured values of all parameters of the qualified product are saved, they replace the data in the BIM model and serve as the subsequent standard parameters.
The video processing unit 43 in this embodiment is configured to extract the corresponding frames in the real-time video stream data associated with each mold 51 according to the moments at which each mold 51 enters and leaves the target sensing area, and to extract the partial images corresponding to the target sensing area in the frame-by-frame images as the source images for tool 61 detection. Finally, all source images associated with each mold 51 are input to the feature extraction unit 44 in sequence.
The video processing unit 43 in this embodiment supports two modes, online detection and offline detection. In the online detection mode, the video processing unit 43 determines the moment a mold 51 enters the target sensing area, then acquires the frame images in the video for tool 61 detection, and continues to acquire subsequent frames until the mold 51 is detected leaving the target sensing area. In the offline detection mode, the video processing unit 43 records the moments at which a mold 51 enters and leaves the target sensing area, then cuts the corresponding video clip out of the real-time video, and performs tool 61 detection on the frame-by-frame images of the cut clip.
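The two modes can be sketched as follows; the frame/event stream interfaces are illustrative assumptions.

    # Online mode: run detection on frames while a mold is inside the zone.
    def online_detect(frames_with_events, detect):
        inside = False
        for frame, event in frames_with_events:  # event: None, 'enter' or 'leave'
            if event == "enter":
                inside = True
            elif event == "leave":
                inside = False
            if inside:
                detect(frame)

    # Offline mode: record the two moments, then cut the clip out afterwards.
    def offline_clip_indices(t_enter, t_leave, fps):
        return range(int(t_enter * fps), int(t_leave * fps) + 1)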
As shown in fig. 16, the feature extraction unit 44 comprises a target detection subunit 441 and a target tracking subunit 442. The target detection subunit 441 is configured to perform target detection on the source images associated with each mold 51 and to extract all the tools 61 of the prefabricated part 6 in each source image. The target tracking subunit 442 is configured to perform target tracking on the tools 61 appearing in all source images and to assign a globally unique identity code to each tool 61 newly appearing in a frame; the target tracking subunit 442 also counts the number of tools 61 in each mold 51 and the position information corresponding to each tool 61. The feature comparison unit 45 is configured to compare the quantity information and position information of the tools 61 extracted by the feature extraction unit 44 with the standard parameters and to judge whether they match completely: if so, it is judged that the tools 61 of the prefabricated part 6 are not defective; otherwise, that they are defective.
It should be noted that, in this embodiment, it is not the whole frame-by-frame image of the real-time video that is taken as the input of the feature extraction unit 44, but the part of each frame corresponding to the target sensing area. The reason is that the camera 12 in the tool 61 defect detection system is mainly an industrial camera, whose resolution is generally high and whose viewing range is relatively large. The data volume of each video frame therefore tends to be large, which puts pressure on the processing procedure of the feature extraction unit 44 and affects the real-time performance of the system. Meanwhile, the feature extraction unit 44 in this embodiment generalizes insufficiently to high-resolution images: when processing a large high-resolution image, it first rescales the image, which can cause distortion and affect the accuracy of target extraction.
This embodiment solves the problem by setting the target sensing area: it is the optimal region of interest input to the feature extraction unit 44, preserving the image size and obtaining the best recognition effect and rate. Meanwhile, given the high frame rate of the industrial camera, the target sensing area set in this embodiment will not cause targets to be lost from the frame-by-frame images: because the frame rate is high enough, any part outside the target sensing area in one frame will necessarily appear within it in other frames, so the subsequent target detection and target tracking processes are not disturbed.
Specifically, the target detection subunit 441 in this embodiment uses a trained YOLO V5-based network model to detect the tools 61 in the prefabricated parts 6, and the target tracking subunit 442 uses a network model based on the SORT algorithm to track each tool 61 extracted by the target detection subunit 441, determining the association between the tools 61 extracted in each frame and those in the previous frame, and thereby counting the quantity information and position information of the tools 61 in the prefabricated part 6. The quantity information of the tools 61 can be obtained statistically, and the position information of each tool 61 can be calculated from its pixel position in the image. Meanwhile, the position information of a tool 61 can also be calculated by combining the movement speed of the mold table 5 with the moment at which that tool 61 crosses the photoelectric sensor 2 (the front edge of the target sensing area), so as to correct the result obtained from the pixel positions.
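The counting and position logic on top of YOLO V5 and SORT can be sketched as follows; the track dictionary and timing interfaces are illustrative assumptions.

    # Count tools from the set of unique SORT identities and correct positions
    # using the mold-table speed and the sensor-crossing moments.
    def count_and_locate(tracks, v_table, t_cross, t_enter):
        """tracks: {track_id: (px, py)} last pixel position of each identity;
        v_table: mold table speed; t_cross[tid]: moment tool tid crossed the
        sensor line; t_enter: moment the mold entered the target sensing area."""
        count = len(tracks)  # one globally unique identity per tool 61
        # distance of each tool from the mold front side along the motion axis
        corrected = {tid: v_table * (t_cross[tid] - t_enter) for tid in tracks}
        return count, corrected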
The training, verifying and testing process of the network model in the target detection subunit 441 is specifically described in embodiment 1, and is not described in detail in this embodiment.
In this embodiment, the tool 61 defect detection system further comprises an alarm 7 electrically connected with the processing module 4. The processing module 4 is further configured, upon detecting that a prefabricated part 6 has a tool 61 defect, to send a stop command to the motion controller of the mold table 5 and to control the alarm 7 to emit an alarm signal indicating the tool 61 defect.
The technical features of the above embodiments may be combined arbitrarily. For brevity, not all possible combinations of the technical features in the above embodiments are described; however, as long as there is no contradiction between them, such combinations should be considered within the scope of this description.
The above examples merely represent a few embodiments of the present invention; their description is relatively specific and detailed, but they are not to be construed as limiting the scope of the invention. It should be noted that a person skilled in the art can make several variations and modifications without departing from the spirit of the invention, all of which fall within the scope of the invention. Accordingly, the scope of protection of the invention should be determined by the appended claims.

Claims (7)

1. An online detection method for missing tools in assembled prefabricated parts, used for detecting in real time, on a mold table of a production line, whether the tools in the prefabricated parts processed in the molds meet the requirements; the online detection method is characterized by comprising the following steps:
s1: constructing a real-time video-based tool detection network; the tool detection network comprises a target detection network and a target tracking network, which are respectively used for carrying out target detection and target tracking processing on the shot real-time video of the prefabricated component to obtain the quantity information and the position information of the tools in the prefabricated component;
s2: acquiring real-time video of the motion state of a mold table carrying the prefabricated part, which is shot along a fixed downward-inclined view angle; defining one side of each prefabricated part, which corresponds to the movement, as a front side, and defining one side of each prefabricated part, which corresponds to the movement, as a rear side;
s3: setting a rectangular target sensing area for performing on-line detection, wherein the length of the target sensing area is equal to the length and width of a shot video frame of the real-time videoW a The following formula is adopted for calculation:
in the above-mentioned method, the step of,W v is the width pixel value of the video frame,T max for the maximum time the tool stays in the video, F max For the average frame rate when video is processed in real time,F min the minimum number of frames for the center point of the tool to stay in the induction zone;
the target induction zone is a virtual area corresponding to the real defect detection area through which the mold table actually passes on site; the area shot at the fixed viewing angle in the real-time video is the defect detection area, whose range at least contains the whole area of the target induction zone;
the side of the target induction zone facing the movement direction of the mold table is defined as the front edge of the target induction zone, and the side facing away from the movement direction is defined as its rear edge;
s4: performing target detection and target tracking on the shot real-time video by adopting the tool detection network, and sequentially acquiring type information in each prefabricated component appearing in the real-time video, and quantity signals and position information of tools in the prefabricated components; the acquisition method specifically comprises the following steps:
s41: judging whether the front side of the prefabricated component to be arrived in the real-time video coincides with the front edge of the target induction zone, if so, acquiring the type information of the prefabricated component currently entering the target induction zone, and entering the next step; otherwise, continuing waiting;
The method for acquiring the type information of the prefabricated part currently entering the target induction zone comprises the following steps:
(1) A radio frequency identification chip pre-storing the type information of the prefabricated part in each mold is arranged on the side face of each mold in the mold table;
(2) A radio frequency identification card reader for reading the data stored in the radio frequency identification chip is arranged in the defect detection area; the installation position of the radio frequency identification card reader satisfies the following condition: when the front side of the prefabricated part coincides with the front edge of the target induction zone, the radio frequency identification chip on the side face of the prefabricated part is close to the radio frequency identification card reader, reaching the condition for reading the information inside the radio frequency identification chip;
s42: sequentially carrying out target detection on the part corresponding to the target induction zone in each frame of the current real-time video through the target detection network, extracting all tools appearing in the target induction zone, and recording the position information of each tool;
s43: performing target tracking on each tool extracted by the target detection network in each frame through the target tracking network, so as to allocate an identity code with global uniqueness to each newly added tool, and calculate the position information of the identity code; returning target information with the identification code and the position information to the target detection network;
S44: judging whether the rear side of the prefabricated component currently executing target detection and target tracking coincides with the front edge of the target induction zone; if yes, counting the quantity information and the position information of all the tools in the current prefabricated part, and returning to the step S41 to wait for executing the target detection and target tracking process of the next prefabricated part; otherwise, returning to the step S42 to continue to execute the target detection and target tracking process of the current prefabricated component;
the method for judging whether the front side or the rear side of the prefabricated component coincides with the front edge of the target induction zone is as follows:
(1) A group of photoelectric sensors is arranged in the defect detection area, with the detection direction of the photoelectric sensors coinciding with the straight line on which the front edge of the target induction zone lies; the installation position of the photoelectric sensors satisfies the following condition: when any prefabricated part on the mold table passes the photoelectric sensors, the photoelectric sensors are blocked and their state signal changes; the state signal when the photoelectric sensors are not blocked is defined as "1", and the state signal when they are blocked is defined as "0";
(2) Before any one of the prefabricated parts reaches the defect detection area, the state signal of the photoelectric sensor is 1; when one of the prefabricated parts enters the defect detection area, the front side of the prefabricated part is overlapped with the front edge of the target induction area first; at the moment, the photoelectric sensor is just shielded by the prefabricated part, the state signal of the photoelectric sensor is switched from 1 to 0, and the front side of the prefabricated part is judged to be coincident with the front edge of the target induction zone at the moment;
(3) Before the prefabricated part leaves the defect detection area, the state signal of the photoelectric sensors is 0; when the prefabricated part completely leaves the defect detection area, its rear side first coincides with the front edge of the target induction zone; at that moment the photoelectric sensors just return to the unblocked state, their state signal switches from 0 to 1, and it is judged that the rear side of the prefabricated part coincides with the front edge of the target induction zone;
s5: inquiring a cloud database according to the acquired type information of the current prefabricated component, and acquiring the reference values of the quantity information and the position information of the tooling in the prefabricated component, which are stored in the cloud database in advance; comparing the reference value with the actual measurement value of the quantity information and the position information of the tooling in the current prefabricated component obtained in the previous step, judging whether the quantity information and the position information completely coincide with each other, and if so, judging that the tooling of the current prefabricated component is complete; otherwise, judging that the tooling of the current prefabricated part is missing.
2. The method for on-line detection of tool missing in prefabricated parts according to claim 1, wherein the method comprises the following steps: in step S1, the target detection network selects a network model based on YOLO V5; the target detection network adopts the pictures of the prefabricated component shot at the same shooting angle as the video acquired in the step S2 to sequentially complete the training, testing and verifying processes of the network model; and the target tracking network selects a network model based on the SORT algorithm, and adjusts the target tracking life cycle parameter value of the target tracking network to be 1-5.
3. The method for on-line detection of tool missing in prefabricated parts according to claim 2, wherein the method comprises the following steps: the training, verifying and testing process of the target detection network is specifically as follows:
(1) Acquiring original images of various types of prefabricated parts meeting the shooting angle requirement, and preprocessing the original images to obtain clear images that retain the complete structure of the prefabricated part; these clear images form the original data set;
(2) Manually labeling the images in the original data set, the labeled objects being the prefabricated parts and the tools on their surfaces, and the labeling information comprising: the type information of the prefabricated parts and the quantity information and position information of the tools in them; simultaneously storing the images and their corresponding labeling information to obtain a new data set, and randomly dividing the new data set into a training set, a validation set and a test set at a data ratio of 8:1:1;
(3) Training the constructed target detection network for multiple rounds with the training set, and validating the target detection network with the validation set after each round of training, so as to obtain the loss values of the target detection network in the training stage and the validation stage respectively; stopping the training process once the loss value obtained on the training set keeps decreasing from round to round while the loss value obtained on the validation set increases; and saving the five network models ranked best by the loss value obtained in the training stage;
(4) And testing the five stored network models by using the test set, and then taking the network model with the highest mAP value in the test result as the final target detection network.
4. The method for on-line detection of tool missing in prefabricated parts according to claim 1, wherein the method comprises the following steps: in step S2, the acquired real-time video is photographed at a viewing angle satisfying the depression angle of less than 90 °, and the installation position of the photographing apparatus includes the right front or both sides along the movement direction of the die table.
5. The method for on-line detection of tool missing in prefabricated parts according to claim 1, wherein the method comprises the following steps: in step S4, in the process of executing target detection and target tracking on the relevant video frames of each prefabricated part, according to the pipeline number of each prefabricated part, classifying and storing the images in the corresponding video frames as data for archiving; the archived data is part of image frames of the video stream data where the tooling overlaps the target induction area.
6. The method for on-line detection of tool missing in prefabricated parts according to claim 1, wherein the method comprises the following steps: in step S5, the cloud database stores feature information of the tool extracted from the BIM model associated with all types of prefabricated components to be produced on the production line; when the production line starts to produce and detect, comparing the acquired quantity information and position information of the tools in the first part of each type of prefabricated component with ideal parameters in a corresponding BIM model, and taking the quantity information and position information of the tools detected and acquired in the first part as reference values for subsequent tool missing detection when the first part meets error requirements of each parameter; and when the produced first part does not meet the error requirement, scrapping the current prefabricated part, and re-producing and determining the first part.
7. The method for on-line detection of a tooling defect in an assembled prefabricated part according to any one of claims 1 to 6, wherein the on-line detection method further comprises:
when the absence of the tooling of the current prefabricated part is detected, stopping the movement process of the die table, and sending an alarm signal to the front end of the production line.