CN113723841A - Online detection method for tool missing in assembly type prefabricated part - Google Patents

Online detection method for tool missing in assembly type prefabricated part

Info

Publication number
CN113723841A
CN113723841A (application number CN202111030371.0A)
Authority
CN
China
Prior art keywords
prefabricated part
target
detection
area
tool
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202111030371.0A
Other languages
Chinese (zh)
Other versions
CN113723841B (en)
Inventor
李学俊
谢佳员
琚川徽
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Green Industry Innovation Research Institute of Anhui University
Original Assignee
Green Industry Innovation Research Institute of Anhui University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Green Industry Innovation Research Institute of Anhui University filed Critical Green Industry Innovation Research Institute of Anhui University
Priority to CN202111030371.0A
Publication of CN113723841A
Application granted
Publication of CN113723841B
Legal status: Active
Anticipated expiration


Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06QINFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q10/00Administration; Management
    • G06Q10/06Resources, workflows, human or project management; Enterprise or organisation planning; Enterprise or organisation modelling
    • G06Q10/063Operations research, analysis or management
    • G06Q10/0639Performance analysis of employees; Performance analysis of enterprise or organisation operations
    • G06Q10/06395Quality analysis or management
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/21Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F18/214Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/24Classification techniques
    • G06F18/241Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06KGRAPHICAL DATA READING; PRESENTATION OF DATA; RECORD CARRIERS; HANDLING RECORD CARRIERS
    • G06K17/00Methods or arrangements for effecting co-operative working between equipments covered by two or more of main groups G06K1/00 - G06K15/00, e.g. automatic card files incorporating conveying and reading operations
    • G06K17/0022Methods or arrangements for effecting co-operative working between equipments covered by two or more of main groups G06K1/00 - G06K15/00, e.g. automatic card files incorporating conveying and reading operations arrangements or provisions for transferring data to distant stations, e.g. from a sensing device
    • G06K17/0029Methods or arrangements for effecting co-operative working between equipments covered by two or more of main groups G06K1/00 - G06K15/00, e.g. automatic card files incorporating conveying and reading operations arrangements or provisions for transferring data to distant stations, e.g. from a sensing device the arrangement being specially adapted for wireless interrogation of grouped or bundled articles tagged with wireless record carriers
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06QINFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q50/00Information and communication technology [ICT] specially adapted for implementation of business processes of specific business sectors, e.g. utilities or tourism
    • G06Q50/08Construction
    • YGENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02PCLIMATE CHANGE MITIGATION TECHNOLOGIES IN THE PRODUCTION OR PROCESSING OF GOODS
    • Y02P90/00Enabling technologies with a potential contribution to greenhouse gas [GHG] emissions mitigation
    • Y02P90/30Computing systems specially adapted for manufacturing

Landscapes

  • Engineering & Computer Science (AREA)
  • Business, Economics & Management (AREA)
  • Theoretical Computer Science (AREA)
  • Human Resources & Organizations (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • General Engineering & Computer Science (AREA)
  • Economics (AREA)
  • Strategic Management (AREA)
  • General Business, Economics & Management (AREA)
  • Educational Administration (AREA)
  • Tourism & Hospitality (AREA)
  • Evolutionary Computation (AREA)
  • Marketing (AREA)
  • Evolutionary Biology (AREA)
  • Entrepreneurship & Innovation (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Artificial Intelligence (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Development Economics (AREA)
  • Quality & Reliability (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Primary Health Care (AREA)
  • General Health & Medical Sciences (AREA)
  • Health & Medical Sciences (AREA)
  • Game Theory and Decision Science (AREA)
  • Operations Research (AREA)
  • Image Analysis (AREA)

Abstract

The invention relates to the field of the building industry, and in particular to an online detection method for missing tooling in assembled prefabricated parts. The method comprises the following steps. S1: construct a tool detection network that operates on real-time video. S2: acquire a real-time video of the moving mold table carrying the prefabricated parts. S3: set a rectangular target sensing area in which online detection is performed. S4: use the tool detection network to perform target detection and target tracking on the real-time video, sequentially acquiring the type information of each prefabricated part appearing in the video together with the quantity information and position information of the tools in that part. S5: query a cloud database according to the type information to obtain reference values for the quantity information and position information of the tools in the prefabricated part, then compare the reference values with the measured values to judge whether the product is qualified. The invention solves the problems that manual quality inspection of prefabricated parts is inefficient and that automatic detection methods lack the accuracy and real-time performance required to meet online detection requirements.

Description

Online detection method for tool missing in assembly type prefabricated part
Technical Field
The invention relates to the field of the building industry, and in particular to an online detection method for missing tooling in assembled prefabricated parts.
Background
Assembled (prefabricated) buildings, in which building components of various types are first processed in a factory, transported to the construction site, and then assembled through reliable connections, are an important direction for the development of the construction industry. Compared with conventional cast-in-place construction, prefabricated buildings offer large-scale production, high construction speed, and low construction cost.
Quality control of prefabricated building components is at the core of guaranteeing the quality of a fabricated building. A flaw in any single prefabricated part can inevitably affect the final building quality and thereby cause immeasurable loss to the overall construction project. A large number of tools of various kinds, used for assembly and installation, are embedded in each prefabricated part, and checking the parameters of these tools is an important part of quality inspection. If the number or positions of the tools on a prefabricated part do not match the design, high repair costs arise, the part may even be scrapped outright, and the enterprise suffers a substantial loss.
In an industrial prefabricated-part production workshop, many different types of prefabricated parts are produced according to order requirements. To reduce cost, production enterprises generally switch between several component types on the same production line. Different products involve diverse technical categories, widely varying operating processes, and complex index systems; the parts themselves are large, numerous in type, and structurally complex. All of this makes it very difficult to design automated, intelligent quality-inspection schemes for prefabricated parts. Existing automatic detection methods struggle with tool detection in prefabricated building components: the accuracy of their results often fails to meet requirements, and manual rechecking may even be necessary.
For these reasons, many enterprises still rely on manual inspection to check the tooling quality of prefabricated parts; manual inspection is inefficient and also slows down the production process. Existing research based on computer vision focuses on detecting surface appearance defects of prefabricated parts, while research based on three-dimensional laser scanning targets deviations in specification and dimension; structural defects of prefabricated parts have received little attention, and their quality control still depends mainly on manual experience. Moreover, if quality problems found after the fact cannot be fed back promptly to the front-end production process, large batches of scrapped products may result, causing great loss to the enterprise. Only a detection method that performs online inspection and forms a feedback loop with the production process can improve an enterprise's capacity for large-scale production, yet none of the existing detection methods meets the relevant technical indexes.
In addition, some pure machine-vision defect detection methods are gradually being tested and applied to online real-time detection of tooling defects. However, such methods still suffer from insufficient recognition accuracy and poor real-time performance; the amount of data to be processed is extremely large, and the hardware requirements are high.
Disclosure of Invention
On this basis, the problems to be solved are that, in the prior art, manual quality inspection of prefabricated parts is inefficient, while automatic detection methods perform poorly, with inadequate detection accuracy and real-time performance, making it difficult to meet online detection requirements. An online detection method for missing tooling in prefabricated parts is therefore provided.
The invention discloses an online detection method for missing tooling in assembled prefabricated parts, which detects in real time, on the mold table of a production line, whether the tooling in each prefabricated part being processed in its mold meets the requirements. The real-time online detection method comprises the following steps:
S1: Construct a tool detection network that operates on real-time video. The tool detection network comprises a target detection network and a target tracking network, which respectively perform target detection and target tracking on the captured real-time video of the prefabricated parts to obtain the quantity information and position information of the tools in each part.
S2: Acquire a real-time video, shot from a fixed obliquely downward viewing angle, of the moving mold table carrying the prefabricated parts. For each prefabricated part, the side facing the direction of movement is defined as the front side, and the opposite side as the rear side.
S3: Set a rectangular target sensing area in which online detection is performed. The length of the target sensing area equals the length of a video frame of the captured real-time video, and its width pixel value W_a is calculated using the following formula:
W_a = (W_v · F_min) / (T_max · F_max)
In the formula, W_v is the width pixel value of a video frame, T_max is the maximum time the tool stays in the video, F_max is the average frame rate during real-time video processing, and F_min is the minimum number of frames the tool's center point must stay in the sensing area.
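As a minimal sketch of this sizing rule, assuming the formula as reconstructed above (the function name and sample figures are illustrative, not from the patent):

```python
def sensing_area_width(w_v: int, t_max: float, f_max: float, f_min: int) -> int:
    """Width pixel value W_a of the target sensing area.

    w_v   -- width pixel value W_v of a video frame
    t_max -- maximum time T_max (seconds) a tool stays in the video
    f_max -- average frame rate F_max (frames/s) during real-time processing
    f_min -- minimum number of frames F_min the tool's center point
             must stay inside the sensing area
    """
    # The tool crosses the frame (w_v pixels) in t_max * f_max frames, i.e.
    # it moves w_v / (t_max * f_max) pixels per frame; the area must be wide
    # enough to hold the center point for at least f_min frames.
    return round(w_v * f_min / (t_max * f_max))

# Example: a 1920 px wide frame, a 4 s crossing time, 25 fps, and a 5-frame
# minimum give sensing_area_width(1920, 4.0, 25.0, 5) == 96 pixels.
```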
The side of the target sensing area that faces the direction of mold-table movement is defined as the front edge of the target sensing area; the opposite side is its rear edge.
S4: Use the tool detection network to perform target detection and target tracking on the captured real-time video, sequentially acquiring the type information of each prefabricated part appearing in the video together with the quantity information and position information of the tools in it. The acquisition proceeds as follows:
S41: Judge whether the front side of the next arriving prefabricated part in the real-time video coincides with the front edge of the target sensing area. If so, acquire the type information of the prefabricated part currently entering the target sensing area and go to the next step; otherwise, continue waiting.
S42: Through the target detection network, sequentially perform target detection on the portion of each frame of the current real-time video corresponding to the target sensing area, extract all tools appearing in the target sensing area, and record the position information of each tool.
S43: Through the target tracking network, perform target tracking on each tool extracted from each frame by the target detection network, assigning a globally unique identity code to each newly appearing tool and calculating its position information; then return the target information carrying the identity codes and position information to the target detection network.
S44: Judge whether the rear side of the prefabricated part currently undergoing target detection and target tracking coincides with the front edge of the target sensing area. If so, count the quantity information and position information of all tools in the current prefabricated part and return to step S41 to await the target detection and target tracking of the next prefabricated part; otherwise, return to step S42 to continue the target detection and target tracking of the current prefabricated part.
S5: Query a cloud database according to the acquired type information of the current prefabricated part, and obtain the reference values of the quantity information and position information of the tools in that part, stored in the cloud database in advance. Compare the reference values with the measured quantity information and position information of the tools obtained in the previous step and judge whether they fully agree. If they do, the tooling of the current prefabricated part is judged complete; otherwise, tooling is judged missing.
As a further improvement of the present invention, in step S1 the target detection network is a network model based on YOLO V5; it completes training, testing, and verification in sequence using pictures of prefabricated parts shot at the same angle as the video obtained in step S2. The target tracking network is a network model based on the SORT algorithm, with its target-tracking life-cycle parameter tuned to a value between 1 and 5.
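As an illustration of how these two models could be wired together, a sketch assuming the public ultralytics/yolov5 hub interface and the reference SORT implementation (abewley/sort), whose max_age argument plays the role of the tracking life-cycle parameter:

```python
import numpy as np
import torch
from sort import Sort  # reference SORT tracker (abewley/sort)

# Custom-trained YOLO V5 weights; 'best.pt' is an illustrative path.
detector = torch.hub.load('ultralytics/yolov5', 'custom', path='best.pt')

# The life-cycle value of 1-5 maps onto SORT's max_age: how many frames
# a track survives without a matching detection before it is dropped.
tracker = Sort(max_age=3, min_hits=1, iou_threshold=0.3)

def track_frame(frame):
    """Detect tools in one frame and return rows of [x1, y1, x2, y2, track_id]."""
    dets = detector(frame).xyxy[0].cpu().numpy()  # x1, y1, x2, y2, conf, cls
    if dets.size == 0:
        dets = np.empty((0, 5))
    return tracker.update(dets[:, :5])            # SORT wants [x1, y1, x2, y2, score]
```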
As a further improvement of the invention, the training, verification, and testing of the target detection network proceed as follows:
(1) Acquire original images of the tooling of the various types of prefabricated parts that meet the shooting-angle requirements, and preprocess them, keeping all clear images that preserve the complete structure of the prefabricated part; these clear images form the original data set.
(2) Manually label the images in the original data set. The labeled objects are the prefabricated part and the tools on its surface, and the label information comprises the type information of the prefabricated part and the quantity information and position information of the tools in it. Store the images together with their label information to obtain a new data set, and randomly divide it into a training set, a verification set, and a test set in the ratio 8:1:1.
(3) Train the constructed target detection network over multiple rounds on the training set, and after each round verify it on the verification set, obtaining the loss values of the target detection network in the training and verification stages respectively. Stop training when the per-round loss on the training set is still decreasing while the loss on the verification set is increasing, and store the five network models with the lowest loss values obtained during the training stage.
(4) Test the five stored network models on the test set, and take the network model with the highest mAP in the test results as the final target detection network; a schematic version of this selection procedure is sketched below.
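A schematic version of steps (3) and (4), with the per-round training, verification, and mAP evaluation left as abstract helpers since the patent does not fix a framework (all names here are placeholders):

```python
def train_and_select(train_one_round, validate, test_map, max_rounds=300):
    """Early-stop on rising verification loss, keep the five lowest-loss
    checkpoints, and return the one with the highest test-set mAP."""
    saved = []                                  # (train_loss, checkpoint_path)
    prev_train = prev_val = float('inf')
    for _ in range(max_rounds):
        train_loss, ckpt = train_one_round()    # one round over the training set
        val_loss = validate(ckpt)               # verification-set loss
        saved.append((train_loss, ckpt))
        # Stop once training loss still falls while verification loss rises.
        if train_loss < prev_train and val_loss > prev_val:
            break
        prev_train, prev_val = train_loss, val_loss
    top_five = sorted(saved)[:5]                # five lowest training-stage losses
    return max(top_five, key=lambda pair: test_map(pair[1]))[1]
```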
As a further improvement of the present invention, in step S2 the shooting angle of the acquired real-time video has a depression angle of less than 90°, the depression angle being the angle between the viewing direction of the camera and the horizontal. The shooting equipment may be installed directly ahead of, or on either side of, the direction of mold-table movement; during actual image capture, shooting may be performed from directly in front of the mold table or from its two sides.
As a further improvement of the invention, the target sensing area set in step S3 is a virtual area corresponding to a real defect detection area through which the mold table passes on the actual inspection floor; the region captured from the fixed viewing angle in the real-time video is the defect detection area, whose range covers at least the whole target sensing area.
As a further improvement of the present invention, in steps S41 and S44, whether the front side or rear side of the prefabricated part coincides with the front edge of the target sensing area is determined as follows:
(1) A group of photoelectric sensors is arranged in the defect detection area, with the detection direction of each sensor aligned with the straight line on which the front edge of the corresponding target sensing area lies. The mounting positions of the photoelectric sensors satisfy the condition that when any prefabricated part on the mold table passes a sensor, the sensor is blocked and its state signal changes. The state signal when the photoelectric sensor is unblocked is defined as "1", and when it is blocked as "0".
(2) Before any prefabricated part reaches the defect detection area, the state signal of the photoelectric sensor is 1. When a prefabricated part enters the defect detection area, its front side is the first to coincide with the front edge of the target sensing area; at that moment the photoelectric sensor is just blocked by the prefabricated part, and its state signal switches from 1 to 0. It is thereby determined that the front side of the prefabricated part coincides with the front edge of the target sensing area.
(3) Before the prefabricated part leaves the defect detection area, the state signal of the photoelectric sensor remains 0. When the prefabricated part completely leaves the defect detection area, its rear side coincides with the front edge of the target sensing area; at that moment the photoelectric sensor returns to the unblocked state, and its state signal switches from 0 to 1. It is thereby determined that the rear side of the prefabricated part coincides with the front edge of the target sensing area. The sketch below maps these signal transitions to the two boundary events.
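A minimal sketch of this edge-watching logic, with `sensor.read()` as a hypothetical poll of the 0/1 state signal:

```python
def watch_part_boundaries(sensor, on_front_edge, on_rear_edge):
    """Fire callbacks on the 1->0 and 0->1 transitions of the state signal."""
    prev = 1                      # unblocked before any prefabricated part arrives
    while True:
        state = sensor.read()     # hypothetical poll: 1 = unblocked, 0 = blocked
        if prev == 1 and state == 0:
            on_front_edge()       # front side meets the sensing area's front edge
        elif prev == 0 and state == 1:
            on_rear_edge()        # rear side has just cleared the front edge
        prev = state
```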
As a further improvement of the present invention, in step S41, the type information of the prefabricated part currently entering the target sensing area is acquired as follows:
(1) An RFID chip pre-storing the type information of the prefabricated part in the mold is mounted on the side of each mold on the mold table.
(2) An RFID card reader for reading the data stored in the RFID chips is arranged in the defect detection area. Its installation position satisfies the condition that when the front side of a prefabricated part coincides with the front edge of the target sensing area, the RFID chip on the side of that part's mold is close enough to the reader for the chip's internal information to be read; an illustrative read routine is sketched below.
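How the reader is driven depends entirely on the RFID hardware; the following is an illustrative sketch only, with `reader.read_tag` standing in for whatever SDK call the deployment provides:

```python
def read_part_type(reader, timeout_s=1.0):
    """Return the prefabricated-part type stored in the nearby RFID chip.

    `reader` is a hypothetical wrapper over the card reader's SDK; the chip
    is assumed to store the type information as a UTF-8 string.
    """
    tag = reader.read_tag(timeout=timeout_s)     # None if no chip is in range
    return tag.payload.decode('utf-8') if tag is not None else None
```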
As a further improvement of the present invention, in step S4, while target detection and target tracking are performed on the video frames of each prefabricated part, the images in the corresponding video frames are archived, classified by the production-line number of each prefabricated part; the archived data are the image frames of the video stream in which the tooling overlaps the target sensing area.
As a further improvement of the present invention, in step S5, the cloud database stores feature information extracted from the BIM models of all models of prefabricated parts to be produced on the production line. When the production line begins producing and detecting, the acquired quantity information and position information of the tools in the first piece of each type of prefabricated part are compared with the corresponding ideal parameters in the BIM model. When the first piece meets the error requirements for every parameter, the quantity information and position information of the tools detected in it are taken as the reference values for subsequent missing-tooling detection; when the produced first piece does not meet the error requirements, the current prefabricated part is discarded and a new first piece is produced and qualified.
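A sketch of this first-piece qualification step, assuming tools are matched pairwise after sorting and that the position tolerance comes from the BIM error requirements (the data layout and names are illustrative):

```python
def qualify_first_piece(measured_positions, bim_positions, pos_tol_px):
    """Return the measured positions as reference values if the first piece
    meets the BIM error requirements, else None (part is discarded)."""
    if len(measured_positions) != len(bim_positions):
        return None                              # wrong tool count: scrap
    for (mx, my), (ix, iy) in zip(sorted(measured_positions),
                                  sorted(bim_positions)):
        if abs(mx - ix) > pos_tol_px or abs(my - iy) > pos_tol_px:
            return None                          # a tool is out of tolerance
    return measured_positions                    # becomes the stored reference
```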
As a further improvement of the present invention, the online detection method further comprises:
When missing tooling is detected in the current prefabricated part, the movement of the mold table is stopped and an alarm signal is sent to the front end of the production line. Technicians then use the archived quantity information, position information, and photographs of the tools in the current prefabricated part to make the corresponding decisions on problem review and the scrapping of defective products.
The online detection method for missing tooling in assembled prefabricated parts provided by the invention has the following beneficial effects:
1. The missing-tooling online detection method disclosed by the invention fuses machine vision with sensor technology. With a photoelectric sensor, RFID chips, and an RFID card reader installed at suitable positions around the target sensing area, the exact position of every prefabricated part can be obtained in time without relying on image processing, and the type information of every part can be obtained in a targeted manner. This reduces the data-processing load on the hardware and guarantees the real-time performance of the tooling detection process. Because the method places only modest demands on image-processing hardware and responds in real time, the tooling quality of prefabricated parts can be inspected online, adapting to existing production lines and achieving detection without stopping production.
2. In the missing-tooling online detection method provided by the invention, a YOLO V5-based network model serves as the target detection network and a SORT network model as the target tracking network, realizing frame-by-frame feature extraction and target tracking and thus acquiring the quantity information and position information of the tools in each prefabricated part more accurately. In addition, with fill lights installed sensibly and the camera position adjusted, the tools stand out from the background in the acquired images, further improving recognition accuracy.
3. The invention sets a target sensing area in the image-processing pipeline as the region of interest in each video frame, which improves the system's accuracy on the tool-extraction problem and overcomes its insufficient generalization on high-resolution images. It also reduces the computational load on the processing unit and guarantees the system's real-time performance.
4. The invention can also send out an alarm in time when the tool defects of the prefabricated part products on the production line are detected, and stop the operation of the production line, thereby reducing the defect rate of the products and the production loss of enterprises.
Drawings
Fig. 1 is a flowchart of the steps of the online detection method for missing tooling in assembled prefabricated parts according to embodiment 1 of the present invention;
FIG. 2 is a flowchart of a process of completing training, verification and testing by a target detection network in embodiment 1 of the present invention;
fig. 3 is a flowchart of a process of tool extraction performed by the target detection network in embodiment 1 of the present invention;
FIG. 4 is a logic diagram of a process of detecting defects in a tool in embodiment 1 of the present invention;
fig. 5 is an example of manual tool labeling on a data-set image in the performance test of embodiment 1 of the present invention;
FIG. 6 is a diagram of the basic architecture of the YOLO V5 network model in embodiment 1 of the present invention;
FIG. 7 shows the tool target detection result for test sample 1 in embodiment 1 of the present invention;
FIG. 8 shows the tool target detection result for test sample 2 in embodiment 1 of the present invention;
FIG. 9 shows the tool target detection result for test sample 3 in embodiment 1 of the present invention;
FIG. 10 shows the tool target detection result for test sample 4 in embodiment 1 of the present invention;
FIG. 11 shows the tool target detection result for test sample 5 in embodiment 1 of the present invention;
FIG. 12 shows the tool target detection result for test sample 6 in embodiment 1 of the present invention;
fig. 13 is a schematic structural diagram of a tooling defect detection system for an assembled prefabricated part in embodiment 1 of the present invention;
fig. 14 is a system topology diagram of a tooling defect detection system for an assembled prefabricated part in embodiment 1 of the present invention;
FIG. 15 is a block diagram of a processing module according to embodiment 1 of the present invention;
fig. 16 is a block diagram of a feature extraction unit according to embodiment 1 of the present invention;
Labeled as: 1. video acquisition component; 2. photoelectric sensor; 3. type information identification component; 4. processing module; 5. mold table; 6. prefabricated part; 7. alarm; 11. mounting frame; 12. camera; 13. fill light; 21. laser transmitter; 22. laser receiver; 31. RFID chip; 32. RFID card reader; 41. position acquisition unit; 42. standard parameter acquisition unit; 43. video processing unit; 44. feature extraction unit; 45. feature comparison unit; 51. mold; 61. tool; 441. target detection subunit; 442. target tracking subunit.
Detailed Description
The technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are only a part of the embodiments of the present invention, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
Unless defined otherwise, all technical and scientific terms used herein have the same meaning as commonly understood by one of ordinary skill in the art to which this invention belongs. The terminology used in the description of the invention herein is for the purpose of describing particular embodiments only and is not intended to be limiting of the invention. As used herein, the term "and/or" includes any and all combinations of one or more of the associated listed items.
Example 1
This embodiment provides an online method for detecting missing tools 61 in prefabricated parts 6, used to detect in real time, on the mold table 5 of a production line, whether the tooling 61 in each prefabricated part 6 processed in its mold 51 meets the requirements. As shown in fig. 1, the real-time online detection method comprises the following steps:
S1: Construct a tool 61 detection network that operates on real-time video. The tool 61 detection network comprises a target detection network and a target tracking network, which respectively perform target detection and target tracking on the captured real-time video of the prefabricated parts 6 to obtain the quantity information and position information of the tools 61 in each part.
The problem this embodiment must solve is the inspection of the prefabricated parts 6 on the moving mold table 5 of a running production line. A network model is therefore needed that can extract the features of the tools 61 in the prefabricated parts 6 passing on the mold table 5 from video stream data. This embodiment splits the tool 61 extraction problem into two parts. The first is detecting all tools 61 appearing in each video frame. The second is tracking the tools 61 appearing in different frames, so as to decide whether a tool 61 in the current frame is the same as one in the previous frame and, if not, assign the newly appearing tool 61 a globally unique identity code; the exact number and positions of all tools 61 in one prefabricated part 6 can then be counted. These two pieces of work are realized by a target detection network and a target tracking network respectively.
Specifically, in this embodiment the target detection network is a network model based on YOLO V5, and the target tracking network is a network model based on the SORT algorithm. SORT is a multi-object tracking algorithm and serves here as the network model for tracking each tool 61 in the prefabricated part 6. Considering the match between the size of the target sensing area and the movement speed of the mold table 5, the target-tracking life-cycle parameter of the target tracking network is tuned to a value between 1 and 5 in this embodiment.
The YOLO V5 network model used in this embodiment is a classic computer-vision target detection network, on which the basic architecture of the target detection network for identifying each tool 61 in the prefabricated part 6 is built. The training, testing, and verification of the network model are completed with real images whose shooting angle and quality match those of the actual video stream data.
In this embodiment, as shown in fig. 2, the training, verifying and testing process of the target detection network is specifically as follows:
(1) Acquire original images of the tooling 61 of the various types of prefabricated parts that meet the shooting-angle requirements, and preprocess them; keep all sharp images that preserve the complete structure of the prefabricated part 6. These sharp images constitute the original data set.
The requirements for the images in the original dataset are as follows:
a. The captured images should have the same shooting angle as the video stream data later captured on the production line. This keeps the objects in the training-set data consistent with those encountered in actual processing, which in turn guarantees the training effect of the network model.
b. The shooting angle of the acquired real-time video, and of the images in the original data set, has a depression angle of less than 90°, and the shooting equipment is installed directly ahead of, or on either side of, the direction of mold-table 5 movement. This embodiment must extract each tool 61 in the prefabricated part 6; the tools 61 are metal members, mainly bolt-like metal parts, protruding from the upper surface of the prefabricated part 6, clearly distinguishable at an obliquely downward angle but hard to distinguish from vertically above, so a depression angle of 30-60° is preferred here. The installation position of the image-capturing equipment should also be chosen so that, at this angle, the individual tools 61 in the part overlap as little as possible.
c. The images should remain clear and complete. Blurred images and those with severe noise, ghosting, poor lighting, or overexposure are removed from the acquired set. The images are also cropped to preserve the complete structure of the prefabricated part 6 as far as possible while removing the background outside it, reducing the interference of irrelevant objects with network-model training.
d. The distribution of prefabricated parts 6 across the sample images should match the frequency with which each part occurs on the actual production line; that is, the more frequently a type of prefabricated part 6 is produced, the larger the number of samples of that part in the original data set should be.
(2) Manually label the images in the original data set. The labeled objects are the prefabricated part 6 and the tools 61 on its surface, and the label information comprises the type information of the prefabricated part 6 and the quantity information and position information of the tools 61 in it. Store the images together with their label information to obtain a new data set, and randomly divide it into a training set, a verification set, and a test set in the ratio 8:1:1.
(3) Train the constructed target detection network over multiple rounds on the training set, and after each round verify it on the verification set, obtaining the loss values of the network in the training and verification stages respectively. Stop training when the per-round loss on the training set is still decreasing while the loss on the verification set is increasing, and store the five network models with the lowest loss values obtained during the training stage.
(4) Test the five stored network models on the test set, and take the network model with the highest mAP in the test results as the final target detection network.
S2: Acquire a real-time video, shot from a fixed obliquely downward viewing angle, of the moving mold table 5 carrying the prefabricated parts 6. For each prefabricated part 6, the side facing the direction of movement is defined as the front side and the opposite side as the rear side. The shooting angle of the acquired real-time video is consistent with that of the sample images in the training set; the two may be different real images of prefabricated parts 6 acquired during trial production and actual production respectively. Such images and videos are typically acquired with high-resolution, high-frame-rate industrial cameras.
S3: Set a rectangular target sensing area for online detection. The side of the target sensing area facing the direction of mold-table 5 movement is defined as its front edge; the opposite side is its rear edge.
Generally, the sample data acquired for target detection and target tracking are shot by an industrial camera whose wide field of view may simultaneously cover several molds 51 of different prefabricated parts 6 on the mold table 5. This complicates the processing done by the detection and tracking models: the network model may be unable to distinguish different prefabricated parts 6 accurately, or the large scale of the processed data may degrade the accuracy and real-time performance of feature extraction. To solve this, the present embodiment introduces the concept of a target sensing area when acquiring the real-time video.
The target sensing area is a virtual area corresponding to a real defect detection area through which the mold table 5 passes on the actual inspection floor; part of the region captured from the fixed viewing angle in the real-time video is the defect detection area, whose range covers at least the whole target sensing area. As the mold table 5 carries each prefabricated part 6 forward, the parts pass through the target sensing area in turn, and the samples fed to the target detection network and target tracking network are only the partial images of each video frame corresponding to the target sensing area.
From the above it can be seen that the setting of the target sensing area strongly affects the performance of the online detection method. If the area is too small, the tools 61 in the prefabricated part 6 move through too quickly to be captured, and missed detections occur. If it is too large, the many targets interfere with one another and individual objects become hard to distinguish; moreover, the larger the target sensing area, the greater the data volume during processing, which burdens the hardware and degrades the real-time performance of tool 61 detection.
Weighing these factors, this embodiment sets the size of the target sensing area as follows: the length of the target sensing area equals the length of a video frame of the captured real-time video, and its width pixel value W_a is calculated using the following formula:
W_a = (W_v · F_min) / (T_max · F_max)
In the formula, W_v is the width pixel value of a video frame, T_max is the maximum time the tool 61 stays in the video, F_max is the average frame rate during real-time video processing, and F_min is the minimum number of frames the center point of the tool 61 must stay in the sensing area.
In practice, the length of the target sensing area is generally greater than the width of the prefabricated part 6 or the mold table 5; that is, the video should cover not only the mold table 5 or prefabricated part 6 but also the areas on both sides, to ensure that every region of the prefabricated part 6 is included and inspected. The width of the target sensing area is generally smaller than the length of a single prefabricated part 6; this guarantees that a single part takes long enough to pass through the target sensing area, while effectively reducing the image size per frame, lowering the data-processing load, and preserving the real-time performance of the network models.
S4: Use the tool 61 detection network to perform target detection and target tracking on the captured real-time video, sequentially acquiring the type information of each prefabricated part 6 appearing in the video together with the quantity information and position information of the tools 61 in it. As shown in fig. 3, the tool 61 detection network acquires this information as follows:
S41: Judge whether the front side of the next arriving prefabricated part 6 in the real-time video coincides with the front edge of the target sensing area. If so, acquire the type information of the prefabricated part 6 currently entering the target sensing area and go to the next step; otherwise, continue waiting.
S42: Through the target detection network, sequentially perform target detection on the portion of each frame of the current real-time video corresponding to the target sensing area, extract all tools 61 appearing in the target sensing area, and record the position information of each tool 61.
S43: Through the target tracking network, perform target tracking on each tool 61 extracted from each frame by the target detection network, assigning a globally unique identity code to each newly appearing tool 61, and return the target information carrying the identity codes to the target detection network.
S44: Judge whether the rear side of the prefabricated part 6 currently undergoing target detection and target tracking coincides with the front edge of the target sensing area. If so, count the quantity information and position information of all tools 61 in the current prefabricated part 6 and return to step S41 to await the target detection and target tracking of the next prefabricated part 6; otherwise, return to step S42 to continue the target detection and target tracking of the current prefabricated part 6. A schematic loop covering these four sub-steps is sketched below.
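Putting S41-S44 together for one prefabricated part 6: a schematic loop assuming an OpenCV-style capture, the sensor poll sketched earlier, and a `track_frame` detector/tracker wrapper; the sensing area is simplified to a horizontal band of each frame:

```python
def inspect_one_part(video, sensor, roi_top, roi_bottom, track_frame):
    """Return the tool count and per-tool positions for one prefabricated part."""
    tools = {}                                  # track id -> last center position
    while sensor.read() == 1:                   # S41: wait for the front side
        video.grab()                            # keep the stream moving
    while sensor.read() == 0:                   # S42/S43: part inside the area
        ok, frame = video.read()
        if not ok:
            break
        roi = frame[roi_top:roi_bottom, :]      # target sensing area of the frame
        for x1, y1, x2, y2, tid in track_frame(roi):
            tools[int(tid)] = ((x1 + x2) / 2, (y1 + y2) / 2 + roi_top)
    return len(tools), tools                    # S44: quantity and positions
```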
Whether the front side or rear side of the prefabricated part 6 coincides with the front edge of the target sensing area is judged as follows:
(1) A group of photoelectric sensors 2 is arranged in the defect detection area, with the detection direction of each photoelectric sensor 2 aligned with the straight line on which the front edge of the corresponding target sensing area lies. The mounting positions of the photoelectric sensors satisfy the condition that when any prefabricated part 6 on the mold table 5 passes a photoelectric sensor 2, the sensor is blocked and its state signal changes. The state signal when the photoelectric sensor 2 is unblocked is defined as "1", and when it is blocked as "0".
(2) Before any prefabricated part 6 reaches the defect detection area, the state signal of the photoelectric sensor 2 is 1. When a prefabricated part 6 enters the defect detection area, its front side is the first to coincide with the front edge of the target sensing area; at that moment the photoelectric sensor 2 is just blocked by the prefabricated part 6, and its state signal switches from 1 to 0. It is thereby determined that the front side of the prefabricated part 6 coincides with the front edge of the target sensing area.
(3) Before the prefabricated part 6 leaves the defect detection area, the state signal of the photoelectric sensor 2 remains 0. When the prefabricated part 6 completely leaves the defect detection area, its rear side coincides with the front edge of the target sensing area; at that moment the photoelectric sensor 2 returns to the unblocked state, and its state signal switches from 0 to 1. It is thereby determined that the rear side of the prefabricated part 6 coincides with the front edge of the target sensing area.
The photoelectric sensor 2 in this embodiment makes it convenient to distinguish the individual prefabricated parts 6 in the video. In this embodiment the tool 61 detection network performs two tasks: tool 61 extraction and tool 61 tracking. In practice, a single mold table 5 simultaneously carries several molds 51 of different types of prefabricated parts 6, in which different part types are produced. During feature recognition and extraction, the different prefabricated parts 6 must be distinguished reliably, so that the tools 61 of two prefabricated parts 6 are not counted against the same part. The usual solution is to add another network model, but with machine-vision techniques alone that approach suffers from insufficient detection accuracy and a heavy dependence on image quality. This embodiment instead uses the photoelectric sensor to solve the problem of distinguishing the prefabricated parts 6 neatly.
This embodiment exploits the fact that the prefabricated parts 6 on the mold table 5 generally rise above the table surface, and that a gap normally exists between adjacent prefabricated parts 6. A set of photoelectric sensors is therefore installed at a suitable height. As the mold table 5 moves, a passing prefabricated part 6 blocks the sensor and produces one state signal, while a passing gap leaves the sensor unblocked and produces the other; by detecting these different state signals, the passage of a prefabricated part 6 can be determined. To suit the working mode of the tool 61 detection network in this embodiment, the installation position of the photoelectric sensor is constrained so that its detection direction coincides with the straight line of the front edge of the corresponding target sensing area. That is: the moment a prefabricated part 6 reaches the photoelectric sensor, it is judged to have reached the target sensing area and the frames of the surveillance video are extracted for tool 61 detection and tracking; the moment the prefabricated part 6 has completely passed the sensor, it is judged to have left the target sensing area, the detection and tracking of the tools 61 in that part is concluded, and the detection result is output.
Besides extracting the tool 61 information in each prefabricated part 6, this embodiment must also acquire the type of each part. Different types of prefabricated parts 6 differ in the number and positions of their tools 61, so the type of each prefabricated part 6 is needed to determine the theoretical quantity and position information of its tooling, which is then compared with the actual detection result.
Specifically, in the present embodiment, the method of acquiring the type information of the prefabricated part 6 currently entering the target sensing area is as follows:
(1) An RFID chip pre-storing the type information of the prefabricated part 6 in the mold 51 is mounted on the side of each mold 51 on the mold table 5.
(2) An RFID card reader for reading the data stored in the RFID chips is arranged in the defect detection area. Its installation position satisfies the condition that when the front side of a prefabricated part 6 coincides with the front edge of the target sensing area, the RFID chip on the side of that part's mold 51 is close enough to the reader for the chip's internal information to be read.
S5: Query a cloud database according to the acquired type information of the current prefabricated part 6, and obtain the reference values of the quantity information and position information of the tools 61 in that part, stored in the cloud database in advance. Compare the reference values with the measured quantity information and position information of the tools 61 obtained in the previous step and judge whether they fully agree. If they do, the tooling 61 of the current prefabricated part 6 is judged complete; otherwise, tooling 61 is judged missing. During detection, the allowed horizontal and vertical deviations of the position information are both equal to the width pixel value W_a of the target sensing area.
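A compact sketch of this acceptance test, applying the W_a tolerance on both axes as stated; the greedy matching is an illustrative choice, not prescribed by the patent:

```python
def tooling_complete(measured, reference, w_a):
    """True if tool quantities match and each measured position lies within
    W_a of a distinct reference position both horizontally and vertically."""
    if len(measured) != len(reference):
        return False                             # quantity mismatch: missing tooling
    unmatched = list(reference)
    for mx, my in measured:
        match = next((r for r in unmatched
                      if abs(mx - r[0]) <= w_a and abs(my - r[1]) <= w_a), None)
        if match is None:
            return False                         # no reference tool close enough
        unmatched.remove(match)
    return True
```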
In this embodiment, the overall logic of online detection is shown in fig. 4 and involves four objects: a data acquisition end, a data processing end, a cloud server, and a defect alarm module. The online detection method runs across these four objects as follows:
At the data acquisition end, two types of data are collected: (1) the video stream data associated with each prefabricated part 6; (2) the type information of each prefabricated part 6 acquired by the RFID module.
At the data processing end, the work comprises three parts: 1. obtain from the cloud server, according to the acquired type information, the standard parameter information of each type of prefabricated part 6 (chiefly the parameter information of the tooling 61, namely quantity information and position information); 2. perform target detection and target tracking on the video stream data of each prefabricated part 6 and determine the actual quantity and position information of the tools 61 in it; 3. compare the tool 61 information obtained by processing the video stream with the standard parameter information fetched from the cloud storage module, judge whether the two agree, and thereby determine whether the tooling 61 in the prefabricated part 6 is defective.
The detection result of the data processing end is also sent to the defect alarm module, which flags faults in the production and quality-inspection process of products on the production line.
In this embodiment, the BIM models of all models of prefabricated parts 6 to be produced on the production line are stored in the cloud server. When the production line begins producing and detecting, the acquired quantity information and position information of the tools 61 in the first piece of each type of prefabricated part 6 are compared with the corresponding ideal parameters in the BIM model. When the first piece meets the error requirements for every parameter, the quantity information and position information of the tools 61 detected in it are taken as the reference values for subsequent missing-tooling 61 detection; when the produced first piece does not meet the error requirements, the current prefabricated part 6 is discarded and a new first piece is produced and qualified.
In this embodiment, although the BIM model is stored in the cloud server, the missing-tooling 61 detection method does not compare the tool 61 detection results of the prefabricated parts 6 directly against the BIM parameters; instead, a first piece judged to be a qualified product serves as the standard for subsequent tooling 61 inspection. That is, later products are compared not with the BIM model but with the first piece judged qualified. This yields data that better match the actual state of the production line and avoids the situation where the idealized BIM model does not suit the specific production state during detection, which could even cause frequent false alarms in production and detection.
In addition, while target detection and target tracking are performed on the video frames of each prefabricated part 6, the images in the corresponding video frames are archived, classified by the production-line number of each prefabricated part 6; the archived data are the frame-by-frame images of the video stream or sampled images acquired at a specified sampling frequency.
Meanwhile, when a missing tool 61 is detected in the current prefabricated part 6, the movement of the die table 5 is stopped and an alarm signal is sent to the front end of the production line; the technicians then make the corresponding decisions on problem review and the scrapping of the defective product according to the acquired quantity information, position information and data archive of the tools 61 in the current prefabricated part 6.
In order to verify the effectiveness of the method provided by this embodiment, the performance of the target detection network is verified through testing. The specific test process comprises five steps: data acquisition, data preprocessing, target detection model building, model training, and model testing and analysis.
1. Data acquisition
The data required for the experiment were acquired by manual shooting and comprise 800 original images in total, each with a resolution of 3024 × 4032; these images form the original data set of this embodiment.
2. Data pre-processing
(1) Data cleansing
Blurred images, images with ghosting, and images of otherwise poor quality are removed from the original data set, leaving 736 images.
(2) Data set partitioning
The raw data set is divided into a training set, a validation set, and a test set: the training set contains 590 images, the validation set 73 images, and the test set 73 images (approximately an 8:1:1 ratio).
(3) Image compression
The original images have an ultra-high resolution: they occupy too much storage space and contain too much noise, which is unfavorable for model training. The original images are therefore compressed, and the resolution of the compressed images is 416 × 416.
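Steps (2) and (3) together amount to a random split followed by a resize pass, as in the sketch below; the paths, the random seed and the output layout are illustrative assumptions (736 images at 8:1:1 gives approximately the 590/73/73 division above).

```python
# Sketch of the split (step (2)) and compression (step (3)).
import os
import random
import cv2

def split_and_compress(image_paths, out_root="dataset", seed=0):
    paths = sorted(image_paths)
    random.seed(seed)
    random.shuffle(paths)
    n = len(paths)
    cut1, cut2 = int(0.8 * n), int(0.9 * n)  # roughly 8:1:1
    splits = {"train": paths[:cut1], "val": paths[cut1:cut2], "test": paths[cut2:]}
    for name, subset in splits.items():
        os.makedirs(os.path.join(out_root, name), exist_ok=True)
        for p in subset:
            img = cv2.resize(cv2.imread(p), (416, 416))  # 3024x4032 -> 416x416
            cv2.imwrite(os.path.join(out_root, name, os.path.basename(p)), img)
    return splits
```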
(4) Manually labeling data sets
As shown in fig. 5, the images in the data set processed in the previous step are manually labeled. The labeled objects are the tools 61 shown in the images, which are divided into two classes, Pillar and FixedPillar: the former refers to the columnar connecting tools 61 appearing in the images of the prefabricated parts, and the latter to the two larger handle-shaped fixing tools 61 in the prefabricated parts. In the data set of this embodiment, the two classes of objects number 5183 in total: 4630 Pillar and 553 FixedPillar.
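Labels of this kind are commonly stored in the YOLO text format consumed by YOLO V5: one .txt file per image and one line per object, giving a class index followed by the normalized box center and size. The file name, the coordinate values and the class-index assignment (Pillar = 0, FixedPillar = 1) below are illustrative assumptions; actual label files contain only the numeric lines.

```text
# img_000123.txt (illustrative): class_id x_center y_center width height
0 0.412 0.388 0.031 0.105
0 0.478 0.391 0.030 0.102
1 0.250 0.610 0.120 0.180
```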
3. Target detection model building
In this embodiment, a YOLO V5 target detection model is built; its network architecture diagram is shown in fig. 6.
4. Model training
The training process is accelerated by using a model pre-trained on the COCO data set; the training set prepared in advance is loaded for training, and the number of training epochs is set to 50.
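Assuming the openly available ultralytics/yolov5 repository is used, the run could be launched as below. The dataset configuration file name tooling.yaml and the batch size are assumptions; the 416-pixel image size, the 50 epochs and the COCO-pretrained weights come from this embodiment.

```python
# Sketch of the training invocation, assuming the ultralytics/yolov5
# repository is checked out and the current directory is its root.
import subprocess

subprocess.run([
    "python", "train.py",
    "--img", "416",             # matches the 416x416 compressed images
    "--batch", "16",            # assumed batch size
    "--epochs", "50",           # the embodiment trains for 50 epochs
    "--data", "tooling.yaml",   # assumed dataset config naming the two classes
    "--weights", "yolov5s.pt",  # COCO-pretrained weights accelerate training
], check=True)
```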
5. Model testing and analysis
In this embodiment, six test samples are set for testing:
(1) test sample 1: image resolution 776 × 734; the prefabricated part 6 in the image contains 14 tools 61;
(2) test sample 2: image resolution 855 × 844; the prefabricated part 6 contains 16 tools 61;
(3) test sample 3: image resolution 1743 × 363; the prefabricated part 6 contains 17 tools 61;
(4) test sample 4: image resolution 990 × 550; the prefabricated part 6 contains 13 tools 61; test sample 4 is a partial image of test sample 3;
(5) test sample 5: image resolution 1033 × 349; the prefabricated part 6 contains 17 tools 61;
(6) test sample 6: image resolution 1647 × 460.
The results of the individual test samples identified by the network model in this embodiment are shown in figs. 7-12. Analysis of these test results shows the following:
(1) Test samples 1 and 2 are detected well: all the tools 61 they contain are completely identified. This shows that the target detection model in this embodiment performs well.
(2) Test sample 3 exhibits missed detections, whereas an intercepted part of its area (namely test sample 4) is detected well. This occurs because, when the image resolution is too large, the model performs Resize preprocessing on the test image to adjust its resolution to 416 × 416, which distorts the image; that is, the target detection model provided by this embodiment generalizes poorly to high-resolution images. This also shows that setting the "target sensing area" as the processing area of the actual target detection model, as done in the method of this embodiment, is a correct choice: the image does not need to be cut, and since image cutting and processing would increase the amount of data handled by the network model and reduce its real-time performance, this processing method improves the processing speed and real-time performance of the detection method and can also improve its detection accuracy to a certain extent.
(3) Test sample 5 exhibits a false detection: a light-reflecting portion of the surface of the prefabricated part 6 is erroneously recognized as a target tool 61. This shows that the method provided in this embodiment still depends to a certain extent on image quality, so the quality of the acquired video or images of the prefabricated part 6 should be improved as much as possible, in particular by using higher-performance industrial cameras and providing better lighting of the viewing area, for example by using the fill light 13 at multiple angles in the defect detection area to reduce local reflections and shadows.
(4) Test sample 6 exhibits missed detections: of the target tools 61 with a high degree of overlap, only one is detected. This result reflects that the detection of highly overlapping targets by the network model provided in this embodiment still needs improvement. In this embodiment or other embodiments, the installation position of the camera may be changed so that the overlap between the tools 61 in the acquired images is as small as possible.
The detection accuracy of the YOLO v5-based target detection model on the Pillar and FixedPillar targets in the verification test is counted, and the average precision (AP) and mean average precision (mAP) are as follows:
table 1: detection accuracy of the target detection model in the embodiment
Type (B) AP(Pillar) AP(fixedpillar) mAP
Results 99.3% 98.3% 98.8%
As can be seen from the above, the network model in the method provided by this embodiment achieves a detection accuracy exceeding 98% on targets of different types; it therefore has high practical value and can be popularized and applied.
In addition, where the performance of the hardware allows, multiple sets of cameras can be arranged for viewing in this embodiment or other embodiments; the images at different angles are then identified, and the identification results at the different angles are fused by weighting to obtain more accurate information on the number of the tools 61, eliminating the influence of tool 61 overlap on the detection accuracy.
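As a simple illustration of such fusion, per-camera tool counts could be combined by a confidence-weighted vote, as in the sketch below; the weighting scheme and all names are assumptions, since the embodiment calls for weighted fusion without fixing a formula.

```python
# Sketch of weighted fusion of per-camera recognition results; the
# weighting scheme is an illustrative assumption.

def fuse_counts(counts, weights):
    """counts: tool counts per camera; weights: per-camera confidence."""
    avg = sum(c * w for c, w in zip(counts, weights)) / sum(weights)
    return round(avg)  # fused estimate of the number of tools

# e.g. fuse_counts([14, 15, 14], [0.5, 0.2, 0.3]) returns 14
```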
Embodiment 2
This embodiment provides a deployment scheme for a device applying the online detection method of Embodiment 1; the tool 61 defect detection system for the prefabricated part 6 provided in this scheme can fully implement the online detection method of Embodiment 1. Specifically, as shown in figs. 13 and 14, the system provided in this embodiment comprises: a video acquisition component 1, a photoelectric sensor 2, a type information identification component 3, and a processing module 4.
The tool 61 defect detection system is mainly used to detect whether the tools 61 of the prefabricated parts 6 in each mold 51 on the die table 5 passing through the defect detection area are defective. The die table 5 in this embodiment is a platform with a moving assembly at the bottom, the moving state of which is controlled by a motion controller. The molds 51 are mounted on the upper surface of the platform with gaps between adjacent molds 51; the molds 51 differ in size and height. The moving assembly drives the platform and the molds 51 on it along the production line, so that the die table 5 moves and the prefabricated parts 6 produced in the molds 51 pass through the defect detection area, where the tool 61 defect detection process is completed.
In this embodiment, the video acquisition component 1 includes a mounting frame 11, a camera 12, and a fill light 13. The mounting frame 11 is positioned on the front side of the defect detection area; the camera 12 and the fill light 13 are fixed on the mounting frame 11 above the defect detection area; the depression angle of the viewing direction of the camera 12 is less than 90°. The mounting frame 11 in this embodiment is a gantry structure, and the camera 12 and the fill light 13 are fixed on the cross bar at its top; the camera 12 frames the scene obliquely downward. During its movement, the die table 5 passes through the mounting frame 11 and leaves the area where the camera 12 and the fill light 13 are mounted, and the camera 12 completes the whole image capture of the die table 5. When the camera 12 shoots the die table 5 passing through the mounting frame 11, it may shoot from directly in front of the die table 5, from its two sides, or at other angles. Only two conditions need to be satisfied: (1) an oblique rather than vertically downward viewing angle is used, ensuring that the structure of the tools 61 and the surface of the prefabricated part 6 can be clearly distinguished in the acquired images; (2) the horizontal angle of the camera 12 around the die table 5 during framing is adjusted to find the optimal viewing angle that minimizes the overlap between the tools 61 in each prefabricated part 6.
The fill light 13 in this embodiment mainly overcomes the problem that insufficient light in the defect detection area degrades the image capture quality and hence the detection accuracy of the tools 61. In other embodiments, to further improve the illumination, a greater number of fill lights 13 may be arranged in other areas below the cross bar of the mounting frame 11, so that the brightness of each area on the die table 5 remains consistent during image capture and local light reflection does not occur.
The video acquisition component 1 is used to acquire real-time video stream data of the objects (i.e., the die table 5) passing through the defect detection area. The viewing angle of the video stream data acquired by the video acquisition component 1 is inclined downward; the viewing area of the video acquisition component 1 contains a target sensing area, which is the area used for feature extraction.
It should be noted that the target sensing area is not a physical area but a corresponding virtual area in the video stream; within this area, the extraction of the feature information of a specific object (i.e., the tool 61) in each frame of the video stream can be completed by the network model. Although the target sensing area is not a physical area, it is generally fixed in the video stream data and corresponds to a real location within the defect detection area; that is, when an object moves to that specific position of the defect detection area, the object also enters the target sensing area in the real-time video stream data.
The photoelectric sensor 2 is installed in the defect detection area and is used to acquire the position of each mold 51 during the movement of the die table 5. In this embodiment, the photoelectric sensor 2 includes a laser emitter 21 and a laser receiver 22, which are installed on the two sides of the moving path of the die table 5; the connection line of the laser emitter 21 and the laser receiver 22 is perpendicular to the moving direction of the die table 5 and coincides with the front edge of the target sensing area. The mounting position of the photoelectric sensor 2 also satisfies the following: when the position of any mold 51 on the die table 5 coincides with that of the photoelectric sensor 2, the photoelectric sensor 2 is shielded; when a position of the die table 5 where no mold 51 is mounted coincides with that of the photoelectric sensor 2, the photoelectric sensor 2 is not shielded. The state signal when the photoelectric sensor 2 is shielded is defined as "0", and the state signal when it is not shielded as "1".
The photoelectric sensor 2 in this embodiment is a set consisting of a laser emitter 21 and a receiver 22 that sense each other in the normal state, i.e., the state signal is 1. The installation height of the photoelectric sensor 2 is set higher than the upper surface of the die table 5 and lower than the upper surface of the lowest mold 51. In this case, the die table 5 itself never shields the photoelectric sensor 2, while all the molds 51 of different heights do.
In the above case, when the die table 5 moves into the defect detection area, the front side of the first mold 51 on the die table 5 first shields the photoelectric sensor 2, and the arrival of that mold 51 is determined. This shielded state continues while the first mold 51 passes through the defect detection area, until the rear side of the first mold 51 coincides with the position of the photoelectric sensor 2; the shielded state of the photoelectric sensor 2 then ends, and it is determined that the first mold 51 has completely left the defect detection area. The second and any subsequent molds 51 on the die table 5 go through the same process during the movement, and the actual positions of the molds 51 are determined in the same way.
In order to adapt to the subsequent processing of each frame of the acquired video stream data, this embodiment installs the photoelectric sensor 2 at the position corresponding to the front edge of the target sensing area. Since the front edge of the target sensing area coincides with the connection line of the photoelectric sensor 2 arranged in the defect detection area, the time at which a mold 51 enters/exits the target sensing area is determined in this embodiment as follows: (1) when the state signal of the photoelectric sensor 2 changes from 1 to 0, it is determined that the current mold 51 is entering the target sensing area; (2) when the state signal of the photoelectric sensor 2 changes from 0 to 1, it is determined that the current mold 51 is leaving the target sensing area. The start frame and the end frame of the frame-by-frame images input to the network model for feature extraction can thus be determined from the times at which each mold 51 enters and exits the target sensing area.
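In code, this entry/exit determination reduces to edge detection on the sensor's state signal, as in the sketch below (all names are illustrative):

```python
# Sketch of the entry/exit logic: a falling edge (1 -> 0) marks a mold
# entering the target sensing area, a rising edge (0 -> 1) marks it
# leaving; the two frame indices delimit the frames sent to the network.

def mold_frame_spans(sensor_signal):
    """sensor_signal: per-frame states, 1 = unshielded, 0 = shielded."""
    spans, start, prev = [], None, 1
    for i, s in enumerate(sensor_signal):
        if prev == 1 and s == 0:
            start = i                 # mold front reaches the area's front edge
        elif prev == 0 and s == 1 and start is not None:
            spans.append((start, i))  # mold rear passes the area's front edge
            start = None
        prev = s
    return spans  # [(start_frame, end_frame), ...], one span per mold
```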
In this embodiment, when determining whether the tools 61 in a prefabricated part 6 are defective, the type information of each prefabricated part 6 and its standard parameters must also be obtained. This part of the work is done by the type information identification component 3, which is installed in the defect detection area and used to acquire the type information of the prefabricated parts 6 produced in the respective molds 51 arriving at the target sensing area. The type information identification component 3 includes an RFID chip 31 and an RFID card reader 32. An RFID chip 31 is mounted on the outer surface of each mold 51 on the side corresponding to the moving path of the die table 5 and stores the type information of the prefabricated part 6 produced in the corresponding mold 51. The RFID card reader 32 is installed in the defect detection area; when the die table 5 passes through the defect detection area, the RFID chip 31 and the RFID card reader 32 are close to each other at at least one moment, so that data can be read between the two.
In order to enable the RFID card reader 32 to acquire the data in the RFID chip 31 as soon as a mold 51 enters the target sensing area, this embodiment adjusts the positions of the RFID chip 31 and the RFID card reader 32 so that, when the front side of a mold 51 coincides with the front edge of the target sensing area (i.e., is detected by the photoelectric sensor 2), the RFID chip 31 and the RFID card reader 32 are also in the sensing state in which data transmission is possible. The reference data used for judging whether the prefabricated part 6 is qualified can thus be acquired immediately, laying a foundation for the later data comparison and improving the real-time performance of the system in handling the tool 61 defect detection problem.
In this embodiment, a slot for mounting the RFID chip 31 is formed in a side surface of the mold 51, and an openable cover plate made of a resin material is arranged at the slot. The slot and the cover plate protect the RFID chip 31 and prevent the chip from failing through physical contact with external objects during use. The openable design also makes the chip easy to replace, and the resin material of the cover plate does not interfere with the RFID communication process.
As shown in fig. 15, the processing module 4 includes a position acquisition unit 41, a standard parameter acquisition unit 42, a video processing unit 43, a feature extraction unit 44, and a feature comparison unit 45.
The position acquisition unit 41 is configured to acquire the state signal of the photoelectric sensor 2 and thereby determine the time at which any mold 51 enters/exits the target sensing area.
The standard parameter acquisition unit 42 is configured to acquire the type information of any mold 51, identified by the type information identification component 3, when that mold 51 reaches the target sensing area, and then query the server for the standard parameters corresponding to the current prefabricated part 6 according to the type information. In this embodiment, the cloud server stores in advance the BIM models of all types of prefabricated parts 6 to be produced on the production line. When the production line starts trial production and quality detection, the cloud server judges whether each received prefabricated part 6 is the first piece of its type; if so, it returns the standard parameters of that type of prefabricated part 6 and requests the measured values of the parameters of that type of prefabricated part 6 in the state meeting the error requirements. After the measured values of the parameters of the qualified product are saved, these values replace the data in the BIM model as the subsequent standard parameters.
The video processing unit 43 in this embodiment is configured to extract the corresponding frames from the real-time video stream data associated with each mold 51 according to the times at which each mold 51 enters and leaves the target sensing area, and to extract the partial images corresponding to the target sensing area in these frame-by-frame images as the source images for tool 61 detection. Finally, all the source images associated with each mold 51 are input in sequence into the feature extraction unit 44.
The video processing unit 43 in this embodiment supports two modes: online detection and offline detection. In the online detection mode, the video processing unit 43 first determines the time at which a mold 51 enters the target sensing area, then acquires the frame images in the video for tool 61 detection, and continues to acquire subsequent frames until the mold 51 is detected to leave the target sensing area. In the offline detection mode, the video processing unit 43 records the times at which a mold 51 enters/exits the target sensing area, then cuts the corresponding video segment out of the real-time video and performs tool 61 detection on the frame-by-frame images of the cut segment.
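The online mode can be sketched as follows; the region coordinates and helper names are assumptions, and the in_area() callback stands for the photoelectric-sensor state described above.

```python
# Sketch of the online mode of the video processing unit: while a mold is
# inside the target sensing area, each live frame is cropped to that area
# and handed to feature extraction. ROI coordinates are illustrative.
SENSING_ROI = (0, 120, 416, 300)  # assumed x, y, w, h of the virtual area

def crop_sensing_area(frame, roi=SENSING_ROI):
    x, y, w, h = roi
    return frame[y:y + h, x:x + w]  # source image for the feature extraction unit

def online_frames(video, in_area):
    """video: e.g. a cv2.VideoCapture; in_area(): True while a mold is inside."""
    while in_area():
        ok, frame = video.read()
        if not ok:
            break
        yield crop_sensing_area(frame)
```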
As shown in fig. 16, the feature extraction unit 44 includes a target detection subunit 441 and a target tracking subunit 442. The target detection subunit 441 is configured to perform target detection on the source images associated with each mold 51 and extract all the tools 61 of the prefabricated part 6 from each source image. The target tracking subunit 442 is configured to perform target tracking on the tools 61 appearing in all the source images, sequentially assign a globally unique identity code to each newly appearing tool 61 in each frame, and count the number of tools 61 in each mold 51 and the position information corresponding to each tool 61. The feature comparison unit 45 is configured to compare the quantity information and position information of the tools 61 extracted by the feature extraction unit 44 with the standard parameters and judge whether they completely match: if they completely match, the tools 61 of the prefabricated part 6 are judged to be defect-free; if they do not completely match, the tools 61 of the prefabricated part 6 are judged to be defective.
It should be noted that, in this embodiment, the complete frame-by-frame images of the real-time video are not used as the input of the feature extraction unit 44; instead, the partial images corresponding to the target sensing area in the frame-by-frame images are used. The reason is as follows: the camera 12 in the tool 61 defect detection system is mainly an industrial camera, whose resolution is generally high and whose viewing range is relatively large. In this case, the data amount of each video frame tends to be large, which stresses the processing of the feature extraction unit 44 and affects the real-time performance of the system. Meanwhile, the feature extraction unit 44 in this embodiment has insufficient generalization capability for high-resolution images: when processing a large high-resolution image, it first performs Resize processing on the image, which causes image distortion and affects the accuracy of target extraction.
This embodiment solves this problem by setting the target sensing area, which is the optimal region-of-interest input of the feature extraction unit 44 in this embodiment; the optimal recognition effect and rate can be obtained while maintaining the image size. Meanwhile, given the high frame rate of the industrial camera, the target sensing area set in this embodiment does not cause target loss in the frame-by-frame images: because the frame rate is high enough, the part of a frame lying outside the target sensing area will certainly appear in other frames and will not interfere with the subsequent target detection and target tracking processes.
Specifically, the target detection subunit 441 in this embodiment detects the tools 61 in the prefabricated part 6 using a trained network model based on YOLO V5. The target tracking subunit 442 tracks each tool 61 extracted by the target detection subunit 441 using a network model based on the SORT algorithm, determines the association between the tools 61 extracted from each frame and those in the previous frame, and thereby counts the quantity information and position information of the tools 61 in the prefabricated part 6. The quantity information of the tools 61 is obtained statistically, and the position information of the tools 61 is calculated from the corresponding pixel positions of the tools 61 in the images. Meanwhile, since the position of the photoelectric sensor 2 coincides with the front edge of the target sensing area, the position information of a tool 61 can also be calculated from the moving speed of the die table 5 and the time at which the corresponding tool 61 crosses the photoelectric sensor 2 (the front edge of the target sensing area), thereby correcting the calculation result obtained from the pixel positions.
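A sketch of the tracking and counting step follows, assuming the reference SORT implementation (the Sort class of the abewley/sort project), whose update() takes detections [x1, y1, x2, y2, score] and returns tracks [x1, y1, x2, y2, track_id]; the max_age value is an assumption within the 1-5 life-cycle range given in claim 2.

```python
# Sketch of counting tools via SORT track identities; assumes the
# abewley/sort implementation is importable as `sort`.
import numpy as np
from sort import Sort  # assumed dependency

tracker = Sort(max_age=3)  # life-cycle parameter, assumed within 1-5
seen_ids = set()           # reset these two when a new mold enters the area
positions = {}

def update_counts(detections):
    """detections: np.ndarray of YOLO boxes [x1, y1, x2, y2, score]."""
    for x1, y1, x2, y2, tid in tracker.update(np.asarray(detections)):
        seen_ids.add(int(tid))  # globally unique identity per tool
        positions[int(tid)] = ((x1 + x2) / 2, (y1 + y2) / 2)  # center pixel
    return len(seen_ids), positions  # quantity and position info for the mold
```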
The training, verification and testing processes of the network model in the target detection subunit 441 are described in Embodiment 1 and are not repeated in this embodiment.
In this embodiment, the tool 61 defect detection system further includes an alarm 7 electrically connected with the processing module 4; the processing module 4 is further configured to, upon detecting that a prefabricated part 6 has a tool 61 defect, send a stop instruction to the motion controller of the die table 5 and control the alarm 7 to send an alarm signal indicating the tool 61 defect.
The technical features of the embodiments described above may be combined arbitrarily. For brevity, not all possible combinations of the technical features in the above embodiments are described; however, as long as there is no contradiction in a combination of these technical features, it should be considered within the scope of this specification.
The above examples merely illustrate several embodiments of the present invention, and their description is relatively specific and detailed, but they should not be construed as limiting the scope of the invention. It should be noted that a person skilled in the art can make several variations and improvements without departing from the concept of the present invention, all of which fall within the protection scope of the present invention. Therefore, the protection scope of the present invention shall be subject to the appended claims.

Claims (10)

1. An online detection method for tool missing in an assembled prefabricated part, used for detecting in real time, on a die table of a production line, whether the tools in a prefabricated part processed in a mold meet the requirements, the online detection method being characterized by comprising the following steps:
s1: constructing a tool detection network based on a real-time video; the tool detection network comprises a target detection network and a target tracking network, and the target detection network and the target tracking network are respectively used for carrying out target detection and target tracking processing on a shot real-time video of the prefabricated part to obtain the quantity information and the position information of the tools in the prefabricated part;
s2: acquiring a real-time video of the motion state of the die table with the prefabricated part shot along an obliquely downward fixed visual angle; defining one side of each prefabricated part, which is corresponding to the movement direction, as a front side, and one side, which is corresponding to the movement direction, as a rear side;
s3: setting a rectangular target sensing area for performing online detection, wherein the length of the target sensing area is equal to the length of a video frame of the shot real-time video, and the width of the target sensing area is equal to the width of the video frameaCalculated using the following formula:
Figure FDA0003244937010000011
in the above formula, WvIs the width pixel value, T, of a video framemaxFor the maximum time that the tool stays in the video, FmaxAverage frame rate in real-time video processing, FminThe minimum number of frames for the center point of the tool to stay in the induction area;
the side of the target sensing area that the die table meets first along its moving direction is defined as the front edge of the target sensing area, and the opposite side as the rear edge of the target sensing area;
s4: performing target detection and target tracking on the shot real-time video by adopting the tool detection network, and sequentially acquiring type information in each prefabricated part appearing in the real-time video, and quantity signals and position information of tools in the prefabricated parts; the acquisition method specifically comprises the following steps:
s41: judging whether the front side of the prefabricated part to be reached in the real-time video is overlapped with the front edge of the target induction area, if so, acquiring the type information of the prefabricated part currently entering the target induction area, and entering the next step; otherwise, continuing to wait;
s42: sequentially carrying out target detection on parts corresponding to the target sensing area in each frame of the current real-time video through the target detection network, extracting all tools appearing in the target sensing area, and recording position information of each tool;
s43: performing target tracking on each tool extracted from each frame by the target detection network through the target tracking network, so as to allocate an identity mark code with global uniqueness to each newly added tool and calculate position information of the identity mark code; and returning target information with the identity code and the position information to the target detection network;
s44: judging whether the rear side of the prefabricated part currently executing target detection and target tracking is superposed with the front edge of the target induction area; counting the quantity information and the position information of all tools in the current prefabricated part, and returning to the step S41 to wait for executing the target detection and target tracking process of the next prefabricated part; otherwise, returning to the step S42 to continue to execute the target detection and target tracking process of the current prefabricated part;
s5: inquiring a cloud database according to the acquired type information of the current prefabricated part, and acquiring reference values of quantity information and position information of tools in the prefabricated part, which are pre-stored in the cloud database; comparing the reference value with the measured values of the quantity information and the position information of the tools in the current prefabricated part obtained in the previous step, judging whether the reference value and the measured values completely accord with each other, and if so, judging that the tools of the current prefabricated part are complete; otherwise, judging that the tooling of the current prefabricated part is missing.
2. The online detection method for tool missing in an assembled prefabricated part according to claim 1, characterized in that: in step S1, the target detection network is a network model based on YOLO V5; the target detection network completes the training, testing and verification processes of the network model in sequence using pictures of the prefabricated parts shot at the same shooting angle as the video obtained in step S2; the target tracking network is a network model based on the SORT algorithm, and the target tracking life-cycle parameter value of the target tracking network is adjusted to 1-5.
3. The online detection method for tool missing in an assembled prefabricated part according to claim 2, characterized in that the training, verification and testing process of the target detection network is as follows:
(1) acquiring original images of various types of prefabricated parts meeting the shooting angle requirements, and preprocessing the original images to obtain clear images of the complete structure of the prefabricated parts; these clear images form an original data set;
(2) manually labeling the images in the original data set, the labeled objects being the prefabricated part and the tools on its surface, wherein the label information comprises the type information of the prefabricated part and the quantity information and position information of the tools in the prefabricated part; simultaneously storing the images and the corresponding label information to obtain a new data set, and randomly dividing the new data set into a training set, a verification set and a test set at a data ratio of 8:1:1;
(3) performing multiple rounds of training on the constructed target detection network with the training set, and verifying the target detection network with the verification set after each round of training, so as to obtain the loss values of the target detection network in the training stage and the verification stage respectively; stopping the training process when the loss value obtained on the training set still decreases in each round while the loss value obtained on the verification set increases; and saving the five network models whose loss values rank in the top five in the training stage;
(4) testing the five saved network models with the test set, and taking the network model with the highest mAP value in the test results as the final target detection network.
4. The online detection method for tool missing in an assembled prefabricated part according to claim 1, characterized in that: in step S2, the shooting angle of the acquired real-time video satisfies a depression angle smaller than 90°, and the installation position of the shooting device is on the front side of, or on the two sides of, the moving direction of the die table.
5. The online detection method for tool missing in an assembled prefabricated part according to claim 1, characterized in that: the target sensing area set in step S3 is a virtual area; the target sensing area corresponds to the real defect detection area through which the die table passes at the actual detection site, and the area shot at the fixed viewing angle in the real-time video is the defect detection area; the range of the defect detection area at least contains the whole target sensing area.
6. The online detection method for tool missing in an assembled prefabricated part according to claim 5, characterized in that in steps S41 and S44, the method for judging whether the front side or the rear side of the prefabricated part coincides with the front edge of the target sensing area is as follows:
(1) a group of photoelectric sensors is arranged in the defect detection area, and the detection direction of the photoelectric sensor coincides with the straight line on which the front edge of the target sensing area lies; the installation position of the photoelectric sensor satisfies the following: when any prefabricated part on the die table passes the photoelectric sensor, the photoelectric sensor is shielded and its state signal changes; the state signal of the photoelectric sensor is defined as 1 when it is not shielded and 0 when it is shielded;
(2) before any prefabricated part reaches the defect detection area, the state signal of the photoelectric sensor is 1; when a prefabricated part enters the defect detection area, the front side of the prefabricated part first coincides with the front edge of the target sensing area; at this moment, the photoelectric sensor is just shielded by the prefabricated part, its state signal switches from 1 to 0, and it is judged that the front side of the prefabricated part coincides with the front edge of the target sensing area;
(3) before the prefabricated part leaves the defect detection area, the state signal of the photoelectric sensor remains 0; when the prefabricated part completely leaves the defect detection area, the rear side of the prefabricated part first coincides with the front edge of the target sensing area; at this moment, the photoelectric sensor just returns to the unshielded state, its state signal switches from 0 to 1, and it is judged that the rear side of the prefabricated part coincides with the front edge of the target sensing area.
7. The online detection method for tool missing in an assembled prefabricated part according to claim 5, characterized in that in step S41, the method for acquiring the type information of the prefabricated part currently entering the target sensing area is as follows:
(1) arranging, on the side surface of each mold in the die table, a radio frequency identification chip pre-storing the type information of the prefabricated part in the mold;
(2) arranging, in the defect detection area, a radio frequency identification card reader for reading the data stored in the radio frequency identification chip; the installation position of the radio frequency identification card reader satisfies the following: when the front side of the prefabricated part coincides with the front edge of the target sensing area, the radio frequency identification chip on the side surface of the prefabricated part is close to the radio frequency identification card reader, reaching the condition for reading the internal information of the radio frequency identification chip.
8. The online detection method for tool missing in an assembled prefabricated part according to claim 1, characterized in that: in step S4, during the target detection and target tracking of the video frames associated with each prefabricated part, the images in the corresponding video frames are archived and classified according to the production-line number of each prefabricated part; the archived data are the image frames of the video stream data in which the tools coincide with the target sensing area.
9. The online detection method for tool missing in an assembled prefabricated part according to claim 1, characterized in that: in step S5, the cloud database stores the characteristic information of the tools, extracted from the BIM models, associated with all types of prefabricated parts to be produced on the production line; when the production line starts production and detection, the acquired quantity information and position information of the tools in the first piece of each type of prefabricated part are compared with the corresponding ideal parameters in the BIM model; when the first piece meets the error requirements for each parameter, the quantity information and position information of the tools detected in the first piece are taken as the reference values for the subsequent tool missing detection; and when the produced first piece does not meet the error requirements, the current prefabricated part is discarded, and a new first piece is produced and determined.
10. The online detection method for tool missing in an assembled prefabricated part according to any one of claims 1 to 9, further comprising:
and when the tool of the current prefabricated part is detected to be missing, stopping the moving process of the die table and sending an alarm signal to the front end of the production line.
CN202111030371.0A 2021-09-03 2021-09-03 On-line detection method for tool missing in assembled prefabricated part Active CN113723841B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202111030371.0A CN113723841B (en) 2021-09-03 2021-09-03 On-line detection method for tool missing in assembled prefabricated part

Publications (2)

Publication Number Publication Date
CN113723841A true CN113723841A (en) 2021-11-30
CN113723841B (en) 2023-07-25

Family

ID=78681288

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202111030371.0A Active CN113723841B (en) 2021-09-03 2021-09-03 On-line detection method for tool missing in assembled prefabricated part

Country Status (1)

Country Link
CN (1) CN113723841B (en)

Patent Citations (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105654218A (en) * 2014-11-14 2016-06-08 财团法人资讯工业策进会 Work item checking system and method
CN104483234A (en) * 2014-12-19 2015-04-01 中冶集团武汉勘察研究院有限公司 Automatic iron ore grade detection system and method
US20170083790A1 (en) * 2015-09-23 2017-03-23 Behavioral Recognition Systems, Inc. Detected object tracker for a video analytics system
CN107809452A (en) * 2017-09-05 2018-03-16 杨立军 A kind of data by monitored equipment upload to high in the clouds and carry out regular and analysis system
CN108055501A (en) * 2017-11-22 2018-05-18 天津市亚安科技有限公司 A kind of target detection and the video monitoring system and method for tracking
US10260232B1 (en) * 2017-12-02 2019-04-16 M-Fire Supression, Inc. Methods of designing and constructing Class-A fire-protected multi-story wood-framed buildings
CN110348546A (en) * 2019-05-31 2019-10-18 云南齐星杭萧钢构股份有限公司 A kind of air navigation aid of assembled architecture prefabricated components installation
CN110568831A (en) * 2019-09-20 2019-12-13 惠州市新一代工业互联网创新研究院 First workpiece detection system based on Internet of things technology
KR20210063673A (en) * 2019-11-25 2021-06-02 연세대학교 산학협력단 Generation Method of management-information on construction sites by using Image Capturing and Computer Program for the same
CN110992349A (en) * 2019-12-11 2020-04-10 南京航空航天大学 Underground pipeline abnormity automatic positioning and identification method based on deep learning

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
李玺; 查宇飞; 张天柱; 崔振; 左旺孟; 侯志强; 卢湖川; 王菡子: "A survey of deep-learning-based object tracking algorithms" (深度学习的目标跟踪算法综述), Journal of Image and Graphics (中国图象图形学报), vol. 24, no. 12, p. 2057

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114821478A (en) * 2022-05-05 2022-07-29 北京容联易通信息技术有限公司 Process flow detection method and system based on video intelligent analysis
CN114821478B (en) * 2022-05-05 2023-01-13 北京容联易通信息技术有限公司 Process flow detection method and system based on video intelligent analysis
CN118396977A (en) * 2024-05-24 2024-07-26 安徽大学 Method for detecting defects of embedded parts in assembled building
CN118479220A (en) * 2024-07-16 2024-08-13 山东格林汇能科技有限公司 Makeup removal wet tissue processing technology online supervision system and method based on big data

Also Published As

Publication number Publication date
CN113723841B (en) 2023-07-25

Similar Documents

Publication Publication Date Title
CN113723325B Tooling defect detection system for assembled prefabricated parts
CN113723841B (en) On-line detection method for tool missing in assembled prefabricated part
CN106226325B (en) A kind of seat surface defect detecting system and its method based on machine vision
CN111951237A (en) Visual appearance detection method
CN109900711A (en) Workpiece, defect detection method based on machine vision
CN114994061B (en) Machine vision-based steel rail intelligent detection method and system
CN108760747A (en) A kind of 3D printing model surface defect visible detection method
CN106651849A (en) Area-array camera-based PCB bare board defect detection method
CN107895362A (en) A kind of machine vision method of miniature binding post quality testing
CN109507205A (en) A kind of vision detection system and its detection method
KR101643713B1 (en) Method for inspecting of product using learning type smart camera
CN113588653A (en) System and method for detecting and tracking quality of aluminum anode carbon block
CN113109364B (en) Method and device for detecting chip defects
CN108230385B (en) Method and device for detecting number of ultra-high laminated and ultra-thin cigarette labels by single-camera motion
CN116091506B (en) Machine vision defect quality inspection method based on YOLOV5
CN114705691B (en) Industrial machine vision control method and device
CN110866917A (en) Tablet type and arrangement mode identification method based on machine vision
CN105975910A (en) Technology processing method used for carrying out video identification on moving object and system thereof
CN110111317A (en) A kind of dispensing visual detection method for quality based on intelligent robot end
CN111504192B (en) Compressor appearance detection method based on machine vision
CN108458655A (en) Support the data configurableization monitoring system and method for vision measurement
CN117890380B (en) Chip appearance defect detection method and detection device
CN220356308U (en) Log gauge system
CN116561167B (en) Intelligent factory yield data retrieval system based on image analysis
CN117253092B (en) Machine vision-based bin video classification and identification method and system

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant