CN114119479A - Industrial production line quality monitoring method based on image recognition - Google Patents

Industrial production line quality monitoring method based on image recognition

Info

Publication number
CN114119479A
CN114119479A (application CN202111245436.3A)
Authority
CN
China
Prior art keywords
data
model
camera
production line
industrial
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202111245436.3A
Other languages
Chinese (zh)
Inventor
梁攀峰
周智鹏
宋经华
宋江浩
王营
董梦柯
侯亚龙
周轩轩
李浩祥
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Xi'an Shihaohuike Intelligent Technology Co ltd
Original Assignee
Xi'an Shihaohuike Intelligent Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Xi'an Shihaohuike Intelligent Technology Co ltd filed Critical Xi'an Shihaohuike Intelligent Technology Co ltd
Priority to CN202111245436.3A
Publication of CN114119479A
Legal status: Pending

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T 7/0004 - Image analysis; inspection of images, e.g. flaw detection; industrial image inspection
    • G06F 18/214 - Pattern recognition; generating training patterns; bootstrap methods, e.g. bagging or boosting
    • G06N 3/04 - Neural networks; architecture, e.g. interconnection topology
    • G06N 3/08 - Neural networks; learning methods
    • G06N 3/084 - Backpropagation, e.g. using gradient descent
    • G06T 7/136 - Segmentation; edge detection involving thresholding
    • G06T 2207/20081 - Indexing scheme: training; learning
    • G06T 2207/20084 - Indexing scheme: artificial neural networks [ANN]
    • G06T 2207/30108 - Indexing scheme: industrial image inspection


Abstract

The invention discloses an industrial production line quality monitoring method based on image recognition, which comprises the following steps: S1, install a laser line-scan camera and an industrial optical camera on the monitored production line; S2, simultaneously collect a large amount of contour data A from the laser line-scan camera and picture data B from the industrial optical camera; S3, take the maximum value of each frame of data in A as the volume metric of the spherical product, recorded as C; S4, split C and B each into two parts according to the time-synchronization principle, C into C1 and C2 and B into B1 and B2; S5, train on B1 and C1 as sample data to obtain a model M whose input is B1 and whose output is C1. By constructing this neural network model M, the image data acquired in real time by the industrial optical camera takes over the laser line-scan camera's task of monitoring the upper volume limit of spherical products on the production line, so that the industrial optical camera replaces the laser line-scan camera.

Description

Industrial production line quality monitoring method based on image recognition
Technical Field
The invention belongs to the technical field of quality monitoring of industrial production lines, and particularly relates to an industrial production line quality monitoring method based on image recognition.
Background
At present, ordinary optical camera technology is quite mature: camera structures are increasingly simple and costs are relatively low, so demand is huge. Laser line-scan cameras, by contrast, have a complex structure and high cost, so demand is small. In current industrial production line quality monitoring, the upper volume limit of spherical products is mostly monitored by acquiring contour data with a laser line-scan camera; for the reasons above, this scheme is expensive. A method is therefore needed that uses an industrial optical camera instead of a laser line-scan camera to obtain the volume of spherical products on the production line, so as to save the cost of the laser line-scan camera in such scenarios. The method of the present invention aims to solve this problem.
Disclosure of Invention
To address the defects of the prior art, the invention aims to provide an industrial production line quality monitoring method based on image recognition that solves the problems described in the background section.
The invention provides the following technical scheme:
an industrial production line quality monitoring method based on image recognition comprises the following steps:
s1, installing a laser line scanning camera and an industrial optical camera on the monitored production line, wherein the laser line scanning camera and the industrial optical camera are used for acquiring images of the same monitoring point area;
s2, simultaneously collecting a large amount of contour data A acquired by a laser line scanning camera and picture data B acquired by an industrial optical camera;
s3, taking the maximum value of each frame of data in A as the volume measurement value of the spherical product, and recording as C;
s4, splitting C and B into two parts according to a time synchronization principle, wherein C is split into C1 and C2, and B is split into B1 and B2;
and S5, training by taking B1 and C1 as training sample data to obtain a model M, wherein the input of the model M is B1, and the output of the model M is C1.
Preferably, the method further comprises the following steps: S6, testing the model M trained in step S5 with B2 and C2 as test sample data, and verifying whether the deviation between the model M's output for input B2 and the expected output C2 is smaller than a threshold;
S7, removing the laser line-scan camera, inputting the image data acquired in real time by the optical camera into the model M, and outputting the metric value D of the spherical product on the monitored production line; if the deviation of D exceeds the threshold, the volume of the spherical product on the production line is considered out of limit.
Preferably, in step S2 the laser line-scan camera and the industrial optical camera each transmit their captured images to a single-chip microcomputer, which analyzes and processes the two image streams separately.
Preferably, the single-chip microcomputer's analysis and processing of the laser line-scan camera's images comprises the following steps: a, collect a reference image of the monitoring-point area while no spherical product is passing;
b, while a spherical product passes, acquire images of the monitoring-point area in real time, perform optimal-threshold segmentation and binarization on each frame, and store all contour data of the spherical product in each frame sequentially into a corresponding array P1;
c, sort all data in each array P1 in descending order by quicksort to obtain the corresponding sorted array P2;
d, take the first datum Ck of each array P2 and store it in the data set C.
Preferably, the single-chip microcomputer performs the necessary processing (gray-scale conversion, rotation, cropping and the like) on each image shot by the industrial optical camera to obtain a per-frame data set Bk; the data set B is formed from these Bk.
Preferably, the model M is a neural network model, and one training pass of the neural network model comprises the following steps: a, during forward propagation, the data values of a data set Bk are fed in sequence into the nodes of the input layer, processed layer by layer through the hidden layers using the current network connection weights, and a value Ch computed by the network is obtained at the output layer;
b, whether the difference between Ch and the Ck of the same timestamp is smaller than a threshold is judged; if so, the result meets expectations and training finishes; otherwise the error back-propagation process is entered.
Preferably, the training pass of the neural network model further comprises the following step: c, when the error back-propagation process starts, the error between Ch and the Ck of the same timestamp is propagated backwards from the output layer to the input layer, and the connection weights of the neurons in each layer are modified during back propagation.
Preferably, the laser line-scan camera is installed over a monitoring-point area of the production line and measures vertically downwards, with its detection laser beam perpendicular to the direction of motion of the production line track; the industrial optical camera is installed close to the laser line-scan camera and shoots vertically downwards to obtain picture samples. The frame rate of the laser line-scan camera is 20 f/s; the frame rate of the industrial optical camera is 20 f/s and its resolution is 800 × 600.
Preferably, in step S4 the data set C is randomly split into two parts C1 and C2, where C1 holds 80% of C's data and C2 holds 20%; B is split into B1 and B2 in the same way. It must be ensured that the timestamps of the C1 and B1 data sets coincide and that the timestamps of the C2 and B2 data sets coincide.
When the single-chip microcomputer processes the images shot by the industrial optical camera, an optimal threshold is computed in order to increase the accuracy of data acquisition from the image. Let the monitoring-point image have size x × y, let i index the gray levels, and let Ai be the number of pixels at gray level i. Let T be the segmentation threshold between background and foreground (the non-spherical-product and spherical-product regions of the image), let Pq and Pb denote the proportions of foreground and background pixels in the whole image, and let μq and μb denote their average gray levels, so that Pq + Pb = 1. Writing the average gray level of the whole image as μ, Pb and Pq satisfy:
Pb = (Σ_{i=0}^{T} Ai) / (x·y);
Pq = 1 − Pb;
and μb, μq and μ satisfy:
μ = Pb·μb + Pq·μq.
From the above, the between-class variance g(T) is obtained:
g(T) = Pb·(μb − μ)² + Pq·(μq − μ)².
The larger g(T) is, the greater the difference between background and foreground, so the optimal threshold is the value of T at which g(T) reaches its maximum.
Compared with the prior art, the invention has the following beneficial effects:
(1) In the image-recognition-based industrial production line quality monitoring method of the invention, a neural network model is constructed and the model M is trained with the industrial optical camera's image data as input and the volume metric derived from the laser line-scan camera's data as output, so that through the model M the image data acquired in real time by the industrial optical camera can take over the laser line-scan camera's task of monitoring the volume of spherical products on the production line; the industrial optical camera thus replaces the laser line-scan camera, saving its cost.
(2) In the image-recognition-based industrial production line quality monitoring method, all data in each array P1 are sorted in descending order by quicksort to obtain the corresponding sorted arrays P2; compared with other sorting methods, quicksort has a clear advantage in average time complexity and sorts quickly, which helps maintain the accuracy of dynamic detection on the production line.
(3) In the image-recognition-based industrial production line quality monitoring method of the invention, the relationships among x, y, i, Ai, Pb, Pq, μb, μq and μ are constrained so that the single-chip microcomputer can reach the optimal threshold when processing the images shot by the industrial optical camera, further increasing the accuracy of data acquisition from the image.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present invention, the drawings that are required to be used in the embodiments will be briefly described below, it should be understood that the following drawings only illustrate some embodiments of the present invention and therefore should not be considered as limiting the scope, and for those skilled in the art, other related drawings can be obtained according to the drawings without inventive efforts.
FIG. 1 is a flow chart of the method of the present invention.
Fig. 2 is a diagram of a frame of data samples in data set a of the present invention.
Fig. 3 is a diagram of a frame data sample in data set B of the present invention.
Detailed Description
In order to make the objects, technical solutions and advantages of the embodiments of the present invention more apparent, the technical solutions of the embodiments of the present invention will be described in detail and completely with reference to the accompanying drawings. It is to be understood that the described embodiments are only a few embodiments of the present invention, and not all embodiments. All other embodiments, which can be obtained by a person skilled in the art without any inventive step based on the embodiments of the present invention, are within the scope of the present invention.
Thus, the following detailed description of the embodiments of the present invention, presented in the figures, is not intended to limit the scope of the invention as claimed, but merely represents selected embodiments of the invention.
Embodiment one:
Referring to Figs. 1-3, an industrial production line quality monitoring method based on image recognition comprises the following steps:
s1, installing a laser line scanning camera and an industrial optical camera on the monitored production line, wherein the laser line scanning camera and the industrial optical camera are used for acquiring images of the same monitoring point area;
s2, simultaneously collecting a large amount of contour data A acquired by a laser line scanning camera and picture data B acquired by an industrial optical camera;
s3, taking the maximum value of each frame of data in A as the volume measurement value of the spherical product, and recording as C;
s4, splitting C and B into two parts according to a time synchronization principle, wherein C is split into C1 and C2, and B is split into B1 and B2;
and S5, training by taking B1 and C1 as training sample data to obtain a model M, wherein the input of the model M is B1, and the output of the model M is C1.
S6, testing the model M trained in step S5 with B2 and C2 as test sample data, and verifying whether the deviation between the model M's output for input B2 and the expected output C2 is smaller than a threshold;
S7, removing the laser line-scan camera, inputting the image data acquired in real time by the optical camera into the model M, and outputting the metric value D of the spherical product on the monitored production line; if the deviation of D exceeds the threshold, the volume of the spherical product on the production line is considered out of limit.
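A minimal sketch of the step-S7 decision: the function name, the explicit `upper_limit` parameter and the exact definition of "deviation" are illustrative assumptions, not details given in the patent text.

```python
def volume_out_of_limit(metric_d, upper_limit, threshold):
    """Step S7 (sketch): flag the product when the model's metric value D
    exceeds the allowed upper volume limit by more than the threshold.
    `upper_limit` and `threshold` are hypothetical configuration values."""
    return (metric_d - upper_limit) > threshold
```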
In step S2, the laser line-scan camera and the industrial optical camera each transmit their captured images to the single-chip microcomputer, which analyzes and processes the two image streams separately.
In step S4, the data set C is randomly split into two parts C1 and C2; preferably C1 holds 80% of C's data and C2 holds 20%. B is split into B1 and B2 in the same way, with B1 holding 80% of B's data and B2 holding 20%. It must be ensured that the timestamps of the C1 and B1 data sets coincide and that the timestamps of the C2 and B2 data sets coincide.
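The timestamp-aligned 80/20 split of step S4 can be sketched as follows. Representing B and C as dicts keyed by timestamp, the function name and the fixed random seed are illustrative assumptions.

```python
import random

def split_by_timestamp(B, C, train_ratio=0.8, seed=0):
    """Randomly split the paired, timestamp-aligned data sets B (pictures)
    and C (volume metrics) into training and test parts, keeping each
    picture paired with the metric that shares its timestamp."""
    timestamps = sorted(B)              # B and C share the same timestamps
    rng = random.Random(seed)
    rng.shuffle(timestamps)
    n_train = int(len(timestamps) * train_ratio)
    train_ts, test_ts = timestamps[:n_train], timestamps[n_train:]
    B1 = {t: B[t] for t in train_ts}
    C1 = {t: C[t] for t in train_ts}
    B2 = {t: B[t] for t in test_ts}
    C2 = {t: C[t] for t in test_ts}
    return (B1, C1), (B2, C2)
```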
The single-chip microcomputer's analysis and processing of the laser line-scan camera's images comprises the following steps: a, collect a reference image of the monitoring-point area while no spherical product is passing;
b, while a spherical product passes, acquire images of the monitoring-point area in real time, perform optimal-threshold segmentation and binarization on each frame, and store all contour data of the spherical product in each frame sequentially into a corresponding array P1;
c, sort all data in each array P1 in descending order by quicksort to obtain the corresponding sorted array P2;
d, take the first datum Ck of each array P2 and store it in the data set C.
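Steps b-d above can be sketched as follows; Python's built-in sort stands in for the quicksort of step c, and the function names are illustrative.

```python
def frame_volume_metric(P1):
    """Steps c-d for one frame: sort the frame's contour values in
    descending order (array P2) and take the first element Ck as the
    frame's volume metric."""
    P2 = sorted(P1, reverse=True)   # stand-in for the quicksort of step c
    return P2[0]

def build_dataset_C(frames):
    """Apply the per-frame reduction to every frame's contour array P1
    to form the data set C."""
    return [frame_volume_metric(P1) for P1 in frames]
```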
All data in each array P1 are sorted in descending order by quicksort to obtain the corresponding sorted arrays P2. Compared with other sorting methods (bubble sort, insertion sort, selection sort, merge sort), quicksort has a clear advantage in average time complexity and sorts quickly, saving time and helping maintain the accuracy of dynamic detection on the production line.
The single-chip microcomputer performs the necessary processing (gray-scale conversion, rotation, cropping and the like) on each image shot by the industrial optical camera to obtain a per-frame data set Bk; the data set B is formed from these Bk.
The model M is a neural network model, and one training pass of the neural network model comprises the following steps: a, during forward propagation, the data values of a data set Bk are fed in sequence into the nodes of the input layer, processed layer by layer through the hidden layers using the current network connection weights, and a value Ch computed by the network is obtained at the output layer;
b, whether the difference between Ch and the Ck of the same timestamp is smaller than a threshold is judged; if so, the result meets expectations and training finishes; otherwise the error back-propagation process is entered.
The training pass of the neural network model further comprises the following step: c, when the error back-propagation process starts, the error between Ch and the Ck of the same timestamp is propagated backwards from the output layer to the input layer, and the connection weights of the neurons in each layer are modified during back propagation.
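Steps a-c describe one pass of standard forward propagation and error back-propagation. A minimal sketch, assuming a single hidden layer with sigmoid units and a linear output; the layer sizes, learning rate and class name are illustrative, not taken from the patent.

```python
import math
import random

class TinyMLP:
    """Minimal one-hidden-layer regression network sketching the
    forward-propagation / back-propagation loop of steps a-c."""

    def __init__(self, n_in, n_hidden, seed=0):
        rng = random.Random(seed)
        self.W1 = [[rng.uniform(-0.5, 0.5) for _ in range(n_in)]
                   for _ in range(n_hidden)]
        self.b1 = [0.0] * n_hidden
        self.W2 = [rng.uniform(-0.5, 0.5) for _ in range(n_hidden)]
        self.b2 = 0.0

    def forward(self, x):
        # step a: layer-by-layer forward propagation to the output value Ch
        self.h = [1.0 / (1.0 + math.exp(-(sum(w * xi for w, xi in zip(row, x)) + b)))
                  for row, b in zip(self.W1, self.b1)]
        return sum(w * h for w, h in zip(self.W2, self.h)) + self.b2

    def backward(self, x, err, lr):
        # step c: propagate the error Ch - Ck back and adjust every weight
        for j, hj in enumerate(self.h):
            dh = err * self.W2[j] * hj * (1.0 - hj)  # chain rule through sigmoid
            self.W2[j] -= lr * err * hj
            for i, xi in enumerate(x):
                self.W1[j][i] -= lr * dh * xi
            self.b1[j] -= lr * dh
        self.b2 -= lr * err

    def train(self, samples, epochs=1000, lr=0.2):
        for _ in range(epochs):
            for x, ck in samples:           # (Bk, Ck) pairs of one timestamp
                ch = self.forward(x)
                self.backward(x, ch - ck, lr)
        return sum((self.forward(x) - ck) ** 2 for x, ck in samples) / len(samples)
```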
The laser line-scan camera is installed over a monitoring-point area of the production line and measures vertically downwards, with its detection laser beam perpendicular to the direction of motion of the production line track; the industrial optical camera is installed close to the laser line-scan camera and shoots vertically downwards to obtain picture samples. The frame rate of the laser line-scan camera is 20 f/s; the frame rate of the industrial optical camera is 20 f/s and its resolution is 800 × 600.
Embodiment two:
On the basis of the first embodiment, when the single-chip microcomputer processes the images shot by the industrial optical camera, an optimal threshold is computed in order to increase the accuracy of data acquisition from the image. Let the monitoring-point image have size x × y, let i index the gray levels, and let Ai be the number of pixels at gray level i. Let T be the segmentation threshold between background and foreground (the non-spherical-product and spherical-product regions of the image), let Pq and Pb denote the proportions of foreground and background pixels in the whole image, and let μq and μb denote their average gray levels, so that Pq + Pb = 1. Writing the average gray level of the whole image as μ, Pb and Pq satisfy:
Pb = (Σ_{i=0}^{T} Ai) / (x·y);
Pq = 1 − Pb;
and μb, μq and μ satisfy:
μ = Pb·μb + Pq·μq.
From the above, the between-class variance g(T) is obtained:
g(T) = Pb·(μb − μ)² + Pq·(μq − μ)².
The larger g(T) is, the greater the difference between background and foreground, so the optimal threshold is the value of T at which g(T) reaches its maximum.
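The optimal-threshold computation described above is Otsu's method: maximize the between-class variance over all candidate thresholds. A minimal sketch, assuming the gray-level histogram (the pixel counts Ai) is given as a list; the function name is illustrative.

```python
def otsu_threshold(hist):
    """Return the threshold T that maximizes the between-class variance
    g(T) = Pb*(mu_b - mu)^2 + Pq*(mu_q - mu)^2, where hist[i] is the
    number of pixels Ai at gray level i."""
    total = sum(hist)                                        # x * y pixels
    mu = sum(i * n for i, n in enumerate(hist)) / total      # global mean gray level
    best_t, best_g = 0, -1.0
    for t in range(len(hist)):
        nb = sum(hist[:t + 1])                               # background pixel count
        if nb == 0 or nb == total:
            continue                                         # one class empty: skip
        pb = nb / total                                      # Pb
        pq = 1.0 - pb                                        # Pq = 1 - Pb
        mu_b = sum(i * n for i, n in enumerate(hist[:t + 1])) / nb
        mu_q = (mu * total - mu_b * nb) / (total - nb)       # from mu = Pb*mu_b + Pq*mu_q
        g = pb * (mu_b - mu) ** 2 + pq * (mu_q - mu) ** 2
        if g > best_g:
            best_g, best_t = g, t
    return best_t
```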
The method obtained through the above technical scheme is an industrial production line quality monitoring method based on image recognition. By constructing a neural network model and training the model M with the industrial optical camera's image data as input and the per-frame maximum of the laser line-scan camera's contour data as output, the image data acquired in real time by the industrial optical camera can, through the model M, take over the laser line-scan camera's task of monitoring the volume of spherical products on the production line; the industrial optical camera replaces the laser line-scan camera, saving its cost.
The above description is only a preferred embodiment of the present invention, and is not intended to limit the present invention, and it will be apparent to those skilled in the art that various modifications and variations can be made in the present invention; any modification, equivalent replacement, or improvement made within the spirit and principle of the present invention should be included in the protection scope of the present invention.

Claims (7)

1. An industrial production line quality monitoring method based on image recognition is characterized by comprising the following steps:
s1, installing a laser line scanning camera and an industrial optical camera on the monitored production line, wherein the laser line scanning camera and the industrial optical camera are used for acquiring images of the same monitoring point area;
s2, simultaneously collecting a large amount of contour data A acquired by a laser line scanning camera and picture data B acquired by an industrial optical camera;
s3, taking the maximum value of each frame of data in A as the volume measurement value of the spherical product, and recording as C;
s4, splitting C and B into two parts according to a time synchronization principle, wherein C is split into C1 and C2, and B is split into B1 and B2;
and S5, training by taking B1 and C1 as training sample data to obtain a model M, wherein the input of the model M is B1, and the output of the model M is C1.
2. The image-recognition-based industrial production line quality monitoring method according to claim 1, further comprising the following steps: S6, testing the model M trained in step S5 with B2 and C2 as test sample data, and verifying whether the deviation between the model M's output for input B2 and the expected output C2 is smaller than a threshold;
S7, removing the laser line-scan camera, inputting the image data acquired in real time by the optical camera into the model M, and outputting the metric value D of the spherical product on the monitored production line; if the deviation of D exceeds the threshold, the volume of the spherical product on the production line is considered out of limit.
3. The image-recognition-based industrial production line quality monitoring method according to claim 2, wherein in step S2 the laser line-scan camera and the industrial optical camera each transmit their captured images to a single-chip microcomputer, which analyzes and processes the two image streams separately.
4. The image-recognition-based industrial production line quality monitoring method according to claim 3, wherein the single-chip microcomputer's analysis and processing of the laser line-scan camera's images comprises the following steps: a, collect a reference image of the monitoring-point area while no spherical product is passing;
b, while a spherical product passes, acquire images of the monitoring-point area in real time, perform optimal-threshold segmentation and binarization on each frame, and store all contour data of the spherical product in each frame sequentially into a corresponding array P1;
c, sort all data in each array P1 in descending order by quicksort to obtain the corresponding sorted array P2;
d, take the first datum Ck of each array P2 and store it in the data set C.
5. The image-recognition-based industrial production line quality monitoring method according to claim 4, wherein the single-chip microcomputer performs the necessary processing (gray-scale conversion, rotation, cropping and the like) on each image shot by the industrial optical camera to obtain a per-frame data set Bk, and a plurality of such Bk form the data set B.
6. The image-recognition-based industrial production line quality monitoring method according to claim 5, wherein the model M is a neural network model, and one training pass of the neural network model comprises the following steps: a, during forward propagation, the data values of a data set Bk are fed in sequence into the nodes of the input layer, processed layer by layer through the hidden layers using the current network connection weights, and a value Ch computed by the network is obtained at the output layer;
b, whether the difference between Ch and the Ck of the same timestamp is smaller than a threshold is judged; if so, the result meets expectations and training finishes; otherwise the error back-propagation process is entered.
7. The image-recognition-based industrial production line quality monitoring method according to claim 6, wherein the training pass of the neural network model further comprises the following step: c, when the error back-propagation process starts, the error between Ch and the Ck of the same timestamp is propagated backwards from the output layer to the input layer, and the connection weights of the neurons in each layer are modified during back propagation.
Application CN202111245436.3A, priority and filing date 2021-10-25: Industrial production line quality monitoring method based on image recognition. Published as CN114119479A (pending).

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202111245436.3A CN114119479A (en) 2021-10-25 2021-10-25 Industrial production line quality monitoring method based on image recognition

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202111245436.3A CN114119479A (en) 2021-10-25 2021-10-25 Industrial production line quality monitoring method based on image recognition

Publications (1)

Publication Number Publication Date
CN114119479A true CN114119479A (en) 2022-03-01

Family

ID=80376721

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202111245436.3A Pending CN114119479A (en) 2021-10-25 2021-10-25 Industrial production line quality monitoring method based on image recognition

Country Status (1)

Country Link
CN (1) CN114119479A (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117893178A (en) * 2024-03-15 2024-04-16 北京谷器数据科技有限公司 Real-time work progress display method, system, equipment and medium



Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination