CN113487166A - Chemical fiber floating filament quality detection method and system based on convolutional neural network - Google Patents
- Publication number
- CN113487166A CN113487166A CN202110743078.2A CN202110743078A CN113487166A CN 113487166 A CN113487166 A CN 113487166A CN 202110743078 A CN202110743078 A CN 202110743078A CN 113487166 A CN113487166 A CN 113487166A
- Authority
- CN
- China
- Prior art keywords
- parameters
- chemical fiber
- module
- picture
- neural network
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
- G06Q10/06395—Quality analysis or management
- G06N3/04—Neural network architecture, e.g. interconnection topology
- G06N3/08—Neural network learning methods
- G06Q50/04—ICT specially adapted for manufacturing
- G06T7/0004—Industrial image inspection
- G06T7/73—Determining position or orientation of objects or cameras using feature-based methods
- G06T2207/10004—Still image; Photographic image
- G06T2207/20081—Training; Learning
- G06T2207/20084—Artificial neural networks [ANN]
- G06T2207/30108—Industrial image inspection
- G06T2207/30124—Fabrics; Textile; Paper
- Y02P90/30—Computing systems specially adapted for manufacturing
Abstract
The invention belongs to the technical field of the chemical fiber industry and specifically relates to a chemical fiber floating filament quality detection method and system based on a convolutional neural network, comprising an image acquisition and transmission module, a part detection and positioning module, a parameter feedback module, a part identification and judgment module, and a comprehensive judgment module. The method detects flying-filament anomalies with a CNN deep learning scheme and proposes a "key-part anomaly detection" technique for flying-filament anomaly detection. A dynamic feedback mechanism and a comprehensive evaluation mechanism are adopted, and abnormal conditions such as floating filaments are detected by robot patrol cameras with background analysis instead of manual inspection.
Description
Technical Field
The invention belongs to the technical field of the chemical fiber industry and specifically relates to a chemical fiber floating filament quality detection method and system based on a convolutional neural network.
Background
The main chemical fiber product is the filament spinning cake: filaments only as thick as a hair are wound into a cake at a take-up speed of 60 meters per second. This high-precision production process places strict demands on the environment, with "three-constant" requirements of constant temperature, constant humidity and constant airflow. Slight variations in environmental conditions tend to make filaments leave their intended tracks, producing a floating-filament phenomenon if a filament drifts to an adjacent track and a flying-filament phenomenon if it drifts elsewhere.
There are two main types of prior art solutions:
(1) Inspection and early warning of flying filaments by the naked eye: workers patrol the machine workshop and raise an alarm when they see abnormal conditions such as accumulated chemical fiber flying filaments. This approach is costly, time-consuming and labor-intensive, and the overall system operates inefficiently.
(2) Conventional image processing: a high-definition camera photographs each oil nozzle or hook (assuming one camera covers one or two part positions), and traditional image processing methods such as threshold segmentation and line detection are combined to detect abnormal conditions such as flying-filament accumulation. Although this saves labor cost, it brings new problems: detection accuracy is strongly affected by the environment, traditional image processing methods are not robust, and many devices must be deployed at high cost.
Although image processing can reduce labor cost, its anomaly detection capability is heavily affected by the illumination environment, false detections and missed detections arise easily, and the large amount of deployed hardware is expensive.
Disclosure of Invention
To address these problems, the invention provides a chemical fiber floating filament quality detection method and system based on a convolutional neural network, used for detecting and giving early warning of quality defects such as floating filaments on chemical fiber machines.
The invention is realized by the following technical scheme:
a chemical fiber floating silk quality detection method based on a convolutional neural network comprises the following steps:
s1, patrolling the robot by means of preset routes and parameters, acquiring a high-definition video of the chemical fiber machine through a camera, and unframing the high-definition video into an original picture;
s2, transmitting the original picture to a background computing server in a network transmission mode;
s3, positioning an oil nozzle and a hook in the original picture by using a CNN detection model;
s4, calculating parameters corresponding to the oil nozzle and the hook according to the information of S3, wherein the parameters comprise ambiguity, brightness and size ratio values, judging whether the parameter values are within a preset threshold range, if so, executing the step S5, and if not, executing the step 6;
s5, calculating an adjustment amount according to the parameters obtained in S4 through a feedback equation, adjusting focusing and fill-in light parameters of the camera based on the adjustment amount, and returning to the step S1;
s6, extracting part pictures in the original pictures, and judging whether the flying silks are abnormal or not by using a CNN identification model;
and S7, obtaining a final result by combining the identification judgment results of the multiple frames, and returning to the step S1.
Further, in step S2, the network is a local area network (intranet).
Further, the adjustment of the camera's focusing and fill-light parameters according to the display scale and imaging effect in step S5 specifically comprises: dynamically adjusting the camera's focal length and exposure according to the size of the located part frame, the sharpness of the part, and the brightness of the part.
Further, the brightness of a part is calculated by converting the RGB image to HSV and using the value (V) channel:
V = max(R, G, B);
the sharpness is obtained with a grayscale-variance function, calculated as:
D(f) = ∑y∑x(|f(x,y) − f(x,y−1)| + |f(x,y) − f(x+1,y)|)
The parameter feedback adjustment equation has the form f(x) = ax + b, where x is the illumination brightness of the part picture, f(x) is the camera exposure parameter after feedback adjustment, and a and b are fixed parameters obtained from large amounts of data statistics.
Further, after the final result is obtained in step S7, the method further comprises the following step:
S8, recording the current navigation coordinates and the original picture at that moment, and issuing an early warning report.
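As an illustration only (not part of the patent), the brightness and sharpness measures above can be sketched in NumPy. The function names are my own, and 8-bit RGB values in [0, 255] are assumed:

```python
import numpy as np

def brightness(rgb):
    """Mean of the HSV value channel; per pixel, V = max(R, G, B)."""
    v = rgb.max(axis=2)                        # value channel of each pixel
    return float(v.mean())

def smd_sharpness(gray):
    """Grayscale-variance (SMD) focus measure:
    D(f) = sum_y sum_x |f(x,y)-f(x,y-1)| + |f(x,y)-f(x+1,y)|."""
    g = gray.astype(np.float64)
    dy = np.abs(g[:, 1:] - g[:, :-1]).sum()    # horizontal neighbour differences
    dx = np.abs(g[1:, :] - g[:-1, :]).sum()    # vertical neighbour differences
    return dy + dx
```

The SMD value grows with local contrast, so a value below a tuned threshold flags a defocused part picture.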
A chemical fiber floating filament quality detection system based on a convolutional neural network comprises an image acquisition and transmission module, a part detection and positioning module, a parameter feedback module, a part identification and judgment module, and a comprehensive judgment module; wherein
the image acquisition and transmission module acquires the high-definition video of the chemical fiber machine captured by the robot while patrolling a preset route with preset parameters, deframes the video into original pictures, and transmits them to the background computing server;
the part detection and positioning module locates the oil nozzles and hooks in the original picture based on a CNN detection model, and calculates their parameters, comprising blur, brightness and size ratio;
the parameter feedback module calculates an adjustment amount from these parameters through a feedback equation and adjusts the camera's focusing and fill-light parameters based on the adjustment amount;
the part identification and judgment module judges, based on a CNN recognition model, whether a flying-filament anomaly has arisen in the part pictures of the original picture;
and the comprehensive judgment module combines the recognition results of multiple frames into a final result.
Preferably, the original picture is transmitted to the background computing server over a local area network (intranet).
Preferably, the adjustment of the camera's focusing and fill-light parameters according to the display scale and imaging effect specifically comprises: dynamically adjusting the camera's focal length and exposure according to the size of the located part frame, the sharpness of the part, and the brightness of the part.
Preferably, the brightness of a part is calculated by converting the RGB image to HSV and using the value (V) channel:
V = max(R, G, B);
the sharpness is obtained with a grayscale-variance function, calculated as:
D(f) = ∑y∑x(|f(x,y) − f(x,y−1)| + |f(x,y) − f(x+1,y)|)
The parameter feedback adjustment equation has the form f(x) = ax + b, where x is the illumination brightness of the part picture, f(x) is the camera exposure parameter after feedback adjustment, and a and b are fixed parameters obtained from large amounts of data statistics.
Preferably, the comprehensive judgment module judges the final result by multi-frame result fusion, and is further configured to record the current navigation coordinates and the original picture at that moment and issue an early warning report.
A computer-readable storage medium having a computer program stored thereon, wherein the program, when executed by a processor, implements the steps of the chemical fiber floating filament quality detection method based on a convolutional neural network.
A computer device comprising a memory, a processor, and a computer program stored in the memory and executable on the processor, wherein the processor, when executing the program, implements the steps of the chemical fiber floating filament quality detection method based on a convolutional neural network.
The invention provides a chemical fiber floating filament quality detection method and system based on a convolutional neural network, with the following advantages:
(1) Flying-filament anomalies are detected with a CNN deep learning scheme, and a "key-part anomaly detection" technique is proposed: the positions of key parts are located first, and filament-wadding accumulation is then identified at those positions. This novel scheme avoids the traditional approach of detecting the filament wadding directly, which is strongly affected by the illumination environment and struggles to detect wadding and floating filaments. Detecting floating-filament conditions indirectly is only slightly affected by illumination yet remains highly distinguishable, guaranteeing the stability of the system.
(2) A dynamic feedback mechanism adjusts the parameters of the acquisition device, so the camera's capture parameters can be tuned in real time to the dynamic scene and environment, greatly improving the quality of the acquired images; and a comprehensive evaluation mechanism synthesizes the information of multiple frames to guarantee the correctness of the system.
(3) Robot patrol photography with background AI model analysis replaces manual detection of abnormal conditions such as flying filaments, greatly reducing the labor cost of the whole process and improving the efficiency of flying-filament anomaly detection. The abnormal position and the corresponding on-site image are recorded automatically, and workers can be promptly notified to carry out repairs.
Drawings
The present invention is described in further detail below with reference to the accompanying drawings.
FIG. 1 is a flow chart of a method of the present invention;
FIG. 2 is a comparison of oil nozzle and hook pictures with and without flying filaments, (a) showing a normal oil nozzle and hook, and (b) an oil nozzle and hook where flying filaments have formed;
FIG. 3 is a view of part positioning, in which oil nozzles and hooks are located and distinguished in an original picture;
FIG. 4 is a diagram of the judgment results, in which the first, second, fourth and fifth frames of the upper row show normal parts and the sixth frame shows a part where flying filaments have formed.
Detailed Description
The technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are some, not all, embodiments of the present invention. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
In one embodiment, a patrol robot (a device with autonomous movement, audio/video capture, various sensors and a degree of intelligence) carries a high-definition camera that captures a series of videos of the whole machine, and analysis and detection are carried out by combining the information of each video frame. A currently popular deep learning CNN network locates the key parts of the chemical fiber machine and identifies anomalies in their details. The scheme not only reduces the cost of hardware deployment and manual inspection but also greatly improves the stability and detection capability of the system through deep learning CNN technology.
Observation and analysis of a large amount of data show that floating filaments mostly occur at the hooks and oil nozzles, or at the filament outlet below those positions. Normally there is only a single filament thread in this area, but when flying filaments occur, a large amount of balled filament wadding accumulates around the part, in sharp contrast to the normal state, as shown in fig. 2.
Based on this observation, the scheme adopts a key-part anomaly detection mode to monitor floating-filament anomalies. The position of each oil nozzle or hook is first located in the acquired image, as shown in fig. 3, and abnormal conditions are then identified there. This avoids the unstable approach of detecting the filament wadding directly, allows the parameters of the upstream acquisition device to be adjusted in real time according to the imaging effect, and, by finally combining multi-frame results, achieves an overall effect far better than traditional image methods.
The overall flow of the chemical fiber flying-filament quality detection method is shown in fig. 1 and comprises the following steps:
S1, the robot patrols along a preset route with preset parameters, acquires a high-definition video, and deframes it into corresponding pictures;
S2, the pictures are transmitted to the background computing server over the local area network;
S3, a CNN detection model locates the oil nozzles and hooks;
S4, it is judged whether an oil nozzle or hook appears blurred or too small; if so, step S5 is executed, otherwise step S6;
S5, the camera's focusing and fill-light parameters are adjusted according to the display scale and imaging effect, and the process returns to step S1;
S6, part pictures are extracted, and a CNN recognition model judges whether anomalies such as flying filaments have formed;
S7, the detection results of multiple frames are combined into a final result, and the process returns to step S1.
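The per-frame portion of the flow above (S3–S6) can be sketched as follows. This is a minimal illustration, not the patent's implementation: `detect`, `params_ok`, `classify` and `adjust_camera` are hypothetical stand-ins for the CNN models and camera API.

```python
from dataclasses import dataclass
from typing import Callable, List, Optional, Tuple

@dataclass
class Part:
    kind: str                            # "nozzle" or "hook"
    box: Tuple[int, int, int, int]       # located bounding box (x, y, w, h)

def inspect_frame(frame, detect: Callable, params_ok: Callable,
                  classify: Callable, adjust_camera: Callable) -> Optional[list]:
    """One S3-S6 pass over a single frame."""
    parts: List[Part] = detect(frame)                  # S3: locate nozzles/hooks
    if not all(params_ok(frame, p) for p in parts):    # S4: blur/brightness/size check
        adjust_camera(frame, parts)                    # S5: feedback adjustment
        return None                                    # re-capture on the next pass
    return [classify(frame, p) for p in parts]         # S6: fly/normal per part
```

Returning `None` models the loop back to S1: the frame is discarded and re-captured with adjusted camera parameters.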
Another embodiment provides a chemical fiber flying-filament quality detection system, comprising:
1. an image acquisition and transmission module:
While the robot patrols a designated route, it captures video of the chemical fiber machines along the way, deframes the video into corresponding pictures, and transmits the pictures over the network to the background GPU computing server for analysis by the AI algorithm model.
2. Part detection and positioning module:
The part detection and positioning module obtains the positions of parts such as oil nozzles and hooks from an input image and marks their specific coordinates as bounding boxes.
CenterNet is used for detection because it balances accuracy and performance: its anchor-free design makes detection relatively fast, and its clear, concise pipeline is easy to implement and deploy in engineering. The scheme locates targets as keypoints; the model predicts two branches, a class heatmap and a size heatmap. The class heatmap predicts the specific position (the x, y coordinates of the object in the image) and class (hook or oil nozzle) of each object, the size heatmap predicts the object's dimensions (width and height), and combining the two predictions recovers the position of the object to be detected.
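A rough sketch of the anchor-free decoding step just described, assuming heatmaps already produced by the network. The 3×3 peak test and array layout are illustrative assumptions, not the patent's implementation:

```python
import numpy as np

def decode_centernet(cls_hm, size_hm, thresh=0.5):
    """Decode a class heatmap (C, H, W) and size heatmap (2, H, W):
    a local maximum above `thresh` gives an object's class and centre,
    and the size heatmap at that cell gives its width and height."""
    C, H, W = cls_hm.shape
    boxes = []
    for c in range(C):
        hm = cls_hm[c]
        # max over each cell's 3x3 neighbourhood (padded with -inf at borders)
        pad = np.pad(hm, 1, constant_values=-np.inf)
        neigh = np.max([pad[i:i + H, j:j + W]
                        for i in range(3) for j in range(3)], axis=0)
        ys, xs = np.where((hm >= neigh) & (hm > thresh))   # peak cells
        for y, x in zip(ys, xs):
            w, h = size_hm[0, y, x], size_hm[1, y, x]
            boxes.append((c, x, y, w, h, float(hm[y, x])))
    return boxes
```

Each returned tuple is (class, centre x, centre y, width, height, score); class 0 could stand for "hook" and class 1 for "oil nozzle".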
3. A parameter feedback module:
The parameter feedback module adjusts the camera's focal length and exposure according to the size of the frame located in the image, the sharpness of the parts, and the brightness of the parts.
The brightness of a part is calculated by converting the RGB image of the part region to HSV and averaging the value (brightness) channel, where the value channel of the RGB-to-HSV conversion is:
V = max(R, G, B)
The size is obtained from the prediction box, and the sharpness is obtained with the SMD (grayscale variance) function, whose formula is as follows:
D(f) = ∑y∑x(|f(x,y) − f(x,y−1)| + |f(x,y) − f(x+1,y)|)
The feedback is negative linear feedback following a linear formula; the parameter feedback adjustment equation has the form f(x) = ax + b, where x is the input of the feedback and f(x) its output. Taking exposure feedback as an example, x is the illumination brightness of the part image, f(x) is the camera exposure parameter after adjustment, and both parameters a and b are obtained from experiments and effect statistics. Each input brightness value thus yields a corresponding new exposure parameter, so the system's capture parameters can be adjusted dynamically to obtain the best image quality.
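The negative linear feedback f(x) = ax + b might look like the following sketch. The constants a and b and the clipping range are placeholders, since the patent derives them from accumulated field statistics:

```python
def exposure_feedback(brightness, a=-0.5, b=120.0, lo=10.0, hi=200.0):
    """Negative linear feedback f(x) = a*x + b: darker part pictures
    (low x) yield a higher exposure, brighter ones a lower exposure.
    a, b, lo and hi are illustrative placeholder constants."""
    new_exposure = a * brightness + b
    return max(lo, min(hi, new_exposure))   # keep within the camera's valid range
```

With a < 0 the feedback counteracts the measured brightness, which is what "negative linear feedback" means here.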
4. Part identification and judgment module:
The part identification module identifies, from the part images, which parts are normal and which have flying filaments; judgment results are shown in fig. 4.
The adopted scheme is a fairly classical CNN multi-class network model, with MobileNetV3, which balances accuracy and performance, as the backbone. This network structure was found by neural architecture search (NAS), trades off speed against accuracy, and is well suited to this task. Normal parts serve as positive samples and part pictures with filament-wadding accumulation as negative samples for training; a series of sample-balancing schemes, data augmentation methods and the like are adopted during training, greatly improving the accuracy and generalization of the model.
5. A comprehensive judgment module:
The comprehensive judgment module synthesizes the results of multiple frames into the final overall result.
Because the robot is constantly moving, image information over a period of time is collected and integrated to judge the result. Images are generally collected in segments of k frames (k = 5 is chosen as one segment) for analysis; if at least half of the pictures in a k-frame segment show flying-filament features, a flying-filament anomaly is considered highly probable at that point in time.
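The k-frame comprehensive judgment can be sketched as a vote over disjoint 5-frame segments; the "at least half" threshold is my reading of the text, not a constant stated by the patent:

```python
def fuse_frames(frame_flags, k=5):
    """frame_flags: per-frame booleans/ints, 1 = flying filament detected.
    Returns one alarm decision per disjoint k-frame segment, raised when
    at least half of the frames in the segment detect an anomaly."""
    alarms = []
    for i in range(0, len(frame_flags) - k + 1, k):   # disjoint k-frame segments
        seg = frame_flags[i:i + k]
        alarms.append(sum(seg) >= (k + 1) // 2)       # "half of the pictures"
    return alarms
```

Fusing several frames this way suppresses single-frame false detections caused by motion blur or momentary lighting changes.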
The system then records the current navigation coordinates and the snapshot pictures of that moment, and issues an early warning report. Staff can query the corresponding alarm position and on-site snapshot through the background web system; after a quick manual screening of the pictures, abnormal conditions such as flying filaments can be judged accurately.
The invention also provides a computer device comprising a memory, a processor, and a computer program stored in the memory and executable on the processor; the processor, when executing the program, implements the steps of the chemical fiber floating filament quality detection method based on a convolutional neural network.
The above-mentioned embodiments are provided to further explain the objects, technical solutions and advantages of the present invention in detail, and it should be understood that the above-mentioned embodiments are only examples of the present invention and are not intended to limit the scope of the present invention. Any modification, equivalent replacement, improvement and the like made without departing from the spirit and scope of the invention are also within the protection scope of the invention.
Claims (10)
1. A chemical fiber floating filament quality detection method based on a convolutional neural network, characterized by comprising the following steps:
S1, a robot patrols along a preset route with preset parameters, acquires a high-definition video of the chemical fiber machine through a camera, and deframes the video into original pictures;
S2, the original pictures are transmitted to a background computing server over a network;
S3, a CNN detection model locates the oil nozzles and hooks in the original picture;
S4, parameters of each oil nozzle and hook, comprising blur, brightness and size ratio, are calculated from the results of S3, and it is judged whether the parameter values fall within a preset threshold range; if not, step S5 is executed, otherwise step S6;
S5, an adjustment amount is calculated from the parameters of S4 through a feedback equation, the camera's focusing and fill-light parameters are adjusted based on the adjustment amount, and the process returns to step S1;
S6, part pictures are extracted from the original picture, and a CNN recognition model judges whether a flying-filament anomaly is present;
S7, the recognition results of multiple frames are combined into a final result, and the process returns to step S1.
2. The method for detecting the quality of chemical fiber floating filaments based on a convolutional neural network according to claim 1, wherein in step S2 the network is a local area network (intranet).
3. The method for detecting the quality of chemical fiber floating filaments based on a convolutional neural network according to claim 1, wherein the adjustment of the camera's focusing and fill-light parameters according to the display scale and imaging effect in step S5 specifically comprises: dynamically adjusting the camera's focal length and exposure according to the size of the located part frame, the sharpness of the part, and the brightness of the part.
4. The method for detecting the quality of chemical fiber floating filaments based on a convolutional neural network according to claim 3, wherein the brightness of the part is calculated as follows: the picture is converted from RGB to the HSV color space and the value of the V channel is taken as the brightness, where
v = max(R, G, B);
the definition is obtained by a gray-variance function, calculated as
D(f) = ∑y∑x(|f(x,y) − f(x,y−1)| + |f(x,y) − f(x+1,y)|);
and the parameter feedback adjustment equation has the form f(x) = ax + b, where x is the illumination brightness of the part picture, f(x) is the camera exposure parameter after feedback adjustment, and a and b are fixed parameters.
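Using the standard RGB→HSV conversion, the three formulas of this claim can be sketched with NumPy. The mean-over-pixels aggregation of V, the image axis convention for f(x,y), and the placeholder values of a and b are assumptions; the claim fixes only the functional forms.

```python
import numpy as np

def brightness_v(rgb):
    """HSV V channel per the claim: v = max(R, G, B) at each pixel;
    the picture brightness is taken here as the mean V (an assumption)."""
    return float(np.max(rgb, axis=-1).mean())

def gray_variance_sharpness(gray):
    """Gray-variance definition measure from the claim:
    D(f) = sum_y sum_x (|f(x,y) - f(x,y-1)| + |f(x,y) - f(x+1,y)|)."""
    g = np.asarray(gray, dtype=np.float64)
    d_cols = np.abs(g[:, 1:] - g[:, :-1]).sum()   # |f(x,y) - f(x,y-1)| terms
    d_rows = np.abs(g[1:, :] - g[:-1, :]).sum()   # |f(x,y) - f(x+1,y)| terms
    return float(d_cols + d_rows)

def feedback_exposure(x, a=1.0, b=0.0):
    """Linear feedback equation f(x) = a*x + b; a and b are the patent's
    fixed, data-fitted parameters (the defaults here are placeholders)."""
    return a * x + b
```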
5. The method for detecting the quality of chemical fiber floating filaments based on a convolutional neural network according to claim 1, wherein after the final result is obtained in step S7, the method further comprises:
S8, recording the navigation system coordinates at the current moment together with the original picture at that moment, and issuing an early warning report.
6. A chemical fiber floating filament quality detection system based on a convolutional neural network, characterized by comprising an image acquisition and transmission module, a part detection and positioning module, a parameter feedback module, a part recognition and judgment module and a comprehensive judgment module; wherein
the image acquisition and transmission module is configured to acquire a high-definition video of the chemical fiber machine captured while the robot patrols according to a preset route and parameters, deframe the high-definition video into original pictures, and transmit the original pictures to the background computing server;
the part detection and positioning module is configured to locate the oil nozzle and the hook in the original picture based on a CNN detection model, and to calculate the parameters corresponding to the oil nozzle and the hook, the parameters comprising blur, brightness and size-ratio values;
the parameter feedback module calculates an adjustment amount through a feedback equation based on the parameters, and adjusts the focusing and fill-light parameters of the camera based on the adjustment amount;
the part recognition and judgment module judges, based on a CNN recognition model, whether abnormal floating filaments appear in the part pictures of the original picture;
and the comprehensive judgment module is configured to obtain a final result by combining the recognition results of multiple frames.
7. The system according to claim 6, wherein the original picture is transmitted to the background computing server over a local area network.
8. The system according to claim 6, wherein adjusting the focusing and fill-light parameters of the camera according to the display ratio and effect specifically comprises: dynamically adjusting the focal length and exposure of the camera according to the size of the part frame located in the picture, the definition of the part, and the brightness of the part.
9. The system according to claim 8, wherein the brightness of the part is calculated as follows: the picture is converted from RGB to the HSV color space and the value of the V channel is taken as the brightness, where
v = max(R, G, B);
the definition is obtained by a gray-variance function, calculated as
D(f) = ∑y∑x(|f(x,y) − f(x,y−1)| + |f(x,y) − f(x+1,y)|);
and the parameter feedback adjustment equation has the form f(x) = ax + b, where x is the illumination brightness of the part picture, f(x) is the camera exposure parameter after feedback adjustment, and a and b are fixed parameters obtained from statistics over a large amount of data.
10. The system according to claim 6, wherein the comprehensive judgment module is configured to determine the final result by multi-frame result fusion, record the navigation system coordinates at the current moment and the original picture at that moment, and issue an early warning report.
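The claims leave the multi-frame fusion rule of step S7 / claim 10 unspecified; a simple majority vote over per-frame CNN verdicts is one plausible sketch. The `min_votes` threshold and the boolean verdict encoding are assumptions, not part of the patent.

```python
def fuse_frames(frame_verdicts, min_votes=None):
    """Majority-vote fusion over per-frame verdicts (assumed rule).

    frame_verdicts: list of booleans, True = abnormal floating filament
    detected in that frame. Returns the fused final verdict."""
    if min_votes is None:
        min_votes = len(frame_verdicts) // 2 + 1  # strict majority by default
    return sum(frame_verdicts) >= min_votes
```

Lowering `min_votes` trades false alarms for sensitivity, e.g. `min_votes=1` flags a filament seen in any single frame.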
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202110743078.2A CN113487166A (en) | 2021-06-30 | 2021-06-30 | Chemical fiber floating filament quality detection method and system based on convolutional neural network |
Publications (1)
Publication Number | Publication Date |
---|---|
CN113487166A true CN113487166A (en) | 2021-10-08 |
Family
ID=77937353
Country Status (1)
Country | Link |
---|---|
CN (1) | CN113487166A (en) |
Patent Citations (11)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN106210471A (en) * | 2016-07-19 | 2016-12-07 | 成都百威讯科技有限责任公司 | A kind of outdoor face recognition method and system |
CN110308147A (en) * | 2019-05-31 | 2019-10-08 | 吴江朗科化纤有限公司 | A kind of intelligent inspection method of synthetic fiber spinning technique the Silk Road |
CN110927161A (en) * | 2019-10-18 | 2020-03-27 | 中国移动通信集团浙江有限公司嘉兴分公司 | Visual inspection method suitable for flying fibers and flying impurities |
CN110992343A (en) * | 2019-10-18 | 2020-04-10 | 中国移动通信集团浙江有限公司嘉兴分公司 | Flying filament flying impurity visual inspection method based on robot, storage medium and electronic equipment |
CN111047655A (en) * | 2020-01-10 | 2020-04-21 | 北京盛开互动科技有限公司 | High-definition camera cloth defect detection method based on convolutional neural network |
CN111899216A (en) * | 2020-06-16 | 2020-11-06 | 西安交通大学 | Abnormity detection method for insulator fastener of high-speed rail contact network |
CN112465784A (en) * | 2020-11-27 | 2021-03-09 | 广州运达智能科技有限公司 | Method for detecting appearance abnormity of subway clamp |
CN112489014A (en) * | 2020-11-27 | 2021-03-12 | 广州高新兴机器人有限公司 | Chemical fiber impurity floating detection method based on vision |
CN112489015A (en) * | 2020-11-27 | 2021-03-12 | 广州高新兴机器人有限公司 | Chemical fiber impurity floating identification method for mobile robot |
CN112505051A (en) * | 2020-11-27 | 2021-03-16 | 广州高新兴机器人有限公司 | High-precision fiber floating filament quality detection method based on laser ray |
CN112651928A (en) * | 2020-12-08 | 2021-04-13 | 东华大学 | Polyester filament yarn uniformity online detection system based on dynamic convolution neural network |
Non-Patent Citations (1)
Title |
---|
Li Yuchao et al.: "Application of Green Intelligent Manufacturing in the Chemical Fiber Industry", Synthetic Fiber (合成纤维), vol. 49, no. 8, pages 203-213 *
Cited By (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN114937028A (en) * | 2022-06-21 | 2022-08-23 | 苏州上舜精密工业科技有限公司 | Intelligent identification-based quality detection method and system for linear sliding table module |
CN114937028B (en) * | 2022-06-21 | 2023-12-08 | 苏州上舜精密工业科技有限公司 | Intelligent identification and recognition linear sliding table module quality detection method and system |
CN115100209A (en) * | 2022-08-28 | 2022-09-23 | 电子科技大学 | Camera-based image quality correction method and system |
CN115100209B (en) * | 2022-08-28 | 2022-11-08 | 电子科技大学 | Camera-based image quality correction method and correction system |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN110738127B (en) | Helmet identification method based on unsupervised deep learning neural network algorithm | |
CN110084165B (en) | Intelligent identification and early warning method for abnormal events in open scene of power field based on edge calculation | |
CN111881730A (en) | Wearing detection method for on-site safety helmet of thermal power plant | |
CN111047818A (en) | Forest fire early warning system based on video image | |
CN109902633A (en) | Accident detection method and device based on the camera supervised video of fixed bit | |
CN111626139B (en) | Accurate detection method for fault information of IT equipment in machine room | |
CN109167997B (en) | Video quality diagnosis system and method | |
CN113487166A (en) | Chemical fiber floating filament quality detection method and system based on convolutional neural network | |
CN112560816A (en) | Equipment indicator lamp identification method and system based on YOLOv4 | |
CN108596883B (en) | Aerial image vibration damper slip fault diagnosis method based on deep learning and distance constraint | |
CN112116582B (en) | Method for detecting and identifying cigarettes in inventory or display scene | |
CN110096945B (en) | Indoor monitoring video key frame real-time extraction method based on machine learning | |
CN112613361A (en) | Intelligent behavior analysis system for security monitoring | |
CN112419261B (en) | Visual acquisition method and device with abnormal point removing function | |
KR20220023726A (en) | Deep learning based realtime process monitoring system and method | |
CN112528861A (en) | Foreign matter detection method and device applied to track bed in railway tunnel | |
CN116824726A (en) | Campus environment intelligent inspection method and system | |
CN112906488A (en) | Security protection video quality evaluation system based on artificial intelligence | |
CN117351271A (en) | Fault monitoring method and system for high-voltage distribution line monitoring equipment and storage medium thereof | |
CN117496129A (en) | YOLOv 7-based improved factory safety wearing target detection method | |
CN112260402B (en) | Monitoring method for state of intelligent substation inspection robot based on video monitoring | |
Chen et al. | Instance Segmentation of Grape Berry Images Based on Improved Mask R-Cnn | |
CN114355083A (en) | Inspection robot fault identification method and system based on artificial intelligence algorithm | |
CN113657314A (en) | Method and system for recognizing dynamic and static unsafe behaviors in industrial environment | |
CN117876932B (en) | Moving object recognition system based on low-illumination environment |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||