CN116912201B - Optical fiber fusion quality prediction system - Google Patents
Optical fiber fusion quality prediction system
- Publication number
- CN116912201B (application CN202310861345.5A)
- Authority
- CN
- China
- Prior art keywords
- pixel point
- optical fiber
- type
- sbp
- axis distance
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/0002—Inspection of images, e.g. flaw detection
- G06T7/0004—Industrial image inspection
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/04—Architecture, e.g. interconnection topology
- G06N3/045—Combinations of networks
- G06N3/0455—Auto-encoder networks; Encoder-decoder networks
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/08—Learning methods
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/40—Extraction of image or video features
- G06V10/44—Local feature extraction by analysis of parts of the pattern, e.g. by detecting edges, contours, loops, corners, strokes or intersections; Connectivity analysis, e.g. of connected components
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/70—Arrangements for image or video recognition or understanding using pattern recognition or machine learning
- G06V10/764—Arrangements for image or video recognition or understanding using pattern recognition or machine learning using classification, e.g. of video objects
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/70—Arrangements for image or video recognition or understanding using pattern recognition or machine learning
- G06V10/77—Processing image or video features in feature spaces; using data integration or data reduction, e.g. principal component analysis [PCA] or independent component analysis [ICA] or self-organising maps [SOM]; Blind source separation
- G06V10/774—Generating sets of training patterns; Bootstrap methods, e.g. bagging or boosting
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/70—Arrangements for image or video recognition or understanding using pattern recognition or machine learning
- G06V10/82—Arrangements for image or video recognition or understanding using pattern recognition or machine learning using neural networks
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/20—Special algorithmic details
- G06T2207/20081—Training; Learning
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/20—Special algorithmic details
- G06T2207/20084—Artificial neural networks [ANN]
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/30—Subject of image; Context of image processing
- G06T2207/30108—Industrial image inspection
-
- Y—GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
- Y02—TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
- Y02P—CLIMATE CHANGE MITIGATION TECHNOLOGIES IN THE PRODUCTION OR PROCESSING OF GOODS
- Y02P90/00—Enabling technologies with a potential contribution to greenhouse gas [GHG] emissions mitigation
- Y02P90/30—Computing systems specially adapted for manufacturing
Landscapes
- Engineering & Computer Science (AREA)
- Theoretical Computer Science (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Evolutionary Computation (AREA)
- Software Systems (AREA)
- Computer Vision & Pattern Recognition (AREA)
- Health & Medical Sciences (AREA)
- Artificial Intelligence (AREA)
- Computing Systems (AREA)
- General Health & Medical Sciences (AREA)
- Multimedia (AREA)
- Databases & Information Systems (AREA)
- Medical Informatics (AREA)
- Life Sciences & Earth Sciences (AREA)
- Biomedical Technology (AREA)
- Biophysics (AREA)
- Computational Linguistics (AREA)
- Data Mining & Analysis (AREA)
- Molecular Biology (AREA)
- General Engineering & Computer Science (AREA)
- Mathematical Physics (AREA)
- Quality & Reliability (AREA)
- Mechanical Coupling Of Light Guides (AREA)
Abstract
The invention provides an optical fiber fusion splice quality prediction system and relates to the technical field of optical fiber fusion splicing. The memory of the system stores a computer program which, when executed by the processor of the system, implements the following steps: acquire a target optical fiber image IM_0; obtain a first preset set P'; detect the straight line segments in the edge image IM'_0 corresponding to the target optical fiber image IM_0; traverse P' and judge whether each p'_a is a first-type peak pixel point or a second-type peak pixel point; acquire the y-axis distance py_a and x-axis distance px_a corresponding to p'_a; acquire the y-axis distance syp'_a, x-axis distance sxp'_a and pixel point type corresponding to sp'_a; construct a first target vector F_1^1 and a first label vector F_1^2 for the first end of the first optical fiber in IM_0; construct F_2^1 and F_2^2; and input F_1^1, F_1^2, F_2^1 and F_2^2 into a trained first neural network model for inference. The invention can predict fusion splice quality.
Description
Technical Field
The invention relates to the technical field of optical fiber fusion splicing, and in particular to an optical fiber fusion splice quality prediction system.
Background
In an ideal state, the end face of an optical fiber is a smooth plane. In actual preparation, however, the end face is not perfectly smooth: burrs of varying number and form may be present on it, and the number and form of these burrs affect, to different degrees, the splice quality after the fibers are fused. How to predict the post-fusion quality of optical fibers whose end faces carry burrs is a problem to be solved.
Disclosure of Invention
In view of the above technical problem, the invention adopts the following technical scheme: an optical fiber fusion splice quality prediction system comprising a processor and a memory storing a computer program which, when executed by the processor, performs the following steps:
S100, obtain a target optical fiber image IM_0. IM_0 shows a first end of a first optical fiber and a second end of a second optical fiber to be fused together; the shooting direction of IM_0 is perpendicular to the extending direction of the first optical fiber and the second optical fiber corresponding to IM_0; the extending direction of the first optical fiber and the second optical fiber is consistent with the x-axis direction of IM_0.
S200, traverse the edge pixel point set P of the first end in IM_0, and add the peak pixel points in P to a first preset set to obtain the first preset set P' = (p'_1, p'_2, …, p'_a, …, p'_A); p'_a is the a-th pixel point added to the first preset set, a ranges from 1 to A, and A is the number of pixel points added to the first preset set; the first preset set is initialized to Null.
S300, detect the straight line segments in the edge image IM'_0 corresponding to the target optical fiber image IM_0.
S400, traverse P'; if the pixel point corresponding to p'_a in IM'_0 is an intersection pixel point of straight line segments, judge that p'_a is a first-type peak pixel point; otherwise, judge that p'_a is a second-type peak pixel point.
S500, traverse P' to obtain the y-axis distance py_a and x-axis distance px_a corresponding to p'_a; py_a is the y-axis distance between p'_a and the pixel point with the smallest y coordinate in P, and px_a is the x-axis distance between p'_a and the pixel point with the smallest x coordinate in P.
S600, obtain the y-axis distance syp'_a, the x-axis distance sxp'_a and the pixel point type corresponding to sp'_a. When the pixel point corresponding to sp'_a in IM'_0 is a pixel point on a straight line segment, judge that the pixel point type of sp'_a is a first designated type; otherwise, judge that the pixel point type of sp'_a is a second designated type. sp'_a is the pixel point in Q with the same y coordinate as p'_a, and Q is the edge pixel point set of the second end in IM_0; syp'_a is the y-axis distance between sp'_a and the pixel point with the smallest y coordinate in Q, and sxp'_a is the x-axis distance between sp'_a and the pixel point with the largest x coordinate in Q.
S700, construct a first target vector F_1^1 and a first label vector F_1^2 for the first end of the first optical fiber in IM_0, F_1^1 = (((py_1, px_1), (syp'_1, sxp'_1)), ((py_2, px_2), (syp'_2, sxp'_2)), …, ((py_a, px_a), (syp'_a, sxp'_a)), …, ((py_A, px_A), (syp'_A, sxp'_A))), F_1^2 = ((bp_1, sbp_1), (bp_2, sbp_2), …, (bp_a, sbp_a), …, (bp_A, sbp_A)); bp_a is the peak pixel point label corresponding to p'_a: when p'_a is a first-type peak pixel point, bp_a = 1; when p'_a is a second-type peak pixel point, bp_a = 0. sbp_a is the designated pixel point label corresponding to sp'_a: when the pixel point type of sp'_a is the first designated type, sbp_a = 1; when the pixel point type of sp'_a is the second designated type, sbp_a = 0.
S800, construct a second target vector F_2^1 and a second label vector F_2^2 for the second end of the second optical fiber in IM_0.
S900, input F_1^1, F_1^2, F_2^1 and F_2^2 into a trained first neural network model for inference; the trained first neural network model is used for predicting the quality of the fused optical fibers after the optical fibers are fusion spliced with a preset current for a preset time.
The beneficial effects of the invention include at least the following:
the invention acquires images of a first optical fiber and a second optical fiber to be welded, namely a target optical fiber image IM 0 Due to IM 0 The shooting direction of (a) is perpendicular to the extending direction of the first optical fiber and the second optical fiber, so that the target optical fiber image IM 0 The optical fiber seen in is a side view based on the target fiber image IM 0 The invention obtains the peak pixel types corresponding to the peak pixel in the edge pixel, wherein the burrs corresponding to the peak pixel of the first type are sharp burrs, and the peak pixel of the first type is generally the intersection point of two straight line segments; the burrs corresponding to the second type of peak pixel points are smoother burrs, and the type of peak pixel points are generally on a curve; by associating relative position information (indicative of the size of the burr at the end) corresponding to the peak pixel of the edge pixelsThe type information (used for representing the burr type of the end), the relative position information (used for representing the size of the burr of the opposite end) of the pixel point of the opposite end and the type information (used for representing the burr type of the opposite end) of the pixel point of the opposite end are input into a trained first neural network model (used for predicting the quality of the fusion-spliced optical fibers) to be inferred, so that the quality of the first optical fibers and the second optical fibers after fusion-spliced is obtained, and the purpose of predicting the fusion-spliced quality is achieved.
The foregoing is only an overview of the technical solution of the invention. In order that the technical means of the invention may be understood more clearly and implemented in accordance with the contents of the specification, and in order to make the above and other objects, features and advantages of the invention more apparent, specific embodiments of the invention are described in detail below with reference to the accompanying drawings.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present invention, the drawings required for the description of the embodiments will be briefly described below, and it is apparent that the drawings in the following description are only some embodiments of the present invention, and other drawings may be obtained according to these drawings without inventive effort for a person skilled in the art.
Fig. 1 is a flowchart of a method executed by a processor of an optical fiber fusion quality prediction system according to an embodiment of the present invention.
Detailed Description
The following description of the embodiments of the present invention will be made clearly and completely with reference to the accompanying drawings, in which it is apparent that the embodiments described are only some embodiments of the present invention, but not all embodiments. All other embodiments, which can be made by a person skilled in the art without making any inventive effort, are intended to be within the scope of the present invention based on the embodiments of the present invention.
Embodiment 1
The present embodiment provides an optical fiber fusion splice quality prediction system, the system including a processor and a memory storing a computer program which, when executed by the processor, performs the following steps:
S100, obtain a target optical fiber image IM_0. IM_0 shows a first end of a first optical fiber and a second end of a second optical fiber to be fused together; the shooting direction of IM_0 is perpendicular to the extending direction of the first optical fiber and the second optical fiber corresponding to IM_0; the extending direction of the first optical fiber and the second optical fiber is consistent with the x-axis direction of IM_0.
In this embodiment, the extending direction of the first optical fiber is the same as the extending direction of the second optical fiber, and the side images of the first optical fiber and the second optical fiber can be acquired by taking the direction perpendicular to the extending direction of the first optical fiber and the second optical fiber as the shooting direction.
In this embodiment, the positive x-axis direction of the target optical fiber image IM_0 points from the first optical fiber to the second optical fiber, and the positive y-axis direction is perpendicular to the x-axis direction and points from the upper side of the first optical fiber to the lower side.
S200, traverse the edge pixel point set P of the first end in IM_0, and add the peak pixel points in P to a first preset set to obtain the first preset set P' = (p'_1, p'_2, …, p'_a, …, p'_A); p'_a is the a-th pixel point added to the first preset set, a ranges from 1 to A, and A is the number of pixel points added to the first preset set; the first preset set is initialized to Null.
In this embodiment, a peak pixel point of the first end is an edge pixel point of the first end whose x coordinate is larger than the x coordinates of its adjacent edge pixel points; a trough pixel point of the first end is an edge pixel point of the first end whose x coordinate is smaller than the x coordinates of its adjacent edge pixel points. Optionally, S200 includes:
S210, obtain the set HP_n of eight-neighborhood pixel points of p_n in P; p_n is the n-th edge pixel point in P, n ranges from 1 to N, and N is the number of edge pixel points in P.
Specifically, P = (p_1, p_2, …, p_n, …, p_N).
S220, if the x coordinate of p_n is the largest compared with the pixel points in HP_n, judge that p_n is a peak pixel point; if the x coordinate of p_n is the smallest compared with the pixel points in HP_n, judge that p_n is a trough pixel point.
Optionally, a Canny operator is used to obtain the edge pixel point set P of the first end in IM_0. Those skilled in the art will appreciate that any method for obtaining edge pixel points in the prior art falls within the scope of the present invention.
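The following is a minimal Python sketch of one way to realise S200–S220: a Canny edge map is computed and peak / trough pixel points are found by comparing each edge pixel point with its eight-neighborhood edge pixels. The function name, the Canny thresholds and the (y, x) tuple representation are illustrative assumptions and are not prescribed by this embodiment.

```python
import cv2
import numpy as np

def first_end_peak_pixels(gray_image, low=50, high=150):
    edges = cv2.Canny(gray_image, low, high)        # edge image; thresholds are assumed values
    ys, xs = np.nonzero(edges)
    edge_set = set(zip(ys.tolist(), xs.tolist()))    # edge pixel point set P as (y, x) tuples

    peaks, troughs = [], []
    for (y, x) in edge_set:
        # HP_n: eight-neighborhood pixels of p_n that are themselves edge pixels
        neigh_x = [x + dx for dy in (-1, 0, 1) for dx in (-1, 0, 1)
                   if (dy, dx) != (0, 0) and (y + dy, x + dx) in edge_set]
        if not neigh_x:
            continue
        if x > max(neigh_x):
            peaks.append((y, x))      # peak pixel point of the first end (S220)
        elif x < min(neigh_x):
            troughs.append((y, x))    # trough pixel point of the first end (S220)
    return edges, edge_set, peaks, troughs
```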
S300, detect the straight line segments in the edge image IM'_0 corresponding to the target optical fiber image IM_0.
Those skilled in the art will appreciate that any method of edge image acquisition in the prior art falls within the scope of the present invention.
Optionally, a Hough transform is used to detect the straight line segments in the edge image IM'_0 corresponding to the target optical fiber image IM_0.
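As a non-limiting illustration of S300/S400, the sketch below uses OpenCV's probabilistic Hough transform to detect straight line segments in the edge image and to mark which pixels lie on a segment and which lie on more than one segment (intersection pixel points). All parameter values are assumptions chosen for illustration.

```python
import cv2
import numpy as np

def line_and_intersection_masks(edges):
    """edges: binary edge image IM'_0 as an 8-bit array (e.g. from cv2.Canny)."""
    h, w = edges.shape
    coverage = np.zeros((h, w), dtype=np.int32)   # number of detected segments covering each pixel
    segments = cv2.HoughLinesP(edges, rho=1, theta=np.pi / 180, threshold=20,
                               minLineLength=5, maxLineGap=2)
    if segments is not None:
        for x1, y1, x2, y2 in segments[:, 0]:
            seg = np.zeros((h, w), dtype=np.uint8)
            cv2.line(seg, (int(x1), int(y1)), (int(x2), int(y2)), 1, thickness=1)
            coverage += seg
    on_line = coverage >= 1        # pixel lies on a straight line segment (first designated type, S600)
    intersection = coverage >= 2   # pixel lies on two or more segments (first-type peak test, S400)
    return on_line, intersection
```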
S400, traverse P'; if the pixel point corresponding to p'_a in IM'_0 is an intersection pixel point of straight line segments, judge that p'_a is a first-type peak pixel point; otherwise, judge that p'_a is a second-type peak pixel point.
It should be understood that if a pixel point lies on both a first straight line segment and a second straight line segment, then that pixel point is an intersection pixel point of the two straight line segments.
S500, traverse P' to obtain the y-axis distance py_a and x-axis distance px_a corresponding to p'_a; py_a is the y-axis distance between p'_a and the pixel point with the smallest y coordinate in P, and px_a is the x-axis distance between p'_a and the pixel point with the smallest x coordinate in P.
S600, obtain the y-axis distance syp'_a, the x-axis distance sxp'_a and the pixel point type corresponding to sp'_a. When the pixel point corresponding to sp'_a in IM'_0 is a pixel point on a straight line segment, judge that the pixel point type of sp'_a is a first designated type; otherwise, judge that the pixel point type of sp'_a is a second designated type. sp'_a is the pixel point in Q with the same y coordinate as p'_a, and Q is the edge pixel point set of the second end in IM_0; syp'_a is the y-axis distance between sp'_a and the pixel point with the smallest y coordinate in Q, and sxp'_a is the x-axis distance between sp'_a and the pixel point with the largest x coordinate in Q.
Optionally, a Canny operator is used to obtain the edge pixel point set Q of the second end in IM_0. Those skilled in the art will appreciate that any method for obtaining edge pixel points in the prior art falls within the scope of the present invention.
Specifically, Q = (q_1, q_2, …, q_m, …, q_M); m ranges from 1 to M, and M is the number of edge pixel points of the second end in IM_0.
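A minimal sketch of S500/S600 follows, assuming the edge sets P and Q are stored as (y, x) tuples and that the on-line mask of the previous sketch is available; the tie-breaking choice when several second-end pixels share the same y coordinate is an assumption, since this embodiment leaves it open.

```python
def relative_position_features(peak, P, Q, on_line):
    """peak: a first-end peak pixel point p'_a as (y, x); P, Q: edge sets of the two ends;
    on_line: boolean mask indexed [y, x] telling whether a pixel lies on a straight line segment."""
    y_a, x_a = peak
    py_a = y_a - min(y for y, _ in P)             # distance to the smallest y coordinate in P (S500)
    px_a = x_a - min(x for _, x in P)             # distance to the smallest x coordinate in P (S500)

    same_row = [(y, x) for (y, x) in Q if y == y_a]
    if not same_row:
        return py_a, px_a, None                    # no opposite-end pixel shares this y coordinate
    sp_a = min(same_row, key=lambda q: q[1])       # assumed choice: innermost second-end pixel
    syp_a = sp_a[0] - min(y for y, _ in Q)         # distance to the smallest y coordinate in Q (S600)
    sxp_a = max(x for _, x in Q) - sp_a[1]         # distance to the largest x coordinate in Q (S600)
    sbp_a = 1 if on_line[sp_a[0], sp_a[1]] else 0  # first / second designated type (S600)
    return py_a, px_a, (syp_a, sxp_a, sbp_a)
```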
S700, construct a first target vector F_1^1 and a first label vector F_1^2 for the first end of the first optical fiber in IM_0, F_1^1 = (((py_1, px_1), (syp'_1, sxp'_1)), ((py_2, px_2), (syp'_2, sxp'_2)), …, ((py_a, px_a), (syp'_a, sxp'_a)), …, ((py_A, px_A), (syp'_A, sxp'_A))), F_1^2 = ((bp_1, sbp_1), (bp_2, sbp_2), …, (bp_a, sbp_a), …, (bp_A, sbp_A)); bp_a is the peak pixel point label corresponding to p'_a: when p'_a is a first-type peak pixel point, bp_a = 1; when p'_a is a second-type peak pixel point, bp_a = 0. sbp_a is the designated pixel point label corresponding to sp'_a: when the pixel point type of sp'_a is the first designated type, sbp_a = 1; when the pixel point type of sp'_a is the second designated type, sbp_a = 0.
Specifically, when (bp_a, sbp_a) = (1, 1), it characterizes that the pixel point corresponding to p'_a in IM'_0 is an intersection pixel point of straight line segments, the burr at the position corresponding to p'_a on the first end is a relatively sharp burr, and the burr at the position corresponding to sp'_a on the second end is also a relatively sharp burr. When (bp_a, sbp_a) = (0, 1), the pixel point corresponding to p'_a in IM'_0 is not an intersection pixel point of straight line segments, the burr at the position corresponding to p'_a on the first end is a smoother burr, and the burr at the position corresponding to sp'_a on the second end is a sharp burr. When (bp_a, sbp_a) = (0, 0), the pixel point corresponding to p'_a in IM'_0 is not an intersection pixel point of straight line segments, the burr at the position corresponding to p'_a on the first end is a smoother burr, and the burr at the position corresponding to sp'_a on the second end is a smoother burr. When (bp_a, sbp_a) = (1, 0), the pixel point corresponding to p'_a in IM'_0 is an intersection pixel point of straight line segments, the burr at the position corresponding to p'_a on the first end is a relatively sharp burr, and the burr at the position corresponding to sp'_a on the second end is a smoother burr.
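The assembly of F_1^1 and F_1^2 in S700 can then be sketched as follows; the intermediate per-peak record format is an assumed convenience, not part of this embodiment.

```python
def build_first_end_vectors(peak_records):
    """peak_records: list of dicts with keys 'py', 'px', 'syp', 'sxp',
    'is_first_type_peak', 'is_first_designated_type', one per first-end peak pixel."""
    F1_target, F1_label = [], []
    for rec in peak_records:
        F1_target.append(((rec['py'], rec['px']), (rec['syp'], rec['sxp'])))
        bp_a = 1 if rec['is_first_type_peak'] else 0          # sharp burr on the first end
        sbp_a = 1 if rec['is_first_designated_type'] else 0   # sharp burr on the second end
        F1_label.append((bp_a, sbp_a))
    return F1_target, F1_label
```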
S800, construct a second target vector F_2^1 and a second label vector F_2^2 for the second end of the second optical fiber in IM_0.
In this embodiment, the process of constructing the second target vector F_2^1 and the second label vector F_2^2 for the second end of the second optical fiber in IM_0 is similar to the above process of constructing the first target vector F_1^1 and the first label vector F_1^2 for the first end of the first optical fiber in IM_0, and includes: traverse the edge pixel point set Q of the second end in IM_0, and add the peak pixel points in Q to a second preset set to obtain the second preset set Q', the second preset set being initialized to Null; traverse Q' and judge the type of each peak pixel point in Q'; traverse Q' and obtain the y-axis distance and x-axis distance corresponding to each peak pixel point in Q', where the y-axis distance corresponding to a peak pixel point in Q' is the y-axis distance between that peak pixel point and the pixel point with the smallest y coordinate in Q, and the x-axis distance corresponding to a peak pixel point in Q' is the x-axis distance between that peak pixel point and the pixel point with the largest x coordinate in Q; obtain the y-axis distance, x-axis distance and pixel point type corresponding to the pixel point in P with the same y coordinate as each peak pixel point in Q'; and construct the second target vector F_2^1 and the second label vector F_2^2 for the second end of the second optical fiber in IM_0. The construction of F_2^1 and F_2^2 is similar to that of F_1^1 and F_1^2 and is not described in detail here.
In this embodiment, a peak pixel point of the second end is an edge pixel point of the second end whose x coordinate is smaller than the x coordinates of its adjacent edge pixel points; a trough pixel point of the second end is an edge pixel point of the second end whose x coordinate is larger than the x coordinates of its adjacent edge pixel points. Optionally, the method for judging whether q_m is a peak pixel point or a trough pixel point includes:
S810, obtain the set HQ_m of eight-neighborhood pixel points of q_m in Q.
S820, if the x coordinate of q_m is the largest compared with the pixel points in HQ_m, judge that q_m is a trough pixel point; if the x coordinate of q_m is the smallest compared with the pixel points in HQ_m, judge that q_m is a peak pixel point.
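A short sketch of the mirrored second-end test of S810/S820, under the same (y, x) tuple assumption as above:

```python
def second_end_peak_pixels(edge_set):
    """edge_set: second-end edge pixel point set Q as a set of (y, x) tuples."""
    peaks = []
    for (y, x) in edge_set:
        neigh_x = [x + dx for dy in (-1, 0, 1) for dx in (-1, 0, 1)
                   if (dy, dx) != (0, 0) and (y + dy, x + dx) in edge_set]
        if neigh_x and x < min(neigh_x):   # mirrored condition: smallest x among edge neighbours
            peaks.append((y, x))
    return peaks
```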
S900, input F_1^1, F_1^2, F_2^1 and F_2^2 into a trained first neural network model for inference; the trained first neural network model is used for predicting the quality of the fused optical fibers after the optical fibers are fusion spliced with a preset current for a preset time.
Optionally, in this embodiment, the preset current and the preset time are both empirical values.
Specifically, the training process of the first neural network model includes:
S010, obtain an optical fiber image sample set IM = (IM_1, IM_2, …, IM_r, …, IM_R); each IM_r shows a first end of a first optical fiber and a second end of a second optical fiber to be fused together; the shooting direction of each IM_r is perpendicular to the extending direction of the first optical fiber and the second optical fiber to be fused together in IM_r; IM_r is the r-th optical fiber image sample, r ranges from 1 to R, and R is the number of optical fiber image samples.
In this embodiment, the extending direction of the first optical fiber in each IM_r is the same as that of the second optical fiber, so the side images of the first optical fiber and the second optical fiber in IM_r can be acquired by taking the direction perpendicular to their extending direction as the shooting direction.
In this embodiment, the positive x-axis direction of each IM_r points from the first optical fiber of IM_r to the second optical fiber, and the positive y-axis direction is perpendicular to the x-axis direction and points from the upper side of the first optical fiber to the lower side.
S020, traverse IM and construct a first target vector F_1^{1,r} and a first label vector F_1^{2,r} for the first end of IM_r, F_1^{1,r} = (((py_{1,r}, px_{1,r}), (syp'_{1,r}, sxp'_{1,r})), ((py_{2,r}, px_{2,r}), (syp'_{2,r}, sxp'_{2,r})), …, ((py_{e,r}, px_{e,r}), (syp'_{e,r}, sxp'_{e,r})), …, ((py_{E,r}, px_{E,r}), (syp'_{E,r}, sxp'_{E,r}))), F_1^{2,r} = ((bp_{1,r}, sbp_{1,r}), (bp_{2,r}, sbp_{2,r}), …, (bp_{e,r}, sbp_{e,r}), …, (bp_{E,r}, sbp_{E,r})); bp_{e,r} is the peak pixel point label corresponding to p'_{e,r}, p'_{e,r} is the e-th peak pixel point of the first end of the first optical fiber in IM_r, e ranges from 1 to E, and E is the number of peak pixel points of the first end of the first optical fiber in IM_r; when p'_{e,r} is a first-type peak pixel point, bp_{e,r} = 1; when p'_{e,r} is a second-type peak pixel point, bp_{e,r} = 0. sbp_{e,r} is the designated pixel point label corresponding to sp'_{e,r}, sp'_{e,r} is the pixel point in Q_r with the same y coordinate as p'_{e,r}, and Q_r is the edge pixel point set of the second end in IM_r; when the pixel point type of sp'_{e,r} is the first designated type, sbp_{e,r} = 1; when the pixel point type of sp'_{e,r} is the second designated type, sbp_{e,r} = 0. py_{e,r} is the y-axis distance corresponding to p'_{e,r}, px_{e,r} is the x-axis distance corresponding to p'_{e,r}, syp'_{e,r} is the y-axis distance corresponding to sp'_{e,r}, and sxp'_{e,r} is the x-axis distance corresponding to sp'_{e,r}.
In this embodiment, py_{e,r} is the y-axis distance between p'_{e,r} and the pixel point with the smallest y coordinate in P_r, px_{e,r} is the x-axis distance between p'_{e,r} and the pixel point with the smallest x coordinate in P_r, and P_r is the edge pixel point set of the first end in IM_r; syp'_{e,r} is the y-axis distance between sp'_{e,r} and the pixel point with the smallest y coordinate in Q_r, and sxp'_{e,r} is the x-axis distance between sp'_{e,r} and the pixel point with the largest x coordinate in Q_r.
S030, traverse IM and construct a second target vector F_2^{1,r} and a second label vector F_2^{2,r} for the second end of the second optical fiber in IM_r.
In this embodiment, the process of constructing the second target vector F_2^{1,r} and the second label vector F_2^{2,r} for the second end of the second optical fiber in IM_r is similar to the above process of constructing the first target vector F_1^{1,r} and the first label vector F_1^{2,r} for the first end of the first optical fiber in IM_r, and is not described in detail here. The constructed F_2^{1,r} and F_2^{2,r} are as follows:
F_2^{1,r} = (((qy_{1,r}, qx_{1,r}), (syq'_{1,r}, sxq'_{1,r})), ((qy_{2,r}, qx_{2,r}), (syq'_{2,r}, sxq'_{2,r})), …, ((qy_{g,r}, qx_{g,r}), (syq'_{g,r}, sxq'_{g,r})), …, ((qy_{G,r}, qx_{G,r}), (syq'_{G,r}, sxq'_{G,r}))), F_2^{2,r} = ((bq_{1,r}, sbq_{1,r}), (bq_{2,r}, sbq_{2,r}), …, (bq_{g,r}, sbq_{g,r}), …, (bq_{G,r}, sbq_{G,r})); bq_{g,r} is the peak pixel point label corresponding to q'_{g,r}, q'_{g,r} is the g-th peak pixel point of the second end of the second optical fiber in IM_r, g ranges from 1 to G, and G is the number of peak pixel points of the second end of the second optical fiber in IM_r; when q'_{g,r} is a first-type peak pixel point, bq_{g,r} = 1; when q'_{g,r} is a second-type peak pixel point, bq_{g,r} = 0. sbq_{g,r} is the designated pixel point label corresponding to sq'_{g,r}, sq'_{g,r} is the pixel point in P_r with the same y coordinate as q'_{g,r}, and P_r is the edge pixel point set of the first end in IM_r; when the pixel point type of sq'_{g,r} is the first designated type, sbq_{g,r} = 1; when the pixel point type of sq'_{g,r} is the second designated type, sbq_{g,r} = 0. qy_{g,r} is the y-axis distance corresponding to q'_{g,r}, qx_{g,r} is the x-axis distance corresponding to q'_{g,r}, syq'_{g,r} is the y-axis distance corresponding to sq'_{g,r}, and sxq'_{g,r} is the x-axis distance corresponding to sq'_{g,r}. qy_{g,r} is the y-axis distance between q'_{g,r} and the pixel point with the smallest y coordinate in Q_r, qx_{g,r} is the x-axis distance between q'_{g,r} and the pixel point with the largest x coordinate in Q_r, and Q_r is the edge pixel point set of the second end in IM_r; syq'_{g,r} is the y-axis distance between sq'_{g,r} and the pixel point with the smallest y coordinate in P_r, and sxq'_{g,r} is the x-axis distance between sq'_{g,r} and the pixel point with the smallest x coordinate in P_r.
S040, traverse IM to obtain the fused optical fiber quality z_r after the optical fibers corresponding to IM_r are fusion spliced with the preset current for the preset time.
Optionally, a prior-art fused-fiber quality judgment method is used to obtain the fused optical fiber quality z_r after the optical fibers corresponding to IM_r are fusion spliced with the preset current for the preset time. However, this embodiment also provides a method for obtaining the fused optical fiber quality; specifically, S040 includes:
S041, obtain the preset attribute values of the fused optical fibers after the optical fibers corresponding to IM_r are fusion spliced with the preset current for the preset time.
The preset attributes in this embodiment are parameters used in the prior art to evaluate the quality of fusion-spliced optical fibers, such as transmission loss.
S042, construct the attribute vector FS_r corresponding to IM_r according to the preset attribute values, FS_r = (fs_{1,r}, fs_{2,r}, …, fs_{i,r}, …, fs_{V,r}); fs_{i,r} is the i-th preset attribute value of IM_r, i ranges from 1 to V, and V is the number of preset attributes.
S043, obtain the similarity sim_r between FS_r and FS_0; FS_0 is a standard attribute vector.
In this embodiment, the standard attribute vector refers to the attribute vector obtained by fusion splicing a first optical fiber and a second optical fiber with the preset current when the end face quality of both the first optical fiber and the second optical fiber is qualified.
In this embodiment, the same position in FS_r and FS_0 characterizes the value corresponding to the same attribute; for example, the first element of FS_r and of FS_0 is the value corresponding to transmission loss.
S044, obtain, according to sim_r, the fused optical fiber quality z_r after the optical fibers corresponding to IM_r are fusion spliced with the preset current for the preset time.
Optionally, sim_r = cos(FS_r, FS_0), where cos() denotes cosine similarity. Those skilled in the art will appreciate that any method of obtaining vector similarity in the prior art falls within the scope of the present invention.
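A minimal sketch of S042–S044 under these choices is given below; the final mapping from sim_r to z_r is an assumption, since this embodiment only states that z_r is obtained according to sim_r.

```python
import numpy as np

def fused_fiber_quality(fs_r, fs_0):
    """fs_r: measured attribute vector FS_r; fs_0: standard attribute vector FS_0."""
    fs_r = np.asarray(fs_r, dtype=float)
    fs_0 = np.asarray(fs_0, dtype=float)
    sim_r = float(fs_r @ fs_0 / (np.linalg.norm(fs_r) * np.linalg.norm(fs_0)))  # cosine similarity
    z_r = sim_r   # assumed: the quality label is taken directly as the similarity
    return sim_r, z_r
```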
S050, take the F_1^{1,r}, F_1^{2,r}, F_2^{1,r} and F_2^{2,r} corresponding to each IM_r as a training sample, take the z_r corresponding to each IM_r as the label of that training sample, and train the first neural network model to obtain the trained first neural network model.
Optionally, the first neural network model is a Transformer model.
Those skilled in the art will appreciate that, after determining the training samples and the corresponding labels, the process of training the neural network model is known in the art and will not be described in detail herein.
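For illustration only, the sketch below shows one possible way to realise the first neural network model as a small Transformer regressor in PyTorch and to train it on the samples of S050. Each end-face peak pixel point becomes one token of six features (py, px, syp', sxp', bp, sbp), the tokens of both ends are concatenated into one sequence, and the model regresses the quality z_r. Every architectural choice (token layout, sizes, loss, optimizer) is an assumption; this embodiment does not prescribe a specific implementation.

```python
import torch
import torch.nn as nn

class FusionQualityModel(nn.Module):
    def __init__(self, d_model=64, nhead=4, num_layers=2):
        super().__init__()
        self.embed = nn.Linear(6, d_model)                     # per-peak feature embedding
        layer = nn.TransformerEncoderLayer(d_model=d_model, nhead=nhead, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=num_layers)
        self.head = nn.Linear(d_model, 1)                      # scalar quality prediction

    def forward(self, tokens):                                  # tokens: (batch, seq_len, 6)
        h = self.encoder(self.embed(tokens))
        return self.head(h.mean(dim=1)).squeeze(-1)             # predicted fused-fiber quality

def train(model, samples, labels, epochs=50, lr=1e-3):
    """samples: list of (seq_len, 6) float tensors; labels: list of scalar z_r values."""
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    loss_fn = nn.MSELoss()
    for _ in range(epochs):
        for x, z in zip(samples, labels):
            pred = model(x.unsqueeze(0))
            loss = loss_fn(pred, torch.tensor([z], dtype=torch.float32))
            opt.zero_grad()
            loss.backward()
            opt.step()
    return model
```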
The invention acquires an image of the first optical fiber and the second optical fiber to be fusion spliced, namely the target optical fiber image IM_0. Because the shooting direction of IM_0 is perpendicular to the extending direction of the first optical fiber and the second optical fiber, the optical fibers seen in IM_0 are side views. On the basis of IM_0, the invention obtains the peak pixel point type corresponding to each peak pixel point among the edge pixel points: the burr corresponding to a first-type peak pixel point is a sharp burr, and this type of peak pixel point is generally the intersection point of two straight line segments; the burr corresponding to a second-type peak pixel point is a smoother burr, and this type of peak pixel point generally lies on a curve. By inputting the relative position information corresponding to a peak pixel point among the edge pixel points (used to characterize the size of the burr at that end), its type information (used to characterize the burr type at that end), the relative position information of the opposite-end pixel point (used to characterize the size of the burr at the opposite end) and the type information of the opposite-end pixel point (used to characterize the burr type at the opposite end) into the trained first neural network model (used to predict the quality of the fused optical fibers) for inference, the quality of the first optical fiber and the second optical fiber after fusion splicing is obtained, achieving the purpose of predicting fusion splice quality.
Embodiment 2
In order to improve the prediction accuracy of the trained first neural network model, this embodiment further optimizes the process of acquiring the optical fiber image sample set on the basis of Embodiment 1. Specifically, the process of acquiring the optical fiber image sample set in this embodiment includes the following steps:
S1000, obtain an initial optical fiber image sample set CIM = (CIM_1, CIM_2, …, CIM_j, …, CIM_u); each CIM_j shows a first end of a first optical fiber and a second end of a second optical fiber to be fused together; the shooting direction of each CIM_j is perpendicular to the extending direction of the first optical fiber and the second optical fiber to be fused together in CIM_j; CIM_j is the j-th initial optical fiber image sample, j ranges from 1 to u, and u is the number of initial optical fiber image samples.
S2000, traverse CIM to obtain the edge pixel point set C_j of the first end and the second end in CIM_j.
S3000, add the peak pixel points in C_j to the first initial set corresponding to CIM_j to obtain the first initial set C'_j = (c'_{1,j}, c'_{2,j}, …, c'_{d,j}, …, c'_{D,j}) corresponding to CIM_j; c'_{d,j} is the d-th pixel point added to the first initial set, d ranges from 1 to D, and D is the number of pixel points added to the first initial set; the first initial set corresponding to CIM_j is initialized to Null.
S4000, traverse CIM and detect the straight line segments in the edge image CIM'_j corresponding to CIM_j.
S5000, traverse C'_j; if the pixel point corresponding to c'_{d,j} in CIM'_j is an intersection pixel point of straight line segments, judge that c'_{d,j} is a first-type peak pixel point; otherwise, judge that c'_{d,j} is a second-type peak pixel point.
S6000, obtain the number N_1 of first-type peak pixel points, N_1 = Σ_{j=1}^{u} n_{j,1}; n_{j,1} is the number of pixel points in C'_j judged to be first-type peak pixel points.
S7000, if the first-type peak ratio β is less than or equal to ε_1, proceed to S8000; β = |(N_1/D) − 0.5|; ε_1 is a preset ratio threshold, 0 < ε_1 ≤ 0.1.
Optionally, ε_1 is an empirical value, e.g. ε_1 = 0.1.
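A minimal sketch of the S6000/S7000 check, assuming D is counted over all samples (the text is terse on this point):

```python
def first_type_peak_balance(first_type_counts, total_counts, eps1=0.1):
    """first_type_counts: n_{j,1} per sample; total_counts: number of peak pixels per sample."""
    n1 = sum(first_type_counts)      # N_1: first-type peak pixel points overall
    d = sum(total_counts)            # D: all peak pixel points added to the first initial sets (assumed)
    beta = abs(n1 / d - 0.5)
    return beta, beta <= eps1        # True means the sample set passes the S7000 test
```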
In this embodiment, if β is greater than ε_1, the initial optical fiber image sample set is adjusted, and S8000 is executed after the adjustment is completed. The specific adjustment process is as follows:
S7100, obtain a newly added optical fiber image sample set, wherein the newly added optical fiber image sample set comprises R' optical fiber image samples, and R' < u.
In this embodiment, each newly added optical fiber image sample also shows a first end of a first optical fiber and a second end of a second optical fiber to be fused together; the shooting direction of each newly added optical fiber image sample is perpendicular to the extending direction of the first optical fiber and the second optical fiber to be fused together in that sample.
In this embodiment, the extending direction of the first optical fiber in each newly added optical fiber image sample is the same as that of the second optical fiber, and the side images of the first optical fiber and the second optical fiber in each newly added sample can be acquired by taking the direction perpendicular to their extending direction as the shooting direction.
S7200, combine the newly added optical fiber image sample set with a first part of the optical fiber image samples in the initial optical fiber image sample set to obtain the optical fiber image sample set after the first update of the initial optical fiber image sample set, wherein the first part of optical fiber image samples comprises u − R' optical fiber image samples.
S7300, if the first-type peak ratio corresponding to the first-updated optical fiber image sample set is greater than ε_1, update the initial optical fiber image sample set a second time.
In this embodiment, the second update includes: combining the newly added optical fiber image sample set with a second part of the optical fiber image samples in the initial optical fiber image sample set to obtain the optical fiber image sample set after the second update of the initial optical fiber image sample set; the second part of optical fiber image samples comprises u − R' optical fiber image samples, and the second part of optical fiber image samples is not identical to the first part of optical fiber image samples.
Preferably, more than half of the optical fiber image samples in the second part of optical fiber image samples are different from the optical fiber image samples in the first part of optical fiber image samples.
S7400, if the first-type peak ratio corresponding to the second-updated optical fiber image sample set is less than or equal to ε_1, take the second-updated optical fiber image sample set as the adjusted initial optical fiber image sample set.
If the first-type peak ratio corresponding to the second-updated optical fiber image sample set is greater than ε_1, update the initial optical fiber image sample set a third time, and so on, until the first-type peak ratio corresponding to the updated optical fiber image sample set is less than or equal to ε_1; then take the updated optical fiber image sample set as the adjusted initial optical fiber image sample set.
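The iterative adjustment of S7100–S7400 can be sketched as below; the way the retained u − R' samples are chosen in each round is an assumption, since this embodiment only requires the retained parts to differ between rounds.

```python
import random

def adjust_sample_set(initial, new_samples, ratio_ok, max_rounds=10):
    """initial: initial samples; new_samples: newly added samples; ratio_ok: callable returning
    True when the first-type peak ratio of a candidate set satisfies beta <= epsilon_1."""
    keep = len(initial) - len(new_samples)        # u - R' samples retained in each round
    for _ in range(max_rounds):
        candidate = random.sample(initial, keep) + list(new_samples)
        if ratio_ok(candidate):
            return candidate                      # adjusted initial optical fiber image sample set
    return None                                   # no acceptable combination found within max_rounds
```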
S8000, traverse CIM to obtain the first label vector CF_1^{2,j} and the second label vector CF_2^{2,j} corresponding to CIM_j.
In this embodiment, the process of obtaining the first label vector CF_1^{2,j} corresponding to CIM_j is similar to the process of obtaining the first label vector F_1^2 of the first end of the first optical fiber in IM_0 in Embodiment 1, and is not described in detail here; the process of obtaining the second label vector CF_2^{2,j} corresponding to CIM_j is similar to the process of obtaining the second label vector F_2^2 of the second end of the second optical fiber in IM_0 in Embodiment 1, and is not described in detail here.
S9000, if the proportions of (1, 1), (1, 0), (0, 1) and (0, 0) in the first label vectors and the second label vectors corresponding to CIM are balanced, take the initial optical fiber image sample set CIM as the final optical fiber image sample set.
Specifically, the process of judging whether the proportions of (1, 1), (1, 0), (0, 1) and (0, 0) in the first label vectors and the second label vectors corresponding to CIM are balanced includes:
S9100, obtain the number n_{1,1} of (1, 1), the number n_{1,2} of (1, 0), the number n_{1,3} of (0, 1) and the number n_{1,4} of (0, 0) in the first label vectors corresponding to CIM.
S9200, obtain the variance FC_1 corresponding to n_{1,1}, n_{1,2}, n_{1,3} and n_{1,4}.
Those skilled in the art will appreciate that the process of obtaining the variance is known in the art and will not be described in detail herein.
S9300, if FC_1 is less than the preset variance threshold FC_0, proceed to S9400.
In this embodiment, FC_0 is an empirical value.
If FC_1 is greater than or equal to the preset variance threshold FC_0, judge that the proportions of (1, 1), (1, 0), (0, 1) and (0, 0) in the first label vectors and the second label vectors corresponding to CIM are unbalanced.
S9400, obtain the number n_{2,1} of (1, 1), the number n_{2,2} of (1, 0), the number n_{2,3} of (0, 1) and the number n_{2,4} of (0, 0) in the second label vectors corresponding to CIM.
S9500, obtain the variance FC_2 corresponding to n_{2,1}, n_{2,2}, n_{2,3} and n_{2,4}.
Those skilled in the art will appreciate that the process of obtaining the variance is known in the art and will not be described in detail herein.
S9600, if FC_2 is less than the preset variance threshold FC_0, judge that the proportions of (1, 1), (1, 0), (0, 1) and (0, 0) in the first label vectors and the second label vectors corresponding to CIM are balanced; otherwise, judge that they are unbalanced.
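A minimal sketch of the balance test of S9100–S9600, assuming the label vectors are stored as sequences of (bp, sbp) or (bq, sbq) pairs:

```python
import numpy as np
from collections import Counter

def label_pairs_balanced(label_vectors, fc0):
    """label_vectors: iterable of label vectors, each a sequence of (0/1, 0/1) pairs;
    fc0: preset variance threshold FC_0."""
    counts = Counter(pair for vec in label_vectors for pair in vec)
    n = [counts[(1, 1)], counts[(1, 0)], counts[(0, 1)], counts[(0, 0)]]
    return float(np.var(n)) < fc0, n     # True means the four pair counts are balanced
```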
In this embodiment, when it is judged that the proportions of (1, 1), (1, 0), (0, 1) and (0, 0) in the first and second label vectors corresponding to CIM are unbalanced, the initial optical fiber image sample set is further adjusted until the proportions of (1, 1), (1, 0), (0, 1) and (0, 0) in the first and second label vectors corresponding to the adjusted image sample set are balanced. Optionally, the process of adjusting the initial optical fiber image sample set includes: adding new optical fiber image samples corresponding to the label pairs with the smaller proportions to the initial optical fiber image sample set.
Compared with Embodiment 1, this embodiment not only retains the advantages of Embodiment 1, but also evaluates and optimizes the training sample set of the first neural network model, so that the training samples of the first neural network model are balanced and the over-fitting problem is avoided, which improves the prediction accuracy of the trained first neural network model obtained by training with these training samples and thus achieves the purpose of improving prediction accuracy.
While certain specific embodiments of the invention have been described in detail by way of example, it will be appreciated by those skilled in the art that the above examples are for illustration only and are not intended to limit the scope of the invention. Those skilled in the art will also appreciate that many modifications may be made to the embodiments without departing from the scope and spirit of the invention. The scope of the present disclosure is defined by the appended claims.
Claims (7)
1. An optical fiber fusion splice quality prediction system, comprising a processor and a memory storing a computer program which, when executed by the processor, performs the following steps:
S100, obtain a target optical fiber image IM_0; IM_0 shows a first end of a first optical fiber and a second end of a second optical fiber to be fused together; the shooting direction of IM_0 is perpendicular to the extending direction of the first optical fiber and the second optical fiber corresponding to IM_0; the extending direction of the first optical fiber and the second optical fiber is consistent with the x-axis direction of IM_0;
S200, traverse the edge pixel point set P of the first end in IM_0, and add the peak pixel points in P to a first preset set to obtain the first preset set P' = (p'_1, p'_2, …, p'_a, …, p'_A); p'_a is the a-th pixel point added to the first preset set, a ranges from 1 to A, and A is the number of pixel points added to the first preset set; the first preset set is initialized to Null;
S300, detect the straight line segments in the edge image IM'_0 corresponding to the target optical fiber image IM_0;
S400, traverse P'; if the pixel point corresponding to p'_a in IM'_0 is an intersection pixel point of straight line segments, judge that p'_a is a first-type peak pixel point; otherwise, judge that p'_a is a second-type peak pixel point;
S500, traverse P' to obtain the y-axis distance py_a and x-axis distance px_a corresponding to p'_a; py_a is the y-axis distance between p'_a and the pixel point with the smallest y coordinate in P, and px_a is the x-axis distance between p'_a and the pixel point with the smallest x coordinate in P;
S600, obtain the y-axis distance syp'_a, the x-axis distance sxp'_a and the pixel point type corresponding to sp'_a; when the pixel point corresponding to sp'_a in IM'_0 is a pixel point on a straight line segment, judge that the pixel point type of sp'_a is a first designated type; otherwise, judge that the pixel point type of sp'_a is a second designated type; sp'_a is the pixel point in Q with the same y coordinate as p'_a, and Q is the edge pixel point set of the second end in IM_0; syp'_a is the y-axis distance between sp'_a and the pixel point with the smallest y coordinate in Q, and sxp'_a is the x-axis distance between sp'_a and the pixel point with the largest x coordinate in Q;
S700, construct a first target vector F_1^1 and a first label vector F_1^2 for the first end of the first optical fiber in IM_0, F_1^1 = (((py_1, px_1), (syp'_1, sxp'_1)), ((py_2, px_2), (syp'_2, sxp'_2)), …, ((py_a, px_a), (syp'_a, sxp'_a)), …, ((py_A, px_A), (syp'_A, sxp'_A))), F_1^2 = ((bp_1, sbp_1), (bp_2, sbp_2), …, (bp_a, sbp_a), …, (bp_A, sbp_A)); bp_a is the peak pixel point label corresponding to p'_a: when p'_a is a first-type peak pixel point, bp_a = 1; when p'_a is a second-type peak pixel point, bp_a = 0; sbp_a is the designated pixel point label corresponding to sp'_a: when the pixel point type of sp'_a is the first designated type, sbp_a = 1; when the pixel point type of sp'_a is the second designated type, sbp_a = 0;
S800, construct a second target vector F_2^1 and a second label vector F_2^2 for the second end of the second optical fiber in IM_0;
S900, input F_1^1, F_1^2, F_2^1 and F_2^2 into a trained first neural network model for inference, wherein the trained first neural network model is used for predicting the quality of the fused optical fibers after the optical fibers are fusion spliced with a preset current for a preset time.
2. The optical fiber fusion splice quality prediction system according to claim 1, wherein the training process of the first neural network model comprises:
S010, obtain an optical fiber image sample set IM = (IM_1, IM_2, …, IM_r, …, IM_R); each IM_r shows a first end of a first optical fiber and a second end of a second optical fiber to be fused together; the shooting direction of each IM_r is perpendicular to the extending direction of the first optical fiber and the second optical fiber to be fused together in IM_r; IM_r is the r-th optical fiber image sample, r ranges from 1 to R, and R is the number of optical fiber image samples;
S020, traverse IM and construct a first target vector F_1^{1,r} and a first label vector F_1^{2,r} for the first end of IM_r, F_1^{1,r} = (((py_{1,r}, px_{1,r}), (syp'_{1,r}, sxp'_{1,r})), ((py_{2,r}, px_{2,r}), (syp'_{2,r}, sxp'_{2,r})), …, ((py_{e,r}, px_{e,r}), (syp'_{e,r}, sxp'_{e,r})), …, ((py_{E,r}, px_{E,r}), (syp'_{E,r}, sxp'_{E,r}))), F_1^{2,r} = ((bp_{1,r}, sbp_{1,r}), (bp_{2,r}, sbp_{2,r}), …, (bp_{e,r}, sbp_{e,r}), …, (bp_{E,r}, sbp_{E,r})); bp_{e,r} is the peak pixel point label corresponding to p'_{e,r}, p'_{e,r} is the e-th peak pixel point of the first end of the first optical fiber in IM_r, e ranges from 1 to E, and E is the number of peak pixel points of the first end of the first optical fiber in IM_r; when p'_{e,r} is a first-type peak pixel point, bp_{e,r} = 1; when p'_{e,r} is a second-type peak pixel point, bp_{e,r} = 0; sbp_{e,r} is the designated pixel point label corresponding to sp'_{e,r}, sp'_{e,r} is the pixel point in Q_r with the same y coordinate as p'_{e,r}, and Q_r is the edge pixel point set of the second end in IM_r; when the pixel point type of sp'_{e,r} is the first designated type, sbp_{e,r} = 1; when the pixel point type of sp'_{e,r} is the second designated type, sbp_{e,r} = 0; py_{e,r} is the y-axis distance corresponding to p'_{e,r}, px_{e,r} is the x-axis distance corresponding to p'_{e,r}, syp'_{e,r} is the y-axis distance corresponding to sp'_{e,r}, and sxp'_{e,r} is the x-axis distance corresponding to sp'_{e,r};
S030, traverse IM and construct a second target vector F_2^{1,r} and a second label vector F_2^{2,r} for the second end of the second optical fiber in IM_r;
S040, traverse IM to obtain the fused optical fiber quality z_r after the optical fibers corresponding to IM_r are fusion spliced with the preset current for the preset time;
S050, take the F_1^{1,r}, F_1^{2,r}, F_2^{1,r} and F_2^{2,r} corresponding to each IM_r as a training sample, take the z_r corresponding to each IM_r as the label of that training sample, and train the first neural network model to obtain the trained first neural network model.
3. The optical fiber fusion splice quality prediction system according to claim 1, wherein in S300, a Hough transform is used to detect the straight line segments in the edge image IM'_0 corresponding to the target optical fiber image IM_0.
4. The optical fiber fusion splice quality prediction system according to claim 2, wherein S040 comprises:
S041, obtain the preset attribute values of the fused optical fibers after the optical fibers corresponding to IM_r are fusion spliced with the preset current for the preset time;
S042, construct the attribute vector FS_r corresponding to IM_r according to the preset attribute values, FS_r = (fs_{1,r}, fs_{2,r}, …, fs_{i,r}, …, fs_{V,r}); fs_{i,r} is the i-th preset attribute value of IM_r, i ranges from 1 to V, and V is the number of preset attributes;
S043, obtain the similarity sim_r between FS_r and FS_0; FS_0 is a standard attribute vector;
S044, obtain, according to sim_r, the fused optical fiber quality z_r after the optical fibers corresponding to IM_r are fusion spliced with the preset current for the preset time.
5. The optical fiber fusion splice quality prediction system according to claim 4, wherein sim_r = cos(FS_r, FS_0), and cos() denotes cosine similarity.
6. The optical fiber fusion splice quality prediction system according to claim 1, wherein a Canny operator is used to obtain the edge pixel point set P of the first end in IM_0 and the edge pixel point set Q of the second end in IM_0.
7. The optical fiber fusion splice quality prediction system according to claim 1, wherein S200 comprises:
S210, obtain the set HP_n of eight-neighborhood pixel points of p_n in P; p_n is the n-th edge pixel point in P, n ranges from 1 to N, and N is the number of edge pixel points in P;
S220, if the x coordinate of p_n is the largest compared with the pixel points in HP_n, judge that p_n is a peak pixel point; if the x coordinate of p_n is the smallest compared with the pixel points in HP_n, judge that p_n is a trough pixel point.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202310861345.5A CN116912201B (en) | 2023-07-13 | 2023-07-13 | Optical fiber fusion quality prediction system |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202310861345.5A CN116912201B (en) | 2023-07-13 | 2023-07-13 | Optical fiber fusion quality prediction system |
Publications (2)
Publication Number | Publication Date |
---|---|
CN116912201A CN116912201A (en) | 2023-10-20 |
CN116912201B true CN116912201B (en) | 2024-03-08 |
Family
ID=88352471
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202310861345.5A Active CN116912201B (en) | 2023-07-13 | 2023-07-13 | Optical fiber fusion quality prediction system |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN116912201B (en) |
Patent Citations (10)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
WO2008080344A1 (en) * | 2006-12-31 | 2008-07-10 | Huawei Technologies Co., Ltd. | Fiber amplifier, fabricating method thereof and fiber communication system |
CN102567745A (en) * | 2011-12-29 | 2012-07-11 | 北京航天时代光电科技有限公司 | Automatic detection method of optical fiber fusion quality |
CN105841716A (en) * | 2016-05-18 | 2016-08-10 | 北方工业大学 | Vision auxiliary control method and device for wire arrangement consistency of optical fiber winding machine |
CN108227077A (en) * | 2017-12-29 | 2018-06-29 | 诺仪器(中国)有限公司 | Ribbon fiber splice loss, splice attenuation evaluation method and system |
CN108788467A (en) * | 2018-06-26 | 2018-11-13 | 南通光湃智能科技有限公司 | A kind of Intelligent Laser welding system towards aerospace structural component |
CN109409432A (en) * | 2018-10-31 | 2019-03-01 | 腾讯科技(深圳)有限公司 | A kind of image processing method, device and storage medium |
CN209658433U (en) * | 2019-06-12 | 2019-11-19 | 四川虹涛电子科技有限公司 | The connector that the binding post of High voltage output connector is connected with connection cables |
WO2022082848A1 (en) * | 2020-10-23 | 2022-04-28 | 深圳大学 | Hyperspectral image classification method and related device |
CN113695713A (en) * | 2021-09-17 | 2021-11-26 | 蕴硕物联技术(上海)有限公司 | Online monitoring method and device for welding quality of inner container of water heater |
CN114565595A (en) * | 2022-03-03 | 2022-05-31 | 中山大学 | Welding offset detection method based on ring core optical fiber light spot |
Non-Patent Citations (4)
Title |
---|
Defect Recognition of Optical Fiber Fusion Based on Wavelet Packet Technique; ZHANG Zhen et al.; 2010 2nd International Conference on Industrial and Information Systems; 2010-09-07; 192-195 *
Research on Sensing Technology Based on Multimode Optical Fiber Speckle Detection; Zhu Kaiqiang; China Master's Theses Full-text Database, Information Science and Technology; 2018-01-15 (No. 01); I138-1781 *
Research on Image Processing Technology of a New Optical Fiber Fusion Splicer; Chen Xiang; China Master's Theses Full-text Database, Information Science and Technology; 2020-01-15 (No. 01); I136-934 *
Research on End-Face Coupling Detection Technology for Long Optical Fibers; Zhou Meng; China Master's Theses Full-text Database, Basic Sciences; 2022-04-15 (No. 04); A005-195 *
Also Published As
Publication number | Publication date |
---|---|
CN116912201A (en) | 2023-10-20 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
US20210063577A1 (en) | Robot relocalization method and apparatus and robot using the same | |
CN105678689B (en) | High-precision map data registration relation determining method and device | |
CN112336342A (en) | Hand key point detection method and device and terminal equipment | |
CN111390439B (en) | Welding seam detection method and device, welding robot and storage medium | |
CN111696059B (en) | Lane line smooth connection processing method and device | |
CN111145271A (en) | Method and device for determining accuracy of camera parameters, storage medium and terminal | |
CN111899247A (en) | Method, device, equipment and medium for identifying lumen region of choroidal blood vessel | |
CN115810133B (en) | Welding control method based on image processing and point cloud processing and related equipment | |
CN110084743A (en) | Image mosaic and localization method based on more air strips starting track constraint | |
CN116912201B (en) | Optical fiber fusion quality prediction system | |
CN113066063B (en) | Optical fiber to-be-aligned end image processing method and optical fiber self-adaptive alignment method | |
CN116912205B (en) | Optical fiber fusion quality prediction method based on neural network model | |
CN110210305B (en) | Travel path deviation determining method and device, storage medium and electronic device | |
CN111754467A (en) | Hough transform-based parking space detection method and device, computer equipment and storage medium | |
CN112070792A (en) | Edge growth connection method and device for image segmentation | |
CN115984211A (en) | Visual positioning method, robot and storage medium | |
CN116309418A (en) | Intelligent monitoring method and device for deformation of girder in bridge cantilever construction | |
CN116912200B (en) | Optical fiber connection system | |
CN116912204B (en) | Treatment method for fusion splicing of optical fibers | |
KR102405298B1 (en) | Apparatus and method for cloud outsorcing task management by using artificial intelligent | |
CN117853429B (en) | Calibration image quality evaluation method | |
CN115147400B (en) | Self-adaptive identification method, system, electronic equipment and medium for reinforcing steel bar intersection | |
CN118379282B (en) | Titanium plate polished surface smoothness detection method based on image processing | |
CN116772801A (en) | Combined surface positioning and pose measuring method based on waist-shaped hole characteristics | |
CN115841453A (en) | Method, device, equipment and medium for measuring thickness of steel |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||
GR01 | Patent grant |