CN111310726B - Visual identification and positioning method for ground wire in data wire - Google Patents
- Publication number
- CN111310726B CN111310726B CN202010175567.8A CN202010175567A CN111310726B CN 111310726 B CN111310726 B CN 111310726B CN 202010175567 A CN202010175567 A CN 202010175567A CN 111310726 B CN111310726 B CN 111310726B
- Authority
- CN
- China
- Prior art keywords
- image
- channel
- signal line
- blue signal
- wire
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V20/00—Scenes; Scene-specific elements
- G06V20/20—Scenes; Scene-specific elements in augmented reality scenes
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/20—Image preprocessing
- G06V10/26—Segmentation of patterns in the image field; Cutting or merging of image elements to establish the pattern region, e.g. clustering-based techniques; Detection of occlusion
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/40—Extraction of image or video features
- G06V10/46—Descriptors for shape, contour or point-related descriptors, e.g. scale invariant feature transform [SIFT] or bags of words [BoW]; Salient regional features
- G06V10/462—Salient features, e.g. scale invariant feature transforms [SIFT]
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/40—Extraction of image or video features
- G06V10/56—Extraction of image or video features relating to colour
Landscapes
- Engineering & Computer Science (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Multimedia (AREA)
- Theoretical Computer Science (AREA)
- Computer Vision & Pattern Recognition (AREA)
- Image Analysis (AREA)
Abstract
The invention discloses a visual identification and positioning method for a ground wire in a data wire, which comprises the following steps: S1, reversely rotating the wire harness inside the aluminum foil layer against the winding direction and pressing it into a lantern shape; S2, shooting the wire harness to obtain a data line image and extracting the H, S and V channel images from it; S3, extracting a saliency image of the blue signal line from the H-channel image and performing threshold segmentation to obtain an H-channel binary image; S4, extracting the blue signal line image from the V-channel image using the H-channel binary image and finding the coordinate point P with the maximum amplitude; S5, calculating the positions of the left and right sides of the effective area in the blue signal line image to obtain a V-channel ROI image; S6, obtaining an identification result image from the V-channel ROI image; and S7, searching, above and below the point P in the identification result image, for the center position of the nearest connected component as the positions of the ground wires on both sides of the blue signal line. The invention replaces human-eye identification with machine vision identification and can identify the ground wire accurately.
Description
Technical Field
The invention relates to the technical field of industrial vision detection, in particular to a visual identification and positioning method for a ground wire in a data line.
Background
In a common data line, the protective layers from inside to outside are an aluminum foil layer, a shielding wire layer and an insulating skin. Inside, the data transmission wire harness comprises three red power lines and three signal lines (a white line, a blue line and a green line) arranged alternately in a circle, with six ground wires and fiber wires evenly distributed around the circle in the gaps between adjacent power lines and signal lines.
In the production process of the data wire, firstly, the insulating skin on the outermost layer of the wire harness needs to be stripped, then the shielding wire is twisted out, then the aluminum foil layer is removed, and finally the positions of two ground wires which are positioned at two sides of the blue signal wire and are closest to the blue signal wire in the 6 ground wires are identified, and the two ground wires are picked out from the fiber wires.
The existing mode is that an operator identifies the ground wire by naked eyes and then manually clamps the ground wire out through tweezers. Because the ground wire is doped between the fiber wires and the diameter of the ground wire is usually in a submillimeter level, the ground wire is difficult to identify by naked eyes, and the situation of wrong ground wire clamping is difficult to avoid.
Disclosure of Invention
The invention aims to overcome the defects of the prior art and solve the problem that the ground wire is easily misidentified by the naked eye, and provides a visual identification and positioning method for the ground wire in a data wire, which replaces human-eye identification with machine vision identification to improve the success rate of ground wire identification.
The invention provides a visual identification and positioning method of a ground wire in a data line, which comprises the following steps:
S1, reversely rotating the wire harness inside the aluminum foil layer against the winding direction and pressing it into a lantern shape; the wire harness comprises ground wires and signal wires, and the signal wires include a blue signal wire;
S2, with the camera facing the blue signal wire, shooting the wire harness pressed into a lantern shape to obtain a data line image, and extracting the H-channel image, S-channel image and V-channel image from the data line image;
S3, extracting a saliency image of the blue signal line from the H-channel image, and performing threshold segmentation on the saliency image to obtain an H-channel binary image;
S4, extracting the blue signal line image from the V-channel image using the H-channel binary image, and finding the position of the coordinate point P with the maximum amplitude in the blue signal line image;
S5, calculating the positions of the left and right sides of the effective area in the blue signal line image to obtain a V-channel ROI image, and finding the position of the coordinate point P in the V-channel ROI image;
S6, performing image processing on the V-channel ROI image to obtain an identification result image;
and S7, searching, above and below the coordinate point P in the identification result image, for the center position of the connected component nearest to P, as the positions of the ground wires on both sides of the blue signal line.
Preferably, the formula for extracting the saliency image of the blue signal line from the H-channel image is:
wherein I_H(x, y) is the H-channel image, I_S(x, y) is the S-channel image, I_Sat(x, y) is the saliency image of the blue signal line in the H-channel image, and th1 is a set threshold.
Preferably, the formula for threshold segmentation of the saliency image I_Sat(x, y) is:
wherein th2 is a set threshold and I_BSat(x, y) is the H-channel binary image.
Preferably, the formula for extracting the blue signal line image from the V-channel image using the H-channel binary image is:
wherein I_BLUE(x, y) is the blue signal line image.
Preferably, the coordinate of the coordinate point P with the maximum amplitude found in the blue signal line image is (x_c, y_c).
preferably, the formula for calculating the positions of the left and right sides of the effective area in the blue signal line image is:
y1=min{y|I BLUE (x,y)>0},y2=max{y|I BLUE (x,y)>0};
the obtained V-channel ROI images were:
I ROI (x,y′)=I V (x,y),y1<y′<y2;
the coordinate of the coordinate point P in the V-channel ROI image is (x) c ,y c -y1)。
Preferably, the V-channel ROI image is subjected to threshold segmentation, dilation, hole filling, and small-area removal in sequence.
Preferably, from the coordinate point P(x_c, y_c - y1) in the identification result image, the center positions of the connected components nearest to P(x_c, y_c - y1) are searched above and below respectively, as the positions of the ground wires on both sides of the blue signal line.
The invention can obtain the following technical effects:
1. Machine vision identification is used in place of human-eye identification, so the ground wires on both sides of the blue signal wire can be identified accurately and the success rate of ground wire identification is improved;
2. Pressing the wire harness inside the aluminum foil layer into a lantern shape roughly smooths out the ground wires, which facilitates subsequent clamping, increases the distance between the ground wires and the fiber wires, and enlarges the space for the tweezers to clamp the ground wires.
Drawings
Fig. 1 is a schematic flowchart of a method for visual identification and location of a ground line in a data line according to an embodiment of the present invention;
FIG. 2 is a schematic diagram of an H-channel image in an embodiment in accordance with the invention;
FIG. 3 is a schematic diagram of an S-channel image in an embodiment in accordance with the invention;
FIG. 4 is a schematic diagram of a V channel image in an embodiment in accordance with the invention;
FIG. 5 is a schematic diagram of a saliency image of a blue signal line in an embodiment in accordance with the invention;
FIG. 6 is a schematic diagram of an H-channel binary image according to an embodiment of the present invention;
FIG. 7 is a schematic diagram of an H-channel binary image after processing according to an embodiment of the invention;
fig. 8 is a schematic diagram of a blue signal line image in an embodiment in accordance with the invention;
FIG. 9 is a schematic diagram of a V-channel ROI image in an embodiment in accordance with the invention;
fig. 10 is a schematic diagram of a recognition result image according to an embodiment of the present invention;
fig. 11 is a diagram illustrating the result of the positioning of the ground wire according to the embodiment of the present invention.
Detailed Description
In order to make the objects, technical solutions and advantages of the present invention more apparent, the present invention will be described in further detail below with reference to the accompanying drawings and specific embodiments. It should be understood that the specific embodiments described herein are merely illustrative of the invention and do not limit the invention.
In order to solve the problems that the ground wire is difficult to identify by human eyes and is easy to make mistakes, the invention adopts a method of combining machine vision identification and image processing to identify and position the ground wire to replace the human eyes for identification.
Fig. 1 shows a flow of a visual identification and positioning method for a ground line in a data line according to an embodiment of the present invention.
As shown in fig. 1, the visual identification and positioning method for ground lines in data lines provided by the embodiment of the present invention includes the following steps:
s1, reversely rotating a wire harness in an aluminum foil layer along a winding direction and pressing the wire harness into a lantern shape; the wiring harness comprises six ground wires and three signal wires, wherein the three signal wires are a blue signal wire, a white signal wire and a green signal wire respectively.
Because the ground wire and the signal wire are in a spiral surrounding state inside the aluminum foil layer, the ground wire and the signal wire need to be rotated along the reverse direction of the winding direction to smooth the ground wire, so that the subsequent visual identification and clamping of the ground wire are facilitated.
The smoothed ground wires are pressed into a lantern shape for two purposes: one is to increase the distance between the ground wires and between the ground wires and the fiber wires, that is, to enlarge the space for the tweezers to clamp the ground wires and avoid clamping the wrong wire; the other is to make it possible to identify the two ground wires located on both sides of, and closest to, the blue signal line.
And S2, shooting the wiring harness pressed into a lantern shape just opposite to the blue signal line to obtain a data line image, and extracting an H channel image, an S channel image and a V channel image in the data line image.
The blue signal line is shot, and the purpose is to facilitate subsequent processing and identification for the image of the blue signal line and the ground line nearby the blue signal line.
For example, the extracted H-channel image is I_H(x, y), the extracted S-channel image is I_S(x, y), and the extracted V-channel image is I_V(x, y), where x = 1, 2, ..., m and y = 1, 2, ..., n; x is the row coordinate and y the column coordinate of each channel image. The H-channel, S-channel and V-channel images are shown in figs. 2-4.
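The channel extraction in step S2 can be sketched in Python using numpy and the stdlib colorsys module; the float [0, 1] value range and the row/column layout are assumptions, since the patent does not fix an implementation:

```python
import colorsys
import numpy as np

def split_hsv(rgb):
    """Split an RGB image (floats in [0, 1], shape m x n x 3) into H, S, V channel images."""
    m, n, _ = rgb.shape
    h = np.zeros((m, n)); s = np.zeros((m, n)); v = np.zeros((m, n))
    for x in range(m):        # x: row coordinate, as in the text
        for y in range(n):    # y: column coordinate
            h[x, y], s[x, y], v[x, y] = colorsys.rgb_to_hsv(*rgb[x, y])
    return h, s, v
```

In practice a vectorized conversion (e.g. a library's RGB-to-HSV routine) would replace the per-pixel loop; the loop form only makes the channel definitions explicit.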
And S3, extracting a significant result image of the blue signal line in the H-channel image, and performing threshold segmentation on the significant result image to obtain an H-channel binary image.
The formula for extracting the saliency image of the blue signal line from the H-channel image is:
wherein I_Sat(x, y) is the extracted saliency image of the blue signal line in the H-channel image and th1 is a set threshold, as shown in fig. 5.
The formula for threshold segmentation of the saliency image I_Sat(x, y) of the blue signal line in the H-channel image is:
wherein th2 is a set threshold and I_BSat(x, y) is the resulting H-channel binary image, as shown in fig. 6.
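The patent's formulas for I_Sat and I_BSat appear only as images and are not reproduced in this text, so the hue rule below is a hypothetical reconstruction: keep the S-channel value where the hue lies within th1 of pure blue, then binarize with th2:

```python
import numpy as np

BLUE_HUE = 2 / 3  # hue of pure blue on a [0, 1] hue scale (assumption)

def blue_saliency(h_img, s_img, th1):
    """Hypothetical I_Sat: keep the saturation where the hue is within th1 of blue."""
    return np.where(np.abs(h_img - BLUE_HUE) < th1, s_img, 0.0)

def threshold_segment(sal, th2):
    """I_BSat: binarize the saliency image with threshold th2."""
    return (sal > th2).astype(np.uint8)
```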
And S4, extracting a blue signal line image in the V-channel image by using the H-channel binary image, and finding the position of the coordinate point P with the maximum amplitude in the blue signal line image.
Before extracting the blue signal line image in the V-channel image from the H-channel binary image, the H-channel binary image needs to be subjected to hole filling, small area removal, and the like, and the processed H-channel binary image is as shown in fig. 7.
The blue signal line image in the V-channel image is extracted using the processed H-channel binary image; the extraction formula is:
wherein I_BLUE(x, y) is the extracted blue signal line image, as shown in fig. 8.
The coordinate of the coordinate point P with the maximum amplitude found in the blue signal line image I_BLUE(x, y) is (x_c, y_c).
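The masking and peak-finding of step S4 might look like the following numpy sketch; taking I_BLUE as the V channel under the binary mask follows the text, while the tie-breaking of argmax (first occurrence in row-major order) is an implementation detail:

```python
import numpy as np

def blue_line_and_peak(binary, v_img):
    """Mask the V channel with the H-channel binary image (I_BLUE) and
    locate the coordinate point P of maximum amplitude, as in step S4."""
    blue = np.where(binary > 0, v_img, 0.0)
    xc, yc = np.unravel_index(np.argmax(blue), blue.shape)
    return blue, (int(xc), int(yc))
```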
And S5, calculating the positions of the left side and the right side of the effective area in the blue signal line image to obtain a V-channel ROI image, and finding the position of a coordinate point P in the V-channel ROI image.
The formula for calculating the positions of the left and right sides of the effective area in the blue signal line image is:
y1 = min{ y | I_BLUE(x, y) > 0 }, y2 = max{ y | I_BLUE(x, y) > 0 };
The V-channel ROI image obtained is:
I_ROI(x, y′) = I_V(x, y), y1 < y′ < y2;
The coordinate of the coordinate point P in the V-channel ROI image becomes (x_c, y_c - y1), as shown in fig. 9.
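The ROI computation of step S5 can be sketched as follows; inclusive column cropping is used here, a slight relaxation of the strict inequality y1 < y′ < y2:

```python
import numpy as np

def crop_roi(blue, v_img, p):
    """Find the left/right bounds y1, y2 of the columns where I_BLUE is
    non-zero, crop the V channel to that range (the ROI), and shift the
    coordinate point P into ROI coordinates: (x_c, y_c - y1)."""
    cols = np.where(blue.max(axis=0) > 0)[0]
    y1, y2 = int(cols.min()), int(cols.max())
    roi = v_img[:, y1:y2 + 1]
    return roi, (p[0], p[1] - y1)
```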
And S6, carrying out image processing on the ROI image of the V channel to obtain an identification result image.
The processing of the V-channel ROI image consists of threshold segmentation, dilation, hole filling and small-area removal in sequence, giving the identification result image J_ROI(x, y′), as shown in fig. 10.
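The first two operations of step S6 can be sketched in numpy as below; hole filling and small-area removal, also part of S6, are standard morphology steps omitted here for brevity, and the 3x3 cross-shaped structuring element is an assumption:

```python
import numpy as np

def segment_and_dilate(roi, th, iterations=1):
    """Threshold segmentation of the ROI followed by binary dilation with a
    3x3 cross-shaped structuring element (first two operations of S6)."""
    out = roi > th
    for _ in range(iterations):
        p = np.pad(out, 1)
        # OR each pixel with its 4-neighbors (shifted views of the padded mask)
        out = (p[1:-1, 1:-1] | p[:-2, 1:-1] | p[2:, 1:-1]
               | p[1:-1, :-2] | p[1:-1, 2:])
    return out.astype(np.uint8)
```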
And S7, respectively searching the center position of the connected component closest to the coordinate point P from the upper side and the lower side of the coordinate point P in the identification result image as the positions of the ground wires at the two sides of the blue signal line.
With the coordinate point P(x_c, y_c - y1) as the search center, the recognition result image J_ROI(x, y′) is searched above and below P(x_c, y_c - y1) for the center position of the nearest connected component, which is taken as the position of the ground wire on each side of the blue signal line.
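The S7 search can be sketched with a pure-Python connected-component labeling (a stand-in for a library routine; 4-connectivity and a row-wise notion of "above/below" are assumptions):

```python
from collections import deque
import numpy as np

def label_components(binary):
    """4-connected component labeling via BFS flood fill."""
    m, n = binary.shape
    labels = np.zeros((m, n), dtype=int)
    count = 0
    for i in range(m):
        for j in range(n):
            if binary[i, j] and labels[i, j] == 0:
                count += 1
                labels[i, j] = count
                q = deque([(i, j)])
                while q:
                    a, b = q.popleft()
                    for da, db in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        na, nb = a + da, b + db
                        if (0 <= na < m and 0 <= nb < n
                                and binary[na, nb] and labels[na, nb] == 0):
                            labels[na, nb] = count
                            q.append((na, nb))
    return labels, count

def ground_wire_positions(binary, p):
    """Step S7: centers of the components nearest to P from above and below."""
    labels, count = label_components(binary)
    centers = [tuple(np.argwhere(labels == k).mean(axis=0)) for k in range(1, count + 1)]
    above = [c for c in centers if c[0] < p[0]]
    below = [c for c in centers if c[0] > p[0]]
    up = min(above, key=lambda c: p[0] - c[0]) if above else None
    down = min(below, key=lambda c: c[0] - p[0]) if below else None
    return up, down
```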
As shown in figs. 10 and 11, the circled cross in the middle marks the blue signal line, and the circled crosses on either side of it mark the ground wires on both sides of the blue signal line; these two ground wires are closer to the blue signal line than the other four ground wires.
In the description herein, references to the description of the term "one embodiment," "some embodiments," "an example," "a specific example," or "some examples," etc., mean that a particular feature, structure, material, or characteristic described in connection with the embodiment or example is included in at least one embodiment or example of the invention. In this specification, the schematic representations of the terms used above are not necessarily intended to refer to the same embodiment or example. Furthermore, the particular features, structures, materials, or characteristics described may be combined in any suitable manner in any one or more embodiments or examples. Furthermore, various embodiments or examples and features of different embodiments or examples described in this specification can be combined and combined by one skilled in the art without contradiction.
Although embodiments of the present invention have been shown and described above, it will be understood that the above embodiments are exemplary and not to be construed as limiting the present invention, and that changes, modifications, substitutions and alterations can be made to the above embodiments by those of ordinary skill in the art within the scope of the present invention.
The above embodiments of the present invention should not be construed as limiting the scope of the present invention. Any other corresponding changes and modifications made according to the technical idea of the present invention should be included in the protection scope of the claims of the present invention.
Claims (8)
1. A visual identification and positioning method for a ground wire in a data wire is characterized by comprising the following steps:
S1, reversely rotating the wire harness inside the aluminum foil layer against the winding direction and pressing it into a lantern shape; the wire harness comprises ground wires and signal wires, and the signal wires include a blue signal wire;
S2, shooting the wire harness pressed into a lantern shape to obtain a data line image, and extracting the H-channel image, S-channel image and V-channel image from the data line image;
S3, extracting a saliency image of the blue signal line from the H-channel image, and performing threshold segmentation on the saliency image to obtain an H-channel binary image;
S4, extracting the blue signal line image from the V-channel image using the H-channel binary image, and finding the position of the coordinate point P with the maximum amplitude in the blue signal line image;
S5, calculating the positions of the left and right sides of the effective area in the blue signal line image to obtain a V-channel ROI image, and finding the position of the coordinate point P in the V-channel ROI image;
S6, performing image processing on the V-channel ROI image to obtain an identification result image;
and S7, searching, above and below the coordinate point P in the identification result image, for the center position of the connected component nearest to P, as the positions of the ground wires on both sides of the blue signal line.
2. The method for visually recognizing and locating the ground wire in the data line according to claim 1, wherein in step S3, the formula for extracting the saliency image of the blue signal line from the H-channel image is:
wherein I_H(x, y) is the H-channel image, I_S(x, y) is the S-channel image, I_Sat(x, y) is the saliency image of the blue signal line in the H-channel image, and th1 is a set threshold.
3. The method for visually recognizing and locating the ground wire in the data line according to claim 2, wherein in step S3, the formula for threshold segmentation of the saliency image I_Sat(x, y) is:
wherein th2 is a set threshold and I_BSat(x, y) is the H-channel binary image.
4. The method for visually recognizing and locating the ground wire in the data line according to claim 3, wherein in step S4, the formula for extracting the blue signal line image from the V-channel image using the H-channel binary image is:
wherein I_BLUE(x, y) is the blue signal line image.
5. The method for visually recognizing and locating the ground wire in the data line according to claim 4, wherein in step S4, the coordinate of the coordinate point P with the maximum amplitude found in the blue signal line image is (x_c, y_c).
6. The method of claim 5, wherein in step S5, the formula for calculating the positions of the left and right sides of the effective area in the blue signal line image is:
y1 = min{ y | I_BLUE(x, y) > 0 }, y2 = max{ y | I_BLUE(x, y) > 0 };
the V-channel ROI image obtained is:
I_ROI(x, y′) = I_V(x, y), y1 < y′ < y2;
and the coordinate of the coordinate point P in the V-channel ROI image is (x_c, y_c - y1).
7. The method as claimed in claim 6, wherein in step S6, the V-channel ROI image is processed by threshold segmentation, dilation, hole filling and small area removal in sequence.
8. The method according to claim 7, wherein in step S7, from the coordinate point P(x_c, y_c - y1) in the identification result image, the center positions of the connected components nearest to P(x_c, y_c - y1) are searched above and below respectively, as the positions of the ground wires on both sides of the blue signal line.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202010175567.8A CN111310726B (en) | 2020-03-13 | 2020-03-13 | Visual identification and positioning method for ground wire in data wire |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202010175567.8A CN111310726B (en) | 2020-03-13 | 2020-03-13 | Visual identification and positioning method for ground wire in data wire |
Publications (2)
Publication Number | Publication Date |
---|---|
CN111310726A CN111310726A (en) | 2020-06-19 |
CN111310726B true CN111310726B (en) | 2023-04-07 |
Family
ID=71145629
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202010175567.8A Active CN111310726B (en) | 2020-03-13 | 2020-03-13 | Visual identification and positioning method for ground wire in data wire |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN111310726B (en) |
Families Citing this family (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN112435808A (en) * | 2020-10-30 | 2021-03-02 | 中国科学院长春光学精密机械与物理研究所 | Thread take-up device for multi-core wire harness |
Family Cites Families (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
EP3180872B1 (en) * | 2014-08-12 | 2019-07-31 | ABL IP Holding LLC | System and method for estimating the position and orientation of a mobile communications device in a beacon-based positioning system |
CN105678310B (en) * | 2016-02-03 | 2019-08-06 | 北京京东方多媒体科技有限公司 | Thermal-induced imagery contour extraction method and device |
CN107507182B (en) * | 2017-09-25 | 2019-10-25 | 电子科技大学 | A kind of BGA soldered ball extracting method based on radioscopic image |
-
2020
- 2020-03-13 CN CN202010175567.8A patent/CN111310726B/en active Active
Also Published As
Publication number | Publication date |
---|---|
CN111310726A (en) | 2020-06-19 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN109215020B (en) | High-voltage transmission line fault identification method based on computer vision | |
US11922615B2 (en) | Information processing device, information processing method, and storage medium | |
CN110678901B (en) | Information processing apparatus, information processing method, and computer-readable storage medium | |
CN109785285B (en) | Insulator damage detection method based on ellipse characteristic fitting | |
CN103955660B (en) | Method for recognizing batch two-dimension code images | |
CN108154151B (en) | Rapid multi-direction text line detection method | |
CN111310726B (en) | Visual identification and positioning method for ground wire in data wire | |
WO2012074361A1 (en) | Method of image segmentation using intensity and depth information | |
CN105740872B (en) | Image feature extraction method and device | |
CN104966054B (en) | Detection method of small target in unmanned plane visible images | |
CN108734704B (en) | Transmission conductor strand breakage detection method based on gray variance normalization | |
CN114863492B (en) | Method and device for repairing low-quality fingerprint image | |
CN104951440B (en) | Image processing method and electronic equipment | |
CN113516619B (en) | Product surface flaw identification method based on image processing technology | |
Changhui et al. | Overlapped fruit recognition for citrus harvesting robot in natural scenes | |
CN109658388B (en) | Method for detecting and correcting packaging box segmentation errors based on vision and active interaction | |
CN111881803B (en) | Face recognition method based on improved YOLOv3 | |
CN108205641B (en) | Gesture image processing method and device | |
CN117274246A (en) | Bonding pad identification method, computer equipment and storage medium | |
CN115830027B (en) | Machine vision-based automobile wire harness cladding defect detection method | |
CN110349129B (en) | Appearance defect detection method for high-density flexible IC substrate | |
CN111667463A (en) | Cable detection method, robot and storage device | |
CN109117757B (en) | Method for extracting guy cable in aerial image | |
CN109003268B (en) | Method for detecting appearance color of ultrathin flexible IC substrate | |
CN114897974B (en) | Target object space positioning method, system, storage medium and computer equipment |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||
GR01 | Patent grant |