CN114792376A - Line segment endpoint extraction method - Google Patents
- Publication number: CN114792376A (application CN202210248088.3A)
- Authority
- CN
- China
- Prior art keywords
- area
- picture
- turning point
- trend
- axis
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T5/00—Image enhancement or restoration
- G06T5/20—Image enhancement or restoration by the use of local operators
- G06T5/30—Erosion or dilatation, e.g. thinning
- G06T5/70—
- G06T7/00—Image analysis
- G06T7/10—Segmentation; Edge detection
- G06T7/11—Region-based segmentation
- G06T7/70—Determining position or orientation of objects or cameras
- G06T7/73—Determining position or orientation of objects or cameras using feature-based methods
Abstract
The invention discloses a line segment endpoint extraction method comprising the following steps: processing a picture to obtain the largest contour region; calculating the coordinate differences of adjacent regions on the contour line to determine the contour trend; performing noise reduction and misalignment processing to obtain turning points; expanding the position regions of the turning points; finding the position regions where the trend of the x-axis and/or y-axis changes; and selecting one turning point from each overlapping portion for marking, the marked turning point being an endpoint. The invention thereby enables convenient extraction of the positions of the two endpoints of a line segment.
Description
Technical Field
The invention relates to the field of image processing, in particular to a line segment endpoint extraction method.
Background
OpenCV is a cross-platform software library for computer vision and machine learning that runs on Linux, Windows, Android and Mac OS. It is lightweight and efficient: it consists of a series of C functions and a small number of C++ classes, provides interfaces for languages such as Python, Ruby and MATLAB, and implements many general-purpose algorithms for image processing and computer vision.
In some electrical circuit experiments, many devices are used and are connected by wires. Because there are many wiring positions on the devices, wiring errors or omissions often occur, which greatly affects the experiments, so the wiring positions need to be checked. To improve the efficiency and correctness of this check, image processing is well suited to the task, but the prior art lacks a corresponding image processing method.
Disclosure of Invention
The invention aims to provide a line segment endpoint extraction method, which realizes extraction marking of the endpoint of a lead part in an image.
In order to solve the above technical problem, the present invention provides a line segment endpoint extraction method, comprising:
processing the picture to obtain a maximum outline area;
calculating the coordinate difference value of adjacent areas on the contour lines to judge the trend of the contour;
performing noise reduction and misalignment processing to obtain turning points;
expanding the position area of the turning point;
finding a position area with the changed trend of the x axis and/or the y axis; and
and selecting one turning point from the overlapped part for marking, wherein the marked turning point is an end point.
Optionally, processing the picture to obtain the maximum contour region includes: performing binarization on the picture to obtain a black-and-white picture, extracting the contour of the wire portion, the contour being a closed curve, and selecting the largest contour region.
optionally, before the binarizing processing is performed on the picture to obtain the black-and-white picture, the method further includes:
converting the picture into a gray-scale image and obtaining structuring elements;
subjecting the obtained gray-scale image to processing including dilation-erosion-dilation.
Optionally, in the step of calculating the coordinate difference of adjacent regions on the contour line to determine the contour trend, the trend is expressed according to the slope or difference of the adjacent regions.
Optionally, two adjacent regions, or two regions separated by the same interval, are selected for determining the contour trend.
Optionally, the noise-reduction processing includes: modifying the trend directions of adjacent-region coordinates on the x-axis and y-axis so that they remain consistent with the directions before and after.
Optionally, modifying the trend includes: when a noise region that needs to be changed is encountered, modifying the trend data of the noise region to the data of the previous region or the next region.
Optionally, the misalignment processing includes: calculating the absolute values of the misalignment differences of the coordinate differences, (|ΔX_n − ΔX_{n−i}|, |ΔY_n − ΔY_{n−i}|), where 1 ≤ i ≤ n, and determining the turning points from the misalignment differences.
Optionally, an integer is selected to expand the position region of the turning point; the integer varies with the diameter of the target line in the picture.
Optionally, the manner of expanding the position region includes: expanding the turning-point region to both sides to find the overlapping portion, or determining a convolution range to find the overlapping portion.
The invention provides a line segment endpoint extraction method comprising: processing a picture to obtain the largest contour region; calculating the coordinate differences of adjacent regions on the contour line to determine the contour trend; performing noise reduction and misalignment processing to obtain turning points; expanding the position regions of the turning points; finding the position regions where the trend of the x-axis and/or y-axis changes; and selecting one turning point from each overlapping portion for marking, the marked turning point being an endpoint. The invention thereby enables convenient extraction of the positions of the two endpoints of a line segment. In addition, the invention can also extract convex points and concave points in more complicated situations (such as the intersection of two line segments).
Drawings
FIG. 1 is a schematic flow chart of a line segment endpoint extraction method according to an embodiment of the present invention;
FIG. 2 is a schematic diagram of contour generation according to one embodiment of the present invention;
FIG. 3 is a schematic diagram of the outline area tag generation according to one embodiment of the present invention;
FIG. 4 is a schematic diagram of the distribution of ΔX_n values according to an embodiment of the present invention;
FIG. 5 is a schematic diagram of the distribution of ΔX_n values after noise reduction according to one embodiment of the invention;
FIG. 6 is a schematic diagram of an x-axis turning point and a y-axis turning point obtained after a misalignment process according to one embodiment of the invention;
FIG. 7 is an expanded view of the x-axis and y-axis turning points of one embodiment of the present invention;
FIG. 8 is a schematic diagram of an end point marker in accordance with one embodiment of the present invention.
Detailed Description
The present invention will now be described in more detail with reference to the accompanying schematic drawings, in which preferred embodiments of the invention are shown. It should be understood that those skilled in the art may modify the invention described herein while still achieving its advantageous effects; accordingly, the following description is to be understood as broadly addressed to those skilled in the art and not as limiting the invention.
The invention is described in more detail in the following paragraphs by way of example with reference to the accompanying drawings. Advantages and features of the present invention will become apparent from the following description and from the claims. It should be noted that the drawings are in a highly simplified form and not to precise scale, and are provided merely for convenience and clarity in illustrating embodiments of the present invention.
Referring to fig. 1, an embodiment of the present invention provides a method for extracting a segment endpoint, including:
S101, processing the picture to obtain the largest contour region;
S102, calculating the coordinate differences of adjacent regions on the contour line to determine the contour trend;
S103, performing noise reduction and misalignment processing to obtain turning points;
S104, expanding the position regions of the turning points;
S105, finding the position regions where the trend of the x-axis and/or y-axis changes; and
S106, selecting one turning point in each overlapping portion for marking, the marked turning point being an endpoint.
Through the above process, the invention can conveniently find the positions of line segment endpoints, and can also extract convex points and concave points in more complex situations (such as the intersection of two line segments).
The process of the present invention is described in detail below:
In S101, a picture is imported and read, the read picture is converted into a gray-scale image, structuring elements are obtained, dilation-erosion-dilation processing is then applied to the gray-scale image, and binarization is performed to obtain a black-and-white picture, as shown in fig. 2.
For example, the function cv2.findContours() is called to extract the contour of the wire portion. In the embodiment of the present invention, the contour refers to the line at the black-white boundary in the black-and-white picture; at this point the contour is a closed curve.
The function cv2.contourArea() can then be used to calculate the area of each contour so as to select the largest contour region.
It should be noted that the description of the functions in the present invention is mainly for illustration, and those skilled in the art can select other functions to implement the corresponding functions according to actual needs, which are all covered in the spirit of the present invention.
In S102, for example, the x-axis and y-axis coordinates of the contour region are obtained through cv2.findContours() and are sequentially denoted (X_0, Y_0), (X_1, Y_1), (X_2, Y_2) ... (X_n, Y_n). Each coordinate can be given a serial number; the 6 serial numbers shown in fig. 3 represent 6 coordinates.
The coordinate difference of adjacent regions on the contour, (X_n − X_{n−i}, Y_n − Y_{n−i}) with 1 ≤ i ≤ n, is calculated and denoted (ΔX_n, ΔY_n). Note that (ΔX_n, ΔY_n) is written this way only for convenience of notation and does not represent a point with abscissa ΔX_n and ordinate ΔY_n. Note also that the 0th position (X_0, Y_0) and the last position (X_n, Y_n) are adjacent, so the last term of the coordinate difference is (X_n − X_0, Y_n − Y_0).
For example, taking i = 1:
ΔX_n: -1, -1, -1, -1, ......, 0, 0, 0, ......, -1, -1, -1, -1, ......;
ΔY_n: -1, -1, -1, -1, ......, 0, 0, 0, ......, -1, -1, -1, -1, .......
As shown in FIG. 4, the distribution of the ΔX_n values is schematically illustrated. In practice, colors may be used for distinction, for example red for "-1", blue for "0" and green for "1".
It should be understood that ΔX_n and ΔY_n may also take values other than "1", "0" and "-1".
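The cyclic coordinate difference of S102 can be sketched with NumPy. np.roll handles the adjacency between the 0th and last contour positions; the sign convention used here (each point minus the point i positions before it) is one reasonable reading of the patent's (X_n − X_{n−i}, Y_n − Y_{n−i}):

```python
import numpy as np

def cyclic_diff(xs, ys, i=1):
    """Coordinate differences of adjacent contour points with wrap-around.

    dx[n] = X[n] - X[n-i]; because the contour is a closed curve, the
    index wraps, so position 0 is adjacent to the last position.
    """
    xs = np.asarray(xs)
    ys = np.asarray(ys)
    dx = xs - np.roll(xs, i)
    dy = ys - np.roll(ys, i)
    return dx, dy
```

For a contour traversed one pixel at a time, the resulting values fall in {-1, 0, 1}, matching the example sequences above.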
In S103, the noise-reduction processing includes: modifying the trend directions of adjacent-region coordinates on the x-axis and y-axis so that they remain consistent with the directions before and after.
In one specific implementation, the noise-reduction processing converts the values of 0 in (ΔX_n, ΔY_n) to 1 or −1; that is, when a noise region that needs to be changed is encountered, the trend data of the noise region is modified to the data of the previous region or the next region. It should be understood that when ΔX_n and ΔY_n take other values, the modified values may likewise be other values.
For example, after noise reduction:
ΔX_n: -1, -1, -1, -1, ......, 1, 1, 1, ......, -1, -1, -1, -1, ......;
ΔY_n: -1, -1, -1, -1, ......, 1, 1, 1, ......, -1, -1, -1, -1, .......
As shown in FIG. 5, the distribution of ΔX_n values after noise reduction is schematically illustrated. In practice, colors may likewise be used for distinction.
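The noise-reduction step can be sketched as follows; replacing a 0 entry with the previous region's value is one of the two options the patent allows (previous or next region):

```python
def denoise_trend(d):
    """Replace each 0 ("no movement") trend value with the value of the
    previous region, so the direction stays consistent front to back.

    The patent also permits using the next region's value instead; the
    choice of the previous region here is an assumption.
    """
    d = list(d)
    for k in range(len(d)):
        if d[k] == 0:
            d[k] = d[k - 1]  # index -1 wraps to the last element for k == 0
    return d
```

Applied to the ΔX_n example above, the run of 0s becomes a run of the preceding -1 or 1 values.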
In S103, the misalignment processing includes: calculating the absolute values of the misalignment differences of the coordinate differences, (|ΔX_n − ΔX_{n−i}|, |ΔY_n − ΔY_{n−i}|), and determining the turning points from the misalignment differences.
Taking i = 1 as an example, the obtained misalignment differences include (0, 0), (0, 2), (2, 0) and (2, 2); specifically, for example:
|ΔX_n − ΔX_{n−1}|: 0, 0, 0, 0, 0, ......, 0, 0, 2, 0, 0, 0, 0, 0, 0, ......;
|ΔY_n − ΔY_{n−1}|: 0, 0, 0, 0, 2, ......, 0, 0, 0, 0, 0, 0, 2, 0, 0, .......
In one embodiment, the specific locations of the turning points may be found as follows: the positions of the misalignment differences (0, 2), (2, 0) and (2, 2) are turning points, where (0, 2) represents a position where the y-axis turns, (2, 0) a position where the x-axis turns, and (2, 2) a position where the x-axis and y-axis turn simultaneously.
As shown in fig. 6, the x-axis turning point and the y-axis turning point obtained after the misalignment processing are schematically shown.
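The misalignment differences and the resulting turning-point positions could be computed as follows for i = 1, assuming the denoised trend values are restricted to −1 and 1 as in the example above:

```python
import numpy as np

def turning_points(dx, dy):
    """Misalignment differences |Δ_n - Δ_{n-1}| on the denoised trends (i = 1).

    With trend values in {-1, 1}, each absolute misalignment difference is
    0 or 2: a 2 in the x row marks an x-axis turn, a 2 in the y row a
    y-axis turn, and a 2 in both rows a simultaneous turn.
    """
    dx = np.asarray(dx)
    dy = np.asarray(dy)
    mx = np.abs(dx - np.roll(dx, 1))  # wrap-around, as on the closed contour
    my = np.abs(dy - np.roll(dy, 1))
    x_turns = np.flatnonzero(mx == 2)
    y_turns = np.flatnonzero(my == 2)
    return x_turns, y_turns
```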
In S104, the position region of a turning point may be expanded by a selected integer (e.g., 3 positions before and after); the integer (e.g., "3") varies with the diameter of the target line in the picture.
In one particular implementation, the operations are:
will be provided with
|ΔX n -ΔX n-1 |:......、0、0、0、0、2、0、0、0、0、......;
|ΔY n -ΔY n-1 |:......、0、0、0、0、0、2、0、0、0、......;
Is converted into
|ΔX n -ΔX n-1 |:......、0、2、2、2、2、2、2、2、0、......;
|ΔY n -ΔY n-1 |:......、0、0、2、2、2、2、2、2、2、......。
Namely, the absolute value of the difference is expanded to be 2 after being expanded to be 3 before and after being 2 respectively.
As shown in fig. 7, the case where the x-axis turning point and the y-axis turning point are enlarged is schematically shown.
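The region expansion of S104 might be sketched as a one-dimensional dilation around each turning-point index. The radius of 3 is only the example value from the text and should be chosen according to the wire diameter; indices wrap because the contour is closed:

```python
def expand_regions(marks, n, radius=3):
    """Expand each turning-point index by `radius` positions on both sides.

    n is the number of contour positions; radius approximates the wire
    diameter in pixels, per the patent's suggestion (3 is an example).
    """
    out = set()
    for m in marks:
        for k in range(m - radius, m + radius + 1):
            out.add(k % n)  # wrap around the closed contour
    return out
```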
In S105, according to the overlapping positions, for example where the value of (|ΔX_n − ΔX_{n−1}|, |ΔY_n − ΔY_{n−1}|) is (2, 2), the position regions where the trends of the x-axis and y-axis change simultaneously are found; alternatively, depending on the specific target task, the position regions of x-axis or y-axis turning points may be used separately.
In S106, the redundant turning-point positions are filtered out, one turning point is selected in each overlapping portion and marked, and the marked turning point is an endpoint; as shown in fig. 8, the points whose coordinates are indicated by reference numerals 914 and 2024 are endpoints.
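S105 and S106 together can be sketched as intersecting the two expanded regions and keeping one representative per contiguous run. Picking the first index of each run is an assumption here, since the patent does not specify which of the redundant turning points to keep:

```python
def endpoint_indices(x_regions, y_regions):
    """Return one representative contour index per overlapping region.

    x_regions and y_regions are the expanded x-axis and y-axis turning
    regions (sets of indices); their intersection marks positions where
    both trends change, and one turning point is kept per contiguous run.
    """
    overlap = sorted(x_regions & y_regions)
    picked = []
    prev = None
    for idx in overlap:
        if prev is None or idx != prev + 1:
            picked.append(idx)  # first index of a new contiguous run
        prev = idx
    return picked
```

Each returned index can then be looked up in the contour coordinate list to mark the endpoint on the picture.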
In summary, the line segment endpoint extraction method of the present invention includes: processing a picture to obtain the largest contour region; calculating the coordinate differences of adjacent regions on the contour line to determine the contour trend; performing noise reduction and misalignment processing to obtain turning points; expanding the position regions of the turning points; finding the position regions where the trend of the x-axis and/or y-axis changes; and selecting one turning point from each overlapping portion for marking, the marked turning point being an endpoint. According to the invention, the positions of the turning points can be obtained from the misalignment differences and screened according to the actual target requirement, thereby obtaining the endpoints of a line segment as well as the convex and concave points of intersecting line segments.
It will be apparent to those skilled in the art that various changes and modifications may be made in the present invention without departing from the spirit and scope of the invention. Thus, if such modifications and variations of the present invention fall within the scope of the claims of the present invention and their equivalents, the present invention is also intended to include such modifications and variations.
Claims (10)
1. A method for extracting a line segment endpoint is characterized by comprising the following steps:
processing the picture to obtain a maximum outline area;
calculating the coordinate difference value of adjacent areas on the contour lines to judge the trend of the contour;
performing noise reduction and misalignment processing to obtain turning points;
expanding the position area of the turning point;
finding a position area with the changed trend of the x axis and/or the y axis; and
and selecting one turning point from the overlapped part for marking, wherein the marked turning point is an end point.
2. The method of claim 1, wherein the step of processing the picture to obtain the maximum contour region comprises: and carrying out binarization processing on the picture to obtain a black-and-white picture, extracting the outline of the conducting wire part, wherein the outline is a closed curve, and selecting the largest outline area.
3. The method for extracting line segment end points according to claim 2, wherein before the binarization processing is performed on the picture to obtain the black-and-white picture, the method further comprises:
converting the picture into a gray-scale image and obtaining structuring elements;
subjecting the obtained gray-scale image to processing including dilation-erosion-dilation.
4. The method for extracting line segment endpoints as claimed in claim 2, wherein in the step of calculating the coordinate difference of adjacent regions on the contour line to determine the contour trend, the trend is expressed according to the slope or difference of the adjacent regions.
5. The method as claimed in claim 4, wherein two adjacent regions, or two regions separated by the same interval, are selected for determining the contour trend.
6. The method according to claim 1, wherein the noise-reduction processing comprises: modifying the trend directions of adjacent-region coordinates on the x-axis and y-axis so that they remain consistent with the directions before and after.
7. The method of claim 6, wherein the modifying the trend comprises: when a noise area needing to be changed is encountered, the trend data of the noise area is modified into the data of the previous area or the next area.
8. The method of claim 7, wherein the misalignment processing comprises: calculating the absolute values of the misalignment differences of the coordinate differences, (|ΔX_n − ΔX_{n−i}|, |ΔY_n − ΔY_{n−i}|), where 1 ≤ i ≤ n, and determining the turning points from the misalignment differences.
9. The method as claimed in claim 1, wherein an integer is selected to expand the position region of the turning point, the integer varying with the diameter of the target line in the picture.
10. The method as claimed in claim 1, wherein the step of enlarging the location area comprises: the overlapping part is found by enlarging the area of the turning point to two sides, or a convolution range is determined to find the overlapping part.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202210248088.3A CN114792376A (en) | 2022-03-14 | 2022-03-14 | Line segment endpoint extraction method |
Publications (1)
Publication Number | Publication Date |
---|---|
CN114792376A true CN114792376A (en) | 2022-07-26 |
Family
ID=82460490
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202210248088.3A Pending CN114792376A (en) | 2022-03-14 | 2022-03-14 | Line segment endpoint extraction method |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN114792376A (en) |
- 2022-03-14: CN application CN202210248088.3A, patent CN114792376A/en, status Pending
Legal Events
Date | Code | Title | Description |
---|---|---|---|
| PB01 | Publication | ||
| SE01 | Entry into force of request for substantive examination | ||