CN106023191A - Optical drawing character edge extraction and edge fitting method based on structure features - Google Patents
Optical drawing character edge extraction and edge fitting method based on structure features
- Publication number
- CN106023191A, CN201610327790.3A
- Authority
- CN
- China
- Prior art keywords
- edge
- line segment
- end points
- character
- feature
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/24—Classification techniques
- G06F18/241—Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
- G06F18/2413—Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches based on distances to training or reference patterns
- G06F18/24133—Distances to prototypes
- G06F18/24143—Distances to neighbourhood prototypes, e.g. restricted Coulomb energy networks [RCEN]
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/20—Special algorithmic details
- G06T2207/20084—Artificial neural networks [ANN]
Landscapes
- Engineering & Computer Science (AREA)
- Data Mining & Analysis (AREA)
- Theoretical Computer Science (AREA)
- Computer Vision & Pattern Recognition (AREA)
- Bioinformatics & Cheminformatics (AREA)
- Bioinformatics & Computational Biology (AREA)
- Artificial Intelligence (AREA)
- Evolutionary Biology (AREA)
- Evolutionary Computation (AREA)
- Physics & Mathematics (AREA)
- General Engineering & Computer Science (AREA)
- General Physics & Mathematics (AREA)
- Life Sciences & Earth Sciences (AREA)
- Image Analysis (AREA)
Abstract
The invention discloses an optical drawing character edge extraction and edge fitting method based on structural features, comprising the following steps: extracting edges with a Canny operator, converting the gray matrices corresponding to true and false edge templates into feature vectors and labeling them, and removing the false edges of the sample with a K-nearest-neighbor algorithm according to the labeled feature vectors; building different connection modes according to the stroke structure features of the character and the distances between the endpoints of the broken edge segments, thereby forming pixel scenes; and determining the connection mode of the segment endpoints according to the positional relationship of the edge segments in each pixel scene, the gray-level information of the segments and the distances between them, then performing edge fitting to form the character contour. The extracted edges are accurate, and the scribed character contour obtained by fitting is complete, which greatly facilitates the subsequent extraction of character features.
Description
Technical field
The present invention relates to an optical drawing (scribed) character edge extraction and edge fitting method based on structural features.
Background technology
Character edge extraction and the fitting of the extracted edges together form the complete contour of a character and are an important step in character recognition. For optically scribed characters, conventional edge extraction and edge fitting methods do not give good results. During the generation of a scribed character image, a strip light source is used, so strokes parallel to the light source produce pixels of high gray value, strokes perpendicular to the light source produce pixels of low gray value, and the gray value of the background lies between the two, as shown in Fig. 1. Consequently, the Canny operator cannot extract the true edges of a scribed character, because it is fundamentally a gradient-based edge extraction method. The difference in illumination direction creates both low-gray and high-gray pixels in the middle of a stroke, so a pronounced gray-level change appears at their junction; the non-maximum suppression step of the Canny operator captures these changes and marks the corresponding pixels as edges, although they are not true edge points of the scribed character.
Even if the false edge points are removed by some method, discontinuous edges with large gaps remain. Commonly used edge fitting methods, such as contour extraction by thresholded sequential edge linking, edge detection with multi-threshold selection and edge linking, and neural-network-based edge fitting, all fail to give good results, because these methods are only suitable when the distance between break points is small.
Summary of the invention
To solve the above problems, the present invention proposes an optical drawing character edge extraction and edge fitting method based on structural features. The method first extracts edges with the Canny operator, extracts pixel features based on gray-level and structural information, and removes the false edge points produced by the Canny operator with the k-nearest-neighbor method; it then fits the broken edges based on the structural features of the character to form the character contour. The scribed character contour obtained in this way is complete and accurate.
To achieve these goals, the present invention adopts the following technical scheme (an illustrative pipeline sketch is given after the steps):
An optical drawing character edge extraction and edge fitting method based on structural features, comprising the following steps:
(1) extracting edges with the Canny operator, converting the pixel gray matrices corresponding to the true and false edge templates into feature vectors and labeling them, and removing the false edges of the sample with the K-nearest-neighbor algorithm according to the labeled feature vectors;
(2) building different connection modes according to the stroke structure features of the character and the distances between the endpoints of the broken edge segments, thereby forming pixel scenes;
(3) determining the connection mode of the segment endpoints according to the positional relationship of the edge segments in each pixel scene, the gray-level information of the segments and the distances between them, and performing edge fitting to form the character contour.
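By way of illustration only, the three steps can be wired together as in the following minimal Python sketch. The helper functions, thresholds and data layout are assumptions made for the sketch and are not part of the claimed method; `cv2.Canny` is used for the gradient-based edge extraction of step (1), and the placeholders stand for the procedures detailed in the description below.

```python
import cv2
import numpy as np

def remove_false_edges(gray, edge_points):
    # Placeholder for the KNN-based false-edge filter of step (1);
    # a sketch is given in the detailed description.
    return edge_points

def build_pixel_scenes(edge_points, stroke_width):
    # Placeholder for the scene construction of step (2).
    return [edge_points]

def fit_edges(scenes, gray, stroke_width):
    # Placeholder for the endpoint connection and polynomial fitting of step (3).
    return np.vstack(scenes)

def extract_character_contour(gray, stroke_width=6):
    """Overall pipeline: Canny edges -> false-edge removal -> scene fitting."""
    edges = cv2.Canny(gray, 50, 150)        # 8-bit gray image; illustrative thresholds
    edge_points = np.argwhere(edges > 0)    # (row, col) coordinates of edge pixels
    true_points = remove_false_edges(gray, edge_points)
    scenes = build_pixel_scenes(true_points, stroke_width)
    return fit_edges(scenes, gray, stroke_width)
```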
In step (1), the concrete steps include:
(1-1) converting the pixel gray matrix corresponding to each true or false edge template into a feature vector and assigning it a class label;
(1-2) selecting the k nearest samples in the feature space according to the Euclidean distance;
(1-3) counting the number of occurrences of each class label among the k nearest samples;
(1-4) taking the class label with the highest frequency of occurrence as the class label of the edge point to be classified.
Preferably, in step (1-1), the pixel gray matrix corresponding to each true or false edge template is converted into a feature vector of length 9.
In step (2), multiple connection modes are summarized by considering the stroke structure features of the character and the distances between the endpoints of the broken edge segments; each connection mode is treated as one pixel scene, and different connection modes use different edge fitting methods.
In step (2), the different pixel scenes are determined by the position of the edge segments to be fitted within the stroke, their arrangement relative to one another, the relative magnitudes of their pixel gray values and the differences in their positional relationships.
In step (3), the endpoints of the segments of each pixel scene are taken as elements to form a linked list for edge fitting; starting from the first endpoint of the first edge segment in the list, the list is used to determine the distance from each endpoint to the following endpoints and the number of segments within a set distance range around the endpoint.
In step (3), the list is searched for an endpoint whose distance to the first endpoint is smaller than a set value; if such an endpoint exists, the two endpoints are connected, their coordinates are fitted with a first-order polynomial, the segment connected through that endpoint is deleted from the list, and the list length is reduced by 1.
In step (3), the list is searched for segments whose distance to the first endpoint is smaller than the set distance range; if such segments exist, their endpoints are recorded and the number of segments is counted, and the pixel scene they belong to is determined by matching the number of segments, their gray levels and their positions against the pixel scenes.
In step (3), the endpoints are connected by coordinate fitting.
Preferably, the set distance range is 2 to 3 times the character stroke width.
The invention has the following beneficial effects:
(1) The edges extracted by the present invention are accurate, and the scribed character contour obtained by fitting is complete, which greatly facilitates the subsequent extraction of character features.
(2) The present invention determines how the broken edge segments are connected based on the structural features of the character, which effectively simulates human visual cognition; no matter which segment is taken first, and no matter how large the distance between the segment endpoints is, a correct connection can be achieved, because all the segments lie on one stroke or on two adjacent strokes of the character.
Brief description of the drawings
Fig. 1 is a schematic diagram of a scribed character image according to the present invention;
Fig. 2 is a schematic diagram of the false edge template set of the present invention;
Fig. 3 is a schematic diagram of the true edge template set of the present invention;
Fig. 4 is a schematic diagram of the scene template set of the present invention;
Fig. 5 is a schematic diagram of connection mode 3 of the present invention;
Fig. 6 is an edge map extracted by the conventional Canny operator, containing false edge points;
Fig. 7 is a schematic diagram of the result of removing the false edges with the present invention;
Fig. 8 shows the fitting process and its result;
Fig. 9 is the flow chart of the present invention.
Detailed description of the invention:
The invention is further described below with reference to the accompanying drawings and embodiments.
The scribed character edge extraction and edge fitting method based on structural features comprises the following steps:
Step 1: removing false edges based on the k-nearest-neighbor algorithm.
A reasonable choice of the feature space is the key element for correct classification; both the accuracy of the feature description and the balance of the samples must be considered. A drawback of the K-nearest-neighbor algorithm is that, when the samples are unbalanced, the samples of the large-capacity class may dominate among the k neighbors of a newly input sample.
Analysis of the gray distribution around the edge points extracted by the Canny operator shows that the 8-neighborhood gray distributions of true and false edge points differ markedly. A false edge point always lies at the boundary between the high-gray and low-gray pixels inside a stroke, whereas a true edge point lies either at the boundary between the high-gray pixels and the background-gray pixels or at the boundary between the low-gray pixels and the background-gray pixels. The 8-neighborhood of a pixel is therefore used as its feature; for convenience, the 8-neighborhood of a pixel is called an edge template, the 8-neighborhood of a true edge pixel is called a true edge template, and the 8-neighborhood of a false edge pixel is called a false edge template. The true and false edge templates together constitute the feature space used by the k-nearest-neighbor algorithm.
Since a false edge point always lies at the boundary between the two gray-level regions inside a stroke, the false edge templates are divided into the following 8 classes, 32 structures in total, according to the gray values of the pixels above, below, to the left and to the right of the false edge point. In the first 16 templates the false edge point at the center has a low gray value, and in the last 16 templates it has a high gray value, as shown in Fig. 2.
A true edge point lies either at the boundary between the high-gray region and the background or at the boundary between the low-gray region and the background. The true edge templates are therefore divided into the following six classes, 24 structures in total, according to the gray values of the pixels above, below, to the left and to the right of the edge point, as shown in Fig. 3.
It should be noted that the three gray levels in the templates are represented by the mean gray values of the high-, medium- and low-gray pixels obtained by histogram analysis.
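The description does not spell out the histogram analysis. As a minimal, assumed stand-in, the three representative gray means can be obtained by clustering the image's gray values into three groups (a one-dimensional k-means) and taking the cluster means as the low, medium and high levels:

```python
import numpy as np

def estimate_gray_levels(gray, iters=20):
    """Estimate the low/medium/high mean gray values by 1-D k-means (k = 3).

    Assumed realisation of the histogram analysis mentioned above,
    not necessarily the patent's exact procedure.
    """
    values = gray.astype(np.float64).ravel()
    centers = np.percentile(values, [10, 50, 90])   # initial low/mid/high guesses
    for _ in range(iters):
        # Assign every pixel to the nearest center, then recompute the means.
        labels = np.argmin(np.abs(values[:, None] - centers[None, :]), axis=1)
        for k in range(3):
            if np.any(labels == k):
                centers[k] = values[labels == k].mean()
    return np.sort(centers)    # (low_mean, mid_mean, high_mean)
```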
Algorithm steps (a minimal implementation sketch follows the list):
Convert the pixel gray matrix corresponding to each true or false edge template into a feature vector of length 9 and assign it a class label;
Select the k nearest samples in the feature space according to the Euclidean distance;
Count the number of occurrences of each class label among the k nearest samples;
Take the class label with the highest frequency of occurrence as the class label of the edge point to be classified.
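A minimal sketch of these four steps, assuming the 3×3 neighborhood of a candidate edge point is flattened into a length-9 vector and that the labels use the convention 1 = true edge template, 0 = false edge template (the label convention and data layout are ours):

```python
import numpy as np

def classify_edge_point(neighborhood3x3, templates, labels, k=5):
    """K-nearest-neighbor vote on a length-9 gray feature vector.

    templates : (M, 9) array of flattened true/false edge templates
    labels    : (M,) int array, 1 = true edge template, 0 = false edge template
    """
    x = np.asarray(neighborhood3x3, dtype=np.float64).reshape(9)
    # Euclidean distances to every template; keep the k nearest samples.
    dists = np.linalg.norm(templates - x, axis=1)
    nearest = labels[np.argsort(dists)[:k]].astype(int)
    # Majority vote over the class labels of the k nearest samples.
    counts = np.bincount(nearest, minlength=2)
    return int(np.argmax(counts))    # 1 -> keep the point, 0 -> false edge

# Usage sketch: keep only the Canny edge points classified as true edges.
# kept = [p for p in edge_points
#         if classify_edge_point(gray[p[0]-1:p[0]+2, p[1]-1:p[1]+2],
#                                templates, labels) == 1]
```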
Step 2: broken edge fitting based on structural features.
1. Feature description
Considering the stroke structure features of the character and the distances between the endpoints of the broken edge segments, 16 connection modes are summarized, involving 16 actual pixel scenes; different connection modes use different edge fitting methods. Specifically, under each connection mode the correct edge point connection is selected according to three elements: the positional relationship of the edge segments, the gray-level information of the segments, and the distances between them.
The 16 pixel scenes are shown in Fig. 4; their stroke features are described as follows.
Scene 1: the three edge segments to be fitted lie at the end of a stroke; two parallel edge segments have the same gray level, and the pixel gray of the third edge segment is higher than theirs. Near one edge endpoint there are endpoints of two other edges, one of which must be selected for edge fitting.
Scene 2: the four edge segments to be fitted lie in the middle of a stroke in a symmetric arrangement; the pair of segments with high pixel gray values is located at the upper right, and the pair with low pixel gray values at the lower left. Near one edge endpoint there are endpoints of three other edges, one of which must be selected for edge fitting.
Scenes 3 to 5 are all similar to scene 2. The difference is that in scene 3 the pair of segments with high pixel gray values is at the upper left and the pair with low pixel gray values at the lower right; in scene 4 the high-gray pair is at the lower right and the low-gray pair at the upper left; and in scene 5 the high-gray pair is at the lower left and the low-gray pair at the upper right.
Scenes 6 to 8 are situations derived from the later-described scene 15 after edge fitting. Scene 6 is similar to scene 5, except that of the pair of segments with low gray values one is at the lower right and the other at the upper right. Scene 8 is similar to scene 3, except that of the pair with high gray values one is at the upper left and the other at the upper right. In scene 7 the pair of segments with high gray values is in a left-right relation, while the other pair is in an up-down relation.
Scenes 9 to 12 are situations derived from the later-described scenes 13 and 14 after edge fitting. Of the four edge segments to be fitted, three have low pixel gray values and one has a high gray value. The three low-gray segments are vertical or nearly vertical and are regularly arranged, parallel to or above one another; the high-gray segment is horizontal. Near one edge endpoint there are endpoints of three other edges, one of which must be selected for edge fitting.
Scene 13: the six edge segments to be fitted lie at the intersection of a horizontal stroke and a vertical stroke in a symmetric arrangement; the pair of segments with high gray values is on the right, and the two pairs with low gray values are at the upper left and the lower left, respectively. Near one edge endpoint there are endpoints of five other edges, one of which must be selected for edge fitting.
Scene 14 is similar to scene 13; the two are horizontally related scenes.
Scene 15: the six edge segments to be fitted form a star-shaped symmetric pattern and are divided into three pairs by slope: the pair with zero slope both have high pixel gray values, the pair with negative slope have low pixel gray values, and of the remaining pair one segment has a high gray value and the other a low gray value.
Scene 16: the eight edge segments to be fitted form a star-shaped symmetric pattern and are divided into four pairs by slope and coordinate position, located at the upper left, lower left, upper right and lower right, respectively. Near one edge endpoint there are endpoints of seven other edges, one of which must be selected for edge fitting.
2. Specific description:
Any edge segment in the linked list is denoted Ci (i = 1, 2, ..., n), and Ci(1) and Ci(2) denote its first and second endpoints. Edge fitting always starts from C1(1), the first endpoint of the first edge segment in the list; the quantities of interest are the distances D1i(1) and D1i(2) from C1(1) to the endpoints Ci(1) and Ci(2) of the other edge segments Ci (i = 2, ..., n). Edge fitting is also concerned with the number N of segments within a certain distance range Dn around C1(1), which is used to decide which connection mode the current connection belongs to. The stroke width w is measured by the projection method, and observation and experiments show that Dn = 2.5w is a suitable choice.
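The projection method used to measure the stroke width w is only named here; a rough, assumed stand-in is to collect the run lengths of stroke pixels along the image rows and columns and take their median:

```python
import numpy as np

def estimate_stroke_width(stroke_mask):
    """Rough stroke-width estimate from a binary mask of stroke pixels.

    Stand-in for the projection measurement named in the description:
    run-length encode each row and column of the mask and take the
    median run length as w.
    """
    runs = []
    for arr in (stroke_mask, stroke_mask.T):
        for line in arr:
            padded = np.concatenate(([0], line.astype(np.int8), [0]))
            starts = np.flatnonzero(np.diff(padded) == 1)
            ends = np.flatnonzero(np.diff(padded) == -1)
            runs.extend(ends - starts)
    return float(np.median(runs)) if runs else 0.0

# As recommended above: w = estimate_stroke_width(mask); Dn = 2.5 * w
```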
First, the set {Ci(1), Ci(2), i = 2, 3, ...} is searched for an endpoint Cj(k) (j = 2, 3, ..., k = 1, 2) whose distance D1i(1) or D1i(2) is smaller than 0.3w. If such a Cj(k) exists, C1(1) is connected directly to Cj(k), the endpoint coordinates are fitted with a first-order polynomial, Cj is deleted from the linked list, and the list length is reduced by 1. This is connection mode 1 (a sketch is given below).
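A sketch of this first pass, using a plain Python list of point arrays in place of the linked list (our own data layout); the first-order polynomial fill of the remaining gap is omitted here and shown in the fitting sketch near the end of the description:

```python
import numpy as np

def connect_mode_1(segments, w):
    """Connection mode 1: join C1(1) to any endpoint closer than 0.3*w.

    segments : list of (Ni, 2) arrays of (row, col) points; segments[0] is C1
    and its first row is the working endpoint C1(1).
    """
    c1_end = segments[0][0].astype(float)            # C1(1)
    for j in range(1, len(segments)):
        for k in (0, -1):                            # Cj(1) and Cj(2)
            if np.linalg.norm(segments[j][k] - c1_end) < 0.3 * w:
                # Orient Cj so its matched endpoint sits next to C1(1),
                # merge it into C1 and delete it from the list (length - 1).
                cj = segments[j][::-1] if k == 0 else segments[j]
                segments[0] = np.vstack([cj, segments[0]])
                del segments[j]
                return True                          # one connection made
    return False
```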
Next, the set {Ci, i = 2, 3, ...} is searched for segments Cj with min(D1i(1), D1i(2)) smaller than 2.5w; if such segments exist, their endpoints Cj(k), k = 1, 2 are recorded. According to the number of such segments, Number, the following cases are distinguished (summarized in the dispatch sketch after the list):
(1) If Number = 1 and the segment is not the last broken edge segment of the character contour, C1(1) is connected directly to Cj(k). This is connection mode 1.
(2) If Number = 2, corresponding to scene 1, the gray-level and slope information is used to select the connection path. This is connection mode 2.
(3) If Number = 3, corresponding to scenes 2 to 12, the gray-level information and the positional relationships of the segments are used to distinguish the scenes and to select the corresponding endpoint connection methods. These cases are correspondingly called connection modes 3 to 13.
(4) If Number = 4 or 5, the farthest segments are removed and the case is handled as Number = 3.
(5) If Number = 6, corresponding to scenes 13, 14 and 15, the gray-level information, slope information and positional relationships of the segments are used to distinguish the scenes and to select the corresponding connection methods. These cases are called connection modes 14, 15 and 16.
(6) If Number ≥ 7, the broken edge segments within the 2.5w range are usually short and densely packed; the middle part of the character "B", for example, belongs to this situation. When Number = 8, the slopes of the segments can be used to judge whether the configuration corresponds to scene 16 (two pairs of parallel segments); if so, the segments are connected according to their positional relationships, which is called connection mode 16. Otherwise, as in the other Number ≥ 7 cases, the search range is reduced by taking Dn = 1.5w, the number N of segments within Dn of endpoint C1(1) is re-counted so that N ≤ 6, and the connection mode is then determined as described above.
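The case analysis above amounts to a dispatch on Number. The sketch below only returns a symbolic decision and leaves the per-scene endpoint selection to the routines described next; the function names, return strings and the parallelism test are assumptions for illustration:

```python
def matches_scene_16(slopes, tol=0.1):
    """Assumed check: do the eight slopes pair up into parallel segments?"""
    s = sorted(slopes)
    return all(abs(s[i] - s[i + 1]) < tol for i in range(0, len(s) - 1, 2))

def choose_connection_strategy(number, slopes=None):
    """Map the segment count 'Number' around C1(1) to a handling strategy."""
    if number == 1:
        return "mode_1_direct"
    if number == 2:
        return "mode_2_scene_1"                    # gray + slope selection
    if number == 3:
        return "modes_3_to_13_scenes_2_to_12"      # set-decomposition recognition
    if number in (4, 5):
        return "drop_farthest_then_handle_as_3"
    if number == 6:
        return "modes_14_to_16_scenes_13_to_15"
    # Number >= 7: dense, short segments (e.g. the middle of character "B").
    if number == 8 and slopes is not None and matches_scene_16(slopes):
        return "mode_16_scene_16"
    return "shrink_search_range_to_1.5w_and_retry"
```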
3. Edge fitting method
Scene recognition is actually carried out in order to determine the connection mode; connection mode 3 is taken as an example below. When Number = 3 there are three segments around C1(1), covering the eleven scenes from scene 2 to scene 12, and it must be determined to which endpoint of which of the other three segments C1(1) should be connected. Scenes 2 to 12 give rise to the eleven connection modes 3 to 13.
Scene recognition is carried out by set decomposition; the gray-level, distance and slope information of each segment is used during recognition. Since segment C1 keeps growing as segments are connected to it, its first 10 points starting from C1(1) are taken and their average coordinates XY1(x, y) and average gray Bgray1 are computed; the average coordinates and average grays of the other three segments are denoted XY2(x, y), XY3(x, y), XY4(x, y) and Bgray2, Bgray3, Bgray4, respectively.
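A small helper for these per-segment statistics, assuming each segment is stored as an ordered array of (row, col) points over the gray image (our layout):

```python
import numpy as np

def segment_stats(segment_points, gray, n=10):
    """Average coordinates and average gray of the first n points of a segment.

    segment_points : (N, 2) array of (row, col) points ordered from the working
    endpoint outward; gray : 2-D gray image.  Corresponds to the XY(x, y) and
    Bgray quantities used in the scene recognition.
    """
    pts = segment_points[:n]
    xy = pts.mean(axis=0)                                     # average (row, col)
    bgray = gray[pts[:, 0], pts[:, 1]].astype(float).mean()   # average gray
    return xy, bgray
```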
The eleven scenes constitute a set B. First, B is divided into subsets B1 and B2 according to the gray-level information of the segments. B1 contains scenes 2 to 8, seven scenes in total: among the average grays Bgray1, Bgray2, Bgray3, Bgray4 of the four segments, one pair of segments has higher pixel gray values and the other pair has lower values. B2 contains scenes 9 to 12: among the four segments, one has a higher pixel gray value and the other three have lower values.
Next, B1 is divided into subsets B11 and B12 according to the distance information of the segments. The average coordinates XY(x, y) are used to compute the distance between the two segments of equal (or nearly equal) gray level: the distance between the pair of high-gray segments is denoted Hgd, and the distance between the pair of low-gray segments is denoted Lgd. B11 contains the four scenes 2, 3, 4 and 5, for which both Hgd and Lgd are smaller than 1.5w; B12 contains the three scenes 6, 7 and 8, for which only one of Hgd and Lgd is smaller than 1.5w.
The sets B11, B12 and B2 are minimal sets that cannot be divided further. The four scenes of set B11 correspond to connection modes 3, 4, 5 and 6; the three scenes of set B12 correspond to connection modes 7, 8 and 9; and the four scenes of set B2 correspond to connection modes 10, 11, 12 and 13.
Within set B11, the four scenes are distinguished by the relative positions of the segments. The row and column averages of the pair of low-gray segments are denoted LL and LH, and the row and column averages of the pair of high-gray segments are denoted HL and HH. If LL > HL and LH < HH, the scene is scene 2, i.e. connection mode 3; if LL > HL and LH > HH, it is scene 3, i.e. connection mode 4; if LL < HL and LH < HH, it is scene 4, i.e. connection mode 5; and if LL < HL and LH > HH, it is scene 5, i.e. connection mode 6.
Within set B12, the slope information of the segments is used to distinguish the three scenes. The slopes of the high-gray pair are denoted HS1 and HS2, and the slopes of the low-gray pair are denoted LS1 and LS2. If HS1 = HS2, the scene is scene 6, i.e. connection mode 7; if LS1 = LS2, it is scene 8, i.e. connection mode 9; otherwise it is scene 7, i.e. connection mode 8.
Within set B2, the pair of low-gray segments arranged in parallel on the left and right is first excluded from the three low-gray segments according to the gray-level and column position information; then the row average LL1 and column average LH1 of the pixels of the remaining low-gray segment are computed, together with the row average HL1 and column average HH1 of the pixels of the high-gray segment. The positional information LL1, LH1, HL1 and HH1 determines the positional relationship of these two segments and thereby distinguishes scenes 9 to 12, i.e. connection modes 10 to 13.
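Putting the B → B1/B2 → B11/B12 decomposition together, the recognition inside the B11 branch (scenes 2 to 5, i.e. connection modes 3 to 6) can be sketched as follows; the thresholds and the scene-to-mode mapping come from the description above, while the code structure and the use of the segment_stats helper are assumptions:

```python
import numpy as np

def recognise_b11_scene(stats, w):
    """Distinguish scenes 2-5 (connection modes 3-6) inside subset B11.

    stats : list of four (xy, bgray) tuples as returned by segment_stats,
    the first entry belonging to C1.  Returns the connection mode number.
    """
    # Split the four segments into the low-gray pair and the high-gray pair.
    order = np.argsort([b for _, b in stats])
    low_pair = [stats[i][0] for i in order[:2]]
    high_pair = [stats[i][0] for i in order[2:]]

    # B11 membership: both intra-pair distances Lgd and Hgd are below 1.5*w.
    lgd = np.linalg.norm(low_pair[0] - low_pair[1])
    hgd = np.linalg.norm(high_pair[0] - high_pair[1])
    if not (lgd < 1.5 * w and hgd < 1.5 * w):
        raise ValueError("not a B11 configuration")

    # Row/column averages of each pair: (LL, LH) low gray, (HL, HH) high gray.
    ll, lh = np.mean(low_pair, axis=0)
    hl, hh = np.mean(high_pair, axis=0)
    if ll > hl and lh < hh:
        return 3      # scene 2
    if ll > hl and lh > hh:
        return 4      # scene 3
    if ll < hl and lh < hh:
        return 5      # scene 4
    return 6          # scene 5 (ll < hl and lh > hh)
```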
After the connection mode has been determined, the exact position of each segment within its scene must still be determined; only then can the concrete endpoint connection relations be fixed and the edge fitting be carried out. Connection mode 3 is again taken as an example.
There are four segments {C1, Ci, Cj, Ck}, i, j, k = 2, 3, 4, ..., N, and the first segment of the set is always C1. The gray ranking obtained in the feature recognition above is written {Ca, Cb, Cc, Cd}: Ca and Cb are the pair of segments with lower pixel gray values, and Cc and Cd are the pair with higher pixel gray values.
The column averages of Ca and Cb are compared: the segment with the smaller column average is labeled ④ and the segment with the larger column average is labeled ③. The row averages of Cc and Cd are compared: the segment with the smaller row average is labeled ① and the segment with the larger row average is labeled ②. If segment C1 is labeled ①, the segment labeled ④ is selected as the connecting segment; if C1 is labeled ④, the segment labeled ① is selected; if C1 is labeled ②, the segment labeled ③ is selected; and if C1 is labeled ③, the segment labeled ② is selected. The connection endpoint of C1 is always C1(1); the connection endpoint of the connected segment has already been determined in the feature recognition above.
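The labelling and pairing rule for connection mode 3 can be sketched as follows, again with assumed data structures (tuples of per-segment statistics rather than the linked list of the description):

```python
def select_partner_mode_3(segments_stats):
    """Pick the segment that C1 should connect to under connection mode 3.

    segments_stats : list of four (name, (row_avg, col_avg), avg_gray) tuples,
    one of which is named "C1".  Labels: the low-gray pair gets 4 (smaller
    column average) and 3 (larger); the high-gray pair gets 1 (smaller row
    average) and 2 (larger).  C1 is paired 1 <-> 4 and 2 <-> 3.
    """
    order = sorted(segments_stats, key=lambda s: s[2])       # sort by average gray
    ca, cb = order[0], order[1]                              # low-gray pair
    cc, cd = order[2], order[3]                              # high-gray pair

    labels = {}
    lo_by_col = sorted((ca, cb), key=lambda s: s[1][1])      # by column average
    labels[lo_by_col[0][0]], labels[lo_by_col[1][0]] = 4, 3
    hi_by_row = sorted((cc, cd), key=lambda s: s[1][0])      # by row average
    labels[hi_by_row[0][0]], labels[hi_by_row[1][0]] = 1, 2

    partner = {1: 4, 4: 1, 2: 3, 3: 2}[labels["C1"]]
    return next(name for name, lab in labels.items() if lab == partner)
```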
The endpoints are connected by coordinate fitting: if the distance between the endpoints is smaller than 0.8w, a first-order polynomial is used for the fit; otherwise a second-order polynomial is used.
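A minimal version of this distance-dependent polynomial join, using numpy's polyfit; the chord-length parametrisation and the use of one extra support point per segment for the second-order case are our own assumptions, while the polynomial orders and the 0.8w threshold come from the description:

```python
import numpy as np

def fit_gap(p_end, q_end, p_prev, q_next, w, samples=20):
    """Fill the gap between endpoints p_end and q_end with a polynomial arc.

    Order 1 if the endpoints are closer than 0.8*w, otherwise order 2.
    p_prev / q_next are neighbouring points on the two segments, used only to
    give the second-order fit enough support.  Points are (row, col) pairs;
    returns a (samples, 2) array of gap points.
    """
    gap = np.linalg.norm(np.subtract(p_end, q_end))
    order = 1 if gap < 0.8 * w else 2
    pts = np.array([p_prev, p_end, q_end, q_next], dtype=float)

    # Parametrise by cumulative chord length and fit row(t), col(t) separately.
    t = np.concatenate(([0.0],
                        np.cumsum(np.linalg.norm(np.diff(pts, axis=0), axis=1))))
    coef_row = np.polyfit(t, pts[:, 0], order)
    coef_col = np.polyfit(t, pts[:, 1], order)
    ts = np.linspace(t[1], t[2], samples)     # only the span between the endpoints
    return np.stack([np.polyval(coef_row, ts), np.polyval(coef_col, ts)], axis=1)
```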
Simulation results:
The typical character "G" is taken as an example. As shown in Fig. 6, the edge extracted by the Canny operator alone contains false edge points. Fig. 7 shows the result of removing the false edges with step 1, and Fig. 8 shows the fitting process of step 2 together with its result. It can be seen that the present invention removes the discontinuities well and leaves no false edges, the recognition accuracy is high, and the scribed character contour obtained by fitting is complete.
Although the specific embodiments of the present invention have been described above with reference to the accompanying drawings, they do not limit the scope of protection of the present invention. Those skilled in the art should understand that, on the basis of the technical solution of the present invention, any modifications or variations that can be made without creative effort still fall within the scope of protection of the present invention.
Claims (10)
1. An optical drawing character edge extraction and edge fitting method based on structural features, characterized by comprising the following steps:
(1) extracting edges with the Canny operator, converting the pixel gray matrices corresponding to the true and false edge templates into feature vectors and labeling them, and removing the false edges of the sample with the K-nearest-neighbor algorithm according to the labeled feature vectors;
(2) building different connection modes according to the stroke structure features of the character and the distances between the endpoints of the broken edge segments, thereby forming pixel scenes;
(3) determining the connection mode of the segment endpoints according to the positional relationship of the edge segments in each pixel scene, the gray-level information of the segments and the distances between them, and performing edge fitting to form the character contour.
2. The optical drawing character edge extraction and edge fitting method based on structural features, characterized in that in step (1) the concrete steps include:
(1-1) converting the pixel gray matrix corresponding to each true or false edge template into a feature vector and assigning it a class label;
(1-2) selecting the k nearest samples in the feature space according to the Euclidean distance;
(1-3) counting the number of occurrences of each class label among the k nearest samples;
(1-4) taking the class label with the highest frequency of occurrence as the class label of the edge point to be classified.
3. The optical drawing character edge extraction and edge fitting method based on structural features, characterized in that in step (1-1) the pixel gray matrix corresponding to each true or false edge template is converted into a feature vector of length 9.
4. The optical drawing character edge extraction and edge fitting method based on structural features, characterized in that in step (2) multiple connection modes are summarized by considering the stroke structure features of the character and the distances between the endpoints of the broken edge segments, each connection mode is treated as one pixel scene, and different connection modes use different edge fitting methods.
5. The optical drawing character edge extraction and edge fitting method based on structural features, characterized in that in step (2) the different pixel scenes are determined by the position of the edge segments to be fitted within the stroke, their arrangement relative to one another, the relative magnitudes of their pixel gray values and the differences in their positional relationships.
6. The optical drawing character edge extraction and edge fitting method based on structural features, characterized in that in step (3) the endpoints of the segments of each pixel scene are taken as elements to form a linked list for edge fitting; starting from the first endpoint of the first edge segment in the list, the list is used to determine the distance from each endpoint to the following endpoints and the number of segments within a set distance range around the endpoint.
7. The optical drawing character edge extraction and edge fitting method based on structural features, characterized in that in step (3) the list is searched for an endpoint whose distance to the first endpoint is smaller than a set value; if such an endpoint exists, the two endpoints are connected, their coordinates are fitted with a first-order polynomial, the segment connected through that endpoint is deleted from the list, and the list length is reduced by 1.
8. The optical drawing character edge extraction and edge fitting method based on structural features, characterized in that in step (3) the list is searched for segments whose distance to the first endpoint is smaller than the set distance range; if such segments exist, their endpoints are recorded and the number of segments is counted, and the pixel scene they belong to is determined by matching the number of segments, their gray levels and their positions against the pixel scenes.
9. The optical drawing character edge extraction and edge fitting method based on structural features, characterized in that in step (3) the endpoints are connected by coordinate fitting.
10. The optical drawing character edge extraction and edge fitting method based on structural features, characterized in that the set distance range is 2 to 3 times the character stroke width.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201610327790.3A CN106023191B (en) | 2016-05-16 | 2016-05-16 | A kind of optics delineation character edge extraction and edge fitting method based on structure feature |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201610327790.3A CN106023191B (en) | 2016-05-16 | 2016-05-16 | A kind of optics delineation character edge extraction and edge fitting method based on structure feature |
Publications (2)
Publication Number | Publication Date |
---|---|
CN106023191A true CN106023191A (en) | 2016-10-12 |
CN106023191B CN106023191B (en) | 2018-11-27 |
Family
ID=57098697
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201610327790.3A Active CN106023191B (en) | 2016-05-16 | 2016-05-16 | A kind of optics delineation character edge extraction and edge fitting method based on structure feature |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN106023191B (en) |
Cited By (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN107194435A (en) * | 2017-06-19 | 2017-09-22 | 山东建筑大学 | A kind of character representation true and false based on the optics delineation character edge point for simplifying neighborhood and sorting technique and application |
CN109596059A (en) * | 2019-01-07 | 2019-04-09 | 南京航空航天大学 | A kind of aircraft skin gap based on parallel lines structure light and scale measurement method |
CN118036628A (en) * | 2023-08-28 | 2024-05-14 | 武汉金顿激光科技有限公司 | Work piece intelligent management method, system and storage medium |
Citations (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20030198386A1 (en) * | 2002-04-19 | 2003-10-23 | Huitao Luo | System and method for identifying and extracting character strings from captured image data |
CN105279507A (en) * | 2015-09-29 | 2016-01-27 | 山东建筑大学 | Method for extracting outline of carved character |
-
2016
- 2016-05-16 CN CN201610327790.3A patent/CN106023191B/en active Active
Patent Citations (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20030198386A1 (en) * | 2002-04-19 | 2003-10-23 | Huitao Luo | System and method for identifying and extracting character strings from captured image data |
CN105279507A (en) * | 2015-09-29 | 2016-01-27 | 山东建筑大学 | Method for extracting outline of carved character |
Non-Patent Citations (3)
Title |
---|
JING YU等: "Double-edge-model based character stroke extraction from complex backgrounds", 《19TH INTERNATIONAL CONFERENCE ON PATTERN RECOGNITION》 * |
SONG Huaibo et al.: "Laser-etched nameplate character recognition based on trifurcation point features", Journal of Optoelectronics·Laser *
ZHANG Sijun et al.: "License plate image enhancement method based on Canny operator edge detection", Journal of Chongqing Jiaotong University (Natural Science) *
Cited By (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN107194435A (en) * | 2017-06-19 | 2017-09-22 | 山东建筑大学 | A kind of character representation true and false based on the optics delineation character edge point for simplifying neighborhood and sorting technique and application |
CN107194435B (en) * | 2017-06-19 | 2020-07-31 | 山东建筑大学 | Simplified neighborhood based optical scoring character edge point true and false feature representation and classification method and application |
CN109596059A (en) * | 2019-01-07 | 2019-04-09 | 南京航空航天大学 | A kind of aircraft skin gap based on parallel lines structure light and scale measurement method |
CN118036628A (en) * | 2023-08-28 | 2024-05-14 | 武汉金顿激光科技有限公司 | Work piece intelligent management method, system and storage medium |
Also Published As
Publication number | Publication date |
---|---|
CN106023191B (en) | 2018-11-27 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN110175576B (en) | Driving vehicle visual detection method combining laser point cloud data | |
CN110032962B (en) | Object detection method, device, network equipment and storage medium | |
CN106709436B (en) | Track traffic panoramic monitoring-oriented cross-camera suspicious pedestrian target tracking system | |
CN105574543B (en) | A kind of vehicle brand type identifier method and system based on deep learning | |
CN105528588A (en) | Lane line recognition method and device | |
KR101403876B1 (en) | Method and Apparatus for Vehicle License Plate Recognition | |
CN107369159B (en) | Threshold segmentation method based on multi-factor two-dimensional gray level histogram | |
Zhong et al. | Multi-scale feature fusion network for pixel-level pavement distress detection | |
CN105975929A (en) | Fast pedestrian detection method based on aggregated channel features | |
CN1312625C (en) | Character extracting method from complecate background color image based on run-length adjacent map | |
CN103473785B (en) | A kind of fast multi-target dividing method based on three-valued image clustering | |
CN102722707A (en) | License plate character segmentation method based on connected region and gap model | |
CN105069451B (en) | A kind of Car license recognition and localization method based on binocular camera | |
CN104978567A (en) | Vehicle detection method based on scenario classification | |
CN110889398A (en) | Multi-modal image visibility detection method based on similarity network | |
CN109870458B (en) | Pavement crack detection and classification method based on three-dimensional laser sensor and bounding box | |
CN104182952A (en) | Multi-focusing sequence image fusion method | |
JP2008064628A (en) | Object detector and detecting method | |
CN105868683A (en) | Channel logo identification method and apparatus | |
CN102779157A (en) | Method and device for searching images | |
CN106023191A (en) | Optical drawing character edge extraction and edge fitting method based on structure features | |
CN103743750A (en) | Method for generating distribution diagram of surface damage of heavy calibre optical element | |
JP2020038132A (en) | Crack on concrete surface specification method, crack specification device, and crack specification system, and program | |
CN111414938B (en) | Target detection method for bubbles in plate heat exchanger | |
CN112381034A (en) | Lane line detection method, device, equipment and storage medium |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
C06 | Publication | ||
PB01 | Publication | ||
C10 | Entry into substantive examination | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||