CN105160300B - Text extraction method based on level set segmentation - Google Patents
Text extraction method based on level set segmentation
- Publication number
- CN105160300B (application CN201510474071.XA)
- Authority
- CN
- China
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active
Classifications
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V30/00—Character recognition; Recognising digital ink; Document-oriented image-based pattern recognition
- G06V30/40—Document-oriented image-based pattern recognition
- G06V30/41—Analysis of document content
- G06V30/414—Extracting the geometrical structure, e.g. layout tree; Block segmentation, e.g. bounding boxes for graphics or text
Abstract
The invention discloses a text extraction method based on level set segmentation, comprising: reading image data information and determining a boundary curve; graying the read image; extracting gray feature values; dividing the image into two regions with a level set function according to the gray feature values; binarizing the two segmented regions; labeling the connected components of each binarized region; filtering the labeled connected components in the two regions; judging the polarity of the filtered regions to distinguish the text pixel region from the background pixel region; filtering the text region to remove background noise; and outputting the text extraction result. The method not only extracts text from complex backgrounds, but also extracts outline (hollow) characters accurately, and therefore has good generality and practicability.
Description
Technical Field
The present invention relates to text extraction in the field of image processing, and in particular to a text extraction method based on level set segmentation.
Background
With the development of networks and computer technology, more and more information appears in multimedia forms such as images and video. Images and video contain abundant text information that explains and annotates their content, and extracting and recognizing this text is important for image understanding, video content analysis, intelligent transportation, machine vision, intelligent control, and other applications. However, because text often sits in a complex background, a general OCR system has difficulty recognizing it directly; a background-removal step, i.e. a text extraction step, is therefore required before detected text is submitted to the OCR system. How to extract text information from a complex background image has thus become a key task for understanding image content with text as a clue.
Existing image text extraction techniques fall mainly into threshold-based, clustering-based, and statistical-model-based methods. Threshold-based methods exploit the separation between text and background colors and set a threshold to separate the two; the threshold may be global or local. The extraction quality of these methods depends on how well the threshold discriminates text from background, so they generally suit images with a simple background. Clustering-based methods typically divide a text-block image into K classes using color information, then merge classes that satisfy certain rules according to a clustering algorithm and a set threshold, gradually reducing the number of color classes; the text pixels finally correspond to one class and the remaining classes are background. When the background contains components whose color is the same as or similar to the text, these methods misclassify those components as text, leaving a large amount of residual background that degrades OCR recognition. Statistical-model-based methods build a probability model over all pixels in the text block, set reasonable model parameters, and then decide whether each pixel is a text pixel by a maximum-likelihood rule; the model parameters generally have to be obtained by statistical learning, which requires a large number of training samples.
All of these text extraction methods use only local low-level gray or color information of the image. When extracting ordinary or outline (hollow) characters from a complex background image, residual background often remains and the extraction quality is poor.
Disclosure of Invention
The present invention aims to solve the above problems by providing a text extraction method based on level set segmentation. The method first divides the image into two regions with a level set function, then judges the polarity of the two regions to distinguish the text region from the background region, and finally filters the text region to remove background noise. Because it uses the full information of the image, the method can extract text from complex backgrounds and also extracts outline (hollow) characters well, giving it good generality and practicability.
To this end, the invention adopts the following technical solution:
A text extraction method based on level set segmentation comprises the following steps:
reading image data information and determining a boundary curve; graying the read image; extracting gray feature values; dividing the image into a region inside the boundary curve and a region outside the boundary curve with a level set function according to the gray feature values; binarizing the two segmented regions; labeling the connected components of each binarized region; filtering the labeled connected components in the two regions; judging the polarity of the filtered regions to distinguish the text pixel region from the background pixel region; filtering the text region to remove background noise; and outputting the text extraction result.
The specific steps are as follows:
step (1): given an image u_0(x,y), (x,y) ∈ Ω, where Ω is the image region, ω is an open subset of Ω, and C is the boundary curve of ω, read the image information;
step (2): gray the read image;
step (3): extract the gray feature values of the image;
step (4): divide the image into a region inside the boundary curve and a region outside the boundary curve with a level set function;
step (5): judge whether the segmentation is finished; if so, go to step (6); if not, return to step (4);
step (6): binarize the two segmented regions, representing the region inside the curve with black pixels and the region outside the curve with white pixels;
step (7): label the connected components of the two binarized regions using the region growing method;
step (8): judge whether connected component labeling is finished; if so, go to step (9); if not, return to step (7);
step (9): filter the connected components in the two regions;
step (10): judge whether the filtering of the components in the two regions is finished; if so, go to step (11); if not, return to step (9);
step (11): judge the polarity of the two filtered regions to determine which one is the text region: compare the number of connected components in the two regions; the region with more components is the text region and the region with fewer components is the background region;
step (12): further filter the determined text region to remove residual background;
step (13): output the text extraction result.
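As a rough, non-authoritative sketch, the steps above can be strung together as follows. A global mean threshold stands in for the level-set split of step (4), and scipy's `ndimage.label` stands in for the region-growing labeling of step (7); the function name `extract_text` and all parameter values are illustrative, not taken from the patent.

```python
import numpy as np
from scipy import ndimage

def extract_text(gray, min_size=5):
    """End-to-end sketch of the claimed pipeline (illustrative only)."""
    # Steps 4-6: split the image into two regions and binarize.
    # A global mean threshold stands in for the level-set segmentation.
    inside = gray > gray.mean()
    regions = [inside, ~inside]
    # Steps 7-10: label 8-connected components in each region, then
    # filter out components that touch the border or are too small.
    eight = np.ones((3, 3), dtype=int)  # 8-connectivity structure
    counts = []
    for reg in regions:
        labels, n = ndimage.label(reg, structure=eight)
        kept = 0
        for lab in range(1, n + 1):
            ys, xs = np.nonzero(labels == lab)
            on_border = (ys.min() == 0 or xs.min() == 0 or
                         ys.max() == reg.shape[0] - 1 or
                         xs.max() == reg.shape[1] - 1)
            if not on_border and ys.size >= min_size:
                kept += 1
        counts.append(kept)
    # Step 11: polarity - the region with more surviving components is text.
    return regions[0] if counts[0] >= counts[1] else regions[1]
```

On a synthetic image with a few bright interior blobs on a dark background, the blob region survives filtering with more components than the background region, so it is selected as the text region.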
In step (4), the energy function of the level set segmentation is:

E(c_1, c_2, \varphi) = \mu \int_\Omega \delta(\varphi)\,|\nabla\varphi|\,dx\,dy + \nu \int_\Omega H(\varphi)\,dx\,dy + \lambda_1 \int_\Omega |u_0(x,y) - c_1|^2 H(\varphi)\,dx\,dy + \lambda_2 \int_\Omega |u_0(x,y) - c_2|^2 (1 - H(\varphi))\,dx\,dy

where \mu, \nu, \lambda_1, \lambda_2 are all positive constants, c_1 and c_2 are the gray-level means of the image u_0(x,y) inside and outside the curve boundary C respectively, and H(z) and \delta(z) denote the regularized Heaviside function and Dirac function:

H_\varepsilon(z) = \frac{1}{2}\Bigl(1 + \frac{2}{\pi}\arctan\frac{z}{\varepsilon}\Bigr), \qquad \delta_\varepsilon(z) = \frac{\varepsilon}{\pi(\varepsilon^2 + z^2)}
The specific method of step (4) comprises the following steps:
step (4-1): represent the boundary curve C by a level set function \varphi(x,y): if the point (x,y) is inside the curve C, then \varphi(x,y) > 0; if the point (x,y) is outside the curve C, then \varphi(x,y) < 0; if the point (x,y) is on the curve C, then \varphi(x,y) = 0;
step (4-2): initialize the level set function, letting \varphi^0 = \varphi_0, k = 0, where \varphi_0 is a constant-valued initial level set function;
step (4-3): minimize the energy function E(c_1, c_2, \varphi): fixing \varphi^k, the value of \varphi at the k-th iteration, calculate c_1^k and c_2^k;
step (4-4): minimize the energy function E(c_1, c_2, \varphi): fixing c_1^k and c_2^k, calculate \varphi^{k+1}, the value of \varphi at the (k+1)-th iteration;
step (4-5): judge whether \varphi^{k+1} has stabilized; if not, return to step (4-3) and continue iterating; otherwise stop the iteration and go to step (4-6);
step (4-6): output the level set segmentation result.
In step (4-3), c_1 and c_2 at the k-th iteration are calculated as:

c_1^k = \frac{\int_\Omega u_0(x,y)\,H(\varphi^k)\,dx\,dy}{\int_\Omega H(\varphi^k)\,dx\,dy}, \qquad c_2^k = \frac{\int_\Omega u_0(x,y)\,(1 - H(\varphi^k))\,dx\,dy}{\int_\Omega (1 - H(\varphi^k))\,dx\,dy}

where u_0(x,y) is the gray value of the given image at the point (x,y) and H is the regularized Heaviside function.
The specific method for computing \varphi^{k+1} is: using c_1^k and c_2^k calculated in step (4-3), update the level set function by gradient descent on the energy function:

\varphi^{k+1} = \varphi^k + \Delta t\,\delta(\varphi^k)\Bigl[\mu\,\mathrm{div}\Bigl(\frac{\nabla\varphi^k}{|\nabla\varphi^k|}\Bigr) - \nu - \lambda_1 (u_0 - c_1^k)^2 + \lambda_2 (u_0 - c_2^k)^2\Bigr]

where div denotes the divergence operator, \nabla denotes the gradient operator, \Delta t is the time step, \mu, \nu, \lambda_1, \lambda_2 are all positive constants, and c_1^k, c_2^k are the gray-level means of the image u_0(x,y) inside and outside the curve boundary C at the k-th iteration.
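The iteration of steps (4-2) to (4-5) can be sketched in NumPy as below, assuming the arctan-regularized Heaviside/Dirac pair given above. The curvature discretization, the circular initialization, and all parameter values are illustrative choices for this sketch, not prescribed by the patent.

```python
import numpy as np

def heaviside(z, eps=1.0):
    # Regularized Heaviside: H(z) = 1/2 (1 + (2/pi) arctan(z/eps))
    return 0.5 * (1.0 + (2.0 / np.pi) * np.arctan(z / eps))

def dirac(z, eps=1.0):
    # Regularized Dirac delta, the derivative of the Heaviside above
    return (eps / np.pi) / (eps**2 + z**2)

def chan_vese(u0, mu=0.2, nu=0.0, lam1=1.0, lam2=1.0,
              dt=0.5, eps=1.0, max_iter=200, tol=1e-3):
    """Split a gray image into inside (phi>0) / outside (phi<0) regions."""
    h, w = u0.shape
    # Initialize phi as a signed distance to a centered circle (positive inside)
    y, x = np.mgrid[:h, :w]
    phi = (min(h, w) / 4.0
           - np.sqrt((x - w / 2.0)**2 + (y - h / 2.0)**2))
    for _ in range(max_iter):
        H = heaviside(phi, eps)
        c1 = (u0 * H).sum() / max(H.sum(), 1e-8)            # mean inside
        c2 = (u0 * (1 - H)).sum() / max((1 - H).sum(), 1e-8)  # mean outside
        # Curvature term div(grad phi / |grad phi|) via central differences
        gy, gx = np.gradient(phi)
        norm = np.sqrt(gx**2 + gy**2) + 1e-8
        curv = np.gradient(gx / norm)[1] + np.gradient(gy / norm)[0]
        # Gradient-descent update of the level set function
        force = dirac(phi, eps) * (mu * curv - nu
                                   - lam1 * (u0 - c1)**2
                                   + lam2 * (u0 - c2)**2)
        phi_new = phi + dt * force
        if np.abs(phi_new - phi).max() < tol:  # phi has stabilized
            phi = phi_new
            break
        phi = phi_new
    return phi > 0  # True = region inside the zero level set
```

On a dark image with one bright square, the contour expands from the initial circle to the square, so the returned mask roughly covers the bright region.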
The method for labeling the connected components of the two binarized regions by region growing in step (7) comprises:
step (7-1): scan the pixels of the region from top to bottom and from left to right; if a pixel is unlabeled, assign it a new label;
step (7-2): search the 8-neighborhood of the newly labeled pixel; if an unlabeled pixel is found in the 8-neighborhood, assign it the same label, and then search the 8-neighborhood of that newly labeled pixel in turn;
step (7-3): if no unlabeled pixel is found in the 8-neighborhood, the search for this component is finished;
step (7-4): judge whether all pixels have been labeled; if so, go to step (7-5); if not, return to step (7-1) until all pixels in the region are labeled;
step (7-5): take all pixels with the same label as one connected component.
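Steps (7-1) to (7-5) amount to breadth-first region growing over the 8-neighborhood. A minimal sketch (the function name and array conventions are assumptions of this sketch):

```python
from collections import deque

import numpy as np

def label_components(mask):
    """8-connected component labeling by region growing (BFS).

    mask: 2-D bool array; returns (label array with 0 = unlabeled
    background, number of components)."""
    h, w = mask.shape
    labels = np.zeros((h, w), dtype=int)
    next_label = 0
    # Step (7-1): scan top-to-bottom, left-to-right for unlabeled pixels
    for i in range(h):
        for j in range(w):
            if mask[i, j] and labels[i, j] == 0:
                next_label += 1
                labels[i, j] = next_label
                queue = deque([(i, j)])
                # Steps (7-2)/(7-3): grow through 8-neighborhoods
                while queue:
                    y, x = queue.popleft()
                    for dy in (-1, 0, 1):
                        for dx in (-1, 0, 1):
                            ny, nx = y + dy, x + dx
                            if (0 <= ny < h and 0 <= nx < w
                                    and mask[ny, nx]
                                    and labels[ny, nx] == 0):
                                labels[ny, nx] = next_label
                                queue.append((ny, nx))
    # Step (7-5): pixels sharing a label form one connected component
    return labels, next_label
```

Note that two diagonally touching pixels receive the same label, as required by 8-connectivity.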
The method for filtering connected components in step (9) is:
for each connected component in the two regions, examine its position and its number of pixels; delete the component if it touches the image boundary or if its pixel count is below a set threshold.
In step (11), the method for judging the polarity of the two filtered regions is:
step (11-1): after filtering, take the pixels with the same label in each region as one connected component;
step (11-2): count the number of connected components in the two regions, denoted n_1 and n_2 respectively;
step (11-3): compare n_1 and n_2; if n_1 > n_2, the region corresponding to n_1 is the text region; otherwise the region corresponding to n_2 is the text region.
In step (12), the method for further removing residual background from the determined text region is:
compute the gray-level mean of each connected component in the region and sort these means in ascending order; compute the differences between adjacent means and compare each difference with a set threshold in turn; if a difference exceeds the threshold, take it as a split position. After all differences have been examined, N split positions are obtained, dividing the components into N + 1 segments. The segment containing the largest number of pixels is taken as the text segment: its connected components are the text components, the positions they occupy form the text region, and the other regions of the image are taken as background.
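A sketch of this gray-mean splitting, assuming integer component labels with 0 for background; `gap_thresh` is an illustrative stand-in for the set threshold:

```python
import numpy as np

def remove_residual_background(gray, labels, gap_thresh=30.0):
    """Keep only the gray-mean cluster whose components hold the most
    pixels; delete the rest as residual background."""
    ids = [lab for lab in np.unique(labels) if lab != 0]
    if not ids:
        return labels
    means = {lab: gray[labels == lab].mean() for lab in ids}
    sizes = {lab: int((labels == lab).sum()) for lab in ids}
    # Sort components by gray mean; split wherever adjacent means
    # differ by more than gap_thresh
    order = sorted(ids, key=lambda lab: means[lab])
    segments, current = [], [order[0]]
    for prev, lab in zip(order, order[1:]):
        if means[lab] - means[prev] > gap_thresh:
            segments.append(current)
            current = []
        current.append(lab)
    segments.append(current)
    # The segment with the largest total pixel count is taken as text
    text_seg = max(segments, key=lambda seg: sum(sizes[l] for l in seg))
    out = labels.copy()
    for lab in ids:
        if lab not in text_seg:
            out[labels == lab] = 0
    return out
```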
The beneficial effects of the invention are:
according to the characteristics of text in complex background images, the image is first segmented with a level set function, and polarity judgment and background filtering are then applied to the segmented regions to obtain the text extraction result. Because the method uses the global information of the text image, it can extract text from complex background images and also extracts outline (hollow) characters accurately, avoiding the influence of residual background on the result; it therefore has good generality and practicability. The results of the invention can be applied directly to image understanding, video content analysis, intelligent transportation, machine vision, intelligent control, and other fields, and have broad application prospects.
Drawings
FIG. 1 is a flow chart of a text extraction method based on level set segmentation according to the present invention.
Detailed description of embodiments:
the invention is further illustrated by the following examples in conjunction with the accompanying drawings:
the basic hardware conditions required to implement the system architecture of the present invention are: a computer with a main frequency of 2.4GHZ and a memory of 1G has the following software conditions: the programming environment is Visual C + +.
A text extraction method based on a level set active contour model is shown in FIG. 1; the specific steps are as follows:
step (1): starting, reading an image;
step (2): graying the read image;
and (3): extracting a gray characteristic value of the image;
and (4): adopting a level set function to divide the image into two areas;
given an image u_0(x,y), (x,y) ∈ Ω, Ω being the image region, ω a subset of Ω, and C the boundary curve of ω, the curve C can be represented by a level set function \varphi(x,y): if the point (x,y) is inside C, then \varphi(x,y) > 0; if the point (x,y) is outside C, then \varphi(x,y) < 0; if the point (x,y) is on C, then \varphi(x,y) = 0.
The level set energy function can be expressed as:

E(c_1, c_2, \varphi) = \mu \int_\Omega \delta(\varphi)\,|\nabla\varphi|\,dx\,dy + \nu \int_\Omega H(\varphi)\,dx\,dy + \lambda_1 \int_\Omega |u_0 - c_1|^2 H(\varphi)\,dx\,dy + \lambda_2 \int_\Omega |u_0 - c_2|^2 (1 - H(\varphi))\,dx\,dy   (1)

where \mu, \nu, \lambda_1, \lambda_2 are positive constants, c_1 and c_2 are the gray-level means of u_0(x,y) inside and outside the curve boundary C, and H(z) and \delta(z) denote the regularized Heaviside and Dirac functions:

H_\varepsilon(z) = \frac{1}{2}\Bigl(1 + \frac{2}{\pi}\arctan\frac{z}{\varepsilon}\Bigr)   (2)

\delta_\varepsilon(z) = \frac{\varepsilon}{\pi(\varepsilon^2 + z^2)}   (3)

Minimizing the energy function with \varphi fixed yields the estimates

c_1 = \frac{\int_\Omega u_0\,H(\varphi)\,dx\,dy}{\int_\Omega H(\varphi)\,dx\,dy}   (4)

c_2 = \frac{\int_\Omega u_0\,(1 - H(\varphi))\,dx\,dy}{\int_\Omega (1 - H(\varphi))\,dx\,dy}   (5)

Then, fixing c_1 and c_2 and minimizing the energy function gives the update

\varphi^{k+1} = \varphi^k + \Delta t\,\delta(\varphi^k)\Bigl[\mu\,\mathrm{div}\Bigl(\frac{\nabla\varphi^k}{|\nabla\varphi^k|}\Bigr) - \nu - \lambda_1 (u_0 - c_1^k)^2 + \lambda_2 (u_0 - c_2^k)^2\Bigr]   (6)
The concrete implementation steps are:
step (4-1): initialize the level set function, letting \varphi^0 = \varphi_0, k = 0, and select 5 circles as the initialization curves of the level set;
step (4-2): calculate c_1^k and c_2^k according to formulas (4) and (5);
step (4-3): using the calculated c_1^k and c_2^k, calculate \varphi^{k+1} according to formula (6);
step (4-4): judge whether the solution has stabilized; if not, go to step (4-2) and continue iterating; otherwise stop the iteration and go to step (4-5);
step (4-5): output the level set segmentation result.
step (5): judge whether the segmentation is finished; if so, go to step (6); if not, return to step (4);
and (6): binarizing the two divided regions, namely representing the region inside the curve by using black pixels and representing the region outside the curve by using white pixels;
step (7): label the 8-connected components of the two segmented regions using the region growing method;
the method comprises the following specific steps:
step (7-1): scan the pixels of the region from top to bottom and from left to right; if a pixel is unlabeled, assign it a new label;
step (7-2): search the 8-neighborhood of the newly labeled pixel; if an unlabeled pixel is found in the 8-neighborhood, assign it the same label, and then search the 8-neighborhood of that newly labeled pixel in turn;
step (7-3): if no unlabeled pixel is found in the 8-neighborhood, the search for this component is finished;
step (7-4): judge whether all pixels have been labeled; if so, go to step (7-5); if not, return to step (7-1) until all pixels in the region are labeled;
step (7-5): take all pixels with the same label as one connected component.
step (8): judge whether connected component labeling is finished; if so, go to step (9); if not, return to step (7);
step (9): filter the connected components in the two regions: for each component, examine its position and its number of pixels, and delete it if it touches the image boundary or its pixel count is below a given threshold;
step (10): judge whether the filtering of the components in the two regions is finished; if so, go to step (11); if not, return to step (9);
step (11): judge the polarity of the two filtered regions to determine which one is the text region: compare the number of connected components in the two regions; the region with more components is the text region and the other is the background region;
The specific steps are:
step (11-1): after filtering, take the pixels with the same label in each region as one connected component;
step (11-2): count the number of connected components in the two regions, denoted n_1 and n_2 respectively;
step (11-3): compare n_1 and n_2; if n_1 > n_2, the region corresponding to n_1 is the text region; otherwise the region corresponding to n_2 is the text region.
Step (12): further filtering the determined text area to remove residual background;
the method comprises the following specific steps:
step (12-1): solving the gray level average value of each connected element in the area;
step (12-2): arranging the gray average values of all connected elements in a sequence from small to large;
step (12-3): calculating the difference between each gray level average value and the adjacent gray level average value;
step (12-4): comparing the difference values obtained in the step (12-3) with a set threshold value respectively, and if the difference values are larger than the set threshold value, taking the difference values as segmentation positions;
step (12-5): judge whether all differences have been compared with the threshold; if so, go to step (12-6); if not, return to step (12-4);
step (12-6): after the comparison is finished, N segmentation positions are obtained, and each connected element is divided into N +1 segments by the N segmentation positions;
step (12-7): and respectively counting the number of pixel points contained in the connected elements corresponding to each segment in the N +1 segments, wherein the connected element corresponding to the segment with the largest number of pixels is a text connected element, the region corresponding to the text connected element is a text region, and the regions corresponding to the rest segments are background regions.
Step (12-8): the background area is deleted.
Step (13): and outputting a text extraction result.
Although the embodiments of the present invention have been described with reference to the accompanying drawings, they do not limit the scope of the invention; those skilled in the art will understand that various modifications and variations may be made on the basis of the technical solution of the invention without inventive effort.
Claims (8)
1. A text extraction method based on level set segmentation, characterized by comprising the following steps:
reading image data information and determining a boundary curve; graying the read image; extracting gray feature values; dividing the image into a region inside the boundary curve and a region outside the boundary curve with a level set function according to the gray feature values; binarizing the two segmented regions; labeling the connected components of each binarized region; filtering the labeled connected components in the two regions; judging the polarity of the filtered regions to distinguish the text pixel region from the background pixel region; filtering the text region to remove background noise; and outputting the text extraction result;
the specific steps being:
step (1): given an image u_0(x,y), (x,y) ∈ Ω, where Ω is the image region, ω is an open subset of Ω, and C is the boundary curve of ω, read the image information;
step (2): gray the read image;
step (3): extract the gray feature values of the image;
step (4): divide the image into a region inside the boundary curve and a region outside the boundary curve with a level set function;
step (5): judge whether the segmentation is finished; if so, go to step (6); if not, return to step (4);
step (6): binarize the two segmented regions, representing the region inside the curve with black pixels and the region outside the curve with white pixels;
step (7): label the connected components of the two binarized regions using the region growing method;
step (8): judge whether connected component labeling is finished; if so, go to step (9); if not, return to step (7);
step (9): filter the connected components in the two regions;
step (10): judge whether the filtering of the components in the two regions is finished; if so, go to step (11); if not, return to step (9);
step (11): judge the polarity of the two filtered regions to determine which one is the text region: compare the number of connected components in the two regions; the region with more components is the text region and the region with fewer components is the background region;
step (12): further filter the determined text region to remove residual background;
step (13): output the text extraction result;
wherein in step (4) the energy function of the level set segmentation is:

E(c_1, c_2, \varphi) = \mu \int_\Omega \delta(\varphi)\,|\nabla\varphi|\,dx\,dy + \nu \int_\Omega H(\varphi)\,dx\,dy + \lambda_1 \int_\Omega |u_0 - c_1|^2 H(\varphi)\,dx\,dy + \lambda_2 \int_\Omega |u_0 - c_2|^2 (1 - H(\varphi))\,dx\,dy

where \mu, \nu, \lambda_1, \lambda_2 are all positive constants; c_1 and c_2 are the gray-level means of the image u_0(x,y) inside and outside the boundary curve C respectively; H(z) and \delta(z) denote the regularized Heaviside function and Dirac function:

H_\varepsilon(z) = \frac{1}{2}\Bigl(1 + \frac{2}{\pi}\arctan\frac{z}{\varepsilon}\Bigr), \qquad \delta_\varepsilon(z) = \frac{\varepsilon}{\pi(\varepsilon^2 + z^2)}

and \varphi(x,y) is the level set function, (x,y) ∈ Ω, Ω being the image region.
2. The text extraction method based on level set segmentation as claimed in claim 1, characterized in that the specific method of step (4) is:
step (4-1): represent the boundary curve C by a level set function \varphi(x,y): if the point (x,y) is inside the boundary curve C, then \varphi(x,y) > 0; if the point (x,y) is outside the boundary curve C, then \varphi(x,y) < 0; if the point (x,y) is on the boundary curve C, then \varphi(x,y) = 0;
step (4-2): initialize the level set function, letting \varphi^0 = \varphi_0, k = 0, where \varphi_0 is a constant and is the initial value of the level set function;
step (4-3): minimize the energy function E(c_1, c_2, \varphi): fixing \varphi^k, the value of \varphi at the k-th iteration, calculate c_1^k and c_2^k, where c_1^k is the gray-level mean inside the boundary curve C at the k-th iteration and c_2^k is the gray-level mean outside the boundary curve C at the k-th iteration;
step (4-4): minimize the energy function E(c_1, c_2, \varphi): fixing c_1^k and c_2^k, calculate \varphi^{k+1}, the value of \varphi at the (k+1)-th iteration;
step (4-5): judge whether \varphi^{k+1} has stabilized; if not, return to step (4-3) and continue iterating; otherwise stop the iteration and go to step (4-6);
step (4-6): output the level set segmentation result.
3. The text extraction method based on level set segmentation as claimed in claim 2, characterized in that c_1 and c_2 at the k-th iteration of step (4-3) are calculated as:

c_1^k = \frac{\int_\Omega u_0(x,y)\,H(\varphi^k)\,dx\,dy}{\int_\Omega H(\varphi^k)\,dx\,dy}, \qquad c_2^k = \frac{\int_\Omega u_0(x,y)\,(1 - H(\varphi^k))\,dx\,dy}{\int_\Omega (1 - H(\varphi^k))\,dx\,dy}

where u_0(x,y) is the gray value of the given image at the point (x,y) and H is the regularized Heaviside function.
4. The text extraction method based on level set segmentation as claimed in claim 2, characterized in that the specific method for calculating \varphi^{k+1} is: using c_1^k and c_2^k calculated in step (4-3), update the level set function according to the following formula:

\varphi^{k+1} = \varphi^k + \Delta t\,\delta(\varphi^k)\Bigl[\mu\,\mathrm{div}\Bigl(\frac{\nabla\varphi^k}{|\nabla\varphi^k|}\Bigr) - \nu - \lambda_1 (u_0 - c_1^k)^2 + \lambda_2 (u_0 - c_2^k)^2\Bigr]

where div denotes the divergence operator, \nabla denotes the gradient operator, \Delta t is the time step, \mu, \nu, \lambda_1, \lambda_2 are all positive constants, and c_1, c_2 are the gray-level means of the image u_0(x,y) inside and outside the boundary curve C.
5. The text extraction method based on level set segmentation as claimed in claim 1, characterized in that the method for labeling the connected components of the two binarized regions by region growing in step (7) comprises:
step (7-1): scan the pixels of the region from top to bottom and from left to right; if a pixel is unlabeled, assign it a new label;
step (7-2): search the 8-neighborhood of the newly labeled pixel; if an unlabeled pixel is found in the 8-neighborhood, assign it the same label, and then search the 8-neighborhood of that newly labeled pixel in turn;
step (7-3): if no unlabeled pixel is found in the 8-neighborhood, the search for this component is finished;
step (7-4): judge whether all pixels have been labeled; if so, go to step (7-5); if not, return to step (7-1) until all pixels in the region are labeled;
step (7-5): take all pixels with the same label as one connected component.
6. The text extraction method based on level set segmentation as claimed in claim 1, characterized in that the method for filtering connected components in step (9) is:
for each connected component in the two regions, examine its position and its number of pixels, and delete the component if it touches the image boundary or if its pixel count is below a set threshold.
7. The text extraction method based on level set segmentation as claimed in claim 1, characterized in that in step (11) the method for judging the polarity of the two filtered regions is:
step (11-1): after filtering, take the pixels with the same label in each region as one connected component;
step (11-2): count the number of connected components in the two regions, denoted n_1 and n_2 respectively;
step (11-3): compare n_1 and n_2; if n_1 > n_2, the region corresponding to n_1 is the text region; otherwise the region corresponding to n_2 is the text region.
8. The text extraction method based on level set segmentation as claimed in claim 1, characterized in that in step (12) the method for further removing residual background from the determined text region is:
compute the gray-level mean of each connected component in the region and sort these means in ascending order; compute the differences between adjacent means and compare each difference with a set threshold in turn; if a difference exceeds the threshold, take it as a split position; after all differences have been examined, N split positions are obtained, dividing the components into N + 1 segments; the segment containing the largest number of pixels is taken as the text segment, its connected components are the text components, the positions they occupy form the text region, and the other regions of the image are taken as background.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title
---|---|---|---
CN201510474071.XA | 2015-08-05 | 2015-08-05 | Text extraction method based on level set segmentation
Publications (2)
Publication Number | Publication Date
---|---
CN105160300A | 2015-12-16
CN105160300B | 2018-08-21
Family
ID=54801152
Country Status (1)
Country | Link |
---|---|
CN (1) | CN105160300B (en) |
Families Citing this family (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN109754443B (en) * | 2019-01-30 | 2021-04-20 | 京东方科技集团股份有限公司 | Image data conversion method, device and storage medium |
CN112001406B (en) * | 2019-05-27 | 2023-09-08 | 杭州海康威视数字技术股份有限公司 | Text region detection method and device |
CN112749599A (en) * | 2019-10-31 | 2021-05-04 | 北京金山云网络技术有限公司 | Image enhancement method and device and server |
Citations (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN102147863A (en) * | 2010-02-10 | 2011-08-10 | 中国科学院自动化研究所 | Method for locating and recognizing letters in network animation |
CN102332097A (en) * | 2011-10-21 | 2012-01-25 | 中国科学院自动化研究所 | Method for segmenting complex background text images based on image segmentation |
CN103077391A (en) * | 2012-12-30 | 2013-05-01 | 信帧电子技术(北京)有限公司 | Automobile logo positioning method and device |
CN104091332A (en) * | 2014-07-01 | 2014-10-08 | 黄河科技学院 | Method for optimizing multilayer image segmentation of multiclass color texture images based on variation model |
Non-Patent Citations (2)
Title |
---|
Research and Application of Document Image Paragraph Segmentation Technology; Zhao Na; China Master's Theses Full-text Database (Electronic Journal), Information Science and Technology Series; 2011-04-30; I138-1156 * |
Research on License Plate Recognition Systems; Gu Yubiao; China Master's Theses Full-text Database (Electronic Journal), Information Science and Technology Series; 2014-08-31; I138-1367 * |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN105868758B (en) | method and device for detecting text area in image and electronic equipment | |
CN106780440B (en) | Destruction circuit plate relic image automatic comparison recognition methods | |
CN102968637B (en) | Complicated background image and character division method | |
CN105205488B (en) | Word area detection method based on Harris angle points and stroke width | |
JP2013235577A (en) | Character division for number plate using likelihood maximization | |
CN113158808A (en) | Method, medium and equipment for Chinese ancient book character recognition, paragraph grouping and layout reconstruction | |
CN108038481A (en) | A kind of combination maximum extreme value stability region and the text positioning method of stroke width change | |
CN105760891A (en) | Chinese character verification code recognition method | |
CN108629286A (en) | A kind of remote sensing airport target detection method based on the notable model of subjective perception | |
CN108509950B (en) | Railway contact net support number plate detection and identification method based on probability feature weighted fusion | |
CN110598581B (en) | Optical music score recognition method based on convolutional neural network | |
CN104657980A (en) | Improved multi-channel image partitioning algorithm based on Meanshift | |
CN105160300B (en) | A kind of text abstracting method based on level-set segmentation | |
CN103310435A (en) | Method for partitioning number plate characters by combining vertical projection and optimal path | |
Shi et al. | Adaptive graph cut based binarization of video text images | |
CN112819840A (en) | High-precision image instance segmentation method integrating deep learning and traditional processing | |
CN108256518B (en) | Character area detection method and device | |
Feild et al. | Scene text recognition with bilateral regression | |
CN110533049B (en) | Method and device for extracting seal image | |
Chen et al. | A knowledge-based system for extracting text-lines from mixed and overlapping text/graphics compound document images | |
Wang et al. | MRF based text binarization in complex images using stroke feature | |
Gui et al. | A fast caption detection method for low quality video images | |
Huang | A novel video text extraction approach based on Log-Gabor filters | |
Gaceb et al. | A new mixed binarization method used in a real time application of automatic business document and postal mail sorting. | |
Tian et al. | A new algorithm for license plate localization in open environment using color pair and stroke width features of character |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
C06 | Publication | ||
PB01 | Publication | ||
C10 | Entry into substantive examination | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||