CN104036272A - Text detection method and electronic device - Google Patents
- Publication number: CN104036272A
- Application number: CN201410289155.1A
- Authority
- CN
- China
- Prior art keywords
- image
- area image
- target
- character
- object area
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Landscapes
- Character Input (AREA)
- Facsimile Image Signal Circuits (AREA)
Abstract
The invention discloses a text detection method and an electronic device. The text detection method includes: determining a first target area image in a target image according to a received selection operation performed by a user on the target image; performing image background segmentation on the first target area image to obtain a second target area image within the first target area image, where the image area of the second target area image is smaller than the image area of the first target area image; performing single-character segmentation on the second target area image by a preset pixel-point scanning mode to obtain at least one target character image; and, according to the correspondence between character images and character data in a preset recognition library, obtaining the target character data corresponding to each target character image and composing the obtained target character data into text data.
Description
Technical field
The present application relates to the technical field of image processing, and in particular to a text detection method and an electronic device.
Background technology
At present, image processing techniques can extract target data such as text or numeric information from image data, for example a contact phone number or a bank account number in a picture. The extracted text or numbers can then be shared, stored, and so on, improving the user experience.
Existing schemes for extracting text or numeric information from a picture usually run on operating systems with large memory and high-end configurations, and use specific information in the picture, such as texture, color, and depth, to separate foreground from background and thereby extract the text data.
However, because these schemes must process specific information such as texture, color, and depth, their computational cost is high, which makes the extraction of text data inefficient.
Summary of the invention
The technical problem to be solved by the present application is to provide a text detection method and an electronic device, so as to solve the technical problem in the prior art that schemes which obtain target data from information such as texture or color involve a large amount of computation and therefore obtain text data inefficiently.
The present application provides a text detection method, comprising:
determining a first target area image in a target image according to a received selection operation performed by a user on the target image;
performing image background segmentation on the first target area image to obtain a second target area image within the first target area image, where the image area of the second target area image is smaller than the image area of the first target area image;
performing single-character segmentation on the second target area image by a preset pixel-point scanning mode to obtain at least one target character image;
according to the correspondence between character images and character data in a preset recognition library, obtaining the target character data corresponding to each target character image, and composing the obtained target character data into text data.
In the above method, preferably, performing image background segmentation on the first target area image to obtain the second target area image comprises:
determining the region frame of a connected region in the first target area image;
according to the background image outside the region frame in the first target area image, obtaining the second target area image in the first target area image by using the GrabCut image segmentation algorithm;
wherein the region frame separates the second target area image from the background image in the first target area image.
In the above method, preferably, determining the region frame of the connected region in the first target area image comprises:
performing grayscale processing and a binarization operation on the first target area image;
determining the region frame of the connected region in the first target area image according to the binarized first target area image.
In the above method, preferably, performing single-character segmentation on the second target area image by the preset pixel-point scanning mode to obtain at least one target character image comprises:
scanning the second target area image for blank pixel points in the preset pixel-point scanning mode;
wherein, when a blank pixel point is scanned, a first area image before the row of the scanned blank pixel point is determined and image segmentation is performed on the first area image to obtain the target character images in the first area image, and a second area image after the row of the scanned blank pixel point is determined and the scanning for blank pixel points is continued on the second area image in the preset pixel-point scanning mode.
In the above method, preferably, performing image segmentation on the first area image to obtain the target character images in the first area image comprises:
determining a first difference between the starting point and the ending point among the row pixel points of the first area image, and determining a second difference between the starting point and the ending point among the column pixel points of the first area image;
when the absolute difference between the first difference and the second difference is greater than the first difference or the second difference, performing character image segmentation in the first area image to obtain the target character images in the first area image.
The present application also provides an electronic device, the electronic device comprising:
a target image determining unit, configured to determine a first target area image in a target image according to a received selection operation performed by a user on the target image;
a background segmentation unit, configured to perform image background segmentation on the first target area image to obtain a second target area image within the first target area image, where the image area of the second target area image is smaller than the image area of the first target area image;
a character segmentation unit, configured to perform single-character segmentation on the second target area image by a preset pixel-point scanning mode to obtain at least one target character image;
a text data acquiring unit, configured to obtain, according to the correspondence between character images and character data in a preset recognition library, the target character data corresponding to each target character image, and to compose the obtained target character data into text data.
In the above electronic device, preferably, the background segmentation unit comprises:
a region frame determining subunit, configured to determine the region frame of a connected region in the first target area image;
an image segmentation subunit, configured to obtain, according to the background image outside the region frame in the first target area image, the second target area image in the first target area image by using the GrabCut algorithm;
wherein the region frame separates the second target area image from the background image in the first target area image.
In the above electronic device, preferably, the region frame determining subunit comprises:
a region operation module, configured to perform grayscale processing and a binarization operation on the first target area image;
a region frame determining module, configured to determine the region frame of the connected region in the first target area image according to the binarized first target area image.
In the above electronic device, preferably, the character segmentation unit comprises:
a pixel scanning subunit, configured to scan the second target area image for blank pixel points in the preset pixel-point scanning mode, wherein, when a blank pixel point is scanned, a first area determining subunit and a second area determining subunit are triggered;
the first area determining subunit, configured to determine the first area image before the row of the scanned blank pixel point and trigger a target segmentation subunit;
the target segmentation subunit, configured to perform image segmentation on the first area image to obtain the target character images in the first area image;
the second area determining subunit, configured to determine the second area image after the row of the scanned blank pixel point and trigger the pixel scanning subunit to continue scanning the second area image for blank pixel points in the pixel-point scanning mode.
In the above electronic device, preferably, the target segmentation subunit comprises:
a difference determining module, configured to determine a first difference between the starting point and the ending point among the row pixel points of the first area image, and to determine a second difference between the starting point and the ending point among the column pixel points of the first area image;
a character segmentation module, configured to perform, when the absolute difference between the first difference and the second difference is greater than the first difference or the second difference, character image segmentation in the first area image to obtain the target character images in the first area image.
It can be seen from the above scheme that, in the text detection method and electronic device provided by the present application, after the first target area image in the target image is determined according to the user's selection operation on the target image, image background segmentation is performed on the first target area image to obtain the second target area image within it; single-character segmentation is then performed on the second target area image to obtain at least one target character image; and the correspondence between character images and character data in the recognition library is then used to obtain the target character data corresponding to each target character image, so as to obtain the text data composed of the target character data. In the text detection process of the target image, the present application does not need to process specific information in the target image such as texture, color, or depth; instead, it completes the recognition and detection of character data, and thereby obtains the text data, by means of image segmentation and pixel-point scanning. Its computational cost is significantly lower than that of existing text data extraction schemes, which obviously improves the efficiency of obtaining text data.
Brief description of the drawings
In order to explain the technical solutions in the embodiments of the present application more clearly, the drawings needed in the description of the embodiments are briefly introduced below. Obviously, the drawings described below are only some embodiments of the present application, and those of ordinary skill in the art can obtain other drawings from them without creative effort.
Fig. 1 is a flowchart of Embodiment 1 of a text detection method provided by the present application;
Fig. 2 is a partial flowchart of Embodiment 2 of a text detection method provided by the present application;
Fig. 3 is an application example diagram of the embodiments of the present application;
Fig. 4 is a partial flowchart of Embodiment 2 of the present application;
Fig. 5 is another application example diagram of the embodiments of the present application;
Fig. 6 is a partial flowchart of Embodiment 3 of a text detection method provided by the present application;
Fig. 7 is a schematic structural diagram of Embodiment 4 of an electronic device provided by the present application;
Fig. 8 is a partial schematic structural diagram of Embodiment 5 of an electronic device provided by the present application;
Fig. 9 is another partial schematic structural diagram of Embodiment 5 of the present application;
Fig. 10 is a partial schematic structural diagram of Embodiment 6 of an electronic device provided by the present application;
Fig. 11 is another partial schematic structural diagram of Embodiment 6 of the present application.
Embodiment
The technical solutions in the embodiments of the present application are described clearly and completely below with reference to the drawings in the embodiments of the present application. Obviously, the described embodiments are only some of the embodiments of the present application, not all of them. All other embodiments obtained by those of ordinary skill in the art based on the embodiments in the present application without creative effort fall within the protection scope of the present application.
Referring to Fig. 1, which is a flowchart of Embodiment 1 of a text detection method provided by the present application, the purpose of this embodiment is to detect and obtain the text data in image data. In this embodiment, the method may comprise the following steps:
Step 101: determining a first target area image in a target image according to a received selection operation performed by a user on the target image.
Before this embodiment runs, the target image is displayed to the user in advance, and the user delimits a region of interest in the target image through a gesture selection operation, such as an image cropping operation. Thus, in step 101, the user's selection operation on the target image is recognized, and the first target area image in which the user is interested is determined in the target image.
It should be noted that, after the first target area image is determined in step 101, the first target area image may be saved as a local temporary file of the operating system on which this embodiment runs.
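As a minimal sketch of step 101, assuming the user's selection operation yields an axis-aligned rectangle (the function and parameter names below are illustrative, not from the patent), the first target area image can be cropped from the target image by array slicing:

```python
import numpy as np

def crop_first_target_area(target_image, top, left, bottom, right):
    """Return the first target area image delimited by the user's
    rectangular selection (top/left inclusive, bottom/right exclusive)."""
    return target_image[top:bottom, left:right].copy()

# Example: a 10x10 grayscale "target image"
img = np.arange(100, dtype=np.uint8).reshape(10, 10)
region = crop_first_target_area(img, 2, 3, 6, 8)
```

The `.copy()` keeps the cropped region independent of the original image, which fits the patent's note that the first target area image may be saved as a separate temporary file.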
Step 102: performing image background segmentation on the first target area image to obtain a second target area image within the first target area image.
Here, the image area of the second target area image is smaller than the image area of the first target area image.
It should be noted that images such as character text on the first target area image are distinct from the background image. Therefore, in step 102, the background image in the first target area image is first segmented away, and the resulting second target area image is the largest connected image of the character text, or a combination of connected images of multiple character texts.
Step 103: performing single-character segmentation on the second target area image by a preset pixel-point scanning mode to obtain at least one target character image.
Here, the pixel-point scanning mode may be a row or column pixel-point scanning mode that looks for blank pixel points, or for pixel points whose chroma is below a preset value.
Step 104: according to the correspondence between character images and character data in a preset recognition library, obtaining the target character data corresponding to each target character image, and composing the obtained target character data into text data.
Here, the recognition library may be the gesture recognition library provided by the operating system on which this embodiment runs. In a specific implementation of step 104, the correspondence between character images and character data in the recognition library can be retrieved through the interface functions of the gesture recognition library GestureLib, and each target character image is cyclically compared with the character images in the recognition library to obtain the target character data corresponding to each target character image. Each piece of target character data is then obtained by copying, so that operations such as copying and pasting can be performed on the text data composed of the target character data.
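The cyclic comparison in step 104 can be sketched as follows. This is a simplified illustration only: the recognition library is modeled as a list of (template image, character) pairs, and matching is a naive pixel-agreement score; the patent itself delegates this to the platform's GestureLib interface, whose exact API is not specified here.

```python
import numpy as np

def recognize_characters(char_images, recognition_library):
    """Compare each target character image against the character images in
    the recognition library and compose the matching character data into
    text data. The library is modeled as (template, character) pairs."""
    text = []
    for img in char_images:
        best_char, best_score = "?", -1.0
        for template, char in recognition_library:
            if template.shape != img.shape:
                continue
            # fraction of pixels that agree with the template
            score = float(np.mean(template == img))
            if score > best_score:
                best_char, best_score = char, score
        text.append(best_char)
    return "".join(text)

# Toy 3x3 binary "glyphs" acting as the recognition library
one = np.array([[0, 1, 0], [0, 1, 0], [0, 1, 0]], dtype=np.uint8)
seven = np.array([[1, 1, 1], [0, 0, 1], [0, 1, 0]], dtype=np.uint8)
library = [(one, "1"), (seven, "7")]
result = recognize_characters([seven, one, one], library)
```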
It can be seen from the above scheme that, in Embodiment 1 of the text detection method provided by the present application, after the first target area image in the target image is determined according to the user's selection operation on the target image, image background segmentation is performed on the first target area image to obtain the second target area image within it; single-character segmentation is then performed on the second target area image to obtain at least one target character image; and the correspondence between character images and character data in the recognition library is then used to obtain the target character data corresponding to each target character image, so as to obtain the text data composed of the target character data. In the text detection process of the target image, the embodiments of the present application do not need to process specific information in the target image such as texture, color, or depth; instead, they complete the recognition and detection of character data, and thereby obtain the text data, by means of image segmentation and pixel-point scanning. The computational cost is significantly lower than that of existing text data extraction schemes, which obviously improves the efficiency of obtaining text data.
Referring to Fig. 2, which is a flowchart of the implementation of step 102 in Embodiment 2 of a text detection method provided by the present application, step 102 may be implemented by the following steps:
Step 121: determining the region frame of a connected region in the first target area image.
Here, the region frame is the region frame of the largest connected region in the first target area image. Taking the first target area image 301 shown in Fig. 3 as an example, there is a largest connected region 302 in the first target area image 301, and the region frame 303 can be understood as the edge frame of the largest connected region 302 in the first target area image. The connected region can be understood as an area in which character images are concentrated, or a joined region.
Specifically, referring to Fig. 4, which is a flowchart of the implementation of step 121 in the embodiments of the present application, step 121 may be implemented by the following steps:
Step 401: performing grayscale processing and a binarization operation on the first target area image.
Step 402: determining the region frame of the connected region in the first target area image according to the binarized first target area image.
A specific implementation of step 401 may be: converting the first target area image into a grayscale image; operating on the converted first target area image with a center-surround histogram algorithm to obtain a saliency image of the first target area image; and then binarizing the first target area image with its average gray threshold to obtain a binarized first target area image. When the region frame of the largest connected region is determined in step 402 from the binarized first target area image, the accuracy of the region frame can be improved.
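The grayscale-plus-mean-threshold binarization of step 401 can be sketched as below. This is a minimal illustration that omits the center-surround saliency step the patent uses to sharpen the result, and the function name is an assumption for this sketch:

```python
import numpy as np

def binarize_by_mean(rgb_image):
    """Convert an RGB image to grayscale, then binarize it using its
    average gray value as the threshold (1 = brighter than average,
    0 = darker than average)."""
    gray = rgb_image.mean(axis=2)   # simple grayscale conversion
    threshold = gray.mean()         # average gray threshold
    return (gray > threshold).astype(np.uint8)

# Example: dark "character" pixels on a light background
img = np.full((4, 4, 3), 255, dtype=np.uint8)
img[1:3, 1:3] = 0                   # a dark 2x2 character blob
binary = binarize_by_mean(img)
```

In the resulting binary image the dark character blob maps to 0 and the light background to 1, after which a connected-region search can bound the character area.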
Step 122: according to the background image outside the region frame in the first target area image, obtaining the second target area image in the first target area image by using the GrabCut algorithm.
Specifically, step 122 may be: initializing the GrabCut algorithm with the background image outside the region frame in the first target area image as the initialization value, and obtaining the energy-minimizing second target area image by iteration, so as to achieve the purpose of segmenting the second target area image out of the first target area image. At this point, the region frame separates the second target image in the first target area image from the background image; as shown in Fig. 5, the region frame 303 separates the background image 304 from the second target area image 305, and the background image 304 and the second target area image 305 together form the first target area image 301.
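The initialization of step 122 can be sketched as building a label mask in which everything outside the region frame is definite background and everything inside is probable foreground. The label values below follow the convention of common GrabCut implementations such as OpenCV's `cv2.grabCut`, which could consume such a mask; the mask construction itself is shown in plain numpy:

```python
import numpy as np

# Mask labels as used by common GrabCut implementations (e.g. OpenCV)
BGD, FGD, PR_BGD, PR_FGD = 0, 1, 2, 3

def grabcut_init_mask(height, width, frame):
    """Build a GrabCut initialization mask: pixels outside the region
    frame are definite background, pixels inside are probable
    foreground. `frame` is (top, left, bottom, right)."""
    top, left, bottom, right = frame
    mask = np.full((height, width), BGD, dtype=np.uint8)
    mask[top:bottom, left:right] = PR_FGD
    return mask

mask = grabcut_init_mask(8, 8, (2, 2, 6, 6))
```

The GrabCut iterations then refine the probable-foreground labels toward the energy-minimizing segmentation described in the patent.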
Referring to Fig. 6, which is a flowchart of the implementation of step 103 in Embodiment 3 of a text detection method provided by the present application, step 103 may comprise the following steps:
Step 131: scanning the second target area image for blank pixel points in the preset pixel-point scanning mode, and performing steps 132 and 134 when a blank pixel point is scanned.
In step 131, the scanning of the whole second target area image can be completed by scanning the pixel points row by row, which is the implementation in this embodiment; the scanning of the whole second target area image can also be completed by scanning the pixel points column by column, whose specific implementation can refer to the implementation of this embodiment and is not described in detail here.
Step 132: determining the first area image before the row of the scanned blank pixel point, and performing step 133.
Step 133: performing image segmentation on the first area image to obtain the target character images in the first area image.
Specifically, step 133 may be implemented as follows:
determining a first difference between the starting point and the ending point among the row pixel points of the first area image, and a second difference between the starting point and the ending point among the column pixel points of the first area image; then comparing the absolute difference between the first difference and the second difference with the first difference and the second difference; and, when the absolute difference between the first difference and the second difference is greater than the first difference or the second difference, performing character image segmentation in the first area image to obtain the target character images in the first area image.
Step 134: determining the second area image after the row of the scanned blank pixel point, and returning to step 131 to continue scanning the second area image for blank pixel points in the pixel-point scanning mode.
The above implementation can be understood as follows: blank pixel point scanning is performed on the entire second target area image, and whenever a blank pixel point appears in the cyclic row or column scan, the second target area image as a whole is split once more. The part before the row or column where the blank pixel point is located directly proceeds to the subsequent segmentation of single character images, while the part after that row or column first continues with the pixel-point scanning mode applied to the second target area image, and single character images are segmented only after the boundaries of single characters are determined. For example, a numeric character can be approximated as a square area. When determining the boundary of a single character image, the difference between the starting point and the ending point among the pixel points continuously captured during the row scan is compared with the difference between the starting point and the ending point among the pixel points continuously captured during the column scan; if the absolute value of the difference between the two differences is much larger than one of them, the pixels are considered to contain foreground pixels of a digit. Proceeding in this way, the segmentation of the target character images in the entire image is completed.
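The blank-pixel scanning above can be sketched for the column direction as follows. This is a simplified illustration, not the patent's exact boundary criterion: characters in a binarized text line are split wherever an entire column contains no foreground pixels (the function name is illustrative):

```python
import numpy as np

def split_characters(binary_line):
    """Split a binarized text-line image (1 = character foreground) into
    single character images by scanning columns for blank pixel runs."""
    blank = binary_line.sum(axis=0) == 0   # True where a column is all blank
    chars, start = [], None
    for x, is_blank in enumerate(blank):
        if not is_blank and start is None:
            start = x                       # a character begins here
        elif is_blank and start is not None:
            chars.append(binary_line[:, start:x])  # a character ends here
            start = None
    if start is not None:                   # character runs to the edge
        chars.append(binary_line[:, start:])
    return chars

# Two 2-column "characters" separated by one blank column
line = np.array([[1, 1, 0, 1, 1],
                 [1, 1, 0, 1, 1]], dtype=np.uint8)
parts = split_characters(line)
```

Scanning rows instead of columns, as the patent's preferred implementation does, is the same loop applied to `binary_line.sum(axis=1)`.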
Referring to Fig. 7, which is a schematic structural diagram of Embodiment 4 of an electronic device provided by the present application, the electronic device may comprise the following structure:
a target image determining unit 701, configured to determine a first target area image in a target image according to a received selection operation performed by a user on the target image.
Before this embodiment runs, the target image is displayed to the user in advance, and the user delimits a region of interest in the target image through a gesture selection operation, such as an image cropping operation. Thus, the target image determining unit 701 recognizes the user's selection operation on the target image and determines the first target area image in which the user is interested in the target image.
It should be noted that, after the target image determining unit 701 determines the first target area image, the first target area image may be saved as a local temporary file of the operating system on which this embodiment runs.
a background segmentation unit 702, configured to perform image background segmentation on the first target area image to obtain a second target area image within the first target area image.
Here, the image area of the second target area image is smaller than the image area of the first target area image.
It should be noted that images such as character text on the first target area image are distinct from the background image. Therefore, the background segmentation unit 702 first segments away the background image in the first target area image, and the resulting second target area image is the largest connected image of the character text, or a combination of connected images of multiple character texts.
a character segmentation unit 703, configured to perform single-character segmentation on the second target area image by a preset pixel-point scanning mode to obtain at least one target character image.
Here, the pixel-point scanning mode may be a row or column pixel-point scanning mode that looks for blank pixel points, or for pixel points whose chroma is below a preset value.
a text data acquiring unit 704, configured to obtain, according to the correspondence between character images and character data in a preset recognition library, the target character data corresponding to each target character image, and to compose the obtained target character data into text data.
Here, the recognition library may be the gesture recognition library provided by the operating system on which this embodiment runs. In a specific implementation of the text data acquiring unit 704, the correspondence between character images and character data in the recognition library can be retrieved through the interface functions of the gesture recognition library GestureLib, and each target character image is cyclically compared with the character images in the recognition library to obtain the target character data corresponding to each target character image. Each piece of target character data is then obtained by copying, so that operations such as copying and pasting can be performed on the text data composed of the target character data.
It can be seen from the above scheme that, in Embodiment 4 of the electronic device provided by the present application, after the first target area image in the target image is determined according to the user's selection operation on the target image, image background segmentation is performed on the first target area image to obtain the second target area image within it; single-character segmentation is then performed on the second target area image to obtain at least one target character image; and the correspondence between character images and character data in the recognition library is then used to obtain the target character data corresponding to each target character image, so as to obtain the text data composed of the target character data. In the text detection process of the target image, the embodiments of the present application do not need to process specific information in the target image such as texture, color, or depth; instead, they complete the recognition and detection of character data, and thereby obtain the text data, by means of image segmentation and pixel-point scanning. The computational cost is significantly lower than that of existing text data extraction schemes, which obviously improves the efficiency of obtaining text data.
Referring to Fig. 8, which is a schematic structural diagram of the background segmentation unit 702 in Embodiment 5 of an electronic device provided by the present application, the background segmentation unit 702 may comprise the following structure:
a region frame determining subunit 721, configured to determine the region frame of a connected region in the first target area image.
Here, the region frame is the region frame of the largest connected region in the first target area image. Taking the first target area image 301 shown in Fig. 3 as an example, there is a largest connected region 302 in the first target area image 301, and the region frame 303 can be understood as the edge frame of the largest connected region 302 in the first target area image. The connected region can be understood as an area in which character images are concentrated, or a joined region.
Specifically, referring to Fig. 9, which is a schematic structural diagram of the region frame determining subunit 721 in the embodiments of the present application, the region frame determining subunit 721 may comprise the following modules:
a region operation module 901, configured to perform grayscale processing and a binarization operation on the first target area image;
a region frame determining module 902, configured to determine the region frame of the connected region in the first target area image according to the binarized first target area image.
A specific implementation of the region operation module 901 may be: converting the first target area image into a grayscale image; operating on the converted first target area image with a center-surround histogram algorithm to obtain a saliency image of the first target area image; and then binarizing the first target area image with its average gray threshold to obtain a binarized first target area image. Afterwards, when the region frame determining module 902 determines the region frame of the largest connected region from the binarized first target area image, the accuracy of the region frame can be improved.
An image segmentation subunit 722, configured to obtain the second target area image in the first target area image using the GrabCut algorithm, according to the background image outside the region frame in the first target area image.
Specifically, the image segmentation subunit 722 may operate as follows: the GrabCut algorithm is initialized with the background image outside the region frame of the first target area image as the initial value, and the second target area image that minimizes the segmentation energy is then obtained by iteration, thereby segmenting the second target area image out of the first target area image. At this point the region frame separates the second target area image from the background image within the first target area image. As shown in FIG. 5, the region frame 303 separates the background image 304 from the second target area image 305, and the background image 304 and the second target area image 305 together form the first target area image 301.
With reference to FIG. 10, which is a schematic structural diagram of the character segmentation unit 703 in a sixth electronic-device embodiment provided by the present application, the character segmentation unit 703 may include the following structures:
A pixel scanning subunit 731, configured to perform blank-pixel-point scanning on the second target area image in a preset pixel-point scanning mode, wherein, when a blank pixel point is scanned, a first area determining subunit 732 and a second area determining subunit 734 are triggered.
The pixel scanning subunit 731 may complete the scan of the whole second target area image by scanning the pixel points row by row, as in the implementation of this embodiment; it may equally complete the scan by scanning the pixel points column by column, whose specific implementation follows that of this embodiment and is not described again here.
A first area determining subunit 732, configured to determine the first area image located before the row in which the scanned blank pixel point lies, and to trigger a target segmentation subunit 733.
The target segmentation subunit 733, configured to perform image segmentation on the first area image to obtain the target character images in the first area image.
Specifically, with reference to FIG. 11, which is a schematic structural diagram of the target segmentation subunit 733 in this embodiment of the application, the target segmentation subunit 733 may include the following modules:
A difference determining module 1101, configured to determine the first difference between the starting point and the ending point in the row pixels of the first area image, and to determine the second difference between the starting point and the ending point in the column pixels of the first area image.
A character segmentation module 1102, configured to perform character image segmentation in the first area image when the absolute value of the difference between the first difference and the second difference is greater than the first difference or the second difference, to obtain the target character images in the first area image.
A second area determining subunit 734, configured to determine the second area image located after the row in which the scanned blank pixel point lies, and to trigger the pixel scanning subunit 731 to continue blank-pixel-point scanning of the second area image in the preset pixel-point scanning mode.
The implementation of the character segmentation unit 703 described above can be understood as follows: the character segmentation unit 703 performs blank-pixel-point scanning on the entire second target area image. Whenever a blank pixel point appears during the row-by-row (or column-by-column) scan, the whole second target area image is split once more: the part before the row (or column) containing the blank pixel point proceeds directly to single-character-image segmentation, while the part after that row (or column) is first scanned again in the same pixel-point scanning mode, and single-character-image segmentation is performed only after the boundaries of the single characters have been determined. For example, a numeric character can be approximated as a square area. When determining the boundary of a single character image, the difference between the starting and ending points of the pixels captured during the continued row scan is compared with the difference between the starting and ending points of the pixels captured during the column scan; if the absolute value of the difference between these two differences is much larger than either of them, the area is considered to contain foreground pixel points of a digit. Proceeding in this way, the segmentation of all target character images in the entire image is completed.
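The blank-scan-line splitting idea above can be sketched as follows. This simplified version scans columns rather than rows (the embodiment notes that either orientation works), splits a binarized text line at blank scan lines only, and omits the row/column difference criterion for character boundaries; the function name `split_characters` and the boolean-mask input format are assumptions of this sketch:

```python
import numpy as np

def split_characters(binary):
    """Split a binarized text-line image into single-character images by
    scanning for blank (all-background) pixel columns.

    binary: 2-D bool array, True = foreground (character) pixel.
    Returns a list of sub-arrays, one per character, left to right."""
    occupied = binary.any(axis=0)   # per column: does any foreground pixel exist?
    characters, start = [], None
    for x, filled in enumerate(occupied):
        if filled and start is None:
            start = x                              # a character begins here
        elif not filled and start is not None:
            characters.append(binary[:, start:x])  # blank column ends it
            start = None
    if start is not None:                          # character touching the edge
        characters.append(binary[:, start:])
    return characters
```

In the patented scheme, each piece produced by such a split would then be checked with the row/column start-to-end difference criterion before being accepted as a single character image.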
It should be noted that the embodiments in this specification are described in a progressive manner; each embodiment focuses on its differences from the other embodiments, and for the identical or similar parts the embodiments may refer to one another.
Finally, it should also be noted that, in this document, relational terms such as "first" and "second" are used only to distinguish one entity or operation from another, and do not necessarily require or imply any such actual relationship or order between those entities or operations. Moreover, the terms "comprise", "include" or any other variant thereof are intended to cover non-exclusive inclusion, so that a process, method, article or device comprising a series of elements includes not only those elements but also other elements not expressly listed, or elements inherent to such a process, method, article or device. In the absence of further limitation, an element defined by the statement "comprising a ..." does not exclude the presence of other identical elements in the process, method, article or device that comprises that element.
The text detection method and the electronic device provided by the present application have been described in detail above. Specific examples have been used herein to explain the principles and embodiments of the application, and the description of the above embodiments is intended only to help understand the method of the application and its core idea. Meanwhile, a person of ordinary skill in the art may, in accordance with the idea of the application, make changes to the specific embodiments and the scope of application. In summary, the content of this description should not be construed as a limitation on the application.
Claims (10)
1. A text detection method, characterized by comprising:
determining a first target area image in a target image according to a received selection operation performed by a user on the target image;
performing image background segmentation on the first target area image to obtain a second target area image in the first target area image, wherein the image area of the second target area image is smaller than the image area of the first target area image;
performing single-character segmentation on the second target area image in a preset pixel-point scanning mode to obtain at least one target character image;
obtaining, according to a correspondence between character images and character data in a preset recognition library, the target character data corresponding to each of the target character images, and composing text data from the obtained target character data.
2. The method according to claim 1, characterized in that performing image background segmentation on the first target area image to obtain the second target area image comprises:
determining a region frame of a connected region in the first target area image;
obtaining, according to the background image outside the region frame in the first target area image, the second target area image in the first target area image using the GrabCut image segmentation algorithm;
wherein the region frame separates the second target area image from the background image in the first target area image.
3. The method according to claim 2, characterized in that determining the region frame of the connected region in the first target area image comprises:
performing gray-scale processing and a binarization operation on the first target area image;
determining, according to the binarized first target area image, the region frame of the connected region in the first target area image.
4. The method according to claim 1, characterized in that performing single-character segmentation on the second target area image in the preset pixel-point scanning mode to obtain at least one target character image comprises:
performing blank-pixel-point scanning on the second target area image in the preset pixel-point scanning mode;
wherein, when a blank pixel point is scanned, the first area image located before the row in which the scanned blank pixel point lies is determined, image segmentation is performed on the first area image to obtain the target character images in the first area image, the second area image located after the row in which the scanned blank pixel point lies is determined, and blank-pixel-point scanning of the second area image is continued in the preset pixel-point scanning mode.
5. The method according to claim 4, characterized in that performing image segmentation on the first area image to obtain the target character images in the first area image comprises:
determining a first difference between a starting point and an ending point in the row pixels of the first area image, and determining a second difference between a starting point and an ending point in the column pixels of the first area image;
performing character image segmentation in the first area image when the absolute value of the difference between the first difference and the second difference is greater than the first difference or the second difference, to obtain the target character images in the first area image.
6. An electronic device, characterized in that the electronic device comprises:
a target image determining unit, configured to determine a first target area image in a target image according to a received selection operation performed by a user on the target image;
a background segmentation unit, configured to perform image background segmentation on the first target area image to obtain a second target area image in the first target area image, wherein the image area of the second target area image is smaller than the image area of the first target area image;
a character segmentation unit, configured to perform single-character segmentation on the second target area image in a preset pixel-point scanning mode to obtain at least one target character image;
a text data acquiring unit, configured to obtain, according to a correspondence between character images and character data in a preset recognition library, the target character data corresponding to each of the target character images, and to compose text data from the obtained target character data.
7. The electronic device according to claim 6, characterized in that the background segmentation unit comprises:
a region frame determining subunit, configured to determine a region frame of a connected region in the first target area image;
an image segmentation subunit, configured to obtain, according to the background image outside the region frame in the first target area image, the second target area image in the first target area image using the GrabCut algorithm;
wherein the region frame separates the second target area image from the background image in the first target area image.
8. The electronic device according to claim 7, characterized in that the region frame determining subunit comprises:
a region operation module, configured to perform gray-scale processing and a binarization operation on the first target area image;
a region frame determining module, configured to determine, according to the binarized first target area image, the region frame of the connected region in the first target area image.
9. The electronic device according to claim 6, characterized in that the character segmentation unit comprises:
a pixel scanning subunit, configured to perform blank-pixel-point scanning on the second target area image in a preset pixel-point scanning mode, wherein, when a blank pixel point is scanned, a first area determining subunit and a second area determining subunit are triggered;
the first area determining subunit, configured to determine the first area image located before the row in which the scanned blank pixel point lies, and to trigger a target segmentation subunit;
the target segmentation subunit, configured to perform image segmentation on the first area image to obtain the target character images in the first area image;
the second area determining subunit, configured to determine the second area image located after the row in which the scanned blank pixel point lies, and to trigger the pixel scanning subunit to continue blank-pixel-point scanning of the second area image in the preset pixel-point scanning mode.
10. The electronic device according to claim 9, characterized in that the target segmentation subunit comprises:
a difference determining module, configured to determine a first difference between a starting point and an ending point in the row pixels of the first area image, and to determine a second difference between a starting point and an ending point in the column pixels of the first area image;
a character segmentation module, configured to perform character image segmentation in the first area image when the absolute value of the difference between the first difference and the second difference is greater than the first difference or the second difference, to obtain the target character images in the first area image.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201410289155.1A CN104036272A (en) | 2014-06-24 | 2014-06-24 | Text detection method and electronic device |
Publications (1)
Publication Number | Publication Date |
---|---|
CN104036272A true CN104036272A (en) | 2014-09-10 |
Legal Events

Code | Title | Description
---|---|---
C06 / PB01 | Publication | Application publication date: 2014-09-10
C10 / SE01 | Entry into force of request for substantive examination |
RJ01 | Rejection of invention patent application after publication |