CN103577816A - Image processing device and image processing method - Google Patents


Info

Publication number
CN103577816A
CN103577816A CN201210258772.6A
Authority
CN
China
Prior art keywords
unit
intersection point
image
edge
color
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201210258772.6A
Other languages
Chinese (zh)
Inventor
张庆久
乐宁
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Sharp Corp
Original Assignee
Sharp Corp
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Sharp Corp filed Critical Sharp Corp
Priority to CN201210258772.6A priority Critical patent/CN103577816A/en
Publication of CN103577816A publication Critical patent/CN103577816A/en
Pending legal-status Critical Current

Links

Images

Landscapes

  • Image Analysis (AREA)

Abstract

The invention relates to an image processing device and an image processing method. The image processing device comprises an input unit for inputting an image, a detection unit for detecting color-marked portions in the input image, a determining unit for determining closed regions on the image formed by edges defined by the color-marked portions and their intersection points, and an extraction unit for extracting the parts of the image inside the determined closed regions. The determining unit comprises: a unit that finds the edges defined by the color-marked portions and the intersection points of those edges; a unit that records, for each intersection point, the number of edges extending from it as its degree; a unit that finds closed regions formed by connecting, via edges, intersection points whose degree is 2 or more; a unit that decrements the degrees of the intersection points contained in each closed region found; and a unit that removes intersection points whose degree is 1 or less from further processing.

Description

Image processing apparatus and image processing method
Technical field
The present invention relates to an image processing apparatus and an image processing method.
Background technology
A user sometimes circles items of interest on a paper original with a marker pen. If such an original is scanned into a device, and the device identifies the position and coordinates of the color-marked portion contained in the input image and extracts only the image region enclosed by the color mark as a partial image, it can present only the information useful to the user; such a device is therefore convenient. Devices of this kind are disclosed in Patent Documents 1 to 3.
Patent Document 1 (US6351559B1) discloses the following technique:
"A region enclosed by the user is extracted from a scanned original image. A detection-and-analysis filter removes small characters and charts that resemble the user's annotation marks."
Patent Document 2 (JP H05-328101) discloses the following technique:
"An image of a region enclosed by a marker is extracted from an original, and designated image processing is applied to the extracted image."
Patent Document 3 (JP 2000-115522) discloses the following technique:
"When the enclosed region cannot be identified because the enclosing line is unclear, the line is thickened or interpolated."
However, if the position and coordinates of the color-marked portion are not identified accurately, usability suffers. In particular, when the color-marked region contains a plurality of enclosing frames, parts of those frames may overlap (see Fig. 4), and such cases are prone to misidentification.
Summary of the invention
The object of the present invention is to provide an easy-to-use device that suppresses misidentification when recognizing enclosing-frame regions in an image, and that can identify a region correctly even when part of a mark line is missing.
To achieve this object, the image processing apparatus according to the present invention comprises an input unit that inputs an image, a detection unit that detects color-marked portions in the input image, a determining unit that determines closed regions on the image formed by edges defined by the color-marked portions and their intersection points, and an extraction unit that extracts the image inside the determined closed regions. The image processing apparatus is characterized in that the determining unit comprises: a unit that finds the edges defined by the color-marked portions and their intersection points; a unit that records, for each intersection point, the number of edges extending from it as its degree; a unit that finds closed regions formed by linking, via edges, intersection points whose degree is 2 or more; a unit that decrements the degrees of the intersection points contained in each closed region found; and a unit that removes intersection points whose degree is 1 or less from further processing.
Further, in the above image processing apparatus, the determining unit comprises: a unit that extracts the horizontal and vertical lines of the color-marked portions; a unit that removes lines that do not satisfy a size threshold; and a unit that links nearby lines.
In addition, to achieve this object, the image processing method according to the present invention comprises an input step of inputting an image, a detection step of detecting color-marked portions in the input image, a determining step of determining closed regions on the image formed by edges defined by the color-marked portions and their intersection points, and an extraction step of extracting the image inside the determined regions. The determining step comprises: a step of detecting the edges defined by the color-marked portions and their intersection points; a step of recording, for each intersection point, the number of edges extending from it as its degree; a step of finding closed regions formed by connecting, via edges, intersection points whose degree is 2 or more; a step of decrementing the degrees of the intersection points contained in each closed region found; and a step of removing intersection points whose degree is 1 or less from further processing.
Brief description of the drawings
Fig. 1 shows the overall system configuration.
Fig. 2 shows an overview of the processing in the image processing apparatus.
Fig. 3 shows the details of the binarization step.
Fig. 4 shows an example of an input image.
Fig. 5 shows the image after binarization of all pixels of the input image of Fig. 4.
Fig. 6 shows the details of the line detection step.
Fig. 7A shows the extraction of horizontal and vertical runs.
Fig. 7B shows the generation of linked components.
Fig. 7C shows the generation of line segments from linked components.
Fig. 7D shows the edge list.
Fig. 7E shows the intersection-point list.
Fig. 8 shows an example of the graph generated from the image shown in Fig. 4.
Fig. 9 shows the course of processing in the candidate region extraction step.
Fig. 10 shows the enclosed region surrounded by a1 to a4.
Fig. 11 shows the enclosed region surrounded by a2, a3, a5, a6 and a7.
Fig. 12 shows the separation of a character region and a photograph region.
Embodiment
Fig. 1 shows the overall system configuration.
The system comprises: an original reading device (corresponding to the image input unit) that reads an original and outputs image data to an information processing device; and an image processing apparatus that extracts, from the input image, the image of the region enclosed by the color mark. According to the user's request, the image extracted by the image processing unit can be edited with other image editing software or within the image processing unit, printed on a printing device, sent to the information processing device, or saved to an information recording medium.
As further examples of the image input device: when an image is sent to the image processing apparatus via a communication line, the communication interface corresponds to the image input device; when an image stored on a recording medium such as a disc is input, a recording-medium reading device such as a CD drive corresponds to the image input device.
Fig. 2 shows an overview of the processing in the image processing apparatus.
The input image is processed in the following order: 1. binarization step, 2. line detection step, 3. graph construction step, 4. candidate region extraction step, 5. region separation step.
Fig. 3 shows the details of the binarization step.
For each pixel ρ contained in the input image, a difference calculation step computes the differences among the R, G and B values (the R−G difference, the G−B difference and the R−B difference), and a difference evaluation step evaluates them. When any difference exceeds a threshold, the pixel is detected as a color-mark point, and "1" is set as the binarized value ρ' corresponding to pixel ρ. When all the differences are at or below the threshold, the pixel is judged to be gray rather than a color-mark point, and in this case "0" is set as the binarized value ρ' corresponding to pixel ρ.
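The channel-difference test above can be sketched as follows. This is a minimal illustration, not the patented implementation; the threshold value of 40 and the function names are assumptions, since the text does not give a concrete threshold.

```python
THRESHOLD = 40  # assumed value; the text does not specify one

def binarize_pixel(r, g, b, threshold=THRESHOLD):
    """Return 1 if the pixel counts as a color-mark point, else 0.
    Gray pixels have nearly equal R, G and B, so all differences are small."""
    diffs = (abs(r - g), abs(g - b), abs(r - b))
    return 1 if max(diffs) > threshold else 0

def binarize_image(pixels):
    """pixels: 2-D list of (R, G, B) tuples -> 2-D list of 0/1 values."""
    return [[binarize_pixel(r, g, b) for (r, g, b) in row] for row in pixels]
```

A saturated color such as (200, 50, 50) binarizes to 1, while neutral gray (128, 128, 128) binarizes to 0.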
Fig. 4 shows an example of an input image. The dotted lines in the image were written with a color marker pen and form enclosing lines based on the color mark.
Fig. 5 shows the image after binarization of all pixels of the input image of Fig. 4. Parts where ρ' is set to 1 appear white, and parts where ρ' is set to 0 appear black.
As a method other than the above, a specific color can also be detected as the color mark according to a user setting. In this case, the pixel is judged to be a color mark when, for example, the R, G and B values each fall within the specified range representing the designated color.
However, since the above processing takes time, the input image can first be temporarily converted to a lower-resolution image (for example, from 300 dpi to 100 dpi) before the whole processing starts. Once the position and coordinates of the enclosed region have been determined, they are simply applied to the coordinate positions in the input image at the original resolution, and the image of the enclosed region is extracted.
Fig. 6 shows the details of the line detection step.
Processing proceeds in the order A to E below.
A. Extraction of horizontal and vertical runs (run lengths) (see Fig. 7A)
Horizontal runs and vertical runs are extracted separately from the binary image. A known method can be used to generate the runs; one example follows.
Extraction of horizontal runs
A horizontal run is a row of pixels whose binary value is continuously "1" in the horizontal direction. To extract horizontal runs, scanning proceeds rightward from the pixel at the upper-left coordinate (0, 0) of the binary image. Taking the first "1" pixel as a starting point, when its right-hand neighbors are continuously "1" and the run length reaches a defined threshold or more, the pixels are extracted as a horizontal run. That is, runs whose length does not reach the threshold are set to "0". The threshold is, for example, 8 pixels.
After the scan of the top row completes, the next row is scanned, and so on until the scan of the bottom row finishes. In preparation for the following processing, the XY coordinates of both ends of each run are recorded in a list.
By this processing, from the state at the upper left of Fig. 7A, only horizontal runs of 8 pixels or more are extracted, as shown at the upper right of Fig. 7A.
Extraction of vertical runs
Similarly, a vertical run is a column of pixels whose binary value is continuously "1" in the vertical direction. As with horizontal runs, scanning proceeds downward from the pixel at coordinate (0, 0) of the binary image; taking the first "1" pixel as a starting point, when the pixels adjacent below are continuously "1" for the defined threshold or more, they are extracted as a vertical run. The threshold is likewise 8 pixels.
After the scan of the leftmost column completes, the column to its right is scanned, and so on until the scan of the rightmost column finishes. In preparation for the following processing, the XY coordinates of both ends of each run are recorded in a list.
By this processing, from the state at the upper left of Fig. 7A, only vertical runs of 8 pixels or more are extracted, as shown at the lower right of Fig. 7A.
Processing as described above effectively removes unwanted diagonal color-marked portions.
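The run extraction of step A can be sketched as follows, using the 8-pixel minimum stated above. The function names and the list-of-lists image representation are assumptions of this sketch.

```python
MIN_RUN = 8  # minimum run length in pixels, as stated in the text

def horizontal_runs(binary, min_run=MIN_RUN):
    """Extract horizontal runs of 1-pixels of at least min_run length.
    Returns a list of (y, x_start, x_end) with inclusive endpoints."""
    runs = []
    for y, row in enumerate(binary):
        x = 0
        while x < len(row):
            if row[x] == 1:
                start = x
                while x < len(row) and row[x] == 1:
                    x += 1
                if x - start >= min_run:          # shorter runs are discarded
                    runs.append((y, start, x - 1))
            else:
                x += 1
    return runs

def vertical_runs(binary, min_run=MIN_RUN):
    """Vertical runs, via transposition; returns (x, y_start, y_end)."""
    transposed = [list(col) for col in zip(*binary)]
    return horizontal_runs(transposed, min_run)
```

Runs at or above the second threshold of step B (500 pixels) can then be dropped with a simple filter such as `[r for r in runs if r[2] - r[1] + 1 < 500]`.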
B. Removal of runs longer than a set length
Since the length of each recorded run can be calculated from its XY coordinates, a run whose length is at or above a second threshold is removed from the run list. The second threshold is, for example, 500 pixels.
C. Generation of linked components (see Fig. 7B)
A known method can be used to generate the linked components.
For example, when generating linked components from horizontal runs: for a run on the row immediately below the row of interest, if the X coordinates of its two ends fall within a prescribed range of the X coordinates of the two ends of the horizontal run on the row of interest, the two horizontal runs are regarded as linked in the Y direction. The prescribed range can be set to, for example, 3 pixels. Shifting the row of interest one row at a time in this way, the linking portions of all horizontal runs are detected in the X direction and recorded as a list.
In the example of Fig. 7B, the horizontal runs contained in the coordinate range (X1, Y1) to (X2, Y2) have end X coordinates within the prescribed range of one another, so they are recorded in the list as one horizontal linked component CP1.
Horizontal linked components are thus generated by checking above and below a row of interest extending in the horizontal direction; linked components of vertical runs are likewise generated by checking left and right of a column of interest extending in the vertical direction, and are recorded as a list.
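The Y-direction linking of step C might be sketched as follows, with the 3-pixel tolerance from the text. The greedy single-pass grouping is a simplification of the known labeling methods the text alludes to, and the function name is an assumption.

```python
X_TOL = 3  # endpoint tolerance in pixels, as in the text

def link_horizontal_runs(runs, x_tol=X_TOL):
    """Group horizontal runs (y, x1, x2) into linked components.
    Two runs link when they sit on adjacent rows and both end X
    coordinates agree within x_tol. Returns a list of components,
    each a list of runs ordered top to bottom."""
    components = []
    for run in sorted(runs):                 # sorted by y, then x
        y, x1, x2 = run
        for comp in components:
            py, px1, px2 = comp[-1]          # bottom run of the component
            if y - py == 1 and abs(x1 - px1) <= x_tol and abs(x2 - px2) <= x_tol:
                comp.append(run)
                break
        else:
            components.append([run])         # start a new component
    return components
```

Two vertically adjacent runs with nearly equal extents fall into one component, while a run several rows away starts a new one.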
D. Generation of line segments from linked components (see Fig. 7C)
Line segments are generated separately from the horizontal linked components and from the vertical linked components.
Since a linked component usually has thickness, each horizontal linked component is collapsed into a single line segment: its vertical extent Y1 to Y2 is reduced to the middle value Y3, and the segment is saved in a list under a name LN1 to LNn with end coordinates (X1, Y3) and (X2, Y3).
Line segments are generated from the vertical linked components in the same way.
E. Search for missing lines
Among the segments, lines whose ends are close to each other are joined. For example, the near ends of two horizontal segments whose Y coordinates are approximately equal and whose X coordinates are close are joined; likewise, the near ends of two vertical segments whose X coordinates are approximately equal and whose Y coordinates are close are joined. Segments joined end-to-end in this way are registered in the list as one new line.
The horizontal and vertical line segments generated in step D, together with the new lines registered by the joining above, all become edges of the graph in the following graph construction step, and their edge names E1 to En and coordinates are registered in the edge list (see Fig. 7D).
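Steps D and E can be sketched together: collapsing a linked component to its vertical midline, then bridging small gaps between nearly collinear segments. The `gap` tolerance is an assumption — the text gives no concrete value for how close "close" endpoints must be.

```python
def component_to_segment(comp):
    """Step D: collapse a horizontal linked component (list of runs
    (y, x1, x2)) to one segment at the component's vertical midline."""
    ys = [y for (y, _, _) in comp]
    x1 = min(x for (_, x, _) in comp)
    x2 = max(x for (_, _, x) in comp)
    y_mid = (min(ys) + max(ys)) // 2
    return (x1, y_mid), (x2, y_mid)

def join_horizontal_segments(segments, gap=10):
    """Step E: join horizontal segments ((x1, y), (x2, y)) whose rows
    nearly coincide and whose facing ends are within `gap` pixels
    (gap is an assumed tolerance)."""
    segments = sorted(segments)
    merged = [segments[0]]
    for (s, e) in segments[1:]:
        (ps, pe) = merged[-1]
        if abs(s[1] - pe[1]) <= 1 and 0 <= s[0] - pe[0] <= gap:
            merged[-1] = (ps, (max(e[0], pe[0]), pe[1]))  # bridge the gap
        else:
            merged.append((s, e))
    return merged
```

Two segments on the same row separated by a 4-pixel gap — e.g. a dashed marker stroke — merge into one edge.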
3. The graph construction step in detail is as follows.
Referring to the edge list, points where the end coordinates of a plurality of edges coincide are extracted as intersection points. Each intersection point is then registered in the intersection-point list together with the edges whose end coordinates coincide there and their number (the degree). The intersection-point list is generated in this way (see Fig. 7E and S010 of Fig. 9).
The graph is defined by the intersection-point list and the edge list.
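Building the intersection-point list from the edge list might look like the following sketch; the tuple-keyed dictionary representation and the function name are assumptions.

```python
from collections import defaultdict

def build_graph(edges):
    """Build the intersection-point list from an edge list.
    edges: list of (name, p1, p2) where p1/p2 are endpoint coordinates.
    Returns {point: [edge names]}; the degree of an intersection point
    is the length of its edge-name list. Points where only one edge
    ends are not intersection points and are dropped."""
    vertices = defaultdict(list)
    for name, p1, p2 in edges:
        vertices[p1].append(name)
        vertices[p2].append(name)
    return {p: es for p, es in vertices.items() if len(es) >= 2}
```

For three edges meeting at one point, that point is registered once with degree 3, matching how a2 and a3 are recorded in the S010 example below.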
Fig. 8 shows, as an example, the graph generated from the image shown in Fig. 4.
Fig. 9 shows the course of processing in the candidate region extraction step.
S010: the state of the graph before processing. For each intersection point a1 to a7, the number of edges extending from the point is recorded as its degree. Two edges extend from each of a1, a4, a5, a6 and a7, so their degree is 2; three edges extend from each of a2 and a3, so their degree is 3.
Before the processing below starts, intersection points whose degree is 1 are removed from the graph in advance.
S100: an intersection point is selected from the graph as the starting point of a circuit. For example, of all the intersection points, the one that is leftmost and, among those, uppermost — here a1 — is selected as the starting point.
Traversal first proceeds rightward from starting point a1 along an edge to the next intersection point, a2. Each time the next intersection point is reached, 1 is subtracted from its degree, and traversal either continues straight ahead to the next intersection point or, if no edge lies straight ahead, turns 90° to the right and moves to the next intersection point. Whether an edge lies in the direction of travel can be determined by referring to the "extending edges" registered in the intersection-point list and obtaining each edge's direction from the coordinates registered in the edge list.
For example, since traversal cannot continue straight rightward upon reaching intersection point a2 from a1, the direction is changed by 90° and traversal moves down to intersection point a3.
S110: moving between intersection points in this way, the circuit a1 → a2 → a3 → a4 completes and returns to the starting point a1, and the image shown in Fig. 10 is extracted as the enclosed region surrounded by a1 to a4.
S120: each time a circuit completes, intersection points whose degree has become 1 are removed from the graph. Here, the degrees of a1 and a4 have become 1, so they are removed, leaving intersection points a2, a3, a5, a6 and a7.
S130: an intersection point is again selected from the graph as the starting point of a circuit. Of all the intersection points, the one that is leftmost and, among those, uppermost — a5 — is selected.
The circuit is traced again, and the image shown in Fig. 11 is extracted as the enclosed region surrounded by a2, a3, a5, a6 and a7.
S140: the degrees of a5, a6, a3, a7 and a2 have become 1, so these intersection points are removed; since no intersection points remain to be processed, processing completes.
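The degree bookkeeping of S010 to S140 can be sketched as follows, with the Fig. 9 example as input. The actual circuit tracing by rightward 90° turns is not reproduced here; the loops found by the traversal are passed in as given, and the function name is an assumption.

```python
def extract_regions(degrees, loops):
    """Degree bookkeeping of the candidate region extraction step.
    degrees: {intersection name: edge count}; loops: closed circuits
    found by the traversal, in order. Each accepted loop lowers the
    degree of its points by 1, after which points with degree <= 1
    are removed from the graph."""
    degrees = dict(degrees)
    extracted = []
    for loop in loops:
        if not all(degrees.get(p, 0) >= 2 for p in loop):
            continue  # loop touches an already-removed point; skip it
        extracted.append(loop)
        for p in loop:
            degrees[p] -= 1
        for p in [p for p, d in degrees.items() if d <= 1]:
            del degrees[p]
    return extracted, degrees
```

On the Fig. 9 graph, the first loop a1–a4 removes a1 and a4, the second loop removes the remaining five points, and processing ends with an empty graph — exactly the S110 to S140 sequence above.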
5. The region separation step in detail is as follows.
In the region separation step, character regions and other (photograph) regions are identified and separated.
A known method can be used for the identification. For example, in a character image the density difference between text pixels and background pixels is large, so the region contains positions (edge portions) with large density differences between pixels; in a photograph image the density varies gently, so the density differences between pixels in the region are small.
Fig. 12 shows an example. For simplicity, processing is assumed to use a gray scale (256 levels). Assuming the color image has been converted to a 256-level gray-scale image, when the density difference between the maximum-density pixel and the minimum-density pixel within an 8 × 8 region is larger than a set value, that 8 × 8 region is judged to be a character image region. The set value is, for example, 150. In the example on the left of Fig. 12, the pixels contained in the 8 × 8 region have a maximum density of 255 and a minimum density of 0; the density difference exceeds 150, so the region is judged to be a character image region. In the example on the right of Fig. 12, the maximum density is 200 and the minimum density is 160; the density difference of 40 does not exceed 150, so the region is judged to be a photograph image region. Character regions and photograph regions are identified in this way, and from the image extracted in candidate region extraction step 4, only the image of the character regions is finally output.
By such processing, even if the input image contains a color photograph, the defect of the photograph being extracted as a region enclosed by the marker can be suppressed.
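The 8 × 8 block test of the region separation step can be sketched directly from the numbers in the text (threshold 150, and the two Fig. 12 examples); the function name is an assumption.

```python
DIFF_THRESHOLD = 150  # set value given in the text
BLOCK = 8             # region side length in pixels

def classify_block(block):
    """Classify an 8x8 gray-scale block (values 0-255): 'text' when the
    max-min density difference exceeds the threshold, else 'photo'."""
    flat = [v for row in block for v in row]
    return "text" if max(flat) - min(flat) > DIFF_THRESHOLD else "photo"
```

A block containing densities 255 and 0 (difference 255) classifies as text, while a block ranging from 160 to 200 (difference 40) classifies as photo, matching the two Fig. 12 examples.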

Claims (3)

1. An image processing apparatus comprising an input unit that inputs an image, a detection unit that detects color-marked portions in the input image, a determining unit that determines closed regions on the image formed by edges defined by the color-marked portions and their intersection points, and an extraction unit that extracts the image inside the determined closed regions, the image processing apparatus being characterized in that
the determining unit comprises:
a unit that finds the edges defined by the color-marked portions and their intersection points;
a unit that records, for each intersection point, the number of edges extending from it as its degree;
a unit that finds closed regions formed by linking, via edges, intersection points whose degree is 2 or more;
a unit that decrements the degrees of the intersection points contained in each closed region found; and
a unit that removes intersection points whose degree is 1 or less from further processing.
2. The image processing apparatus according to claim 1, characterized in that
the determining unit comprises:
a unit that extracts the horizontal and vertical lines of the color-marked portions;
a unit that removes lines that do not satisfy a defined size threshold; and
a unit that links nearby lines.
3. An image processing method comprising an input step of inputting an image, a detection step of detecting color-marked portions in the input image, a determining step of determining closed regions on the image formed by edges defined by the color-marked portions and their intersection points, and an extraction step of extracting the image inside the determined regions, the image processing method being characterized in that
the determining step comprises: a step of detecting the edges defined by the color-marked portions and their intersection points;
a step of recording, for each intersection point, the number of edges extending from it as its degree;
a step of finding closed regions formed by connecting, via edges, intersection points whose degree is 2 or more;
a step of decrementing the degrees of the intersection points contained in each closed region found; and
a step of removing intersection points whose degree is 1 or less from further processing.
CN201210258772.6A 2012-07-24 2012-07-24 Image processing device and image processing method Pending CN103577816A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201210258772.6A CN103577816A (en) 2012-07-24 2012-07-24 Image processing device and image processing method


Publications (1)

Publication Number Publication Date
CN103577816A (en) 2014-02-12

Family

ID=50049567

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201210258772.6A Pending CN103577816A (en) 2012-07-24 2012-07-24 Image processing device and image processing method

Country Status (1)

Country Link
CN (1) CN103577816A (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107358227A (en) * 2017-06-29 2017-11-17 努比亚技术有限公司 A kind of mark recognition method, mobile terminal and computer-readable recording medium

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5381241A (en) * 1991-08-13 1995-01-10 Sharp Corporation Method for discriminating between figure and text areas of an image
CN1678016A (en) * 2004-03-30 2005-10-05 东芝解决方案株式会社 Image processing apparatus and image processing method
EP1993071A1 (en) * 2007-05-14 2008-11-19 Fujitsu Limited Image zone detection
CN101369314A (en) * 2007-07-31 2009-02-18 夏普株式会社 Image processing apparatus, image forming apparatus, image processing system, and image processing method




Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
C02 Deemed withdrawal of patent application after publication (patent law 2001)
WD01 Invention patent application deemed withdrawn after publication

Application publication date: 20140212