CN101668116B - Image processing apparatus - Google Patents

Image processing apparatus

Info

Publication number
CN101668116B
CN101668116B
Authority
CN
China
Prior art keywords
candidate
rectangle
line segment
opposite side
rectangle candidate
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN2009101683029A
Other languages
Chinese (zh)
Other versions
CN101668116A (en)
Inventor
吉井雅一
樱井敬一
山本量平
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Casio Computer Co Ltd
Original Assignee
Casio Computer Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Priority claimed from JP2009072298A external-priority patent/JP4835713B2/en
Application filed by Casio Computer Co Ltd filed Critical Casio Computer Co Ltd
Publication of CN101668116A publication Critical patent/CN101668116A/en
Application granted granted Critical
Publication of CN101668116B publication Critical patent/CN101668116B/en


Landscapes

  • Image Analysis (AREA)
  • Image Processing (AREA)
  • Studio Devices (AREA)

Abstract

An image processing apparatus includes: a line segment detecting unit configured to detect vertical line segments and horizontal line segments in an image; a facing-lines candidate generating unit configured to generate vertical and horizontal facing-lines candidates from the vertical and horizontal line segments, the vertical and horizontal facing-lines candidates being candidates for pairs of facing lines configuring quadrangle areas in the image; a rectangle candidate generating unit configured to generate a plurality of pairs of one of the vertical facing-lines candidates and one of the horizontal facing-lines candidates, and to generate the quadrangle areas as rectangle candidates; and a calculating unit configured to calculate likelihood of each of the rectangle candidates based on a relationship between line segments constituting the vertical facing-lines candidates and the horizontal facing-lines candidates and the rectangle candidates.

Description

Image processing apparatus
This application is based upon and claims the benefit of priority from prior Japanese Patent Application No. 2008-224709, filed September 2, 2008, and prior Japanese Patent Application No. 2009-072298, filed March 24, 2009, the entire contents of both of which are incorporated herein by reference.
Technical field
The present invention relates to an image processing apparatus and a computer program for extracting, from an image, a quadrangle area containing the contour of a subject.
Background
Conventionally, there are known image capturing devices having a contour-quadrangle extraction function, that is, a function of extracting from an image a quadrangle area containing the contour of a subject, so that image processing such as coordinate conversion can be performed on the subject included in the captured image. Such an image capturing device uses a Hough transform to detect, from an edge image containing edge pixels representing the subject's contour, a plurality of straight lines constituting the contour, and extracts the quadrangle area by determining which of the detected straight lines form the quadrangle area. An example of such an image capturing device is disclosed in Japanese Patent Application Laid-Open No. 2005-267457.
The conventional image capturing device counts the edge pixels lying on each of the detected straight lines in the edge image, and selects the straight lines forming the quadrangle area according to the counted pixel numbers. With such a configuration, however, an inadequate quadrangle area may be extracted when the size of the quadrangle area to be extracted is unknown, or when a plurality of subject images exist in one image.
Furthermore, when the captured image contains a plurality of subject images, the conventional image capturing device displays candidates of quadrangle areas (hereinafter abbreviated as "rectangle candidates") on the display screen in descending order of evaluation values such as the size of the quadrangle area, and the user selects, from among the rectangle candidates displayed on the display screen, the rectangle candidate to be used in the image processing. With such a conventional device, however, the display frequently switches between rectangle candidates belonging to different subject images, or displays additional rectangle candidates for a subject image whose rectangle candidate has already been selected, so the user cannot smoothly select a rectangle candidate.
Summary of the invention
According to an aspect of the present invention, there is provided an image processing apparatus comprising:
a line segment detecting unit configured to detect vertical line segments and horizontal line segments in an image;
a facing-lines candidate generating unit configured to generate, from the vertical and horizontal line segments detected by the line segment detecting unit, vertical facing-lines candidates and horizontal facing-lines candidates that constitute quadrangle areas in the image;
a rectangle candidate generating unit configured to generate a plurality of combinations of one of the vertical facing-lines candidates and one of the horizontal facing-lines candidates, and to generate, as rectangle candidates, the quadrangle areas whose four vertices are the intersection points of the vertical and horizontal facing-lines candidates of each combination; and
a scoring unit configured to calculate the likelihood of each rectangle candidate based on the relationship between the rectangle candidate and the line segments constituting the vertical and horizontal facing-lines candidates.
Description of drawings
The general configuration that implements the various features of the present invention will be described with reference to the drawings. The drawings and the accompanying description are intended to illustrate embodiments of the invention, not to limit the scope of the invention.
Figs. 1A and 1B are perspective views showing the structure of a digital camera constituting a first embodiment of the present invention, Fig. 1A mainly showing the front structure and Fig. 1B mainly showing the rear structure.
Fig. 2 is a block diagram showing the structure of the control system of the digital camera shown in Fig. 1.
Fig. 3 is a flowchart showing the flow of the contour-quadrangle extraction processing according to the first embodiment.
Figs. 4A to 4C are views showing an example of edge images extracted by the processing of step S5 of Fig. 3.
Fig. 5 is a view for explaining the structure of the Sobel filter used in the edge image extraction processing of step S5 of Fig. 3.
Figs. 6A and 6B are views showing an example of edge images after the thinning/binarization processing of step S6 of Fig. 3.
Figs. 7A and 7B are views showing an example of line segment information obtained by the labeling processing of step S7 of Fig. 3.
Fig. 8 is a conceptual diagram for explaining the line segment dividing processing of step S8 of Fig. 3.
Fig. 9 is a conceptual diagram for explaining the line segment connection processing of step S9 of Fig. 3.
Figs. 10A and 10B are views showing an example of vertical and horizontal facing-lines candidates obtained by the pairing processing of step S10 of Fig. 3.
Fig. 11 is a view showing an example of combinations of the vertical and horizontal facing-lines candidates shown in Fig. 10.
Fig. 12 is a view showing an example of rectangle candidates obtained from the combinations of vertical and horizontal facing-lines candidates shown in Fig. 11.
Fig. 13 is a view for explaining the scoring processing of step S12 shown in Fig. 3.
Figs. 14A to 14D are views showing an example of rectangle candidates displayed in the order of the scores calculated by the processing of step S12 shown in Fig. 3.
Fig. 15 is a view showing how the displayed rectangle candidate changes in score order according to the user's operation.
Fig. 16 is a view showing a display example of rectangle candidates in the case where a captured image contains a plurality of subjects.
Fig. 17 is a view showing the flow of processing for performing image processing on a subject image according to a displayed rectangle candidate.
Fig. 18 is a flowchart showing the flow of the grouping processing according to a second embodiment of the present invention.
Figs. 19A and 19B are views showing an example of rectangle candidates for explaining the processing of step S21 shown in Fig. 18.
Fig. 20A is a view showing an example of a plurality of rectangle candidates, and Fig. 20B is a view showing the vertex coordinates, centroid coordinates, and perimeter length calculated for each rectangle candidate.
Fig. 21 is a flowchart showing the flow of the rectangle candidate selection processing according to the second embodiment.
Fig. 22 is a view showing an example of a plurality of rectangle candidates.
Fig. 23 is a view showing the result of grouping the rectangle candidates shown in Fig. 22 by centroid coordinates.
Fig. 24 is a view showing the result of grouping the rectangle candidates shown in Fig. 22 by the x coordinate of the centroid position and by size.
Fig. 25 is a view showing the result of grouping the rectangle candidates shown in Fig. 22 by the y coordinate of the centroid position and by size.
Fig. 26 is a view showing an example of state transitions of the selection operation.
Embodiment
An embodiment of the present invention will now be described with reference to the drawings. The scope of the claimed invention should not be limited to the examples illustrated in the drawings and described below.
(First Embodiment)
The structure of the digital camera constituting the first embodiment of the present invention will now be described in detail.
First, the overall structure of the digital camera 1 constituting the first embodiment of the present invention will be described with reference to Figs. 1A and 1B.
As shown in Fig. 1A, the digital camera 1 constituting the first embodiment of the present invention has, on the front face of a substantially rectangular, flat box-shaped body (hereinafter abbreviated as the body) 2: a photographing lens 3, a self-timer indicator 4, a viewfinder 5, a flash light emitting unit 6, and a microphone unit 7. A power key (power switch) 8 and a shutter key (shutter button) 9 are provided on the right-hand side (as seen by the user) of the top face of the body 2. The photographing lens 3 has a zoom function for continuously varying the focal length and an AF (Auto Focus) function, and is retracted into the body 2 when the power is off and in the playback mode. The power key 8 is operated each time the power is turned on or off, and the shutter key 9 is operated in the recording mode to indicate the photographing timing.
As shown in Fig. 1B, the back face of the body 2 is provided with a recording mode (R) key 10, a playback mode (P) key 11, an electronic viewfinder (EVF) 12, a speaker unit 13, a macro key 14, a flash key 15, a menu (MENU) key 16, a ring key 17, a set (SET) key 18, and a liquid crystal display unit 19. When the recording mode key 10 is operated while the power is off, the power is automatically turned on and the still-image recording mode is set; when it is operated repeatedly while the power is on, the still-image and moving-image recording modes are set cyclically. In the first embodiment, the still-image recording modes include the following: a single shot mode in which an ordinary shooting operation is performed with a prescribed exposure time; and a multi shot mode in which the subject is photographed continuously with an exposure time shorter than in the single shot mode and one image is generated by combining a plurality of picture frames.
When the playback mode key 11 is operated while the power is off, the power is automatically turned on and the playback mode is set. The EVF 12 is an eyepiece-type viewfinder using a liquid crystal panel; in the recording mode a live view image is displayed on the liquid crystal panel, while in the playback mode a selected image is played back and displayed. The macro key 14 is operated to switch between normal photography and macro photography in the still-image recording mode. The flash key 15 is operated to switch the light emission mode of the flash light emitting unit 6. The menu key 16 is operated to select various menu items and the like. The ring key 17 is formed monolithically with keys for item selection in the up, down, left, and right directions, and the set key 18 located at the center of the ring key 17 is operated to set the item selected at that moment.
The liquid crystal display unit 19 is composed of a color liquid crystal panel with a backlight; it displays a live view image in the recording mode, and plays back and displays a selected image in the playback mode. The liquid crystal display unit 19 may also comprise another display device instead of a liquid crystal panel. Although not shown, the bottom face of the digital camera 1 is provided with a memory card slot for loading and unloading a memory card used as a recording medium, a USB (Universal Serial Bus) interface serving as a serial interface for connection with an external personal computer, and the like.
Next, the structures of the camera system and the control system of the digital camera 1 constituting the first embodiment will be described with reference to Figs. 2 and 3.
In the digital camera 1 constituting the first embodiment, in the recording mode, a CCD 33 — an image pickup element disposed behind the optical axis of a lens optical system 32 constituting the photographing lens 3 — is scan-driven by a timing generator (TG) 34 and a vertical driver 35, and outputs, every fixed period, one frame of photoelectric conversion output corresponding to the optical image formed on it; the photographing lens 3 has its focus position and aperture position moved by the drive of a motor (M) 31. This photoelectric conversion output, while still an analog signal, is gain-adjusted appropriately for each of the RGB primary color components, sample-held by a sample-and-hold circuit (S/H) 36, and converted into digital data by an A/D converter 37; a color processing circuit 38 then performs color processing including pixel interpolation and gamma correction, generating a digital luminance signal Y and color difference signals Cb and Cr, which are output to a DMA (Direct Memory Access) controller 39.
Using a composite synchronizing signal, a memory write enable signal, and a clock signal from the same color processing circuit 38, the DMA controller 39 once writes the luminance signal Y and color difference signals Cb and Cr output by the color processing circuit 38 into an internal buffer of the DMA controller 39, and DMA-transfers them through a DRAM interface (I/F) 40 to a DRAM 41 used as a buffer memory. A control unit 42 is composed of a CPU, a ROM fixedly storing the computer programs executed by the CPU, a RAM used as a work memory, and the like, and controls the overall operation of the digital camera 1.
After the luminance signal Y and color difference signals Cb and Cr have been DMA-transferred to the DRAM 41, the control unit 42 reads the luminance signal Y and color difference signals Cb and Cr from the DRAM 41 through the DRAM interface 40 and writes them into a VRAM 44 through a VRAM controller 43. A digital video encoder 45 periodically reads the luminance signal Y and color difference signals Cb and Cr from the VRAM 44 through the VRAM controller 43, generates a video signal based on these data, and outputs it to the EVF 12 and the liquid crystal display unit 19. The EVF 12 and the liquid crystal display unit 19 perform display according to the video signal from the digital video encoder 45, thereby displaying in real time an image based on the image information captured from the VRAM controller 43 at that moment.
With the image being displayed in real time in the EVF 12 and the liquid crystal display unit 19 as a display image — a so-called live view image — when the shutter key 9 is operated at the timing at which still-image photography is desired, a trigger signal is generated. In response to this trigger signal, the control unit 42 stops the DMA transfer to the DRAM 41 of the luminance signal Y and color difference signals Cb and Cr for the one frame being captured from the CCD 33 at that moment, drives the CCD 33 anew with an aperture value and a shutter speed obtained according to the proper exposure conditions, obtains the luminance signal Y and color difference signals Cb and Cr for one frame again and transfers them to the DRAM 41, and then stops this path and shifts to the recording and storing state.
In this recording and storing state, the control unit 42 reads the luminance and color difference signals written in the DRAM 41, for each of the Y, Cb, and Cr components, through the DRAM interface 40, and writes them into an image processing unit 47; the image processing unit 47 compresses the data by processing such as ADCT (Adaptive Discrete Cosine Transform) and an entropy coding scheme, namely Huffman coding. The resulting code data is then read out from the image processing unit 47 and written into either a memory card 48, freely attachable and detachable as the recording medium of the digital camera 1, or a built-in memory (not shown) fixed in the digital camera 1. When the compression of the luminance signal Y and color difference signals Cb and Cr and the writing of all the compressed data to the memory card 48 or the built-in memory are completed, the control unit 42 activates the path from the CCD 33 to the DRAM 41 again.
The control unit 42 is connected with a key input unit (user interface) 49, a sound processing unit 50, and a flash driving unit 51. The key input unit 49 is composed of the power key 8, shutter key 9, recording mode key 10, playback mode key 11, macro key 14, flash key 15, menu key 16, ring key 17, set key 18, and so forth, and sends signals accompanying the operation of these keys directly to the control unit 42. The sound processing unit 50 includes a sound generating circuit such as a PCM sound source; when recording sound, it digitizes the audio signal input from the microphone unit 7, compresses the data according to a prescribed file format, for example the MP3 (MPEG-1 audio layer 3) standard, and creates an audio data file, which is sent to the memory card 48 or the built-in memory; when playing back sound, it decompresses the audio data file sent from the memory card 48 or the built-in memory, converts it to analog form, and drives the speaker unit (SP) 13 to output amplified sound. The flash driving unit 51 charges a large-capacity capacitor (not shown) for the flash during still-image photography, and then drives the flash light emitting unit 6 to emit light according to the control from the control unit 42.
The digital camera 1 having such a structure extracts the quadrangle area containing the contour of the subject by performing contour-quadrangle extraction processing as follows. The operation of the digital camera 1 when performing this contour-quadrangle extraction processing will now be described with reference to the flowchart shown in Fig. 3.
The user operates the ring key 17 and the set key 18 to select, from among the scene-specific recording modes, a mode such as "photographing a document or file" or "photographing a whiteboard". These modes perform skew correction (slope compensation) of the subject; when the image processing unit 47 obtains an image photographed in such a mode, the flowchart shown in Fig. 3 is started at the timing for performing the contour-quadrangle extraction processing, and the contour-quadrangle extraction processing proceeds to step S1. The operation of the digital camera 1 shown below is realized as follows: the CPU in the control unit 42 loads the computer program stored in the ROM into the RAM and, by executing the computer program loaded into the RAM, controls the image processing performed by the image processing unit 47.
In the processing of step S1, the image processing unit 47 performs distortion compensation processing on the input captured image, thereby correcting the distortion of the captured image caused by the lens characteristics of the lens optical system 32. The processing of step S1 thus ends, and the contour-quadrangle extraction processing proceeds to step S2.
In the processing of step S2, the image processing unit 47 reduces the size of the distortion-compensated captured image to a prescribed size. Specifically, the image processing unit 47 calculates the size of the distortion-compensated captured image and, according to the calculated size, reduces the vertical and horizontal lengths of the captured image so that its size becomes 320 x 240 pixels. The processing of step S2 thus ends, and the contour-quadrangle extraction processing proceeds to step S3.
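The reduction of step S2 can be sketched as follows. The text gives only the 320 x 240 target; the uniform-scale, aspect-preserving interpretation below is an assumption, and the function name is illustrative:

```python
def shrink_size(w, h, target_w=320, target_h=240):
    """Compute the reduced image size for step S2: scale uniformly so the
    image fits within target_w x target_h, never enlarging.
    Aspect-ratio preservation is an assumed interpretation of the text."""
    scale = min(target_w / w, target_h / h, 1.0)
    return round(w * scale), round(h * scale)
```

For a 640 x 480 capture this yields exactly 320 x 240; smaller images are left unchanged.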
In the processing of step S3, the image processing unit 47 converts the display format of the color information of the captured image from the bitmap format into the YUV format (Y: luminance signal; U: difference between the luminance signal and the blue component; V: difference between the luminance signal and the red component). The processing of step S3 thus ends, and the contour-quadrangle extraction processing proceeds to step S4.
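A per-pixel sketch of the step-S3 conversion is shown below. The patent does not specify the conversion coefficients; the widely used BT.601 weights are assumed here:

```python
def rgb_to_yuv(r, g, b):
    """Convert one RGB pixel (0-255) to YUV using the common BT.601 weights
    (an assumption; the text names only the Y/U/V components).
    Y is luminance; U and V are the blue and red color differences,
    offset by 128 so they also fit in 0-255."""
    y = 0.299 * r + 0.587 * g + 0.114 * b
    u = -0.14713 * r - 0.28886 * g + 0.436 * b + 128
    v = 0.615 * r - 0.51499 * g - 0.10001 * b + 128
    return y, u, v
```

A neutral gray or white pixel maps to maximum luminance with both color differences near the 128 midpoint.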
In the processing of step S4, the image processing unit 47 removes noise components from the image data of the captured image by applying a median filter to the image data. The median filter according to this embodiment sorts the pixel values in a local 3 x 3 pixel area in ascending order, and takes the value located in the middle as the pixel value of the pixel at the center of the area. The processing of step S4 thus ends, and the contour-quadrangle extraction processing proceeds to step S5.
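The step-S4 median filter can be sketched directly from that description (border handling is not specified in the text; leaving border pixels unchanged is an assumption):

```python
def median_filter_3x3(img):
    """3x3 median filter as described for step S4: each interior pixel is
    replaced by the median of its 3x3 neighbourhood, which removes
    impulse noise. Border pixels are left unchanged in this sketch."""
    h, w = len(img), len(img[0])
    out = [row[:] for row in img]
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            window = sorted(img[y + dy][x + dx]
                            for dy in (-1, 0, 1) for dx in (-1, 0, 1))
            out[y][x] = window[4]  # middle of the 9 sorted values
    return out
```

A single bright noise pixel surrounded by uniform values is completely suppressed, which is why the median is preferred over averaging here.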
As shown in Figs. 4A to 4C, in the processing of step S5, the image processing unit 47 extracts edge images obtained by extracting the vertical (x-direction) and horizontal (y-direction) edges, respectively, from the noise-removed image data. In this embodiment, the image processing unit 47 extracts the vertical and horizontal edge images (vertical edge image, horizontal edge image) using the Sobel filters, shown in Fig. 5, which detect contours by computing first-order spatial differentiation. The processing of step S5 thus ends, and the contour-quadrangle extraction processing proceeds to step S6.
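The standard 3x3 Sobel kernels referred to in step S5 can be sketched as follows (the exact kernel in Fig. 5 is not reproduced in the text, so the conventional Sobel coefficients are assumed):

```python
# Conventional 3x3 Sobel kernels for x- and y-direction gradients.
SOBEL_X = [[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]]
SOBEL_Y = [[-1, -2, -1], [0, 0, 0], [1, 2, 1]]

def sobel(img, kernel):
    """First-order spatial differentiation by 3x3 convolution, as in step S5.
    Returns the absolute gradient response for interior pixels;
    borders are left at 0 in this sketch."""
    h, w = len(img), len(img[0])
    out = [[0] * w for _ in range(h)]
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            acc = sum(kernel[j + 1][i + 1] * img[y + j][x + i]
                      for j in (-1, 0, 1) for i in (-1, 0, 1))
            out[y][x] = abs(acc)
    return out
```

A vertical step edge responds strongly to SOBEL_X and not at all to SOBEL_Y, which is exactly the separation into vertical and horizontal edge images that the later steps rely on.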
In the processing of step S6, as shown in Figs. 6A and 6B, the image processing unit 47 applies thinning and binarization to each of the vertical and horizontal edge images extracted by the processing of step S5. Specifically, among the edge pixels contained in the vertical edge image, the image processing unit 47 detects the pixels at coordinate positions x satisfying the condition {pixel value at coordinate position x-1 < pixel value at coordinate position x >= pixel value at coordinate position x+1}. Likewise, among the edge pixels contained in the horizontal edge image, it detects the pixels at coordinate positions y satisfying the condition {pixel value at coordinate position y-1 < pixel value at coordinate position y >= pixel value at coordinate position y+1}. Then, among the pixels constituting each edge image, the image processing unit 47 sets the pixel values at the extracted coordinate positions x, y to 255, and sets the pixel values at all other coordinate positions to 0. The processing of step S6 thus ends, and the contour-quadrangle extraction processing proceeds to step S7.
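The thinning/binarization condition of step S6 for the vertical edge image translates almost literally into code:

```python
def thin_and_binarize_vertical(edge):
    """Step-S6 thinning/binarization of a vertical-edge image: keep a pixel
    only where its response peaks along x, i.e.
    value(x-1) < value(x) >= value(x+1),
    setting kept pixels to 255 and all others to 0.
    (The horizontal edge image uses the same test along y.)"""
    h, w = len(edge), len(edge[0])
    out = [[0] * w for _ in range(h)]
    for y in range(h):
        for x in range(1, w - 1):
            if edge[y][x - 1] < edge[y][x] >= edge[y][x + 1]:
                out[y][x] = 255
    return out
```

The asymmetric comparison (strict on the left, non-strict on the right) keeps exactly one pixel of a plateau, which is what reduces a thick gradient ridge to a one-pixel-wide line.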
In the processing of step S7, the image processing unit 47 applies labeling processing to each of the vertical and horizontal edge images, thereby forming the vertical and horizontal line segment information of the subject's contour shown in Figs. 7A and 7B. In this embodiment, for the horizontal edge image, the image processing unit 47 scans in the x direction starting from the coordinate position x = 0, also referring to the pixels adjacent in the y direction, and thereby detects the edge pixels contained in the edge image. When an edge pixel is detected, the image processing unit 47 judges whether the pixel value of the detected edge pixel is 255 and whether it is connected to other pixels; when the pixel value is 255 and the pixel is not connected to other pixels, tracking of the line segment containing the detected edge pixel is started in the x direction. Specifically, the image processing unit 47 performs the tracking on the three points (x+1, y-1), (x+1, y), and (x+1, y+1) located immediately to the right of the coordinates (x, y) of the tracking start position.
Then, when any one of the following three conditions is satisfied, the image processing unit 47 gives the line segment a unique number (label) and ends the tracking of that line segment; when the tracking is to be continued, the x coordinate position at which an edge pixel was last detected is set as the next tracking start position.
Condition 1: more than one of the three points has already been labeled.
Condition 2: all three points are pixels constituting the edge image.
Condition 3: during the tracking, no pixel constituting the edge image was detected among the three points two times.
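The tracking step above can be sketched as follows. This is a simplification: it follows the three right-hand neighbours as described, but stops on the first miss instead of implementing the patent's three stop conditions, and the function name is illustrative:

```python
def trace_horizontal(edge, x0, y0):
    """Sketch of the step-S7 tracking: starting from edge pixel (x0, y0),
    repeatedly examine the three pixels to the right, (x+1, y-1), (x+1, y)
    and (x+1, y+1), and follow one that is set (value 255).
    Tracking stops when none of the three candidates is an edge pixel
    (a simplification of the three stop conditions in the text).
    Returns the list of traced pixel coordinates."""
    h, w = len(edge), len(edge[0])
    path = [(x0, y0)]
    x, y = x0, y0
    while x + 1 < w:
        nxt = [(x + 1, y + dy) for dy in (-1, 0, 1)
               if 0 <= y + dy < h and edge[y + dy][x + 1] == 255]
        if not nxt:
            break
        x, y = nxt[0]
        path.append((x, y))
    return path
```

Because the three candidates span y-1 to y+1, the trace can follow a near-horizontal line that drifts up or down by one pixel per column.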
On the other hand, for the vertical edge image, the image processing unit 47 detects the edge pixels contained in the edge image by scanning from the coordinate position y = 0, and performs processing identical to that performed for the horizontal edge image, with the roles of the x and y directions interchanged. Then, for each labeled line segment, the image processing unit 47 calculates, as line segment information, the coordinates of the start point and end point of the line segment obtained through the tracking, the gradient (obtained from the start point and end point), the average error of the points constituting the line segment with respect to the gradient of the line segment (the displacement in the x direction for a vertical line, or the displacement in the y direction for a horizontal line), and the coordinate position of the point of maximum error. The processing of step S7 thus ends, and the contour-quadrangle extraction processing proceeds to step S8.
In the processing of step S8, referring to the line segment information created by the processing of step S7, the image processing unit 47 judges whether there exists a line segment containing a point whose maximum error with respect to the gradient of the line segment is equal to or greater than a prescribed value; when such a line segment exists, as shown in Fig. 8, the line segment is divided into two line segments at that point (point P in the example shown in Fig. 8). The division point may be attached to the shorter of the two line segments after division. The image processing unit 47 does not divide a line segment when its length is equal to or greater than a first threshold value, or when the length of a line segment after division would be equal to or less than a second threshold value. When divided line segments exist, the image processing unit 47 updates the line segment information. The processing of step S8 thus ends, and the contour-quadrangle extraction processing proceeds to step S9.
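The step-S8 division can be sketched as below. The perpendicular-distance error measure is an assumed concretization of the text's gradient error, and assigning the split point to the shorter half is one of the choices the text allows:

```python
def split_at_max_deviation(points, threshold):
    """Sketch of the step-S8 division: take the chord from the first to the
    last point, find the point farthest from that chord, and if its offset
    is at or above `threshold`, split the point list into two segments
    there, attaching the split point to the shorter half."""
    (x0, y0), (x1, y1) = points[0], points[-1]
    dx, dy = x1 - x0, y1 - y0
    norm = (dx * dx + dy * dy) ** 0.5
    if norm == 0:
        return [points]

    def offset(p):
        # distance from p to the line through the segment endpoints
        return abs(dy * (p[0] - x0) - dx * (p[1] - y0)) / norm

    i = max(range(len(points)), key=lambda k: offset(points[k]))
    if offset(points[i]) < threshold:
        return [points]  # segment is straight enough; no split
    left, right = points[:i], points[i + 1:]
    if len(left) < len(right):
        left.append(points[i])
    else:
        right.insert(0, points[i])
    return [left, right]
```

A bent polyline is cut at its corner, while a straight run of points is returned unchanged.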
In the processing of step S9, referring to the line segment information updated by the processing of step S8, the image processing unit 47 extracts, in descending order of length, a prescribed number of line segments whose length is equal to or greater than a prescribed value as connection source line segments, and, as shown in Fig. 9, connects each connection source line segment to a line segment satisfying the following three conditions (the connection destination line segment). After connecting the connection destination line segment and the connection source line segment, the image processing unit 47 calculates the coordinate positions of the start point and end point of the line segment formed by the connection, using the least squares method. The processing of step S9 thus ends, and the contour-quadrangle extraction processing proceeds to step S10.
Condition 1: the connection source line segment and the connection destination line segment are not separated by more than a prescribed distance.
Condition 2: the connection source line segment is not completely contained in the connection destination line segment.
Condition 3: when the beginning or end that prolongs connection source line segment when connecting the purpose line segment, the error deficiency setting of the part that is prolonged and the position of starting point that is connected the source line segment and terminal point.
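A rough sketch of how the three connection conditions above might be tested for one pair of segments; the endpoint-pair representation, the helper names, and the threshold values are assumptions made for illustration only:

```python
import math

def can_connect(src, dst, gap_max=10.0, err_max=2.0):
    """Rough check of the three connection conditions for merging a
    connection-source segment `src` into a connection-destination
    segment `dst`; each segment is an endpoint pair ((x0,y0),(x1,y1)).
    Thresholds are illustrative, not from the patent."""
    (ax, ay), (bx, by) = dst
    dx, dy = bx - ax, by - ay
    length = math.hypot(dx, dy)

    def perp_dist(p):   # distance from p to the infinite line through dst
        return abs(dx * (ay - p[1]) - (ax - p[0]) * dy) / length

    def along(p):       # signed projection of p onto dst (0..length = inside)
        return ((p[0] - ax) * dx + (p[1] - ay) * dy) / length

    # condition 3: extending dst must pass close to src's endpoints
    if any(perp_dist(p) > err_max for p in src):
        return False
    # condition 2: src must not be completely contained in dst
    if all(0.0 <= along(p) <= length for p in src):
        return False
    # condition 1: the gap between the segments must be small
    t0, t1 = sorted(along(p) for p in src)
    gap = max(t0 - length, -t1, 0.0)
    return gap <= gap_max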
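A rough sketch of how the three connection conditions above might be tested for one pair of segments; the endpoint-pair representation, the helper names, and the threshold values are assumptions made for illustration only:

```python
import math

def can_connect(src, dst, gap_max=10.0, err_max=2.0):
    """Rough check of the three connection conditions for merging a
    connection-source segment `src` into a connection-destination
    segment `dst`; each segment is an endpoint pair ((x0,y0),(x1,y1)).
    Thresholds are illustrative, not from the patent."""
    (ax, ay), (bx, by) = dst
    dx, dy = bx - ax, by - ay
    length = math.hypot(dx, dy)

    def perp_dist(p):   # distance from p to the infinite line through dst
        return abs(dx * (ay - p[1]) - (ax - p[0]) * dy) / length

    def along(p):       # signed projection of p onto dst (0..length = inside)
        return ((p[0] - ax) * dx + (p[1] - ay) * dy) / length

    # condition 3: extending dst must pass close to src's endpoints
    if any(perp_dist(p) > err_max for p in src):
        return False
    # condition 2: src must not be completely contained in dst
    if all(0.0 <= along(p) <= length for p in src):
        return False
    # condition 1: the gap between the segments must be small
    t0, t1 = sorted(along(p) for p in src)
    gap = max(t0 - length, -t1, 0.0)
    return gap <= gap_max
```

Under these thresholds, a nearly collinear segment just past the destination's end connects, while a contained, distant, or off-line segment does not.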
In the processing of step S10, as shown in FIGS. 10A and 10B, the image processing unit 47 creates, from the vertical and horizontal line segments that have undergone the division processing of step S8 and the connection processing of step S9, candidates for pairs of facing edge lines of a quadrangle (hereinafter simply referred to as "facing-lines candidates"). In the example shown in FIGS. 10A and 10B, the pairing of line segments H1 and H2 is a horizontal facing-lines candidate, and the pairing of line segments V1 and V2 is a vertical facing-lines candidate. Specifically, for each of the vertical and horizontal directions, the image processing unit 47 pairs, as a facing-lines candidate, line segments whose mutual distance is equal to or greater than a predetermined value and whose length ratio with respect to each other falls within a predetermined range (for example, 1/3 to 3 times). The processing of step S10 thereby ends, and the contour quadrangle extraction processing proceeds to step S11.
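The pairing rule of step S10 can be sketched as follows, reducing each segment of one orientation to its perpendicular offset and its length; this 1-D simplification, the function name, and the threshold values are illustrative assumptions:

```python
def facing_line_pairs(segments, min_dist=50.0, ratio=(1 / 3, 3.0)):
    """Enumerate facing-lines candidates among roughly parallel segments.
    Each segment is (offset, length): its perpendicular offset within the
    image and its length -- a deliberately simplified 1-D stand-in for
    the patent's test. Thresholds are illustrative."""
    pairs = []
    for i in range(len(segments)):
        for j in range(i + 1, len(segments)):
            (o1, l1), (o2, l2) = segments[i], segments[j]
            if abs(o1 - o2) < min_dist:
                continue                    # too close to be facing sides
            r = l1 / l2
            if ratio[0] <= r <= ratio[1]:   # lengths comparable (1/3 to 3x)
                pairs.append((i, j))
    return pairs
```

With four segments, only the two that are both far enough apart and of comparable length survive as a pair.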
In the processing of step S11, as shown in FIG. 11, the image processing unit 47 creates combinations of the vertical and horizontal facing-lines candidates created by the processing of step S10. The image processing unit 47 then calculates the four intersection points of the facing-lines candidates for each combination. At this time, the image processing unit 47 uses only the inclination information of the line segments, so it is sufficient for an intersection to exist on an extension of a line segment. That is, the detected intersection points include points at which no line segments actually intersect. The image processing unit 47 then creates a plurality of rectangle candidates S, as shown in FIG. 12, each having the four calculated intersection points as vertices. The processing of step S11 thereby ends, and the contour quadrangle extraction processing proceeds to step S12.
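Because step S11 uses only the inclination of the segments, each vertex is the intersection of two infinite lines, which may fall on an extension where no segment actually exists. A standard parametric intersection sketch (not necessarily the patent's exact computation):

```python
def line_intersection(p1, p2, p3, p4):
    """Intersection of the infinite lines through segments p1-p2 and
    p3-p4. Only the segments' inclinations matter, so the intersection
    may lie on an extension where no segment exists (as in step S11).
    Returns None for parallel lines."""
    d1x, d1y = p2[0] - p1[0], p2[1] - p1[1]
    d2x, d2y = p4[0] - p3[0], p4[1] - p3[1]
    denom = d1x * d2y - d1y * d2x
    if denom == 0:
        return None                 # parallel lines: no rectangle vertex
    t = ((p3[0] - p1[0]) * d2y - (p3[1] - p1[1]) * d2x) / denom
    return (p1[0] + t * d1x, p1[1] + t * d1y)
```

For example, the horizontal segment from (0, 0) to (1, 0) meets the vertical line through x = 5 at (5, 0), even though the horizontal segment itself ends at x = 1.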
In the processing of step S12, the image processing unit 47 calculates the length L1 of the periphery of each rectangle candidate S created by the processing of step S11. The peripheral length L1 can be calculated by adding up the distances between the four vertices constituting the rectangle candidate S. In addition, as shown in FIG. 13, the image processing unit 47 calculates, as length L2, the total length of the portions of the vertical and horizontal line segments L that lie on the periphery of the rectangle candidate S. Then, using Expression 1 below, the image processing unit 47 calculates the ratio of the total segment length L2 to the peripheral length L1 of the rectangle candidate S as the score (the likelihood of the rectangular area) of each rectangle candidate S (scoring processing). In Expression 1, the coefficient P is a penalty coefficient used to reduce the score of a rectangle candidate S when there are line segment portions that extend beyond the four corner points of the rectangle candidate S (for example, the portions in the regions R1 and R2 shown in FIG. 13, extending outside the periphery of the rectangle candidate S): it is set to 1.0 if no line segment portion extends beyond the four corner points, 0.8 if there is one such portion, 0.64 if there are two such portions, and so on. The penalty coefficient is not limited to this example. For instance, when the photographic subject is a standard form whose aspect ratio is known in advance, the penalty coefficient can be made heavier (a smaller value, with a maximum of 1) the more the candidate departs from that ratio; when the peripheral length of the photographic subject is known, the penalty coefficient can be made heavier the larger the error from that peripheral length; and various other applications are possible. The processing of step S12 thereby ends, and the contour quadrangle extraction processing proceeds to step S13.
[Expression 1]
Score=L2/L1×100×P
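Expression 1 can be sketched directly in code. Following the example factors given above (1.0, 0.8, 0.64, ...), the penalty is modeled here as P = 0.8 raised to the number of protruding segment portions; the function name and this closed form are illustrative assumptions:

```python
def score_rectangle(perimeter_len, on_perimeter_len, protrusions):
    """Score (likelihood) of a rectangle candidate per Expression 1:
    the fraction of the candidate's perimeter L1 covered by detected
    segments L2, times a penalty P. P = 0.8 ** protrusions reproduces
    the example factors 1.0, 0.8, 0.64 for 0, 1, 2 portions extending
    beyond the four corner points."""
    penalty = 0.8 ** protrusions
    return on_perimeter_len / perimeter_len * 100.0 * penalty
```

A candidate whose perimeter is 400 with 300 covered scores 75; one protruding portion reduces it to 60, two to 48.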
In the processing of step S13, as shown in FIGS. 14A to 14D, the image processing unit 47 overlays the rectangle candidates S1 to S4 on the photographed image one at a time and displays them on the liquid crystal display unit 19 in descending order of the score calculated by the processing of step S12, that is, in descending order of likelihood. Specifically, as shown in FIG. 15, the image processing unit 47 displays the rectangle candidates S1 to S4 on the liquid crystal display unit 19 cyclically in descending order of likelihood in response to the user's operation of the ring key 17. Although the rectangle candidates are displayed on the liquid crystal display unit 19 in descending order of likelihood in the example shown in FIG. 15, the rectangle candidates may instead be displayed simultaneously on the liquid crystal display unit 19, distinguished by hue according to their scores.
Since a plurality of photographic subjects may exist in an image, it is also possible, for example, to prepare a multiple-compensation mode that can be switched ON/OFF, allowing the user to choose whether to select multiple rectangle candidates. Specifically, when the multiple-compensation mode is OFF, as shown in FIG. 16 for example, the image processing unit 47 displays the rectangle candidates cyclically on the liquid crystal display unit 19 in descending order of likelihood in response to the user's operation of the ring key 17, performs image processing such as coordinate conversion on the pixels of the area enclosed by the rectangle candidate selected by the user, and then ends the contour quadrangle extraction processing. When the multiple-compensation mode is ON, as shown in FIG. 17 for example, the image processing unit 47 displays the rectangle candidates cyclically on the liquid crystal display unit 19 in descending order of likelihood in response to the user's operation of the ring key 17 (parts (a) and (b) of FIG. 17), performs image processing such as coordinate conversion on the pixels of the area enclosed by the rectangle candidate selected by the user (part (c) of FIG. 17), and then presents the rectangle candidates not yet selected by the user as further possible selections and performs further image processing (parts (d), (e) and (f) of FIG. 17). In this way, the user can sequentially select the correct rectangle candidate for each of a plurality of photographic subjects and trim the image accordingly. The processing of step S13 thereby ends, and the series of contour quadrangle extraction processing ends.
As is clear from the above description, in the contour quadrangle extraction processing of the first embodiment, the image processing unit 47 detects vertical and horizontal line segment information from the photographed image and creates, from the detected line segment information, the vertical and horizontal facing-lines candidates that constitute rectangular areas. The image processing unit 47 also creates a plurality of combinations (pairs) of a vertical facing-lines candidate and a horizontal facing-lines candidate, and for each pair creates, as a rectangle candidate S, the rectangular area whose vertices are the intersection points of the vertical and horizontal facing-lines candidates. The image processing unit 47 then calculates, as the score of each rectangle candidate S, the ratio of the total length L2 of the line segments L lying on the periphery of the rectangle candidate S to the peripheral length L1 of the rectangle candidate S, and displays the rectangle candidates S together with the photographed image according to the calculation results. With this contour quadrangle extraction processing, rectangle candidates S can be presented to the user in consideration of the likelihood of the extracted candidates, so the digital camera 1 that performs it allows the user to select a rectangle candidate S smoothly. In the above description, cyclic sequential display and overlaid display distinguished by hue were described as examples of methods for presenting the plurality of rectangle candidates S generated by the contour quadrangle extraction processing to the user, but the presentation method is not particularly limited. Furthermore, instead of presenting to the user all of the rectangle candidates S generated by the contour quadrangle extraction processing, the number of rectangle candidates S presented may be limited according to the descending order of the calculated scores. In this case, the number of rectangle candidates S presented to the user can be reduced, preventing the user's selection operation from becoming cumbersome.
In the above description, an example was described in which the plurality of rectangle candidates S generated by the contour quadrangle extraction processing are presented to the user, and the rectangle candidate S to be used as the object of image processing at a subsequent stage is chosen by the user. However, the rectangle candidate S to be used as the object of image processing at the subsequent stage may instead be selected automatically according to the calculated scores. In this case, no selection operation needs to be requested of the user, so the user's operation is simplified and the series of processing leading to the image processing of the subsequent stage can be carried out smoothly.
At the stage subsequent to the contour quadrangle extraction processing, various kinds of image processing may be performed on the pixels in the selected rectangle candidate S, such as slope compensation processing based on coordinate conversion, image extraction processing, enlargement/reduction processing, contrast adjustment processing, label compensation processing, or combinations of these.
(Second Embodiment)
Next, a second embodiment of the present invention will be described. In the first embodiment, after a plurality of rectangle candidates S are generated by the processing of step S11, each rectangle candidate S is evaluated (step S12) and the plurality of rectangle candidates S are presented to the user in order according to the evaluation results (step S13). The second embodiment differs from the first embodiment in one point: in place of the processing of steps S12 and S13 of the first embodiment, the plurality of rectangle candidates S are grouped in the manner described below.
The example described in the first embodiment is suitable for the case where the number of photographic subjects contained in the image — that is, the number of rectangle candidates S selected as areas of the image to be the object of image processing — ranges from one to several. In contrast, the example described in the second embodiment is suitable for the case of an image containing many similar photographic subjects. Concrete examples of such a case include taking an overview image of a photo album and then extracting each photograph, and taking a snapshot of a bulletin board having a plurality of memos pinned to it and then extracting each memo.
After generating a plurality of rectangle candidates S in one photographed image through the contour quadrangle extraction processing described in the first embodiment (steps S1 to S11), the digital camera 1 of the second embodiment groups the plurality of rectangle candidates S according to their barycentric coordinates (coordinates of the center of gravity; hereinafter simply referred to as the "center position") and their size by performing the following grouping processing. The operation of the digital camera 1 when performing this grouping processing will now be described with reference to the flowchart shown in FIG. 18.
The flowchart shown in FIG. 18 starts at the timing when the above contour quadrangle extraction processing ends, and the grouping processing proceeds to step S21. The CPU in the control unit 42 loads a computer program stored in the ROM into the RAM and, by executing the computer program loaded into the RAM, controls the image processing performed by the image processing unit 47, thereby realizing the operation of the digital camera 1 described below.
In the processing of step S21, the image processing unit 47 calculates the barycentric coordinates of each rectangle candidate S. Specifically, the image processing unit 47 first calculates the coordinates (Ax, Ay), (Bx, By), (Cx, Cy) and (Dx, Dy) of the four vertices of the rectangle candidate S shown in FIGS. 19A and 19B, and calculates the barycentric coordinates G1 and G2 of the triangles ABD and BDC constituting the rectangle candidate S (see FIG. 19A) using Expressions 2 and 3 below. The image processing unit 47 then calculates the barycentric coordinates G3 and G4 of the triangles ABC and ACD constituting the rectangle candidate S (see FIG. 19B) using Expressions 4 and 5 below. The image processing unit 47 then determines the straight line connecting the centers of gravity G1 and G2 and the straight line connecting the centers of gravity G3 and G4, and calculates the intersection point K (Kx, Ky) of these two straight lines, given by Expression 6 below, as the barycentric coordinates of the rectangle candidate S. More concretely, when five rectangle candidates S1 to S5 have been generated in one photographed image by the contour quadrangle extraction processing as shown in FIG. 20A, the image processing unit 47 calculates the four vertex coordinates and the barycentric coordinates of the rectangle candidates S1 to S5 as shown in FIG. 20B. The processing of step S21 thereby ends, and the grouping processing proceeds to step S22.
[Expression 2]
G1=((Ax+Bx+Dx)/3,(Ay+By+Dy)/3)
[Expression 3]
G2=((Bx+Dx+Cx)/3,(By+Dy+Cy)/3)
[Expression 4]
G3=((Ax+Bx+Cx)/3,(Ay+By+Cy)/3)
[Expression 5]
G4=((Ax+Cx+Dx)/3,(Ay+Cy+Dy)/3)
[Expression 6]
Kx = ((G3y - m2*G3x) - (G1y - m1*G1x)) / (m1 - m2)
Ky = (m1*(G3y - m2*G3x) - m2*(G1y - m1*G1x)) / (m1 - m2)
where m1 = (G2y - G1y)/(G2x - G1x) is the slope of the straight line through G1 and G2, and m2 = (G4y - G3y)/(G4x - G3x) is the slope of the straight line through G3 and G4.
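Expressions 2 to 6 amount to intersecting the two lines through the diagonal-triangle centroids. A sketch in code, assuming the vertices A, B, C, D are ordered around the quadrilateral (so that BD and AC are its diagonals):

```python
def quad_centroid(A, B, C, D):
    """Center of gravity of the quadrilateral with corners A, B, C, D
    per Expressions 2-6: the intersection K of the line through the
    centroids of triangles ABD and BDC with the line through the
    centroids of triangles ABC and ACD."""
    def tri_centroid(p, q, r):
        return ((p[0] + q[0] + r[0]) / 3.0, (p[1] + q[1] + r[1]) / 3.0)

    g1, g2 = tri_centroid(A, B, D), tri_centroid(B, D, C)   # diagonal BD
    g3, g4 = tri_centroid(A, B, C), tri_centroid(A, C, D)   # diagonal AC
    # intersection of the lines g1-g2 and g3-g4 (parametric form avoids
    # the undefined slopes of Expression 6 for vertical lines)
    d1x, d1y = g2[0] - g1[0], g2[1] - g1[1]
    d2x, d2y = g4[0] - g3[0], g4[1] - g3[1]
    denom = d1x * d2y - d1y * d2x
    t = ((g3[0] - g1[0]) * d2y - (g3[1] - g1[1]) * d2x) / denom
    return (g1[0] + t * d1x, g1[1] + t * d1y)
```

For a 2 x 2 square with corners at the origin, both centroid lines pass through (1, 1), which the function returns.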
In the processing of step S22, for the barycentric coordinates of each rectangle candidate S calculated by the processing of step S21, the image processing unit 47 judges whether there is a group of barycentric coordinates for which the sum of the absolute values of the coordinate differences in the X and Y directions is equal to or less than a predetermined threshold α. Specifically, when processing the barycentric coordinates (X1, Y1) of the rectangle candidate S1, the image processing unit 47 judges whether there is a group of coordinates (X2, Y2) satisfying |X1-X2|+|Y1-Y2| ≤ α. If, as a result of the judgment, a group of barycentric coordinates for which this sum is equal to or less than the threshold α exists (a group of rectangle candidates whose barycentric positions are close), the image processing unit 47 registers the barycentric coordinates in that group by the processing of step S23, and the grouping processing then proceeds to step S25. If no such group exists, the image processing unit 47 creates a new group of barycentric coordinates by the processing of step S24 and registers the barycentric coordinates in the newly created group, and the grouping processing proceeds to step S25. Various methods are conceivable for the grouping; for example, identification information unique to each group may be attached to the attribute information of the barycentric coordinates.
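The test of steps S22 to S24 is a greedy grouping under the Manhattan distance |X1-X2|+|Y1-Y2| ≤ α. A minimal sketch; comparing each new centroid against the first registered member of each group is one possible reading of the procedure, and the names and the value of α are illustrative:

```python
def group_by_centroid(centroids, alpha=20.0):
    """Greedy grouping of steps S22-S24: a centroid joins the first
    group whose representative member is within Manhattan distance
    alpha (|X1-X2| + |Y1-Y2| <= alpha); otherwise it starts a new
    group. alpha is an illustrative threshold."""
    groups = []                       # each group is a list of centroids
    for (x, y) in centroids:
        for g in groups:
            gx, gy = g[0]             # compare with the group's first member
            if abs(x - gx) + abs(y - gy) <= alpha:
                g.append((x, y))
                break
        else:
            groups.append([(x, y)])
    return groups
```

Three centroids near (100, 100) collapse into one group while a distant one at (300, 300) starts its own.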
In the processing of step S25, the image processing unit 47 judges whether the processing of step S22 has been performed on all the barycentric coordinates calculated by the processing of step S21. If the judgment result shows that the processing of step S22 has not been performed on all the barycentric coordinates calculated by the processing of step S21, the image processing unit 47 returns the grouping processing to step S22. If the processing of step S22 has been performed on all the barycentric coordinates calculated by the processing of step S21, the image processing unit 47 advances the grouping processing to step S26.
In the processing of step S26, the image processing unit 47 calculates the length L of the periphery of each rectangle candidate S using Expression 7 below. Specifically, when five rectangle candidates S1 to S5 have been generated as shown in FIG. 20A, the image processing unit 47 calculates the peripheral length L of each of the rectangle candidates S1 to S5 as shown in FIG. 20B. The processing of step S26 thereby ends, and the grouping processing proceeds to step S27.
[Expression 7]
L = sqrt((Bx-Ax)^2 + (By-Ay)^2) + sqrt((Dx-Bx)^2 + (Dy-By)^2) + sqrt((Cx-Dx)^2 + (Cy-Dy)^2) + sqrt((Ax-Cx)^2 + (Ay-Cy)^2)
In the processing of step S27, for each group of barycentric coordinates, the image processing unit 47 judges whether the group contains a rectangle candidate S whose peripheral length L, calculated by the processing of step S26, is not within a predetermined threshold range ±β. If the judgment result shows that such a rectangle candidate S is contained, the image processing unit 47 creates a new group of barycentric coordinates by the processing of step S28 (a group of rectangle candidates whose barycentric positions are close but whose sizes differ), registers the barycentric coordinates corresponding to that rectangle candidate S in the newly created group, and then advances the grouping processing to step S29. If no such rectangle candidate S is contained, the image processing unit 47 advances the grouping processing to step S29.
In the processing of step S29, the image processing unit 47 judges whether the processing of step S27 has been performed on all the groups of barycentric coordinates. If the judgment result shows that the processing of step S27 has not been performed on all the groups of barycentric coordinates, the image processing unit 47 returns the grouping processing to step S27. If the processing of step S27 has been performed on all the groups of barycentric coordinates, the image processing unit 47 ends the series of grouping processing.
Through this grouping processing, all the rectangle candidates S contained in one photographed image are grouped according to their barycentric coordinates (center positions) and sizes.
In the above description, an example was described in which the peripheral length L of each rectangle candidate S is calculated, and the peripheral length L is used as the information representing the size of each rectangle candidate S when grouping the rectangle candidates S. However, instead of the peripheral length L, the image processing unit 47 may group the rectangle candidates S using, as the information representing size, the mean value Z of the lengths of the four sides of the rectangle candidate S, the area inside the rectangle candidate S, the mean length of the diagonals of the rectangle candidate S, or the like.
After the grouping processing ends, the digital camera 1 performs the following rectangle candidate selection processing so that, even when a plurality of rectangle candidates S exist in one photographed image, the user can smoothly select the desired rectangle candidate S. The operation of the digital camera 1 when performing this selection processing will now be described with reference to the flowchart shown in FIG. 21.
The flowchart shown in FIG. 21 starts at the timing when the grouping processing ends, and the selection processing proceeds to step S31. In the following, the selection processing is explained in detail using an example in which a total of 17 rectangle candidates A, B1, B2, C1, C2, D1, D2, E1, E2, F1, F2, G1, G2, H1, H2, I1 and I2, shown in FIG. 22, are extracted by the contour quadrangle extraction processing and then grouped by the grouping processing according to the combination of barycentric coordinates (X, Y) and size Z (the mean length of the four sides), as shown in FIGS. 23, 24 and 25. The CPU in the control unit 42 loads a computer program stored in the ROM into the RAM and, by executing the computer program loaded into the RAM, realizes the operation of the digital camera 1 described below.
In the processing of step S31, the control unit 42 judges whether the ring key 17 has been operated. At the timing when the ring key 17 is operated, the selection processing proceeds to step S32.
In the processing of step S32, so that the selected rectangle candidate S can be identified, the control unit 42 highlights the rectangle candidate S (compensation candidate) selected with the ring key 17. Specifically, when the largest rectangle candidate A shown in FIG. 22 has been selected with the ring key 17, the control unit 42 highlights the rectangle candidate A by changing the color of its frame from white to green. The processing of step S32 thereby ends, and the selection processing proceeds to step S33.
In the processing of step S33, the control unit 42 judges whether the rectangle candidate S selected by the processing of step S32 is to be decided on as the compensation candidate by a press of the confirm key 18. If the judgment result shows that the confirm key 18 has not been pressed, the control unit 42 returns the selection processing to step S31. If the confirm key 18 has been pressed, the control unit 42 advances the selection processing to step S34.
In the example shown in FIG. 22, when the user operates the ring key 17 downward instead of pressing the confirm key 18 while the rectangle candidate A is selected, the control unit 42 highlights, according to the grouping results shown in FIGS. 24 and 25, the rectangle candidate B1, which belongs to the group whose size is closest below that of the rectangle candidate A (in other words, the group closest to the rectangle candidate A on the Z axis). When the user operates the ring key 17 further downward while the rectangle candidate B1 is highlighted, the control unit 42 highlights the rectangle candidate B2, which belongs to the same group as the rectangle candidate B1 in the XY plane.
On the other hand, when the user operates the ring key 17 to the right while the rectangle candidate B1 is highlighted, the control unit 42 highlights the rectangle candidate C1, which belongs to the same group as the rectangle candidate B1 on the Z axis and, in the XY plane, belongs to the group adjacent on the right (the group whose barycentric position is closest on that side). Similarly, when the user operates the ring key 17 to the left while the rectangle candidate B1 is highlighted, the control unit 42 highlights the rectangle candidate F1, which belongs to the same group as the rectangle candidate B1 on the Z axis and, in the XY plane, belongs to the group adjacent on the left.
Likewise, when the user operates the ring key 17 to the right while the rectangle candidate C1 is highlighted, the control unit 42 highlights the rectangle candidate G1, which belongs to the same group as the rectangle candidate C1 on the Z axis and, in the XY plane, belongs to the group adjacent on the right. In addition, when moving from one group to another on the Z axis, the control unit 42 highlights the rectangle candidate of the next group only after applying a predetermined offset. This is because, when the user is searching by changing the size of the rectangle candidate, it would be meaningless to highlight as the compensation candidate a rectangle candidate of almost the same size but a different position. Specifically, as the candidate following the rectangle candidate B2, the control unit 42 highlights the rectangle candidate E1, rather than any of the compensation candidates C1 and C2 or G1 and G2, which are only slightly smaller than the rectangle candidate B2. An example of the above state transitions is shown in FIG. 26.
In the processing of step S34, the image processing unit 47 calculates a mapping transformation matrix that maps the rectangle candidate decided on by the processing of step S33 to a rectangular area, and generates a rectangular image by applying the calculated mapping transformation matrix to the pixel values of the rectangle candidate and the area it encloses. The processing of step S34 thereby ends, and the selection processing proceeds to step S35.
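The mapping transformation matrix of step S34 is, in modern terms, a planar homography taking the candidate's four corners to an upright rectangle. A sketch of one standard way to compute such a matrix follows; the corner order, the output size, and the use of NumPy are assumptions, since the patent does not specify how the matrix is computed:

```python
import numpy as np

def homography_to_rect(quad, width, height):
    """One way to build a mapping matrix as in step S34 (a sketch, not
    the patent's exact math): the 3x3 projective transform taking the
    candidate's corners (assumed TL, TR, BR, BL order) to an upright
    width x height rectangle, found by solving the standard 8x8 DLT
    system."""
    dst = [(0, 0), (width, 0), (width, height), (0, height)]
    A, b = [], []
    for (x, y), (u, v) in zip(quad, dst):
        A.append([x, y, 1, 0, 0, 0, -u * x, -u * y]); b.append(u)
        A.append([0, 0, 0, x, y, 1, -v * x, -v * y]); b.append(v)
    h = np.linalg.solve(np.array(A, float), np.array(b, float))
    return np.append(h, 1.0).reshape(3, 3)
```

Applying the matrix to each corner of the quadrilateral (in homogeneous coordinates) yields the corresponding corner of the upright rectangle; libraries such as OpenCV provide equivalent routines for the full pixel resampling.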
In the processing of step S35, the image processing unit 47 cancels the display of the rectangle candidates corresponding to all the barycentric coordinates contained in the barycentric coordinate group to which the rectangle candidate decided on by the processing of step S33 belongs. The processing of step S35 thereby ends, and the selection processing proceeds to step S36.
In the processing of step S36, the control unit 42 judges whether the end of the selection operation has been instructed through the operation key input unit 49. If the judgment result shows that the end of the selection operation has not been instructed, the control unit 42 returns the selection processing to step S31. If the end of the selection operation has been instructed, the control unit 42 ends the series of selection processing.
As is clear from the above description, in the digital camera 1 of the second embodiment, the image processing unit 47 groups the plurality of rectangle candidates S according to their barycentric coordinates and sizes, selects from among the plurality of rectangle candidates S the rectangle candidate S to be used for image processing, and, according to the results of the grouping processing, cancels the display of the rectangle candidates S contained in the group to which the selected rectangle candidate S belongs. With this configuration, even when there are a plurality of rectangle candidates S whose barycentric positions and sizes differ only to a small degree, the user can smoothly select the desired rectangle candidate.
In the above explanation, the digital camera 1 was described as the first and second embodiments of the present invention, but the present invention may also be applied to a digital video camera that captures moving images, an image processing apparatus having a photographing section, and the like. That is, an image captured with an external photographing apparatus may be obtained via a memory card, a USB cable, or the like, and the series of contour quadrangle extraction processing shown in the above embodiments may be performed on it. Furthermore, in the above embodiments, the rectangle candidates are presented in descending order of score and compensation based on coordinate conversion is performed after the user makes a selection; alternatively, the rectangle candidates may be compensated by coordinate conversion starting from the highest-scoring candidate, the compensation results may be presented in order, and the most suitable rectangle candidate may then be selected by the user.
It should be understood that the present invention is not limited to the above embodiments, and can be embodied without departing from the spirit and scope of the invention. The present invention can be embodied in various forms by appropriately combining the components disclosed in the above embodiments. For example, some components may be deleted from all the components of the embodiments, and components of different embodiments may be combined as appropriate.

Claims (11)

1. An image processing apparatus comprising:
a line segment detecting unit that detects vertical line segments and horizontal line segments in an image;
a facing-lines candidate creating unit that, from the vertical line segments and horizontal line segments detected by the line segment detecting unit, creates vertical facing-lines candidates and horizontal facing-lines candidates constituting quadrilateral areas in the image;
a rectangle candidate creating unit that creates a plurality of combinations of one of the vertical facing-lines candidates and one of the horizontal facing-lines candidates, and creates, as rectangle candidates, the quadrilateral areas having as their four vertices the intersection points of the vertical facing-lines candidate and the horizontal facing-lines candidate of each combination; and
a scoring unit that calculates the likelihood of each rectangle candidate according to the ratio of the length of the vertical facing-lines candidate and the horizontal facing-lines candidate lying on the periphery of the rectangle candidate to the length of the periphery of the rectangle candidate.
2. The image processing apparatus according to claim 1, wherein
the scoring unit reduces the likelihood by a predetermined score when a line segment included in the vertical facing-lines candidate or the horizontal facing-lines candidate protrudes outward from the rectangle candidate and extends beyond the four vertices.
3. The image processing apparatus according to claim 1, further comprising
a display control unit configured to control a display device, in accordance with the likelihoods calculated by the calculating unit, to display the rectangle candidates superimposed on the image.
4. The image processing apparatus according to claim 3, wherein
the display control unit controls the display device to display the rectangle candidates in descending order of the likelihoods calculated by the calculating unit.
5. The image processing apparatus according to claim 3, wherein
the display control unit controls the display device to display the plurality of rectangle candidates in colors distinguished in accordance with the likelihoods calculated by the calculating unit.
6. The image processing apparatus according to claim 1, further comprising
an image capturing device configured to capture the image.
7. An image processing apparatus comprising:
a line segment detecting unit configured to detect vertical line segments and horizontal line segments in an image;
a facing-lines candidate generating unit configured to generate, from the vertical line segments and horizontal line segments detected by the line segment detecting unit, vertical facing-lines candidates and horizontal facing-lines candidates which constitute quadrangle areas in the image;
a rectangle candidate generating unit configured to generate a plurality of combinations of one of the vertical facing-lines candidates and one of the horizontal facing-lines candidates, and to generate, as rectangle candidates, the quadrangle areas whose four vertices are the intersection points of the vertical facing-lines candidate and the horizontal facing-lines candidate of each combination; and
a calculating unit configured to calculate a likelihood of each of the rectangle candidates based on a relationship between the rectangle candidate and the line segments constituting the vertical facing-lines candidates and the horizontal facing-lines candidates,
wherein the facing-lines candidate generating unit generates the vertical facing-lines candidates and the horizontal facing-lines candidates by setting a first line segment and a second line segment, included in the vertical line segments or the horizontal line segments, as one of the vertical facing-lines candidates or one of the horizontal facing-lines candidates,
the first line segment and the second line segment being separated by a distance larger than a predetermined threshold, and
the ratio of the lengths of the first line segment and the second line segment falling within a predetermined range.
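The two pairing conditions of claim 7 (a gap above a threshold, and comparable lengths) might be tested as outlined below. The midpoint-distance gap measure and the default tolerances are simplifying assumptions for illustration, not values from the patent.

```python
import math

def seg_length(seg):
    # seg: a pair of endpoints ((x1, y1), (x2, y2))
    (x1, y1), (x2, y2) = seg
    return math.hypot(x2 - x1, y2 - y1)

def midpoint(seg):
    (x1, y1), (x2, y2) = seg
    return ((x1 + x2) / 2.0, (y1 + y2) / 2.0)

def is_facing_pair(seg_a, seg_b, min_gap=20.0, ratio_range=(0.5, 2.0)):
    # Gap condition: the segments must be farther apart than min_gap
    # (midpoint distance is used here as a simple stand-in for the
    # segment-to-segment distance of the claim).
    if math.dist(midpoint(seg_a), midpoint(seg_b)) <= min_gap:
        return False
    # Length condition: the ratio of the two lengths must fall in ratio_range.
    ratio = seg_length(seg_a) / seg_length(seg_b)
    return ratio_range[0] <= ratio <= ratio_range[1]
```

Segments that are nearly collinear neighbors (small gap) or wildly mismatched in length are rejected, so only plausible opposite sides of a document-like quadrangle survive as candidates.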
8. An image processing apparatus comprising:
a rectangle candidate generating unit configured to generate a plurality of quadrangle areas in an image as rectangle candidates;
a grouping unit configured to group the rectangle candidates according to the centroid coordinates and the size of each of the rectangle candidates;
a display unit configured to display the rectangle candidates superimposed on the image;
a first operating unit configured to select a selected rectangle candidate from among the rectangle candidates displayed by the display unit;
a second operating unit configured to determine the selected rectangle candidate as a determined rectangle candidate to be used in subsequent processing; and
a display control unit configured to control the display unit, when the second operating unit determines the determined rectangle candidate, to stop displaying the rectangle candidates included in the group to which the selected rectangle candidate belongs.
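Grouping by centroid and size, as recited in claim 8, can be approximated with a greedy single pass. The (centroid, area) candidate representation and the tolerance values are assumptions made for this sketch.

```python
import math

def group_rectangles(candidates, center_tol=15.0, size_tol=0.2):
    # candidates: list of ((cx, cy), area) pairs.
    # A candidate joins an existing group when its centroid lies within
    # center_tol of the group's first member and its area differs by at
    # most size_tol (relative); otherwise it starts a new group.
    groups = []
    for cand in candidates:
        (cx, cy), area = cand
        for group in groups:
            (gx, gy), garea = group[0]
            close = math.hypot(cx - gx, cy - gy) <= center_tol
            similar = abs(area - garea) / garea <= size_tol
            if close and similar:
                group.append(cand)
                break
        else:
            groups.append([cand])
    return groups
```

Once candidates are grouped this way, confirming one candidate lets the UI hide all near-duplicates in the same group, which is the behavior the display control unit of claim 8 describes.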
9. The image processing apparatus according to claim 8, wherein
the first operating unit has a first operating direction and a second operating direction,
the display control unit, when the first operating unit is operated in the first operating direction, controls the display unit to sequentially display, in a selected state, the other rectangle candidates belonging to a first group to which the currently selected rectangle candidate belongs, and
the display control unit, when the first operating unit is operated in the second operating direction, controls the display unit to display, in the selected state, a rectangle candidate belonging to a second group different from the first group.
10. The image processing apparatus according to claim 9, wherein
the display control unit, when the first operating unit is operated in the second operating direction, displays, in the selected state, a rectangle candidate belonging to the second group whose size is larger, by at least a predetermined amount, than the size of the rectangle candidate currently displayed in the selected state.
11. The image processing apparatus according to claim 8, further comprising
an image capturing device configured to capture the image.
CN2009101683029A 2008-09-02 2009-08-27 Image processing apparatus Active CN101668116B (en)

Applications Claiming Priority (6)

Application Number Priority Date Filing Date Title
JP2008-224709 2008-09-02
JP2008224709 2008-09-02
JP2008224709A JP4715888B2 (en) 2008-09-02 2008-09-02 Image processing apparatus and computer program
JP2009072298 2009-03-24
JP2009-072298 2009-03-24
JP2009072298A JP4835713B2 (en) 2009-03-24 2009-03-24 Image processing apparatus and computer program

Publications (2)

Publication Number Publication Date
CN101668116A CN101668116A (en) 2010-03-10
CN101668116B true CN101668116B (en) 2012-02-01

Family

ID=41804544

Family Applications (1)

Application Number Title Priority Date Filing Date
CN2009101683029A Active CN101668116B (en) 2008-09-02 2009-08-27 Image processing apparatus

Country Status (2)

Country Link
JP (1) JP4715888B2 (en)
CN (1) CN101668116B (en)

Families Citing this family (17)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US9767354B2 (en) 2009-02-10 2017-09-19 Kofax, Inc. Global geographic information retrieval, validation, and normalization
JP5724454B2 * 2011-02-25 2015-05-27 Murata Machinery Ltd Image processing apparatus and image processing method
JP5742399B2 * 2011-04-06 2015-07-01 Fuji Xerox Co Ltd Image processing apparatus and program
US8855375B2 (en) * 2012-01-12 2014-10-07 Kofax, Inc. Systems and methods for mobile image capture and processing
US10146795B2 (en) 2012-01-12 2018-12-04 Kofax, Inc. Systems and methods for mobile image capture and processing
JP5362052B2 * 2012-01-24 2013-12-11 Eizo Corp Display device, image processing device, image region detection method, and computer program
US9355312B2 (en) 2013-03-13 2016-05-31 Kofax, Inc. Systems and methods for classifying objects in digital images captured using mobile devices
US9208536B2 (en) 2013-09-27 2015-12-08 Kofax, Inc. Systems and methods for three dimensional geometric reconstruction of captured image data
US20140316841A1 (en) 2013-04-23 2014-10-23 Kofax, Inc. Location-based workflows and services
WO2015073920A1 (en) 2013-11-15 2015-05-21 Kofax, Inc. Systems and methods for generating composite images of long documents using mobile video data
CN104835184B * 2014-02-10 2018-03-20 Chengdu Idealsee Technology Co Ltd Method for extracting quadrilateral areas from an image
US9760788B2 (en) 2014-10-30 2017-09-12 Kofax, Inc. Mobile document detection and orientation based on reference object characteristics
US10242285B2 (en) 2015-07-20 2019-03-26 Kofax, Inc. Iterative recognition-guided thresholding and data extraction
US10803350B2 (en) 2017-11-30 2020-10-13 Kofax, Inc. Object detection and image cropping using a multi-detector approach
WO2020208742A1 * 2019-04-10 2020-10-15 Rakuten Inc Polygon detection device, polygon detection method, and polygon detection program
CN113706510B * 2021-08-31 2023-07-28 Qianjiang College of Hangzhou Normal University Weld joint detection and positioning method based on line segment fitting with gray-value discontinuity point interpolation
CN117197073B * 2023-09-07 2024-03-05 Shijiazhuang Tiedao University Automatic counting method for rectangular objects based on machine vision

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5694148A (en) * 1993-07-01 1997-12-02 Intel Corporation Vertically scaling image signals using selected weight factors
CN1073778C * 1995-05-25 2001-10-24 Samsung Electronics Co Ltd Method for compensating image jitter of a camera-video recorder
CN1893559A * 2005-06-28 2007-01-10 Canon Inc Information processing method and apparatus
CN101248454A * 2005-08-25 2008-08-20 Ricoh Co Ltd Image processing method and image processor, digital camera equipment, and recording medium with image processing program stored thereon

Family Cites Families (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2005122323A (en) * 2003-10-14 2005-05-12 Casio Comput Co Ltd Photographing apparatus, image processor, and image processing method and program for photographing device
JP4525519B2 (en) * 2005-08-18 2010-08-18 Nippon Telegraph and Telephone Corp Quadrilateral evaluation method, apparatus and program
JP4662258B2 (en) * 2005-08-31 2011-03-30 Ricoh Co Ltd Image processing method and apparatus, digital camera apparatus, and recording medium recording image processing program

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
JP-A-2004-40496 2004.02.05
Fig. 20.

Also Published As

Publication number Publication date
JP4715888B2 (en) 2011-07-06
JP2010062722A (en) 2010-03-18
CN101668116A (en) 2010-03-10

Similar Documents

Publication Publication Date Title
CN101668116B (en) Image processing apparatus
KR101032058B1 (en) Image processing apparatus and computer readable medium
CN101621628B (en) Photographic apparatus, setting method of photography conditions, and recording medium
CN101600054B (en) Camera, and camera control method
CN101753812B (en) Imaging apparatus and imaging method
CN102915534B (en) Image processing apparatus and image processing method
US20100302595A1 (en) Image Reproducing Apparatus And Imaging Apparatus
CN101076997B (en) Image processing and image processing method used therein
CN101872113B (en) Method and device for shooting panoramic photo
CN104885440B (en) Image processing apparatus, camera device and image processing method
JP5601407B2 (en) Image classification program, image classification apparatus, and electronic camera
JP4474885B2 (en) Image classification device and image classification program
US8355056B2 (en) Image processing device, imaging device, and image processing program
CN104871058B (en) Image processing device, imaging device and image processing method
CN103873764A (en) Information processing apparatus, information processing method, and program
CN104641624B (en) Image processing apparatus, camera device and image processing method
JP4947136B2 (en) Image processing apparatus, image processing method, and program
CN104919789B (en) Image processing apparatus, photographic device and image processing method
CN101390381A (en) Blur detecting device, blur correcting device, imaging device, and blur detecting method
CN104813648A (en) Image processing device, image capture device, image processing method, and image processing program
CN103685877A (en) Print target data processing apparatus, and print target data processing method
CN104903769B (en) Image processing apparatus, camera head and image processing method
US8334919B2 (en) Apparatus and method for digital photographing to correct subject area distortion caused by a lens
CN107005626A (en) Camera device and its control method
JP5181935B2 (en) Image processing apparatus, program, and subject detection method

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
C14 Grant of patent or utility model
GR01 Patent grant