JP2007058634A - Image processing method and image processor, digital camera equipment, and recording medium with image processing program stored thereon - Google Patents

Image processing method and image processor, digital camera equipment, and recording medium with image processing program stored thereon Download PDF

Info

Publication number
JP2007058634A
Authority
JP
Japan
Prior art keywords
line segment
quadrilateral
image processing
line
evaluation value
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
JP2005243958A
Other languages
Japanese (ja)
Other versions
JP4712487B2 (en)
Inventor
Shin Aoki
Takeshi Maruyama
剛 丸山
伸 青木
Original Assignee
Ricoh Co Ltd
株式会社リコー
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Ricoh Co Ltd, 株式会社リコー filed Critical Ricoh Co Ltd
Priority to JP2005243958A priority Critical patent/JP4712487B2/en
Priority claimed from PCT/JP2006/316076 external-priority patent/WO2007023715A1/en
Publication of JP2007058634A publication Critical patent/JP2007058634A/en
Application granted granted Critical
Publication of JP4712487B2 publication Critical patent/JP4712487B2/en
Application status: Active

Links

Images

Abstract

An object of the present invention is to recognize a quadrilateral from a photographed image with high accuracy and at high speed, and to correct the tilt of the photographed image.
Provided are: a means for detecting edge regions from an input image (captured image); a means for extracting line segments corresponding to the detected edge regions; a means for selecting combinations of two line segments (line segment pairs) from the plurality of extracted line segments, classifying each pair according to the relative position of its two line segments, and calculating an evaluation value for each pair; a means 250 for selecting combinations of two line segment pairs from the plurality of pairs, generating a quadrilateral from the four line segments of each combination, and calculating, for each quadrilateral, a quadrilateral evaluation value based on the classifications and evaluation values of its constituent line segment pairs; a means 260 for selecting a quadrilateral based on the calculated quadrilateral evaluation values; and a means 270 for calculating a projective transformation matrix from the selected quadrilateral and applying a projective transformation to the input image.
[Selection] Figure 3

Description

  The present invention relates to a preprocessing technique for tilt correction of a captured image, and in particular to an image processing method and apparatus for recognizing a quadrilateral from an input image, a digital camera apparatus having that function, and a recording medium on which an image processing program is recorded.

  In recent years, digital cameras have become widespread and are used not only to photograph landscapes and people but also, in place of memos, to photograph timetables, posters, bulletin boards, and the like. However, depending on the shooting position, the photographed timetable or poster suffers "tilt" (keystone) distortion, making the image difficult to read and unsuitable for reuse as it is.

  Tilt is a phenomenon in which a subject that is actually rectangular appears distorted into a trapezoid or the like in the captured image, depending on the shooting position. When photographing a planar object such as a timetable or poster with a digital camera, it is necessary to correct this tilt and convert the image into one equivalent to shooting the object from a directly facing position.

  Conventionally, various methods have been proposed for correcting shooting tilt in an image photographed by a digital camera; a typical method is as follows (see, for example, Patent Document 1 and Patent Document 2). First, a reduced image is generated from the captured image, an edge region is extracted from the reduced image, and distortion correction is applied to the edge region. Next, a Hough transform or Radon transform is applied to the corrected edge region to detect straight lines, and the quadrilateral of the subject is recognized from combinations of those lines. A projective transformation matrix is calculated from this quadrilateral, and the tilt is corrected by projective transformation of the captured image.

[Patent Document 1] JP-A-2005-122320
[Patent Document 2] JP-A-2005-122328

  In the above prior art, a straight line far from the center of the image is preferentially recognized as one side of the quadrilateral. Therefore, when the subject is not located at the center of the image (when the four sides to be recognized do not lie in the four directions from the image center), the subject cannot be recognized. Further, since straight lines are detected by a Hough transform or the like, the processing time becomes enormous; in particular, the voting step of the Hough transform is very time-consuming. In Patent Document 2, the slopes of X-axis-direction candidate lines are limited to 45° ≤ θ ≤ 135° and those of Y-axis-direction candidate lines to 135° ≤ θ ≤ 225° to increase speed, but there is a limit to such speedup. Further, if the quadrilateral is restricted, based on the detected inclinations of the N straight lines, to having vertical sides, horizontal sides, and so on, the shooting conditions under which recognition is possible are limited. For example, the quadrilateral of FIG. 17A can be recognized, but that of FIG. 17B cannot, even though FIG. 17B is merely FIG. 17A photographed with the camera skewed.

  The present invention provides, as preprocessing for tilt correction, as a technique for recognizing one or more quadrilaterals from an input image, an image processing method and apparatus capable of reducing the processing time with higher accuracy than the prior art, and functions thereof And a recording medium on which an image processing program is recorded.

  In the present invention, in order to detect straight lines without using a Hough transform or the like that requires long processing time, an edge direction is defined for each pixel of the input image, edge regions are detected for each edge direction, and the line segment (straight line) corresponding to each edge region is detected. This speeds up the edge detection process.

  As described above, if the quadrilateral is restricted, based on the detected inclinations of the N line segments (straight lines), to having vertical sides, horizontal sides, and so on, the recognizable shooting conditions are limited. Therefore, the present invention considers all quadrilaterals formed by combining four of the N detected line segments, thereby relaxing the restriction on shooting conditions. However, considering all such quadrilaterals takes a very long processing time. Therefore, line segment pairs are formed by combining two of the N line segments; each pair is classified, for example, into opposite-side, adjacent-side, or irrelevant and given an evaluation value; quadrilaterals are recognized by attending to the opposite-side and adjacent-side pairs; and a quadrilateral is selected based on the evaluation values. The processing time can thereby be greatly shortened, as follows.

  Now, there are at most K = 3 × N × (N−1) × (N−2) × (N−3) / 24 quadrilaterals obtained by extracting four line segments from N line segments (taking into account the order in which the four segments adjoin). Therefore, to shorten the processing time, it is important to reduce K.
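To make the growth of K concrete, the following minimal Python sketch (illustrative only; not part of the patent) evaluates K for several values of N:

    # Maximum number of quadrilaterals formed from 4 of n line segments,
    # counting the 3 distinct cyclic orderings of each 4-segment subset.
    def quadrilateral_candidates(n: int) -> int:
        return 3 * n * (n - 1) * (n - 2) * (n - 3) // 24

    for n in (10, 20, 50, 100):
        print(n, quadrilateral_candidates(n))
    # N=10 already yields 630 candidates; N=100 yields 11,763,675,
    # which is why pruning by pair classification matters.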

Normally, when a person photographs a rectangular signboard, the photograph is taken from a position close to directly facing it, at a size that fits within the image range. The shape of the rectangular signboard in the captured image then has the following two characteristics.
1. The opposite sides of the rectangle are close to parallel and separated by at least a certain distance.
2. The angle between adjacent sides of the rectangle is close to 90 degrees, and, of course, adjacent sides have an intersection.

  Therefore, two line segments are extracted from the N line segments, and the pair is classified as opposite sides when the angle formed by the two lines (straight lines) is close to parallel, as adjacent sides when the angle is close to 90 degrees, and as irrelevant otherwise. In addition, the two line segments are extended indefinitely, their intersection is calculated, and the distances between the intersection and the two line segments are calculated; from these, the likelihood that the two segments are adjacent sides (= evaluation value) is computed.

  Since the four line segments of a quadrilateral can always be composed of two opposite-side pairs and four adjacent-side pairs, any of the K quadrilaterals that is not generated from two opposite-side pairs and four adjacent-side pairs can be ignored. Further, since the four vertices of the quadrilateral are where the line segments intersect, the quadrilateral can be evaluated based on the magnitudes of the four evaluation values.

  In the present invention, the quadrilateral can be extracted even when a line segment is interrupted by noise or the like, when the input image has distortion, or when the background of the subject has complex colors. To this end, a new line segment is generated by combining a plurality of line segments as necessary.

  Further, when edge detection is performed with an ordinary Sobel filter on an image in which a white rectangle is drawn on a black background, all pixels on the outer periphery of the rectangle are extracted as one edge region. In the present invention, by dividing edges by direction and obtaining edge regions without using the Hough transform, each side of the rectangle can be extracted as a different edge region.

  Further, a photographer ordinarily shoots the subject large within the image range. In the present invention, to exploit this characteristic and to make it easy for the photographer to decide the composition when shooting an image in the correction mode, an evaluation value based on the area of the quadrilateral is calculated, and the quadrilateral can be selected taking this evaluation value into account.

  In the present invention, since most planar subjects selected by the user as objects to be photographed are rectangular, recognition is restricted to rectangles. So that a rectangle photographed by the user as the subject can be recognized with greatly improved accuracy, a projective transformation matrix that converts the quadrilateral into a parallelogram is calculated, an evaluation value based on the shape of the parallelogram after the projective transformation is computed, and the quadrilateral can be selected taking this evaluation value into account.

  According to the present invention, it is possible to recognize a quadrilateral from a captured image with high accuracy and at high speed, and to convert the image into one equivalent to photographing the recognized quadrilateral from a directly facing position.

Hereinafter, an embodiment of the present invention will be described with reference to the drawings.
FIG. 1 is an overall configuration diagram showing an embodiment of a digital camera device having the image processing function of the present invention. In FIG. 1, the photographing unit 11 includes a lens 111, an aperture 112, a shutter 113, a photoelectric conversion element 114, a preprocessing unit 115, and so on. When the shutter 113 is operated, subject light is received by the photoelectric conversion element 114 through the lens 111 and the aperture 112 and converted into an analog image signal. A CCD (charge-coupled device), for example, is used for the photoelectric conversion element 114. The preprocessing unit 115 includes an analog signal processing unit, such as a preamplifier and AGC (automatic gain control), and an A/D conversion unit; it amplifies and clamps the analog signal output from the photoelectric conversion element 114 and then converts the analog image signal into a digital image signal.

  The digital image signal output from the preprocessing unit 115 of the photographing unit 11 is stored in the frame memory 15 through the camera signal processing unit 12. The frame memory 15 is a semiconductor memory such as a VRAM, SRAM, or DRAM, and is used to temporarily hold an image signal to be processed by the camera signal processing unit 12.

  The camera signal processing unit 12 is configured by a digital signal processor (DSP) or the like and is provided with a tilt correction unit 120 as the image processing function of the present invention, the details of which will be described later. The ROM 13 is a program memory that holds programs used by the camera signal processing unit 12, and the RAM 14 is a work memory used to temporarily hold data being processed by the camera signal processing unit 12 and other necessary data.

  The CPU 16 is configured by a microcomputer or the like, and controls operations of the imaging unit 11 and the camera signal processing unit 12. Note that the ROM 13 and the RAM 14 may be shared by the CPU 16.

  The image signal in the frame memory 15 is read into the camera signal processing unit 12, subjected there to processing such as image compression, and then recorded and saved on the external storage device 19 via the interface unit (I/F) 17. An IC memory card or a magneto-optical disk is used as the external storage device 19, but it is also possible to transmit the image signal to a remote terminal or the like over a network using a modem card or an ISDN card. Conversely, an image signal read from the external storage device 19 is sent to the camera signal processing unit 12 via the I/F 17, decompressed there, and stored in the frame memory 15.

  The display of the image signal is performed by transmitting the image signal in the frame memory 15 to the display unit 18 via the camera signal processing unit 12 and the I / F 17. The display unit 18 is configured by, for example, a liquid crystal display device installed in the casing of the digital camera device.

  Here, the tilt correction unit 120 in the camera signal processing unit 12 takes as input, for example, the digital image signal of the captured image stored in the frame memory 15, extracts from the input image the quadrilateral (rectangular) subject whose tilt is to be corrected, and corrects the tilt distortion. The tilt-corrected digital image signal is stored again in the frame memory 15, for example, and used for subsequent processing. FIG. 2 shows a conceptual processing image of tilt correction. The tilt correction unit 120 may be realized by storing an image processing program for tilt correction in the ROM 13 and causing the digital signal processor (DSP) of the camera signal processing unit 12 to execute it, or part or all of its processing functions may be configured as hardware. The detailed configuration and processing of the tilt correction unit 120 are described below.

  FIG. 3 is a detailed configuration diagram illustrating an embodiment of the tilt correction unit 120. The tilt correction unit 120 includes an edge region detection unit 210, a line segment extraction unit 220, a line segment generation unit 230, a line segment pair classification/evaluation unit 240, a quadrilateral evaluation unit 250, a quadrilateral selection unit 260, and a projective transformation unit 270. The units 210 to 260 are collectively referred to as a quadrilateral extraction unit 200, which forms the main part of the present invention. The processing in each unit is explained in detail below.

<Edge region extraction>
The edge region detection unit 210 detects an edge region from an input image that is a captured image. Specifically, a portion having a large luminance change is extracted and set as an edge region. FIG. 4 shows a processing flowchart of edge region detection.

  First, the input image is filtered by an edge detection filter such as a Sobel filter or a Canny filter, and the luminance change amount (gh) in the X direction and the luminance change amount (gv) in the Y direction are calculated for each pixel (step 1001). Then, a pixel for which the return value of a function func1(gh, gv), taking the X change amount gh and the Y change amount gv as input, is equal to or greater than a threshold is defined as an edge portion (edge pixel), and a pixel for which the return value of func1(gh, gv) is below the threshold is regarded as a non-edge pixel (step 1002). Next, the two-dimensional space spanned by the X change amount gh and the Y change amount gv is divided into a plurality of groups, and each edge pixel is grouped according to its direction (step 1003). In this embodiment, as described later, the two-dimensional space of gh and gv is divided into eight, and each edge pixel is classified into one of groups 1 to 8. Finally, an edge image is created with each group identified by a label or the like (step 1004), and edge region division is performed (step 1005).

  Hereinafter, the edge detection process will be described more specifically. Here, the luminance image of the input image is as shown in FIG. 5C, and the Sobel filter shown in FIGS. 5A and 5B is used as the edge detection filter. In FIG. 5C, (x00, y00) represents pixel coordinates, and v00 represents a pixel value. The same applies to other pixels.

Now, let the target pixel be the pixel (x11, y11). The luminance change amount gh in the X direction of the pixel (x11, y11) can be obtained as follows by applying the X-direction Sobel filter shown in FIG. 5A to the luminance image of FIG. 5C.
gh = v00 × (−1) + v10 × (−2) + v20 × (−1) + v02 × 1 + v12 × 2 + v22 × 1
Further, the luminance change amount gv in the Y direction of the pixel (x11, y11) can be obtained as follows by applying the Y-direction Sobel filter shown in FIG. 5B to the luminance image of FIG. 5C.
gv = v00 × (−1) + v01 × (−2) + v02 × (−1) + v20 × 1 + v21 × 2 + v22 × 1
The luminance change amount g of the pixel (x11, y11) is obtained as g = gh² + gv². When g is equal to or greater than a predetermined threshold (for example, 50), the pixel (x11, y11) is determined to be an edge pixel; when g is below the threshold, it is determined to be a non-edge pixel.

  By repeating the above process for each pixel, the edge portions of the input image are extracted. Once the edge portions are extracted, the two-dimensional space of the luminance change amount gh in the X direction and the luminance change amount gv in the Y direction is divided into eight, as shown in FIG. 6, and each edge pixel is grouped into one of groups 1 to 8 according to its direction. By distinguishing edge pixels by edge direction in this way, the four edges that will be present around the subject can be handled separately (each side of the rectangle can be extracted as a different edge region). In addition, since edges are detected without using a Hough transform or the like, the processing can be sped up.
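A minimal sketch of steps 1001 to 1003 as described above, assuming the Sobel kernels of FIG. 5 and the threshold of 50 on g = gh² + gv²; the exact placement of the eight direction-group boundaries (45-degree sectors here) is an assumption:

    import numpy as np

    SOBEL_X = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], dtype=float)
    SOBEL_Y = np.array([[-1, -2, -1], [0, 0, 0], [1, 2, 1]], dtype=float)

    def detect_edges(lum: np.ndarray, thresh: float = 50.0):
        """lum: 2-D luminance image. Returns an edge mask and, per pixel,
        a direction-group index 0..7 over the (gh, gv) plane."""
        h, w = lum.shape
        gh = np.zeros((h, w)); gv = np.zeros((h, w))
        for y in range(1, h - 1):          # border pixels left as non-edges
            for x in range(1, w - 1):
                win = lum[y - 1:y + 2, x - 1:x + 2]
                gh[y, x] = (win * SOBEL_X).sum()   # X-direction change
                gv[y, x] = (win * SOBEL_Y).sum()   # Y-direction change
        g = gh ** 2 + gv ** 2
        edge = g >= thresh
        angle = np.arctan2(gv, gh)                           # -pi .. pi
        group = (((angle + np.pi) / (np.pi / 4)).astype(int)) % 8
        return edge, group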

  An edge image is created by assigning 0 (black) to edge pixels and 255 (white) to non-edge pixels. Here, an edge image is created for each of the direction groups 1 to 8; that is, eight edge images are created. Which direction group each of the eight edge images belongs to can be identified by a label or the like. Then, each edge image is divided into regions of connected black pixels, and each divided region is set as an edge region. Among the edge regions, any edge region (black connected region) composed of fewer edge pixels than a predetermined threshold is removed as noise.
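A sketch of steps 1004 and 1005 under the same assumptions, using scipy's connected-component labeling per direction group; the noise threshold min_pixels is an assumed parameter:

    import numpy as np
    from scipy import ndimage

    def edge_regions(edge: np.ndarray, group: np.ndarray, min_pixels: int = 20):
        """Split each of the 8 per-direction edge images into connected
        regions; regions smaller than min_pixels are discarded as noise."""
        regions = []
        for g in range(8):
            mask = edge & (group == g)
            labels, n = ndimage.label(mask)   # 4-connectivity by default
            for i in range(1, n + 1):
                ys, xs = np.nonzero(labels == i)
                if xs.size >= min_pixels:
                    regions.append(np.column_stack([xs, ys]))  # (K, 2) of (x, y)
        return regions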

  Note that it is also possible to use a single edge image in which the direction groups are distinguished by, for example, assigning a different color to each group.

<Line segment extraction>
The line segment extraction unit 220 extracts a line segment corresponding to each edge region by performing principal component analysis on the pixel information of each edge region detected by the edge region detection unit 210. This line segment extraction is performed for each direction group. FIG. 7 shows the overall process flowchart of line segment extraction; a specific example is shown in FIG. 8.

  First, principal component analysis is performed on the pixel information of each edge region (step 1101), and a line segment (straight line) is extracted (step 1102). Now, assume an edge region as shown in FIG. 8A. By performing principal component analysis using the pixel information of the edge region, a line segment (straight line) as illustrated in FIG. 8B is extracted. When the principal component analysis is performed, the contribution ratio of the first principal component is obtained at the same time and stored, together with the straight line, as the edge-likeness. Subsequently, as shown in FIG. 8C, the minimum rectangle surrounding the edge region is determined, the intersections of the rectangle and the line are obtained, and the two end points (coordinates) of the line segment corresponding to the edge region are determined (step 1103).
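A sketch of the principal-component line fit (steps 1101 to 1103). The patent clips the fitted line against the region's minimum bounding rectangle; projecting the region's pixels onto the principal axis, as below, is a simplifying assumption:

    import numpy as np

    def fit_line_segment(pixels: np.ndarray):
        """pixels: (K, 2) array of (x, y) coordinates of one edge region.
        Returns the two endpoints and the first principal component's
        contribution ratio (stored as the 'edge-likeness')."""
        mean = pixels.mean(axis=0)
        cov = np.cov((pixels - mean).T)
        evals, evecs = np.linalg.eigh(cov)        # eigenvalues ascending
        direction = evecs[:, -1]                  # first principal component
        contribution = evals[-1] / evals.sum()
        t = (pixels - mean) @ direction           # positions along the axis
        return mean + t.min() * direction, mean + t.max() * direction, contribution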

  By performing the above processing for each edge region detected from the eight edge images, the extraction of line segments corresponding to the edge regions of the input image is provisionally completed.

  Subsequently, to compensate for cases where what is originally a single line is cut into pieces by the influence of noise or the like, a search is performed in the principal component direction (both directions) of each edge region, for all eight edge images, to find adjacent edge regions; adjacent edge regions are integrated as necessary, and line segment extraction is performed again (step 1104). FIG. 9 shows a flowchart of the adjacent edge region integration processing, and FIG. 10 shows a specific example, in which three edge regions 301, 302, and 303 exist in a part of one edge image.

  First, a search is performed over a prescribed number of pixels in the principal component direction (both directions) of the edge region of interest (step 1110), and it is determined whether there is an adjacent edge region (step 1111). In the case of FIG. 10, as indicated by arrows 311 and 312, a predetermined number of pixels is searched from the two left and right end points of the edge region 301; the length of each arrow indicates the prescribed number of pixels to be searched. The prescribed number of pixels may be a fixed value or may be set based on the length of the line segment corresponding to the edge region.

  In the example of FIG. 10, since the edge region 302 is located within the prescribed number of pixels from an end point of the edge region 301, the edge regions 301 and 302 are determined to be adjacent; since the edge region 303 is farther than the prescribed number of pixels, it is not determined to be adjacent.

  Next, when there is an adjacent edge region, composite pixel information combining the pixel information of the adjacent edge regions is created (step 1112), and principal component analysis is performed on the composite pixel information (step 1113). It is then determined whether the calculated linearity of the edge is equal to or greater than a threshold (step 1114). If it is (that is, the contribution ratio of the first principal component is large), an edge region integrating the adjacent edge regions is created and the original edge regions are removed (step 1115). The processing from step 1110 is then performed again on the created region. This is repeated for all edge regions, after which the processing of FIG. 7 is performed again.

  In the case of FIG. 10, composite pixel information is created from the pixel information of the edge regions 301 and 302, which were determined to be adjacent, and principal component analysis is performed on it. If the edge linearity is equal to or greater than the threshold, the edge regions 301 and 302 are integrated into a new edge region and then removed. The processing from step 1110 is then performed again focusing on the new edge region.

  The above processing is repeated for all edge regions of the eight edge images. The extraction of line segments is then completed by performing the processing of FIG. 7 on each edge region that finally remains.

  In the processing so far, for ease of understanding, the edge regions extracted from the eight edge images have been handled separately; in the following processing, however, they are handled without distinction. Let the total number of edge regions be N1; accordingly, the total number of line segments extracted by the line segment extraction unit 220 is N1. A serial number is assigned to each line segment.

<Line segment generation>
The line segment generation unit 230 generates new line segments, as necessary, from the N1 line segments extracted by the line segment extraction unit 220. Because the edge region detection unit 210 divides edge directions into eight, there are cases where what is actually a single line segment is recognized as divided into a plurality of line segments. The line segment generation unit 230 compensates for such cases; it also deals with cases where the input image has distortion or the background of the subject has complex colors. FIG. 11 shows the processing flowchart of the line segment generation unit 230, and FIG. 12 shows a specific example.

  Using the N1 line segments extracted by the line segment extraction unit 220 as input (step 1200), two line segments, number i and number j, are taken out (step 1201), and the N1 × (N1−1)/2 line segment pairs, comprising all combinations of two of the N1 line segments, are generated (step 1202). A serial number is assigned to each line segment pair. Then the count value Cnt is initialized to 1 and N2 is set to N1 (step 1203), after which the following processing is performed. Cnt represents the number of the line segment pair being processed, and N2 represents the total number of existing (N1) plus newly generated line segments.

  It is determined whether the count value Cnt has exceeded N1 × (N1−1)/2 (step 1204). If not, the Cnt-th (initially the first) line segment pair is selected (step 1205), and the angle formed by the two line segments constituting the pair (line segment A and line segment B) is calculated in the range 0 to 90° (step 1206). It is then determined whether the angle formed by the pair A, B is equal to or smaller than a predetermined threshold (for example, 5 degrees) (step 1207); if it exceeds the threshold, Cnt is incremented by 1 (step 1216) and the process returns to step 1204. For example, when the positional relationship between line segments A and B is as shown in FIGS. 12A and 12B, the angle formed by the pair is denoted θ; FIG. 12A shows a case where θ exceeds the threshold, and FIG. 12B a case where θ is equal to or smaller than the threshold.

If the angle formed by the line segment pair A, B is equal to or smaller than the threshold (for example, the case of FIG. 12B), the distance between the pair is measured (step 1208). Here, the distance between the pair A, B is defined as the minimum of the following four distances (see the sketch below).
1. The distance between the straight line obtained by extending line segment B indefinitely and the start point of line segment A.
2. The distance between the straight line obtained by extending line segment B indefinitely and the end point of line segment A.
3. The distance between the straight line obtained by extending line segment A indefinitely and the start point of line segment B.
4. The distance between the straight line obtained by extending line segment A indefinitely and the end point of line segment B.
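A sketch of this pair distance, under the convention that each segment is given by its start and end points as 2-D numpy arrays:

    import numpy as np

    def point_line_distance(p, q0, q1):
        """Distance from point p to the infinite line through q0 and q1."""
        d = q1 - q0
        return abs(d[0] * (p - q0)[1] - d[1] * (p - q0)[0]) / np.linalg.norm(d)

    def pair_distance(a0, a1, b0, b1):
        """Minimum of the four point-to-extended-line distances above."""
        return min(point_line_distance(a0, b0, b1),
                   point_line_distance(a1, b0, b1),
                   point_line_distance(b0, a0, a1),
                   point_line_distance(b1, a0, a1))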

  It is determined whether the obtained distance between the pair A, B is equal to or smaller than a predetermined threshold (step 1209). If it exceeds the threshold (the segments are too far apart), Cnt is incremented by 1 (step 1216) and the process returns to step 1204.

On the other hand, when the distance of the line segment pair is equal to or smaller than the predetermined threshold, the four distances between the combinations of the start and end points of line segment A and the start and end points of line segment B are calculated, and the maximum (distance 1) and minimum (distance 2) among them are obtained (step 1210). It is then determined whether the following formula (1) is satisfied (step 1211):
V < (length of line segment A + length of line segment B + distance 2) / distance 1   (1)
where V is a predetermined threshold. If it is not satisfied, Cnt is incremented by 1 (step 1216) and the process returns to step 1204.

When formula (1) is satisfied, the magnitude relations between the X and Y coordinates of line segments A and B are compared, and it is determined whether both of the following hold (step 1212):
(the X coordinates of the start and end points of line segment A are both larger than the X coordinates of the start and end points of line segment B, or both smaller)
and
(the Y coordinates of the start and end points of line segment A are both larger than the Y coordinates of the start and end points of line segment B, or both smaller).
If not satisfied, Cnt is incremented by 1 (step 1216) and the process returns to step 1204.

  If the above conditions are satisfied, a new line segment is generated (step 1213). The new line segment C takes as its start and end points the two points with the maximum distance among the four combinations, calculated in step 1210, of the start and end points of line segment A with those of line segment B. In the example of FIG. 12B, a new line segment C is generated as shown in FIG. 12C. The existing line segments are left as they are, and the generated segment is added with the next serial number (step 1214). N2 is incremented by 1 (step 1215), Cnt is incremented by 1 (step 1216), and the process returns to step 1204.
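A sketch of step 1213; the new segment C simply spans the farthest-apart pair of endpoints, one taken from each segment:

    import numpy as np
    from itertools import product

    def merge_segments(a0, a1, b0, b1):
        """Endpoints of the new segment C: the combination of one endpoint
        of A and one endpoint of B with the maximum mutual distance."""
        pairs = list(product((a0, a1), (b0, b1)))      # the 4 combinations
        p, q = max(pairs, key=lambda pq: np.linalg.norm(pq[0] - pq[1]))
        return p, q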

  By repeating the above processing for all N1 × (N1−1)/2 line segment pairs, the desired line segments are generated and added. In this way, a total of N2 line segments is obtained: the existing N1 line segments plus those newly generated and added by the line segment generation unit 230.

  In this example, a new line segment is generated when all the conditions of steps 1207, 1209, 1211, and 1212 in FIG. 11 are satisfied; however, a new line segment may instead be generated when only some of these conditions are satisfied, as needed. Alternatively, a new line segment pair may be created from the generated line segment C and an existing line segment, and it may be determined whether a further new line segment should be generated for this pair.

<Line segment pair classification and evaluation>
In the line segment pair classification/evaluation unit 240, two line segments, number i and number j (referred to as line segment pair i, j), are taken out from the N2 line segments consisting of the existing N1 segments plus the (N2−N1) segments newly generated by the line segment generation unit 230, and the classification and evaluation value of each pair are set. Here, pairs are classified into three types: irrelevant, opposite-side relationship, and adjacent relationship. FIG. 13 shows the processing flowchart of the line segment pair classification/evaluation unit 240.

  The N2 line segments, i.e., the existing segments plus those generated by the line segment generation unit 230, are input (step 1300); two line segments, number i and number j (line segment pair i, j), are taken out (step 1301); and the N2 × (N2−1)/2 line segment pairs, comprising all combinations of two of the N2 segments, are generated (step 1302). A serial number is assigned to each pair. Then, after the count value Cnt is initialized to 1 (step 1303), the following processing is performed.

  It is determined whether the count value Cnt exceeds N2 × (N2−1)/2 (step 1304). If not, the Cnt-th (initially the first) line segment pair is selected (step 1305), and the angle formed by the two line segments constituting the pair (line segment A and line segment B) is calculated in the range 0 to 90° (step 1306). The angle formed by the pair is the same as that shown in FIG. 12. The following processing is then performed according to the angle formed by the pair; the thresholds α and β are determined in advance, for example, statistically.

  When the angle formed by the pair is 0 to α degrees, the distance of the pair is measured (step 1307), and it is determined whether it is equal to or smaller than a predetermined threshold (step 1308). If the distance is equal to or smaller than the threshold, the pair is classified as "irrelevant" and its evaluation value is set to 0 (step 1309); if the distance exceeds the threshold, the pair is classified as "opposite-side relationship" and its evaluation value is set to 0 (step 1310). Thereafter, Cnt is incremented by 1 (step 1314) and the process returns to step 1304.

  When the angle formed by the pair is α to β degrees, the pair is classified as "opposite-side relationship" (step 1311) and the process proceeds to step 1313. When the angle is β to 90 degrees, the pair is classified as "adjacent relationship" (step 1312) and the process proceeds to step 1313. Thereafter, Cnt is incremented by 1 (step 1314) and the process returns to step 1304.
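A sketch of the classification logic of steps 1306 to 1312; the values used for α, β, and the distance threshold are illustrative assumptions (the patent fixes α and β in advance, e.g., statistically):

    ALPHA, BETA = 15.0, 60.0   # degrees; illustrative, not from the patent

    def classify_pair(angle_deg: float, distance: float,
                      dist_thresh: float = 30.0) -> str:
        """angle_deg: angle formed by the pair, in 0..90 degrees."""
        if angle_deg < ALPHA:
            # near-parallel: close together -> irrelevant, far apart -> opposite
            return "irrelevant" if distance <= dist_thresh else "opposite"
        return "opposite" if angle_deg < BETA else "adjacent"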

In step 1313, the evaluation value of the line segment pair is obtained and set as follows; the evaluation value is expressed as a value from 0 to 1.
1. The intersection point O of the straight line obtained by extending line segment A indefinitely and the straight line obtained by extending line segment B indefinitely is obtained.
2. The Euclidean distances between O and the start point of line segment A and between O and its end point are obtained; the smaller is defined as distance A.
3. The Euclidean distances between O and the start point of line segment B and between O and its end point are obtained; the smaller is defined as distance B.
4. The evaluation value (Value) is calculated by substituting distance A and distance B into equation (2).

Const.1 is a constant corresponding to the image size. When the intersection O lies outside the image area, it is possible to handle cases where a vertex of the quadrilateral to be extracted lies outside the image area by changing the value of Const.1.
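Equation (2) appears only as an image in the publication and is not reproduced in this text, so the closed form used below, Value = Const1 / (Const1 + distance A + distance B), is a stand-in assumption that, like the original, maps small distances to values near 1; the rest follows steps 1 to 3 above:

    import numpy as np

    def line_intersection(a0, a1, b0, b1):
        """Intersection O of the infinite lines through segments A and B,
        or None if the lines are parallel."""
        da, db = a1 - a0, b1 - b0
        denom = da[0] * db[1] - da[1] * db[0]
        if abs(denom) < 1e-12:
            return None
        t = ((b0[0] - a0[0]) * db[1] - (b0[1] - a0[1]) * db[0]) / denom
        return a0 + t * da

    def pair_evaluation(a0, a1, b0, b1, const1=100.0):
        o = line_intersection(a0, a1, b0, b1)
        if o is None:
            return 0.0
        dist_a = min(np.linalg.norm(o - a0), np.linalg.norm(o - a1))
        dist_b = min(np.linalg.norm(o - b0), np.linalg.norm(o - b1))
        return const1 / (const1 + dist_a + dist_b)  # assumed form of eq. (2)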

  In this embodiment, there are three classifications of line segment pairs (opposite-side relationship, adjacent relationship, irrelevant) and one kind of evaluation value per pair. A method that also assigns evaluation values to opposite-side relationships is conceivable, but increasing the number of classifications and evaluation values increases the processing time.

<Quadrilateral evaluation>
The quadrilateral evaluation unit 250 sequentially takes out two pairs from the R line segment pairs (R = N2 × (N2−1)/2) obtained by the line segment pair classification/evaluation unit 240 and, based on their classifications and evaluation values, sets an evaluation value for the quadrilateral formed by the two pairs. FIG. 14 shows the processing flowchart of the quadrilateral evaluation unit 250.

  With the N2 × (N2−1)/2 line segment pairs obtained by the line segment pair classification/evaluation unit 240 as input (step 1400), P = 1 and R = N2 × (N2−1)/2 are set (step 1401), all combinations of two line segment pairs are taken from the R pairs (steps 1402 to 1406), and the following processing is performed.

  Two line segment pairs are taken out and set as line segment pair P and line segment pair Q (step 1407). The pair P corresponds to the pair i, j composed of line segments i and j; similarly, the pair Q corresponds to the pair k, l.

  First, it is checked whether both pair P and pair Q are in the "opposite-side relationship" (step 1408). When both are, the line segments i, j, k, and l constituting pairs P and Q may form a quadrilateral. It is therefore checked next whether the evaluation values of the four pairs (i, k), (i, l), (j, k), and (j, l) are all greater than 0 (step 1409). When all four are, a quadrilateral is generated whose vertices are the intersection m1 of line segments (straight lines) i and k, the intersection m2 of i and l, the intersection m3 of j and l, and the intersection m4 of j and k (step 1410). The evaluation value V(i, k, j, l) of this quadrilateral is then set to the sum of the evaluation values of the four pairs (step 1411).
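A sketch of step 1410; segments are (start, end) pairs of 2-D numpy arrays, and the intersection helper mirrors the one in the pair-evaluation sketch above:

    import numpy as np

    def _intersect(p0, p1, q0, q1):
        # Intersection of the infinite lines through (p0, p1) and (q0, q1);
        # None if the lines are parallel.
        dp, dq = p1 - p0, q1 - q0
        den = dp[0] * dq[1] - dp[1] * dq[0]
        if abs(den) < 1e-12:
            return None
        t = ((q0[0] - p0[0]) * dq[1] - (q0[1] - p0[1]) * dq[0]) / den
        return p0 + t * dp

    def make_quad(seg_i, seg_j, seg_k, seg_l):
        """Vertices m1..m4 of the candidate quadrilateral, or None if any
        required intersection does not exist."""
        m1 = _intersect(*seg_i, *seg_k)   # m1 = i x k
        m2 = _intersect(*seg_i, *seg_l)   # m2 = i x l
        m3 = _intersect(*seg_j, *seg_l)   # m3 = j x l
        m4 = _intersect(*seg_j, *seg_k)   # m4 = j x k
        if any(m is None for m in (m1, m2, m3, m4)):
            return None
        return np.array([m1, m2, m3, m4])  # in traversal order m1-m2-m3-m4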

  In this embodiment, the determination is made based on whether the evaluation value of a line segment pair is greater than 0; however, if the pairs are sorted by evaluation value in advance and only pairs with higher evaluation values are used, i.e., a threshold is placed on the evaluation value, the processing time is shortened further. When the coordinates of m1, m2, m3, and m4 lie far outside the image area, V(i, k, j, l) may be set to 0. V(i, k, j, l) may also be set to 0 when the quadrilateral m1m2m3m4 is not convex.

  Next, the area S of the quadrilateral m1m2m3m4 is obtained and V(i, k, j, l) is multiplied by it (step 1412); see the sketch below. Instead of multiplying by S, a function g(S) that increases monotonically with S may be created and V(i, k, j, l) multiplied by, or incremented by, g(S).
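A sketch of the area computation, using the shoelace formula on the vertices in traversal order:

    import numpy as np

    def quad_area(m: np.ndarray) -> float:
        """m: (4, 2) vertices m1..m4 in traversal order (shoelace formula)."""
        x, y = m[:, 0], m[:, 1]
        return 0.5 * abs(np.dot(x, np.roll(y, -1)) - np.dot(y, np.roll(x, -1)))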

  Next, the quadrilateral m1m2m3m4 is evaluated according to its shape (step 1413). This is performed, for example, as follows. A projective transformation matrix is obtained that takes the intersection of the pair i, j and the intersection of the pair k, l as two vanishing points and maps these vanishing points to points at infinity. To obtain this matrix, the unit normal vector (a, b, c) of the plane is calculated by assuming that the quadrilateral m1m2m3m4 is a parallelogram lying on a three-dimensional plane (see, for example, Kenichi Kanatani, "Image Understanding", Morikita Publishing), and a rotation matrix that aligns the unit normal vector with the camera's optical axis can be obtained using the focal length at the time the input image was captured. Then the projected parallelogram n1n2n3n4 obtained by projective transformation of m1m2m3m4 is considered, and one angle θ (0° to 90°) of the parallelogram is calculated; if θ is 90° or more, the other angle of the parallelogram is used. V(i, k, j, l) is multiplied by the obtained θ. Instead of multiplying by θ, a function f(θ) that increases monotonically with θ may be created and V(i, k, j, l) multiplied by, or incremented by, f(θ). Further, V(i, k, j, l) already weighted by the area S or g(S) may be further weighted by θ or f(θ).

  Next, the quadrilateral with vertices m1, m2, m3, m4 formed from the four line segments i, k, j, l, together with its evaluation value V(i, k, j, l), is registered in a memory or the like (step 1414).

<Selection of quadrilateral>
The quadrilateral selection unit 260 selects one or more quadrilaterals, in descending order of the evaluation value V(i, k, j, l), from among the quadrilaterals registered by the quadrilateral evaluation unit 250. The selection may also use the area-based evaluation value or the shape-based evaluation value, as needed.

<Projective transformation>
The projective transformation unit 270 calculates a projective transformation matrix based on the quadrilateral selected by the quadrilateral selection unit 260, applies the projective transformation to the input image, and thereby corrects the tilt.

  The projective transformation matrix is calculated, for example, as follows. First, the vertices of the quadrilateral m1m2m3m4 are reordered clockwise, with the vertex closest to the origin first, and the result is again denoted m1m2m3m4; this is shown in FIG. 15. Next, the projected parallelogram is calculated as in step 1413, and the ratio n1n2 : n1n4 is obtained. With the input image size IMGW × IMGH, the rectangle is found that satisfies horizontal length : vertical length = n1n2 : n1n4, has the largest possible area, and has its center coinciding with the image center. Let the vertices of this rectangle be u1, u2, u3, u4 clockwise, where, as with the quadrilateral m1m2m3m4, u1 is the vertex closest to the origin. A projective transformation matrix mapping m1 → u1, m2 → u2, m3 → u3, m4 → u4 is then obtained; a sketch follows.
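A sketch of this final correspondence step, assuming the rectangle corners u1..u4 have already been chosen as described; the homography mapping m1..m4 to u1..u4 is solved here by the standard direct linear transform, which the patent text does not spell out:

    import numpy as np

    def homography(src: np.ndarray, dst: np.ndarray) -> np.ndarray:
        """3x3 projective transformation H with dst ~ H @ src (homogeneous),
        from 4 point correspondences; src, dst are (4, 2) arrays."""
        rows = []
        for (x, y), (u, v) in zip(src, dst):
            rows.append([x, y, 1, 0, 0, 0, -u * x, -u * y, -u])
            rows.append([0, 0, 0, x, y, 1, -v * x, -v * y, -v])
        _, _, vt = np.linalg.svd(np.array(rows, dtype=float))
        return vt[-1].reshape(3, 3)      # null-space vector = H up to scale

    def warp_point(h: np.ndarray, p):
        q = h @ np.array([p[0], p[1], 1.0])
        return q[:2] / q[2]              # back to inhomogeneous coordinates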

  Using the projective transformation matrix thus obtained, projective transformation is applied to the input image. When performing the projective transformation, enlargement/reduction, translation, and rotation of the image can be added as necessary.

  A specific example is shown in FIG. 16. If the captured image (input image) is FIG. 16A and the quadrilateral extracted from it is the one indicated by 1600 in FIG. 16B, a tilt-corrected image such as FIG. 16C is obtained.

  Although one embodiment of the present invention has been described above, the present invention is not limited to this embodiment, and various changes and extensions are possible.

FIG. 1 is a configuration diagram of an embodiment of a digital camera device to which the image processing function of the present invention is applied.
FIG. 2 is a conceptual diagram of tilt correction.
FIG. 3 is a block diagram showing an embodiment of the tilt correction unit, which is the image processing function of the present invention.
FIG. 4 is a processing flowchart example of the edge region detection unit in FIG. 3.
FIG. 5 shows an example of an edge detection filter and an example of a luminance image to which the filter is applied.
FIG. 6 is an example of the division of the two-dimensional space of the vertical and horizontal luminance change amounts.
FIG. 7 is an example of the overall process flowchart of the line segment extraction unit in FIG. 3.
FIG. 8 is a specific example of line segment extraction.
FIG. 9 is an example of a detailed process flowchart of step 1104 in FIG. 7.
FIG. 10 is a specific example of the search for adjacent edge regions.
FIG. 11 is a processing flowchart example of the line segment generation unit in FIG. 3.
FIG. 12 is a specific example of line segment generation.
FIG. 13 is a processing flowchart example of the line segment pair classification/evaluation unit in FIG. 3.
FIG. 14 is a process flowchart example of the quadrilateral evaluation unit in FIG. 3.
FIG. 15 is a diagram explaining part of the processing in the projective transformation unit in FIG. 3.
FIG. 16 is a specific example in which a quadrilateral is extracted from a captured image to obtain a tilt-corrected image.
FIG. 17 is a diagram explaining the problem of the prior art.

Explanation of symbols

120 tilt correction unit
200 quadrilateral extraction unit
210 edge region detection unit
220 line segment extraction unit
230 line segment generation unit
240 line segment pair classification/evaluation unit
250 quadrilateral evaluation unit
260 quadrilateral selection unit
270 projective transformation unit

Claims (20)

  1. An image processing method for recognizing one or more quadrilaterals from an input image,
    An edge region detection step of detecting a plurality of edge regions from the input image;
    A line segment extraction step of extracting a plurality of line segments corresponding to the detected plurality of edge regions;
    a line segment pair classification/evaluation step of selecting combinations of two line segments (hereinafter referred to as line segment pairs) from the plurality of extracted line segments, classifying each line segment pair according to the relative position of the two line segments constituting the pair, and calculating an evaluation value of the line segment pair;
    a quadrilateral evaluation step of selecting combinations of two line segment pairs from the plurality of line segment pairs, generating a quadrilateral from the four line segments of each combination of two line segment pairs, and calculating, for each quadrilateral, a quadrilateral evaluation value based on the classifications and evaluation values of the line segment pairs constituting the quadrilateral; and
    A quadrilateral selection step of selecting a quadrilateral based on the calculated quadrilateral evaluation value;
    An image processing method comprising:
  2. The image processing method according to claim 1,
    further comprising a line segment generation step of taking the plurality of line segments extracted in the line segment extraction step as input, selecting a plurality of combinations of line segment pairs, generating a new line segment based on the positional relationship between the two line segments constituting each pair, and adding it to the existing line segments,
    wherein, in the line segment pair classification/evaluation step, the line segments extracted in the line segment extraction step and the line segments generated in the line segment generation step are input, and all combinations of line segment pairs are selected.
    An image processing method.
  3. The image processing method according to claim 1 or 2,
    In the edge region detection step, the vertical and horizontal change amounts of luminance are calculated for each pixel of the input image, edge regions are detected based on these change amounts, the two-dimensional space of the vertical and horizontal change amounts is divided into a plurality of groups, and the edge regions are grouped into a plurality of groups (hereinafter referred to as direction groups) according to direction.
    An image processing method.
  4. The image processing method according to claim 3.
    In the line segment extraction step, for each direction group, a principal component analysis is performed using pixel information of each edge region, and a line segment (straight line) is extracted.
    An image processing method.
  5. The image processing method according to claim 4,
    In the line segment extraction step, a search is performed in the principal component direction of each edge region, a plurality of adjacent edge regions within a predetermined number of pixels are integrated, the original edge regions are removed, and a line segment corresponding to the integrated edge region is extracted.
    An image processing method.
  6. The image processing method according to any one of claims 1 to 5,
    In the line segment pair classification/evaluation step, each line segment pair is classified into opposite side, adjacent side, or irrelevant according to the relative positional relationship between the two line segments constituting the pair, and an evaluation value is calculated,
    In the quadrilateral evaluation step, a quadrilateral is generated based on the line segment pairs of opposite sides and adjacent sides, and a quadrilateral evaluation value is calculated.
    An image processing method.
  7. The image processing method according to any one of claims 1 to 6,
    In the quadrilateral evaluation step, the area of the quadrilateral is obtained, and a value obtained by weighting, by the area, the quadrilateral evaluation value based on the evaluation values of the line segment pairs is newly set as the quadrilateral evaluation value.
    An image processing method.
  8. The image processing method according to any one of claims 1 to 7,
    In the quadrilateral evaluation step, a projection transformation matrix for converting the quadrilateral into a parallelogram is obtained, and a value obtained by weighting the quadrilateral evaluation value based on the shape of the parallelogram after the projective transformation is used as the quadrilateral evaluation value. ,
    An image processing method.
  9. The image processing method according to any one of claims 1 to 8,
    A projective transformation step of calculating a projective transformation matrix from the quadrilateral selected by the quadrilateral selecting step, and performing a projective transformation on the input image;
    An image processing method.
  10. An image processing device that recognizes one or more quadrilaterals from an input image,
    Edge region detecting means for detecting a plurality of edge regions from the input image;
    Line segment extraction means for extracting a plurality of line segments corresponding to the detected plurality of edge regions;
    line segment pair classification/evaluation means for selecting combinations of two line segments (hereinafter referred to as line segment pairs) from the plurality of extracted line segments, classifying each line segment pair according to the relative position of the two line segments constituting the pair, and calculating an evaluation value of the line segment pair;
    quadrilateral evaluation means for selecting combinations of two line segment pairs from the plurality of line segment pairs, generating a quadrilateral from the four line segments of each combination of two line segment pairs, and calculating, for each quadrilateral, a quadrilateral evaluation value based on the classifications and evaluation values of the line segment pairs constituting the quadrilateral; and
    A quadrilateral selection means for selecting a quadrilateral based on the calculated quadrilateral evaluation value;
    An image processing apparatus comprising:
  11. The image processing apparatus according to claim 10.
    further comprising line segment generation means for taking the plurality of line segments extracted by the line segment extraction means as input, selecting a plurality of combinations of line segment pairs, generating a new line segment based on the positional relationship between the two line segments constituting each pair, and adding it to the existing line segments,
    wherein the line segment pair classification/evaluation means takes as input the line segments extracted by the line segment extraction means and the line segments generated by the line segment generation means, and selects all combinations of line segment pairs.
    An image processing apparatus.
  12. The image processing apparatus according to claim 10 or 11,
    The edge region detection means calculates the vertical and horizontal change amounts of luminance for each pixel of the input image, detects edge regions based on these change amounts, divides the two-dimensional space of the vertical and horizontal change amounts into a plurality of groups, and groups the edge regions into a plurality of groups (hereinafter referred to as direction groups) according to direction.
    An image processing apparatus.
  13. The image processing apparatus according to claim 12.
    In the line segment extraction means, for each direction group, a principal component analysis is performed using pixel information of each edge region, and a line segment (straight line) is extracted.
    An image processing apparatus.
  14. The image processing apparatus according to claim 13.
    The line segment extraction means performs a search in the principal component direction of each edge region, integrates a plurality of adjacent edge regions within a predetermined number of pixels, removes the original edge regions, and extracts a line segment corresponding to the integrated edge region.
    An image processing apparatus.
  15. The image processing apparatus according to any one of claims 10 to 14,
    The line segment pair classification/evaluation means classifies each line segment pair into opposite side, adjacent side, or irrelevant according to the relative positional relationship between the two line segments constituting the pair, and calculates an evaluation value,
    In the quadrilateral evaluation means, a quadrilateral is generated based on the line segment pairs of opposite sides and adjacent sides, and a quadrilateral evaluation value is calculated.
    An image processing apparatus.
  16. The image processing device according to any one of claims 10 to 15,
    In the quadrilateral evaluation means, the area of the quadrilateral is obtained, and a value obtained by weighting, by the area, the quadrilateral evaluation value based on the evaluation values of the line segment pairs is newly set as the quadrilateral evaluation value.
    An image processing apparatus.
  17. The image processing apparatus according to any one of claims 10 to 16,
    The quadrilateral evaluation means obtains a projective transformation matrix for converting the quadrilateral into a parallelogram, and newly sets, as the quadrilateral evaluation value, a value obtained by weighting the quadrilateral evaluation value based on the shape of the parallelogram after the projective transformation,
    An image processing apparatus.
  18. The image processing apparatus according to any one of claims 10 to 17,
    Further comprising a projective transformation means for calculating a projective transformation matrix from the quadrilateral selected by the quadrilateral selection means, and performing a projective transformation on the input image;
    An image processing apparatus.
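Finally, the projective correction of claim 18, sketched with OpenCV: a homography is computed from the selected quadrilateral's corners to an upright rectangle and applied to the whole input image. The fixed output size is an arbitrary illustrative choice; the patent derives the target shape from the transform itself.

    import cv2
    import numpy as np

    def rectify(image, corners, out_w=800, out_h=600):
        src = np.array(corners, dtype=np.float32)    # 4 corners, in order
        dst = np.array([[0, 0], [out_w - 1, 0],
                        [out_w - 1, out_h - 1], [0, out_h - 1]],
                       dtype=np.float32)
        H = cv2.getPerspectiveTransform(src, dst)    # 3x3 projective matrix
        return cv2.warpPerspective(image, H, (out_w, out_h))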
  19. A digital camera device comprising each means of the image processing device according to claim 10.
  20. A recording medium on which an image processing program for causing a computer to execute each step of the image processing method according to claim 1 is recorded.
JP2005243958A 2005-08-25 2005-08-25 Image processing method and apparatus, digital camera apparatus, and recording medium recording image processing program Active JP4712487B2 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
JP2005243958A JP4712487B2 (en) 2005-08-25 2005-08-25 Image processing method and apparatus, digital camera apparatus, and recording medium recording image processing program

Applications Claiming Priority (6)

Application Number Priority Date Filing Date Title
JP2005243958A JP4712487B2 (en) 2005-08-25 2005-08-25 Image processing method and apparatus, digital camera apparatus, and recording medium recording image processing program
PCT/JP2006/316076 WO2007023715A1 (en) 2005-08-25 2006-08-09 Image processing method and apparatus, digital camera, and recording medium recording image processing program
CN 200680030612 CN101248454B (en) 2005-08-25 2006-08-09 Image processing method and image processor, digital camera equipment, and recording medium with image processing program stored thereon
US12/063,684 US8120665B2 (en) 2005-08-25 2006-08-09 Image processing method and apparatus, digital camera, and recording medium recording image processing program
EP06782754.3A EP1917639A4 (en) 2005-08-25 2006-08-09 Image processing method and apparatus, digital camera, and recording medium recording image processing program
KR1020087004468A KR100947002B1 (en) 2005-08-25 2006-08-09 Image processing method and apparatus, digital camera, and recording medium recording image processing program

Publications (2)

Publication Number Publication Date
JP2007058634A true JP2007058634A (en) 2007-03-08
JP4712487B2 JP4712487B2 (en) 2011-06-29

Family

ID=37922065

Family Applications (1)

Application Number Title Priority Date Filing Date
JP2005243958A Active JP4712487B2 (en) 2005-08-25 2005-08-25 Image processing method and apparatus, digital camera apparatus, and recording medium recording image processing program

Country Status (2)

Country Link
JP (1) JP4712487B2 (en)
CN (1) CN101248454B (en)

Cited By (22)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2010062722A (en) * 2008-09-02 2010-03-18 Casio Comput Co Ltd Image processing apparatus and computer program
JP2010113653A (en) * 2008-11-10 2010-05-20 Kyodo Printing Co Ltd Frame detection method, frame detector, and frame detection program
JP2010541087A * 2007-10-05 2010-12-24 Sony Computer Entertainment Europe Limited Image analysis apparatus and method
JP4630936B1 (en) * 2009-10-28 2011-02-09 シャープ株式会社 Image processing apparatus, image processing method, image processing program, and recording medium recording image processing program
JP2011034387A (en) * 2009-08-03 2011-02-17 Sharp Corp Image output device, mobile terminal device, captured image processing system, image output method, program and recording medium
JP2011035942A (en) * 2010-11-12 2011-02-17 Casio Computer Co Ltd Image processing apparatus and computer program
JP2011134322A (en) * 2009-12-23 2011-07-07 Intel Corp Model-based play field registration
US8125544B2 (en) 2008-09-02 2012-02-28 Casio Computer Co., Ltd. Image processing apparatus for extracting quadrangle area in image
JP2012212346A (en) * 2011-03-31 2012-11-01 Sony Corp Image processing apparatus, image processing method and image processing program
JP2012216184A (en) * 2012-01-24 2012-11-08 Nanao Corp Display device, image processing device, image area detecting method, and computer program
JP2013033406A (en) * 2011-08-02 2013-02-14 Ntt Comware Corp Image processing device, image processing method, and image processing program
JP2013041315A (en) * 2011-08-11 2013-02-28 Fujitsu Ltd Image recognition device and image recognition method
JP2013089234A (en) * 2011-10-17 2013-05-13 Sharp Corp Image processing system
JP2013114380A (en) * 2011-11-28 2013-06-10 Kddi Corp Information terminal device
JP2014021647A (en) * 2012-07-17 2014-02-03 Kurabo Ind Ltd Tilt correction device, tilt correction method and computer program for tilt correction
US8744170B2 (en) 2011-08-04 2014-06-03 Casio Computer Co., Ltd. Image processing apparatus detecting quadrilateral region from picked-up image
JP2014106922A (en) * 2012-11-29 2014-06-09 Samsung R&D Institute Japan Co Ltd Pointing device and program for pointing device
JP2015035040A (en) * 2013-08-08 2015-02-19 カシオ計算機株式会社 Image processing apparatus, image processing method, and program
JP2015049776A (en) * 2013-09-03 2015-03-16 国立大学法人 東京大学 Image processor, image processing method and image processing program
JP2015068707A (en) * 2013-09-27 2015-04-13 シャープ株式会社 Defect determination device, defect inspection device, and defect determination method
JP2015153190A (en) * 2014-02-14 2015-08-24 Kddi株式会社 Information terminal device, method and program
JP2016126447A (en) * 2014-12-26 2016-07-11 キヤノン株式会社 Image processing apparatus and image processing method

Families Citing this family (13)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR101720771B1 (en) * 2010-02-02 2017-03-28 삼성전자주식회사 Digital photographing apparatus, method for controlling the same, and recording medium storing program to execute the method
US8781152B2 (en) * 2010-08-05 2014-07-15 Brian Momeyer Identifying visual media content captured by camera-enabled mobile device
CN102637252B (en) * 2011-02-11 2014-07-02 汉王科技股份有限公司 Calling card positioning method and device
JP2011134343A (en) * 2011-02-24 2011-07-07 Nintendo Co Ltd Image processing program, image processing apparatus, image processing system, and image processing method
JP5822664B2 (en) * 2011-11-11 2015-11-24 株式会社Pfu Image processing apparatus, straight line detection method, and computer program
JP5854774B2 (en) 2011-11-11 2016-02-09 株式会社Pfu Image processing apparatus, straight line detection method, and computer program
JP5951367B2 (en) 2012-01-17 2016-07-13 シャープ株式会社 Imaging apparatus, captured image processing system, program, and recording medium
CN102881027A (en) * 2012-07-26 2013-01-16 方正国际软件有限公司 Method and system for detecting quadrangle of given region in image
JP2014092899A (en) * 2012-11-02 2014-05-19 Fuji Xerox Co Ltd Image processing apparatus and image processing program
CN103327262B (en) * 2013-06-19 2016-08-10 北京视博数字电视科技有限公司 A kind of method and system of Video segmentation
CN103399695B (en) * 2013-08-01 2016-08-24 上海合合信息科技发展有限公司 Quadrangle frame identification method and device for intelligent wireless communication terminal
CN104822069B (en) * 2015-04-30 2018-09-28 北京爱奇艺科技有限公司 A kind of image information detecting method and device
CN105260997B (en) * 2015-09-22 2019-02-01 北京医拍智能科技有限公司 A kind of method of automatic acquisition target image

Patent Citations (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JPH05253352A * 1992-03-13 1993-10-05 Ace Denken KK Ball circulation apparatus in pachinko hall
JPH06194138A (en) * 1992-12-24 1994-07-15 Nippon Telegr & Teleph Corp <Ntt> Attitude estimating method for object and its device
JPH09288741A (en) * 1996-04-19 1997-11-04 Nissan Motor Co Ltd Graphic designation supporting device
JP2000341501A (en) * 1999-03-23 2000-12-08 Minolta Co Ltd Device and method for processing image and recording medium with image processing program stored therein
JP2001177716A (en) * 1999-12-17 2001-06-29 Ricoh Co Ltd Image processing method and image processor
JP2002359838A (en) * 2001-03-28 2002-12-13 Matsushita Electric Ind Co Ltd Device for supporting driving
JP2003058877A (en) * 2001-08-20 2003-02-28 Pfu Ltd Method, device and program for correcting distortion
JP2005018195A (en) * 2003-06-24 2005-01-20 Minolta Co Ltd Image processing apparatus and image processing program
JP2005122320A (en) * 2003-10-14 2005-05-12 Casio Comput Co Ltd Photographing apparatus, and its image processing method and program

Cited By (34)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US8363955B2 (en) 2007-10-05 2013-01-29 Sony Computer Entertainment Europe Limited Apparatus and method of image analysis
JP2010541087A * 2007-10-05 2010-12-24 Sony Computer Entertainment Europe Limited Image analysis apparatus and method
US8125544B2 (en) 2008-09-02 2012-02-28 Casio Computer Co., Ltd. Image processing apparatus for extracting quadrangle area in image
JP2010062722A (en) * 2008-09-02 2010-03-18 Casio Comput Co Ltd Image processing apparatus and computer program
JP4715888B2 (en) * 2008-09-02 2011-07-06 カシオ計算機株式会社 Image processing apparatus and computer program
JP2010113653A (en) * 2008-11-10 2010-05-20 Kyodo Printing Co Ltd Frame detection method, frame detector, and frame detection program
JP2011034387A (en) * 2009-08-03 2011-02-17 Sharp Corp Image output device, mobile terminal device, captured image processing system, image output method, program and recording medium
WO2011052276A1 (en) * 2009-10-28 2011-05-05 シャープ株式会社 Image processing device, image processing method, image processing program, and recording medium with recorded image processing program
JP2011097251A (en) * 2009-10-28 2011-05-12 Sharp Corp Image processor, image processing method, image processing program, and recording medium with image processing program recorded thereon
EP2495949A4 (en) * 2009-10-28 2014-05-07 Sharp Kk Image processing device, image processing method, image processing program, and recording medium with recorded image processing program
JP4630936B1 (en) * 2009-10-28 2011-02-09 シャープ株式会社 Image processing apparatus, image processing method, image processing program, and recording medium recording image processing program
CN102648622A (en) * 2009-10-28 2012-08-22 夏普株式会社 Image processing device, image processing method, image processing program, and recording medium with recorded image processing program
EP2495949A1 (en) * 2009-10-28 2012-09-05 Sharp Kabushiki Kaisha Image processing device, image processing method, image processing program, and recording medium with recorded image processing program
US8731321B2 (en) 2009-10-28 2014-05-20 Sharp Kabushiki Kaisha Image processing device, image processing method, image processing program, and recording medium with recorded image processing program
JP2011134322A (en) * 2009-12-23 2011-07-07 Intel Corp Model-based play field registration
JP2011035942A (en) * 2010-11-12 2011-02-17 Casio Computer Co Ltd Image processing apparatus and computer program
US9715743B2 (en) 2011-03-31 2017-07-25 Sony Corporation Image processing apparatus, image processing method, and program
JP2012212346A (en) * 2011-03-31 2012-11-01 Sony Corp Image processing apparatus, image processing method and image processing program
US9443348B2 (en) 2011-03-31 2016-09-13 Sony Corporation Image processing apparatus, image processing method, and program
US10360696B2 (en) 2011-03-31 2019-07-23 Sony Corporation Image processing apparatus, image processing method, and program
JP2013033406A (en) * 2011-08-02 2013-02-14 Ntt Comware Corp Image processing device, image processing method, and image processing program
US8744170B2 (en) 2011-08-04 2014-06-03 Casio Computer Co., Ltd. Image processing apparatus detecting quadrilateral region from picked-up image
JP2013041315A (en) * 2011-08-11 2013-02-28 Fujitsu Ltd Image recognition device and image recognition method
JP2013089234A (en) * 2011-10-17 2013-05-13 Sharp Corp Image processing system
US9390342B2 (en) 2011-10-17 2016-07-12 Sharp Laboratories Of America, Inc. Methods, systems and apparatus for correcting perspective distortion in a document image
JP2013114380A (en) * 2011-11-28 2013-06-10 Kddi Corp Information terminal device
JP2012216184A (en) * 2012-01-24 2012-11-08 Nanao Corp Display device, image processing device, image area detecting method, and computer program
JP2014021647A (en) * 2012-07-17 2014-02-03 Kurabo Ind Ltd Tilt correction device, tilt correction method and computer program for tilt correction
JP2014106922A (en) * 2012-11-29 2014-06-09 Samsung R&D Institute Japan Co Ltd Pointing device and program for pointing device
JP2015035040A (en) * 2013-08-08 2015-02-19 カシオ計算機株式会社 Image processing apparatus, image processing method, and program
JP2015049776A (en) * 2013-09-03 2015-03-16 国立大学法人 東京大学 Image processor, image processing method and image processing program
JP2015068707A (en) * 2013-09-27 2015-04-13 シャープ株式会社 Defect determination device, defect inspection device, and defect determination method
JP2015153190A (en) * 2014-02-14 2015-08-24 Kddi株式会社 Information terminal device, method and program
JP2016126447A (en) * 2014-12-26 2016-07-11 キヤノン株式会社 Image processing apparatus and image processing method

Also Published As

Publication number Publication date
CN101248454A (en) 2008-08-20
JP4712487B2 (en) 2011-06-29
CN101248454B (en) 2012-11-21

Similar Documents

Publication Publication Date Title
JP4488233B2 (en) Video object recognition device, video object recognition method, and video object recognition program
US9898856B2 (en) Systems and methods for depth-assisted perspective distortion correction
KR101247147B1 (en) Face searching and detection in a digital image acquisition device
Piva An overview on image forensics
AU2007224085B2 (en) Model- based dewarping method and apparatus
US9652663B2 (en) Using facial data for device authentication or subject identification
US8417059B2 (en) Image processing device, image processing method, and program
EP2375755B1 (en) Apparatus for detecting direction of image pickup device and moving body comprising same
CN101558416B (en) Text detection on mobile communications devices
JP4950290B2 (en) Imaging apparatus, method, system integrated circuit, and program
Farid A survey of image forgery detection
ES2252309T3 (en) Method and apparatus for determining regions of interest in images and for a transmission of images.
US8345921B1 (en) Object detection with false positive filtering
Föckler et al. PhoneGuide: museum guidance supported by on-device object recognition on mobile phones
KR100929085B1 (en) An image processing apparatus, image processing method and a computer program storage medium
US8351662B2 (en) System and method for face verification using video sequence
KR101126466B1 (en) Photographic document imaging system
US20070242900A1 (en) Combining multiple exposure images to increase dynamic range
US20090001165A1 (en) 2-D Barcode Recognition
US7583858B2 (en) Image processing based on direction of gravity
US8755573B2 (en) Time-of-flight sensor-assisted iris capture system and method
US9053388B2 (en) Image processing apparatus and method, and computer-readable storage medium
JP2011165008A (en) Image recognition apparatus and method
JP5256806B2 (en) Target image detection method and image detection apparatus
JP2008059081A (en) Image processing apparatus, image processing method and computer program

Legal Events

Date Code Title Description
A621 Written request for application examination

Free format text: JAPANESE INTERMEDIATE CODE: A621

Effective date: 20080624

A131 Notification of reasons for refusal

Free format text: JAPANESE INTERMEDIATE CODE: A131

Effective date: 20100901

A521 Written amendment

Free format text: JAPANESE INTERMEDIATE CODE: A523

Effective date: 20101028

A01 Written decision to grant a patent or to grant a registration (utility model)

Free format text: JAPANESE INTERMEDIATE CODE: A01

Effective date: 20110316

A61 First payment of annual fees (during grant procedure)

Free format text: JAPANESE INTERMEDIATE CODE: A61

Effective date: 20110323

FPAY Renewal fee payment (event date is renewal date of database)

Free format text: PAYMENT UNTIL: 20140401

Year of fee payment: 3