WO2013094231A1 - Information terminal device, captured image processing system, method, and recording medium recording program - Google Patents

Information terminal device, captured image processing system, method, and recording medium recording program

Info

Publication number
WO2013094231A1
Authority
WO
WIPO (PCT)
Prior art keywords
captured image
trimming
area
region
vertex coordinates
Prior art date
Application number
PCT/JP2012/056327
Other languages
French (fr)
Japanese (ja)
Inventor
Hiroaki Kaneda (浩明 金田)
Kazuhisa Nakabayashi (和久 中林)
Original Assignee
Nakabayashi Co., Ltd. (ナカバヤシ株式会社)
Priority date
Filing date
Publication date
Application filed by Nakabayashi Co., Ltd.
Publication of WO2013094231A1

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 3/00 Geometric image transformation in the plane of the image
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 23/00 Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N 23/60 Control of cameras or camera modules
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 Image analysis
    • G06T 7/10 Segmentation; Edge detection
    • G06T 7/12 Edge-based segmentation
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 1/00 Scanning, transmission or reproduction of documents or the like, e.g. facsimile transmission; Details thereof
    • H04N 1/387 Composing, repositioning or otherwise geometrically modifying originals
    • H04N 1/3872 Repositioning or masking
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 1/00 Scanning, transmission or reproduction of documents or the like, e.g. facsimile transmission; Details thereof
    • H04N 1/46 Colour picture communication systems
    • H04N 1/56 Processing of colour picture signals
    • H04N 1/60 Colour correction or control
    • H04N 1/62 Retouching, i.e. modification of isolated colours only or in isolated picture areas only
    • H04N 1/626 Detection of non-electronic marks, e.g. fluorescent markers
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00 Indexing scheme for image analysis or image enhancement
    • G06T 2207/10 Image acquisition modality
    • G06T 2207/10004 Still image; Photographic image
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00 Indexing scheme for image analysis or image enhancement
    • G06T 2207/10 Image acquisition modality
    • G06T 2207/10024 Color image
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00 Indexing scheme for image analysis or image enhancement
    • G06T 2207/20 Special algorithmic details
    • G06T 2207/20112 Image segmentation details
    • G06T 2207/20132 Image cropping
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00 Indexing scheme for image analysis or image enhancement
    • G06T 2207/30 Subject of image; Context of image processing
    • G06T 2207/30176 Document
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 23/00 Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N 23/60 Control of cameras or camera modules
    • H04N 23/62 Control of parameters via user interfaces
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 23/00 Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N 23/60 Control of cameras or camera modules
    • H04N 23/63 Control of cameras or camera modules by using electronic viewfinders
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 5/00 Details of television systems
    • H04N 5/222 Studio circuitry; Studio devices; Studio equipment
    • H04N 5/262 Studio circuits, e.g. for mixing, switching-over, change of character of image, other special effects; Cameras specially adapted for the electronic generation of special effects
    • H04N 5/2621 Cameras specially adapted for the electronic generation of special effects during image pickup, e.g. digital cameras, camcorders, video cameras having integrated special effects capability

Definitions

  • The present invention relates to a captured image processing method, and more specifically to an information terminal device, a captured image processing system, a method, and a computer-readable recording medium recording a program, which make it possible to trim and store a predetermined area of a captured image even when the user is away from home.
  • With the development of information processing technology in recent years, there is a technology for storing, via a network, image data captured by an imaging device included in a mobile terminal device such as a mobile phone or a smartphone (see, for example, Patent Document 1). In the technique described in Patent Document 1, digital image data collected by a portable information device such as a camera-equipped PDA or a portable personal computer is transmitted to a server via a network, and the server performs image processing such as adjustment of brightness, color, and size on the received image data.
  • Although the technique described in Patent Document 1 offers the convenience that image data collected while away from home can be saved via a network, the process of trimming only the necessary portions after conversion into image data is very troublesome. In addition, even when a user tries to trim image data, the operability of a mobile phone or similar device held by a user who is out is insufficient. Development of a simple method that allows image data to be trimmed easily even on a compact information device such as a mobile phone has therefore been desired.
  • Furthermore, when a newspaper article or magazine clipping is imaged using the camera function of a mobile terminal device, the captured image is distorted (perspective distortion). When a general scanner is used for the purpose of document storage, the plane containing the imaging object (the document page) is fixed at the glass surface of the flatbed, so such distortion does not occur. With the camera function of a mobile terminal device, however, the imaging unit captures the image from a direction that does not coincide with the normal direction of the plane containing the imaging target; it is difficult to align the imaging direction accurately, the captured image becomes distorted, and the content of the article is hard to read even when the user browses the captured image to check its content.
  • The present invention has been made to solve the above-described problems, and an object of the present invention is to provide an information terminal device, a captured image processing system, a method, and a computer-readable recording medium recording a program, capable of trimming and storing a predetermined region of a captured image even when away from home.
  • A further object of the present invention is to provide an information terminal device, a captured image processing system, a method, and a computer-readable recording medium recording a program that can correct distortion before storing the captured image when an imaging target is imaged by a portable information terminal device having an imaging unit.
  • To achieve the above objects, an information terminal device according to the present invention is an information terminal device that includes an imaging unit and a control unit and performs a trimming process on a predetermined region of a captured image acquired from the imaging unit. A predetermined region of an imaging object is surrounded by a line having a predetermined color. The control unit acquires, from the imaging unit, a captured image in which the predetermined region of the imaging object is captured; generates, from the captured image, an extracted image in which only the predetermined color to be trimmed is extracted; extracts, in the extracted image, a plurality of contour lines corresponding to the predetermined region of the captured image; determines the region with the largest area enclosed by those contour lines as the trimming region; and trims the region of the captured image specified by the trimming region.
  • In the information terminal device according to the present invention, it is preferable that, after determining the trimming region, the control unit further obtains a plurality of intersection candidates, each formed by two straight lines, from combinations of the straight lines obtained by extending the line segments constituting the trimming region; determines, among the obtained intersection candidates, the four whose coordinates lie outermost as first vertex coordinates for trapezoid correction; calculates, from the four first vertex coordinates, an index of the distortion of the trapezoid defined by them; determines second vertex coordinates after trapezoid correction based on the calculated index; performs perspective projection transformation of the captured image using the first and second vertex coordinates to generate the captured image in front view; performs perspective projection transformation of the trimming region using the first and second vertex coordinates to generate the trimming region in front view; and trims the region of the front-view captured image specified by the front-view trimming region.
  • A captured image processing system according to the present invention includes an information terminal device having an imaging unit and a control unit, and a writing instrument, and the information terminal device performs a trimming process on a predetermined region of a captured image acquired from the imaging unit. A predetermined region of an imaging object is surrounded, using the writing instrument, by a line having a predetermined color. The control unit acquires, from the imaging unit, a captured image in which the predetermined region of the imaging object is captured; generates, from the captured image, an extracted image in which only the predetermined color to be trimmed is extracted; extracts, in the extracted image, a plurality of contour lines corresponding to the predetermined region of the captured image; determines the region with the largest area enclosed by those contour lines as the trimming region; and trims the region of the captured image specified by the trimming region.
  • In the captured image processing system according to the present invention, it is preferable that, after determining the trimming region, the control unit further obtains a plurality of intersection candidates, each formed by two straight lines, from combinations of the straight lines obtained by extending the line segments constituting the trimming region; determines, among the obtained intersection candidates, the four whose coordinates lie outermost as first vertex coordinates for trapezoid correction; calculates, from the four first vertex coordinates, an index of the distortion of the trapezoid defined by them; determines second vertex coordinates after trapezoid correction based on the calculated index; performs perspective projection transformation of the captured image using the first and second vertex coordinates to generate the captured image in front view; performs perspective projection transformation of the trimming region using the first and second vertex coordinates to generate the trimming region in front view; and trims the region of the front-view captured image specified by the front-view trimming region.
  • A captured image processing method according to the present invention is a method of trimming a predetermined region of a captured image acquired from the imaging unit in an information terminal device including an imaging unit and a control unit. A predetermined region of an imaging object is surrounded by a line having a predetermined color, and the method includes a step in which the control unit acquires, from the imaging unit, a captured image in which the predetermined region of the imaging object is captured; a step of generating, from the captured image, an extracted image in which only the predetermined color to be trimmed is extracted; a step of extracting, in the extracted image, a plurality of contour lines corresponding to the predetermined region of the captured image and determining the region with the largest enclosed area as the trimming region; and a step of trimming the region of the captured image specified by the trimming region.
  • In the captured image processing method according to the present invention, it is preferable that the method further includes, after the control unit determines the trimming region, a step of obtaining a plurality of intersection candidates, each formed by two straight lines, from combinations of the straight lines obtained by extending the line segments constituting the trimming region, and determining, among the obtained intersection candidates, the four whose coordinates lie outermost as first vertex coordinates for trapezoid correction; a step of calculating, from the four first vertex coordinates, an index of the distortion of the trapezoid defined by them and determining second vertex coordinates after trapezoid correction based on the calculated index; a step of performing perspective projection transformation of the captured image using the first and second vertex coordinates to generate the captured image in front view; a step of performing perspective projection transformation of the trimming region using the first and second vertex coordinates to generate the trimming region in front view; and a step of trimming the region of the front-view captured image specified by the front-view trimming region.
  • A computer-readable recording medium according to the present invention records a captured image processing program for an information terminal device including an imaging unit and a control unit, the program causing a computer to perform a trimming process on a predetermined region of a captured image acquired from the imaging unit. A predetermined region of an imaging object is surrounded by a line having a predetermined color, and the program causes the computer to execute a step in which the control unit acquires, from the imaging unit, a captured image in which the predetermined region of the imaging object is captured; a step of generating, from the captured image, an extracted image in which only the predetermined color to be trimmed is extracted; a step of extracting, in the extracted image, a plurality of contour lines corresponding to the predetermined region of the captured image and determining the region with the largest enclosed area as the trimming region; and a step of trimming the region of the captured image specified by the trimming region.
  • In the computer-readable recording medium recording the captured image processing program according to the present invention, it is preferable that the program further causes the computer to execute, after the control unit determines the trimming region, a step of obtaining a plurality of intersection candidates, each formed by two straight lines, from combinations of the straight lines obtained by extending the line segments constituting the trimming region, and determining, among the obtained intersection candidates, the four whose coordinates lie outermost as first vertex coordinates for trapezoid correction; a step of calculating, from the four first vertex coordinates, an index of the distortion of the trapezoid defined by them and determining second vertex coordinates after trapezoid correction based on the calculated index; a step of performing perspective projection transformation of the captured image using the first and second vertex coordinates to generate the captured image in front view; a step of performing perspective projection transformation of the trimming region using the first and second vertex coordinates to generate the trimming region in front view; and a step of trimming the region of the front-view captured image specified by the front-view trimming region.
  • According to the present invention, it is possible to trim and save a predetermined area of a captured image even when away from home. In addition, when an imaging target is imaged by a portable information terminal device having an imaging unit, the captured image can be stored after its distortion is corrected. Newspaper articles and magazine clippings captured and converted into image information are free of distortion, and the user can read the contents of the articles accurately.
  • FIG. 1 is a schematic configuration diagram of a mobile terminal device according to an embodiment of the present invention; (a) is a front view and (b) is a rear view.
  • FIG. 2 is a block diagram of the mobile terminal device.
  • As shown in FIGS. 1 and 2, the mobile terminal device 1 includes a device main body 2, an imaging unit 3, a display unit 4, a touch panel 5, a storage unit 6, and a control unit 7.
  • In addition, the mobile terminal device 1 includes an antenna for wireless communication, a microphone and a speaker for voice calls, and the like (none of which are shown).
  • Such a portable terminal device 1 is not particularly limited; examples include a smartphone, a mobile phone, and a PDA.
  • The imaging unit 3 has a known configuration that images a subject image incident through a lens, and can capture a newspaper article or magazine clipping to be taken in.
  • As the imaging unit 3, for example, a device can be used that includes an imaging element, such as a CCD (Charge-Coupled Device) or CMOS (Complementary Metal-Oxide-Semiconductor) sensor, that outputs an analog electrical signal, photoelectrically converts the subject image incident through the lens, converts the analog electrical signal from the element into a digital electrical signal, and outputs image data.
  • As the display unit 4, a known display device such as a liquid crystal display or an organic EL display can be used, and image information and the like can be displayed on its display screen.
  • The touch panel 5 is a known touch panel that can recognize the touch position when the user touches its surface, and is disposed over the display unit 4.
  • As the touch panel 5, a known type such as a resistive film type, surface acoustic wave type, electromagnetic induction type, or capacitance type can be used.
  • Various inputs can be made by the user touching the touch panel 5; for example, instructions for imaging by the imaging unit 3 and for display on the display unit 4 can be input by touching the touch panel 5.
  • The storage unit 6 includes a storage medium, such as a known hard disk or semiconductor memory, that stores programs and data for information processing.
  • The storage unit 6 can store image information captured by the imaging unit 3 (hereinafter also simply referred to as a "captured image").
  • Captured images are stored in various image formats such as JPEG and GIF.
  • The control unit 7 includes a processor, such as a known CPU, that performs information processing based on programs and data.
  • The control unit 7 can control each of the above constituent elements by executing a program stored in the storage unit 6.
  • In the following description, processing described as being performed by the mobile terminal device 1 actually means processing performed by the control unit 7 of the mobile terminal device 1.
  • The control unit 7 temporarily stores necessary data (such as intermediate data during processing) using the storage unit 6 as a work area, and appropriately records data to be stored for a long period, such as calculation results, in the storage unit 6.
  • A program used to perform the processing of steps S1 to S15 described below is prepared in executable form (for example, generated by compiling source code written in a programming language such as C), recorded in advance in the storage unit 6, and the mobile terminal device 1 performs the processing using the program recorded in the storage unit 6.
  • FIG. 3 is a flowchart showing the processing order of the image processing method performed by the mobile terminal device according to the embodiment of the present invention.
  • Hereinafter, the processing order of the image processing method according to the embodiment of the present invention will be described in detail based on the flowchart shown in FIG. 3.
  • In this description, trapezoid (keystone) correction means converting a captured image, taken with an imaging direction that does not coincide with the normal direction of the plane containing the imaging object, into a distortion-free front-view image as if the imaging direction coincided with that normal direction.
  • In step S1, a predetermined area of a newspaper article to be captured is imaged.
  • The user marks a predetermined region Rn of the newspaper article N with a predetermined color (for example, red) using the marker pen M, and then uses the imaging unit 3 to image an area including the region Rn surrounded by the marker.
  • The portable terminal device 1 records the captured image.
  • At this point the captured image has perspective distortion in the Y-axis direction in the figure, and the capture target region Rn is likewise distorted in the Y-axis direction.
  • As the marker pen M, a marker pen whose colored tip is divided into two protrusions, for example in a V shape or U shape, is used.
  • As a result, the capture target region Rn is surrounded by a double-line pattern.
  • In step S2, an extracted image is generated by extracting only the color to be trimmed from the captured image.
  • The extracted image is image data used for recognizing the trimming region Rt described later. Since the vertical and horizontal image sizes of the extracted image match those of the captured image, the extracted image functions as a so-called "layer" over the captured image.
  • The extracted image is generated, for example, by a known processing method that extracts pixels using a threshold value. Since the region Rn to be captured in the newspaper article N is marked by being surrounded with a predetermined color, when only that predetermined color is extracted from the captured image, the largest region surrounded by the extracted color in the extracted image becomes the trimming region Rt described later.
  • Like the captured image, the extracted image has perspective distortion. Further, since the region Rn to be captured is surrounded by a double-line pattern, the region surrounded by the extracted color in the extracted image is also surrounded by a double-line pattern.
  • The layer of the extracted image is then converted into coordinates, and the extracted image is subsequently handled using coordinates instead of the layer.
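  • As a rough illustration of this color-extraction step, the following is a minimal Python/OpenCV sketch, not the implementation described in the patent; the HSV thresholds assumed for a red marker would need tuning to the actual ink and lighting.

```python
import cv2
import numpy as np

def extract_marker_color(captured_bgr: np.ndarray) -> np.ndarray:
    """Return a binary 'layer' the same size as the captured image,
    white wherever the (assumed red) marker color is found."""
    hsv = cv2.cvtColor(captured_bgr, cv2.COLOR_BGR2HSV)
    # Red wraps around the hue axis, so two ranges are thresholded and combined.
    # These bounds are illustrative assumptions, not values from the patent.
    mask_low = cv2.inRange(hsv, (0, 80, 60), (10, 255, 255))
    mask_high = cv2.inRange(hsv, (170, 80, 60), (180, 255, 255))
    return cv2.bitwise_or(mask_low, mask_high)
```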
  • In step S3, the extracted double-line image is converted into a single line; that is, the width of the extracted double line is expanded and then contracted so that the two strokes merge into a single line.
  • The double line is detected, for example, by a known processing method that detects a boundary using a threshold value.
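  • One way to realize this expand-and-contract conversion is a morphological closing (dilation followed by erosion), sketched below; the kernel size is an assumed value, since the patent only states that the line width is expanded and contracted.

```python
import cv2
import numpy as np

def merge_double_line(extracted: np.ndarray, kernel_size: int = 15) -> np.ndarray:
    """Dilate then erode the binary extracted image so that the two strokes
    of the double line fuse into one thicker single line."""
    kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (kernel_size, kernel_size))
    dilated = cv2.dilate(extracted, kernel)
    return cv2.erode(dilated, kernel)
```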
  • In step S4, the coordinates used for trimming are detected. Since the extracted image contains only the predetermined color (red), it is first binarized (converted to monochrome data). Next, all contour lines are extracted from the binarized image data, and the extracted contour lines are approximated by straight lines. The combination of straight-line contours enclosing the largest area is then determined, and that maximum-area region is set as the trimming region Rt. Contour extraction and straight-line approximation are performed by known processing methods. Since the coordinates of the extracted image carry perspective distortion, the shape of the trimming region Rt is also distorted.
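  • A minimal sketch of this step is shown below, assuming OpenCV 4.x conventions; the approximation tolerance is an assumption, not a value given in the patent.

```python
import cv2
import numpy as np

def find_trimming_region(extracted: np.ndarray):
    """Binarize, extract all contours, approximate them with straight lines,
    and return the polygon enclosing the largest area (trimming region Rt)."""
    _, binary = cv2.threshold(extracted, 127, 255, cv2.THRESH_BINARY)
    # OpenCV 4.x returns (contours, hierarchy).
    contours, _ = cv2.findContours(binary, cv2.RETR_LIST, cv2.CHAIN_APPROX_SIMPLE)
    best_polygon, best_area = None, 0.0
    for contour in contours:
        # Straight-line approximation of the contour; epsilon is an assumed tolerance.
        approx = cv2.approxPolyDP(contour, 0.01 * cv2.arcLength(contour, True), True)
        area = cv2.contourArea(approx)
        if area > best_area:
            best_polygon, best_area = approx, area
    return best_polygon, best_area  # vertices r1..rN of Rt and its area
```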
  • In step S5, it is determined whether or not the area of the trimming region Rt is larger than a predetermined area. If it is larger, the process of step S6 is performed; if it is smaller, the process ends.
  • In step S6, the vertex coordinates for keystone correction are calculated.
  • The vertex coordinates for trapezoid correction are obtained from the vertex coordinates constituting the trimming region Rt obtained in step S4.
  • FIG. 5 is a schematic diagram for explaining a method of determining vertex coordinates for trapezoid correction.
  • The trimming region Rt is composed of a plurality of line segments that connect the vertices r1 to r10 in order. The angle formed by each pair of line segments is obtained; if the obtained angle is equal to or greater than a predetermined angle (for example, 70 degrees), the coordinates of the intersection formed by the two straight lines obtained by extending those two line segments are computed and registered as an intersection candidate.
  • If the angle is smaller than the predetermined angle, the intersection candidate is excluded. This computation of intersection candidates is performed for every combination of two line segments (brute force). Then, among the obtained intersection candidates, the four intersections whose coordinates are located outermost are designated as the vertex coordinates P1 to P4 for trapezoid correction.
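  • The following is a minimal sketch of this brute-force candidate search, under stated assumptions: NumPy is used, and "outermost" is interpreted as the four points extreme along the image diagonals, which is one plausible reading of the patent's wording.

```python
import itertools
import numpy as np

def keystone_vertices(polygon: np.ndarray, min_angle_deg: float = 70.0) -> np.ndarray:
    """Intersect every pair of extended polygon edges and keep the four
    outermost intersections as the trapezoid-correction vertices P1..P4."""
    pts = polygon.reshape(-1, 2).astype(float)
    edges = [(pts[i], pts[(i + 1) % len(pts)]) for i in range(len(pts))]
    candidates = []
    for (a1, a2), (b1, b2) in itertools.combinations(edges, 2):
        d1, d2 = a2 - a1, b2 - b1
        cross = d1[0] * d2[1] - d1[1] * d2[0]
        if abs(cross) < 1e-9:
            continue  # parallel lines never intersect
        cos_angle = abs(np.dot(d1, d2)) / (np.linalg.norm(d1) * np.linalg.norm(d2))
        if np.degrees(np.arccos(np.clip(cos_angle, -1.0, 1.0))) < min_angle_deg:
            continue  # angle between the segments too shallow: candidate excluded
        t = ((b1 - a1)[0] * d2[1] - (b1 - a1)[1] * d2[0]) / cross
        candidates.append(a1 + t * d1)
    candidates = np.array(candidates)
    s = candidates.sum(axis=1)               # x + y
    d = candidates[:, 1] - candidates[:, 0]  # y - x
    return np.array([candidates[s.argmin()],   # top-left
                     candidates[d.argmin()],   # top-right
                     candidates[s.argmax()],   # bottom-right
                     candidates[d.argmax()]])  # bottom-left
```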
  • In step S7, the user finely adjusts the coordinate positions of the vertex coordinates P1 to P4 on the display screen.
  • Specifically, the mobile terminal device 1 displays, superimposed on the display unit 4, the trapezoid-correction vertex coordinates P1 to P4 obtained in step S6, the coordinates indicating the trimming area Rt, and the captured image showing the capture target area Rn.
  • The coordinate positions of the vertex coordinates P1 to P4, finely adjusted based on user input from the touch panel 5, are then recorded.
  • In step S8, the vertex coordinates after keystone correction are calculated.
  • The vertex coordinates after trapezoid correction are obtained from the trapezoid-correction vertex coordinates P1 to P4 determined in step S6 or S7.
  • FIG. 6 is a schematic diagram for explaining the method of determining the vertex coordinates after trapezoid correction. First, the angle formed by each pair of opposite sides (two straight lines) of the trapezoid defined by the vertex coordinates P1 to P4 is obtained, and the two sides having the smaller inclination are selected. This is a process for discriminating the direction of the perspective distortion; in the present embodiment, the perspective distortion lies in the Y-axis direction shown in the figure, so the corresponding pair of sides is selected.
  • Next, for the selected sides, the inclinations θ1 and θ2 with respect to the adjacent sides are obtained.
  • The average of the two obtained inclinations θ1 and θ2 is computed and set as the trapezoid inclination θav.
  • The obtained trapezoid inclination θav serves as an index representing the degree of perspective distortion of the captured image.
  • The aspect ratio correspondence table is a correspondence table between the trapezoid inclination θav and the aspect ratio of the image after trapezoid correction, prepared in advance by actual measurement using the imaging unit 3 of the mobile terminal device 1.
  • Table 1 shows an example of the aspect ratio correspondence table.
  • In the table, the constant k is the lateral magnification of the short side (side P1-P4) when the long side of the trapezoid (side P2-P3 in this embodiment) is taken as 1.
  • Using this constant, the coordinate positions of the four vertices P1x to P4x after trapezoid correction are calculated and recorded.
  • In the present embodiment, side P2-P3 is the long side of the trapezoid and side P1-P4 is the short side, so the corrected coordinate positions P1x and P4x are calculated from the coordinate positions of the points P1 and P4 on the short side P1-P4 and the lateral magnification k.
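  • A rough sketch of this step is given below under several assumptions: the table values are placeholders for the device-specific Table 1, and the inclination and stretching geometry is a simplified reading of the procedure described above.

```python
import numpy as np

# Placeholder for Table 1: trapezoid inclination (degrees) -> lateral magnification k.
# The real table is prepared in advance by measurement with the device's imaging unit.
ASPECT_TABLE = [(0.0, 1.00), (5.0, 1.05), (10.0, 1.12), (15.0, 1.20)]

def corrected_short_side(p1, p2, p3, p4):
    """Average the inclinations of the two slanted sides, look up k for the
    resulting trapezoid inclination, and stretch the short side P1-P4 by k."""
    p1, p2, p3, p4 = (np.asarray(p, dtype=float) for p in (p1, p2, p3, p4))
    # Inclination of each slanted side measured from the vertical (Y) axis.
    theta1 = np.degrees(np.arctan2(abs(p2[0] - p1[0]), abs(p2[1] - p1[1])))
    theta2 = np.degrees(np.arctan2(abs(p3[0] - p4[0]), abs(p3[1] - p4[1])))
    theta_av = (theta1 + theta2) / 2.0             # index of the perspective distortion
    k = min(ASPECT_TABLE, key=lambda row: abs(row[0] - theta_av))[1]
    mid = (p1 + p4) / 2.0                           # stretch about the midpoint of P1-P4
    p1x = mid + (p1 - mid) * k
    p4x = mid + (p4 - mid) * k
    return theta_av, p1x, p4x                       # P2 and P3 on the long side stay fixed
```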
  • In step S9, trapezoid correction of the captured image is performed, creating a distortion-free front-view captured image from the perspective-distorted one. Since the vertex coordinates P1 to P4 before trapezoid correction and the vertex coordinates P1x to P4x after trapezoid correction have already been obtained on the extracted image, perspective projection transformation is performed using the pre- and post-correction vertex coordinates P1 to P4 and P1x to P4x, thereby applying trapezoid correction to the perspective-distorted captured image.
  • The captured image after keystone correction is a front-view captured image without perspective distortion. Since perspective projection transformation is well known, a detailed description is omitted in this specification.
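  • With OpenCV, the perspective projection transformation of steps S9 and S10 can be sketched as follows; the output size and the exact correspondence of P1..P4 to P1x..P4x are assumptions of this illustration.

```python
import cv2
import numpy as np

def keystone_correct(captured_bgr, src_quad, dst_quad, out_size):
    """Warp the perspective-distorted capture into a front-view image using the
    vertex coordinates before (P1..P4) and after (P1x..P4x) trapezoid correction."""
    src = np.asarray(src_quad, dtype=np.float32)
    dst = np.asarray(dst_quad, dtype=np.float32)
    matrix = cv2.getPerspectiveTransform(src, dst)
    front_view = cv2.warpPerspective(captured_bgr, matrix, out_size)
    return front_view, matrix

def transform_region(region_pts, matrix):
    """Apply the same homography to the trimming-region vertices (step S10)."""
    pts = np.asarray(region_pts, dtype=np.float32).reshape(-1, 1, 2)
    return cv2.perspectiveTransform(pts, matrix).reshape(-1, 2)
```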
  • In step S10, trapezoid correction of the trimming area is performed, and a distortion-free front-view trimming area is created from the perspective-distorted trimming area Rt.
  • Perspective projection transformation is performed using the pre- and post-correction vertex coordinates P1 to P4 and P1x to P4x, and the coordinates of the perspective-distorted trimming region Rt are thereby keystone-corrected.
  • The trimming area after keystone correction is a front-view trimming area without perspective distortion.
  • In step S11, the user finely adjusts the coordinate positions of the vertex coordinates of the trimming area on the display screen.
  • Specifically, the mobile terminal device 1 displays on the display unit 4, superimposed, the coordinates indicating the front-view trimming area obtained in step S10 and the front-view captured image obtained in step S9, and records the coordinate positions of the trimming-area vertex coordinates as finely adjusted based on user input from the touch panel 5.
  • In step S12, the captured image is trimmed.
  • A mask image for the trimming process is created from the coordinates of the front-view trimming region created in step S10 or step S11.
  • The front-view captured image created in step S9 is then trimmed using the created mask image. Since the vertex coordinates and the captured image are both in front view, the captured image after trimming is a front-view image without perspective distortion.
  • A known method is used for trimming an image with a mask image.
  • The region outside the trimming region in the captured image is filled with, for example, a predetermined color (for example, white).
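  • A minimal sketch of such mask-based trimming, assuming OpenCV and a white fill color, is:

```python
import cv2
import numpy as np

def trim_with_mask(front_view_bgr, front_view_region, fill_color=(255, 255, 255)):
    """Keep only the front-view trimming region; everything outside it is
    filled with a plain color (white here)."""
    mask = np.zeros(front_view_bgr.shape[:2], dtype=np.uint8)
    cv2.fillPoly(mask, [np.asarray(front_view_region, dtype=np.int32)], 255)
    trimmed = front_view_bgr.copy()
    trimmed[mask == 0] = fill_color
    return trimmed
```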
  • In step S13, the image quality of the trimmed captured image is adjusted.
  • Specifically, the white balance (color temperature) of the trimmed image is adjusted according to the shooting environment of the captured image, the histogram is equalized, and gamma correction is performed.
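  • The sketch below illustrates two of these adjustments (histogram equalization and gamma correction); the gamma value is an assumption, and white-balance handling, which depends on the shooting environment, is omitted.

```python
import cv2
import numpy as np

def adjust_quality(trimmed_bgr, gamma=1.2):
    """Equalize the luminance histogram, then apply a simple gamma correction."""
    ycrcb = cv2.cvtColor(trimmed_bgr, cv2.COLOR_BGR2YCrCb)
    y, cr, cb = cv2.split(ycrcb)
    y = cv2.equalizeHist(y)                               # histogram equalization on luma
    equalized = cv2.cvtColor(cv2.merge((y, cr, cb)), cv2.COLOR_YCrCb2BGR)
    table = (((np.arange(256) / 255.0) ** (1.0 / gamma)) * 255).astype(np.uint8)
    return cv2.LUT(equalized, table)                      # gamma correction via lookup table
```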
  • In step S14, it is determined whether or not a double line exists in the captured image.
  • A double-line detection process is performed on the perspective-distorted captured image acquired in step S1; if a double line exists, the process of step S15 is performed, and if no double line exists, the process ends.
  • The double line is detected, for example, by a known processing method that detects a boundary using a threshold value.
  • In step S15, the trimmed image whose image quality has been adjusted is stored in the storage unit 6, and the process ends.
  • As described above, the mobile terminal device 1 can trim the marked area of the imaged article and save it as image information. Since the method is as simple as enclosing the region of the article to be trimmed with the marker pen M, image data can easily be trimmed even when away from home. Further, since the stored trimmed image is a front-view image without perspective distortion, the newspaper article or magazine clipping stored as image information has no distortion, and the user can accurately read the contents of the article.
  • In the above embodiment, the mobile terminal device 1 applies both the trimming process and the trapezoid correction to the captured image.
  • However, the mobile terminal device 1 may apply only the trimming process to the captured image; that is, the trapezoid correction of the captured image is an optional process performed according to the user's preference.
  • In the above embodiment, all contour lines are extracted and the trimming area is determined in step S4.
  • However, the method of obtaining the trimming area is not limited to this.
  • For example, a region that the user surrounds freehand with the marker pen M may be used directly as the trimming region.
  • In the above embodiment, the region R to be captured is marked with a double-line pattern using the marker pen M whose colored tip is divided into two protrusions.
  • However, the pattern of the surrounding line is not limited to this.
  • For example, a marker pen whose colored tip is divided into three protrusions may be used to mark the region R to be captured with a triple-line pattern.
  • Alternatively, the region R to be captured may be surrounded by a pattern of three lines of different widths, using a marker pen whose three protrusions differ in width.
  • The process of converting the double-line image into a single line in step S3 and the process of determining the presence of a double line in step S14 are optional.
  • When the line surrounding the region R to be captured is a single line, these processes can be omitted, and the captured image can be trimmed using the region surrounded by the single line as the trimming region.
  • In the above embodiment, ordinary dye-based or pigment-based ink is used for the marker pen, but an infrared ink that reflects or absorbs infrared light may be used instead. When the area R to be captured is marked with infrared ink, the infrared light is reflected or absorbed at the marked portion, and this may be detected on the mobile terminal device 1 side to identify the capture target area Rn.
  • In the above embodiment, the mobile terminal device 1 records the coordinate positions of the finely adjusted vertex coordinates P1 to P4 in step S7, and records the finely adjusted vertex coordinates of the front-view trimming area in step S11; however, these fine-adjustment processes are optional and may be omitted.
  • Although step S14 determines whether a double line exists in the captured image, the result of detecting the double line in the captured image in step S3 may instead be stored, for example, as a Boolean flag, and the determination in step S14 may be made based on the stored flag.
  • In the above embodiment, white balance adjustment, histogram equalization, and gamma correction are given as examples of adjusting the image quality of the trimmed captured image.
  • However, the present invention is not limited to these; any image-quality adjustment suitable for browsing the trimmed image on the mobile terminal device 1 can be applied as appropriate.
  • In the above embodiment, the area outside the trimming region in the captured image is filled with a predetermined color.
  • Alternatively, this area may be made transparent, for example in the form of a transparent GIF.
  • According to the present invention, it is possible to trim and save a predetermined area of a captured image even when away from home. In addition, when an imaging target is imaged by a portable information terminal device having an imaging unit, the captured image can be stored after its distortion is corrected. Newspaper articles and magazine clippings captured and converted into image information are free of distortion, and the user can read the contents of the articles accurately.

Abstract

The purpose of the present invention is to provide an information terminal device capable of trimming and storing a prescribed region of a captured image even away from home, a captured image processing system, method, and computer-readable recording medium recording a program. This portable information terminal device is provided with an imaging unit and a control unit, and performs trimming processing on a prescribed region of a captured image acquired from the imaging unit. The prescribed region of an imaging subject is surrounded by a line having a prescribed color, and a control unit acquires from the imaging unit a captured image in which the prescribed region of the imaging subject is captured (S1). From the captured image, an extracted image is generated by extracting only a prescribed color which is the trimming subject (S2). In the extracted image, multiple contour lines are extracted which correspond to the prescribed region of the captured image, and the trimming region is set to be the region with the largest area surrounded by said contour lines (S4). The region of the captured image indicated by the trimming region is then trimmed (S12).

Description

Information terminal device, captured image processing system, method, and recording medium recording program
 The present invention relates to a captured image processing method, and more specifically to an information terminal device, a captured image processing system, a method, and a computer-readable recording medium recording a program, which make it possible to trim and store a predetermined area of a captured image even when the user is away from home.
 Conventionally, a general-purpose scanner has been used to capture newspaper articles or magazine clippings as image data. However, because scanners are installed only in specific locations such as offices and convenience stores, an article of interest found while away from home could not easily be converted into image data and saved.
 On the other hand, with the development of information processing technology in recent years, there is a technology for storing, via a network, image data captured by an imaging device included in a mobile terminal device such as a mobile phone or a smartphone (see, for example, Patent Document 1). In the technique described in Patent Document 1, digital image data collected by a portable information device such as a camera-equipped PDA or a portable personal computer is transmitted to a server via a network, and the server performs image processing such as adjustment of brightness, color, and size on the received image data.
 Patent Document 1: JP 2002-41502 A
 Although the technique described in Patent Document 1 offers the convenience that image data collected while away from home can be saved via a network, the process of trimming only the necessary portions after conversion into image data is very troublesome. In addition, even when a user tries to trim image data, the operability of a mobile phone or similar device held by a user who is out is insufficient. Development of a simple method that allows image data to be trimmed easily even on a compact information device such as a mobile phone has therefore been desired.
 Furthermore, when a newspaper article or magazine clipping is imaged using the camera function of a mobile terminal device, the captured image is distorted (perspective distortion). When a general scanner is used for the purpose of document storage, the plane containing the imaging object (the document page) is fixed at the glass surface of the flatbed, so such distortion does not occur. With the camera function of a mobile terminal device, however, the imaging unit captures the image from a direction that does not coincide with the normal direction of the plane containing the imaging target; it is difficult to align the imaging direction accurately, the captured image becomes distorted, and the content of the article is hard to read even when the user browses the captured image to check its content.
 The present invention has been made to solve the above-described problems, and an object of the present invention is to provide an information terminal device, a captured image processing system, a method, and a computer-readable recording medium recording a program, capable of trimming and storing a predetermined region of a captured image even when away from home.
 A further object of the present invention is to provide an information terminal device, a captured image processing system, a method, and a computer-readable recording medium recording a program that can correct distortion before storing the captured image when an imaging target is imaged by a portable information terminal device having an imaging unit.
 To achieve the above objects, an information terminal device according to the present invention is an information terminal device that includes an imaging unit and a control unit and performs a trimming process on a predetermined region of a captured image acquired from the imaging unit. A predetermined region of an imaging object is surrounded by a line having a predetermined color. The control unit acquires, from the imaging unit, a captured image in which the predetermined region of the imaging object is captured; generates, from the captured image, an extracted image in which only the predetermined color to be trimmed is extracted; extracts, in the extracted image, a plurality of contour lines corresponding to the predetermined region of the captured image; determines the region with the largest area enclosed by those contour lines as the trimming region; and trims the region of the captured image specified by the trimming region.
 In the information terminal device according to the present invention, it is preferable that, after determining the trimming region, the control unit further obtains a plurality of intersection candidates, each formed by two straight lines, from combinations of the straight lines obtained by extending the line segments constituting the trimming region; determines, among the obtained intersection candidates, the four whose coordinates lie outermost as first vertex coordinates for trapezoid correction; calculates, from the four first vertex coordinates, an index of the distortion of the trapezoid defined by them; determines second vertex coordinates after trapezoid correction based on the calculated index; performs perspective projection transformation of the captured image using the first and second vertex coordinates to generate the captured image in front view; performs perspective projection transformation of the trimming region using the first and second vertex coordinates to generate the trimming region in front view; and trims the region of the front-view captured image specified by the front-view trimming region.
 A captured image processing system according to the present invention includes an information terminal device having an imaging unit and a control unit, and a writing instrument, and the information terminal device performs a trimming process on a predetermined region of a captured image acquired from the imaging unit. A predetermined region of an imaging object is surrounded, using the writing instrument, by a line having a predetermined color. The control unit acquires, from the imaging unit, a captured image in which the predetermined region of the imaging object is captured; generates, from the captured image, an extracted image in which only the predetermined color to be trimmed is extracted; extracts, in the extracted image, a plurality of contour lines corresponding to the predetermined region of the captured image; determines the region with the largest area enclosed by those contour lines as the trimming region; and trims the region of the captured image specified by the trimming region.
 In the captured image processing system according to the present invention, it is preferable that, after determining the trimming region, the control unit further obtains a plurality of intersection candidates, each formed by two straight lines, from combinations of the straight lines obtained by extending the line segments constituting the trimming region; determines, among the obtained intersection candidates, the four whose coordinates lie outermost as first vertex coordinates for trapezoid correction; calculates, from the four first vertex coordinates, an index of the distortion of the trapezoid defined by them; determines second vertex coordinates after trapezoid correction based on the calculated index; performs perspective projection transformation of the captured image using the first and second vertex coordinates to generate the captured image in front view; performs perspective projection transformation of the trimming region using the first and second vertex coordinates to generate the trimming region in front view; and trims the region of the front-view captured image specified by the front-view trimming region.
 A captured image processing method according to the present invention is a method of trimming a predetermined region of a captured image acquired from the imaging unit in an information terminal device including an imaging unit and a control unit. A predetermined region of an imaging object is surrounded by a line having a predetermined color, and the method includes a step in which the control unit acquires, from the imaging unit, a captured image in which the predetermined region of the imaging object is captured; a step of generating, from the captured image, an extracted image in which only the predetermined color to be trimmed is extracted; a step of extracting, in the extracted image, a plurality of contour lines corresponding to the predetermined region of the captured image and determining the region with the largest enclosed area as the trimming region; and a step of trimming the region of the captured image specified by the trimming region.
 In the captured image processing method according to the present invention, it is preferable that the method further includes, after the control unit determines the trimming region, a step of obtaining a plurality of intersection candidates, each formed by two straight lines, from combinations of the straight lines obtained by extending the line segments constituting the trimming region, and determining, among the obtained intersection candidates, the four whose coordinates lie outermost as first vertex coordinates for trapezoid correction; a step of calculating, from the four first vertex coordinates, an index of the distortion of the trapezoid defined by them and determining second vertex coordinates after trapezoid correction based on the calculated index; a step of performing perspective projection transformation of the captured image using the first and second vertex coordinates to generate the captured image in front view; a step of performing perspective projection transformation of the trimming region using the first and second vertex coordinates to generate the trimming region in front view; and a step of trimming the region of the front-view captured image specified by the front-view trimming region.
 A computer-readable recording medium according to the present invention records a captured image processing program for an information terminal device including an imaging unit and a control unit, the program causing a computer to perform a trimming process on a predetermined region of a captured image acquired from the imaging unit. A predetermined region of an imaging object is surrounded by a line having a predetermined color, and the program causes the computer to execute a step in which the control unit acquires, from the imaging unit, a captured image in which the predetermined region of the imaging object is captured; a step of generating, from the captured image, an extracted image in which only the predetermined color to be trimmed is extracted; a step of extracting, in the extracted image, a plurality of contour lines corresponding to the predetermined region of the captured image and determining the region with the largest enclosed area as the trimming region; and a step of trimming the region of the captured image specified by the trimming region.
 In the computer-readable recording medium recording the captured image processing program according to the present invention, it is preferable that the program further causes the computer to execute, after the control unit determines the trimming region, a step of obtaining a plurality of intersection candidates, each formed by two straight lines, from combinations of the straight lines obtained by extending the line segments constituting the trimming region, and determining, among the obtained intersection candidates, the four whose coordinates lie outermost as first vertex coordinates for trapezoid correction; a step of calculating, from the four first vertex coordinates, an index of the distortion of the trapezoid defined by them and determining second vertex coordinates after trapezoid correction based on the calculated index; a step of performing perspective projection transformation of the captured image using the first and second vertex coordinates to generate the captured image in front view; a step of performing perspective projection transformation of the trimming region using the first and second vertex coordinates to generate the trimming region in front view; and a step of trimming the region of the front-view captured image specified by the front-view trimming region.
 According to the present invention, it is possible to trim and save a predetermined area of a captured image even when away from home. In addition, when an imaging target is imaged by a portable information terminal device having an imaging unit, the captured image can be stored after its distortion is corrected. Newspaper articles and magazine clippings captured and converted into image information are free of distortion, and the user can read the contents of the articles accurately.
 FIG. 1 is a schematic configuration diagram of a mobile terminal device according to an embodiment of the present invention, where (a) is a front view and (b) is a rear view. FIG. 2 is a block diagram of the mobile terminal device. FIG. 3 is a flowchart showing the processing order of the image processing method performed by the mobile terminal device according to the embodiment of the present invention. FIG. 4 is a schematic diagram for explaining the perspective distortion of an imaging object. FIG. 5 is a schematic diagram for explaining the method of determining the vertex coordinates for trapezoid correction. FIG. 6 is a schematic diagram for explaining the method of determining the vertex coordinates after trapezoid correction. FIG. 7 shows an example of the marker pen used to specify the region R to be captured.
 Hereinafter, embodiments of the present invention will be described in detail with reference to the accompanying drawings. In the following description and drawings, the same reference numerals denote the same or similar components, and redundant descriptions of such components are omitted.
 FIG. 1 is a schematic configuration diagram of a mobile terminal device according to an embodiment of the present invention; (a) is a front view and (b) is a rear view. FIG. 2 is a block diagram of the mobile terminal device.
 As shown in FIGS. 1 and 2, the mobile terminal device 1 includes a device main body 2, an imaging unit 3, a display unit 4, a touch panel 5, a storage unit 6, and a control unit 7. The mobile terminal device 1 also includes an antenna for wireless communication, a microphone and a speaker for voice calls, and the like (none of which are shown). The mobile terminal device 1 is not particularly limited; examples include a smartphone, a mobile phone, and a PDA.
 A known smartphone body, provided with a power switch and various buttons, can be used as the device main body 2. The imaging unit 3 has a known configuration that captures a subject image incident through a lens, and can image a newspaper article, a magazine clipping, or other material to be captured. The imaging unit 3 may include, for example, an image sensor such as a CCD (Charge Coupled Device) or CMOS (Complementary Metal Oxide Semiconductor) sensor that outputs an analog electric signal; it photoelectrically converts the subject image incident through the lens, converts the analog electric signal from the sensor into a digital electric signal, and outputs image data.
 A known display device such as a liquid crystal display or an organic EL display can be used as the display unit 4, and image information and the like can be displayed on its display screen. The touch panel 5 is a known touch panel capable of recognizing the position touched by the user, and is arranged over the display unit 4. A touch panel of a known type, such as a resistive, surface acoustic wave, electromagnetic induction, or capacitive type, can be used. Various inputs can be made by the user touching the touch panel 5; for example, instructions for imaging by the imaging unit 3 and for display on the display unit 4 can be entered by touch.
 The storage unit 6 is composed of a storage medium, such as a known hard disk or semiconductor memory, that stores programs and data for information processing. The storage unit 6 can store image information captured by the imaging unit 3 (hereinafter also simply referred to as a "captured image"). The captured image is stored in any of various image formats, such as JPEG or GIF.
 The control unit 7 is composed of a processor, such as a known CPU, that performs information processing based on programs and data. By executing a program stored in the storage unit 6, the control unit 7 can control each of the components described above.
 In the following description, the processing is described as an image processing method performed by the mobile terminal device 1 unless otherwise noted. Processing performed by the mobile terminal device 1 actually means processing performed by the control unit 7 of the mobile terminal device 1. The control unit 7 temporarily stores data needed for the processing (such as intermediate data) using the storage unit 6 as a work area, and records data to be kept long term, such as calculation results, in the storage unit 6 as appropriate. The program used to perform the processing of steps S1 to S15 described below is recorded in advance in the storage unit 6, for example in executable form (for example, generated by compiling a programming language such as C), and the mobile terminal device 1 performs the processing using the program recorded in the storage unit 6.
 FIG. 3 is a flowchart showing the processing order of the image processing method performed by the mobile terminal device according to the embodiment of the present invention. The processing order of the image processing method according to this embodiment is described in detail below with reference to the flowchart of FIG. 3.
 In the present embodiment, as shown in FIG. 4, a case is described in which the captured image has perspective distortion (hereinafter simply referred to as "perspective") in the Y-axis direction in the figure. In the following description, keystone correction (trapezoidal correction) means converting a captured image in which perspective appears, because the imaging direction does not coincide with the normal direction of the plane containing the object, into a perspective-free, front-view image, as if the imaging direction coincided with that normal direction.
 In step S1, a predetermined area of the newspaper article to be captured is imaged. The user marks a predetermined region Rn of the newspaper article N by enclosing it in a predetermined color (for example, red) with the marker pen M, and then uses the imaging unit 3 to image an area that includes the marked region Rn. The mobile terminal device 1 records the captured image. In this embodiment, as shown in FIG. 4, the captured image has perspective in the Y-axis direction in the figure, and the region Rn to be captured also has perspective in the Y-axis direction. The marker pen M used here has a tip whose colored part is divided into two protrusions, for example a V-shaped or U-shaped tip. When such a marker pen M is used for marking, the region Rn to be captured is enclosed by a double-line pattern.
 After the captured image with perspective has been acquired, in step S2 an image (extracted image) is generated by extracting, from the captured image, only the color to be used for trimming. The extracted image is image data used to recognize the trimming region Rt described later. Because the vertical and horizontal sizes of the extracted image match those of the captured image, the extracted image functions as a so-called "layer" over the captured image. The extracted image is generated by a known process, for example thresholding. Since the region Rn of the newspaper article N to be captured is enclosed and marked with a predetermined color, when only that color is extracted from the captured image, the largest region enclosed by the extracted color in the extracted image becomes the trimming region Rt described later. Because the captured image has perspective, the extracted image also has perspective. Also, since the region Rn to be captured was enclosed by a double-line pattern, the region enclosed by the extracted color in the extracted image is likewise enclosed by a double-line pattern. In the subsequent processing, the layer of the extracted image is converted into coordinates, and the extracted image is specified by coordinates instead of by the layer.
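As a rough illustration of this color-extraction step, the following sketch assumes OpenCV and an HSV threshold for a red marker; the library choice, the function names, and the threshold values are assumptions for illustration and are not taken from the original disclosure.

```python
import cv2

def extract_marker_color(captured_bgr):
    """Return a binary 'layer' of the same size as the captured image,
    white wherever the (red) marker color is present."""
    hsv = cv2.cvtColor(captured_bgr, cv2.COLOR_BGR2HSV)
    # Red wraps around the hue axis, so two ranges are combined (values are assumed).
    mask_lo = cv2.inRange(hsv, (0, 80, 60), (10, 255, 255))
    mask_hi = cv2.inRange(hsv, (170, 80, 60), (180, 255, 255))
    return cv2.bitwise_or(mask_lo, mask_hi)   # acts as a layer over the captured image
```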
 In step S3, the extracted double-line image is converted into a single line. That is, the extracted double line is dilated and then eroded so that it becomes one line (a single line). The double line is detected by a known processing method, for example boundary detection using a threshold.
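One plausible way to merge the double marker line into a single line is a dilate-then-erode (morphological closing) pass over the mask from the previous step; this OpenCV sketch and its kernel size are assumptions rather than the patent's specified method.

```python
import cv2

def double_line_to_single(mask, kernel_size=15):
    """Dilate then erode the extracted mask so the two parallel marker strokes
    fuse into one continuous single line."""
    kernel = cv2.getStructuringElement(cv2.MORPH_RECT, (kernel_size, kernel_size))
    return cv2.morphologyEx(mask, cv2.MORPH_CLOSE, kernel)
```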
 In step S4, the coordinates to be used for trimming are detected. Since the extracted image is image data of the predetermined color (red) only, it is first binarized (converted to black-and-white data). Next, all contour lines are extracted from the binarized image data, and each extracted contour is approximated by straight lines. The straightened contour lines are then combined to determine the region whose enclosed area is largest, and this largest region is taken as the trimming region Rt. Contour extraction and straight-line approximation are performed by known processing methods. Because the coordinates of the extracted image have perspective, the shape of the trimming region Rt also has perspective.
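A minimal sketch of step S4 under the same OpenCV assumption: binarize the extracted image, take every contour, approximate each with line segments, and keep the polygon that encloses the largest area as the trimming region Rt. The approximation tolerance (1% of the contour perimeter) is an illustrative choice.

```python
import cv2

def find_trimming_region(extracted_mask):
    """Return the polygon (vertices r1..rN, still with perspective) that encloses
    the largest area among all straight-line-approximated contours."""
    _, binary = cv2.threshold(extracted_mask, 0, 255, cv2.THRESH_BINARY | cv2.THRESH_OTSU)
    contours, _ = cv2.findContours(binary, cv2.RETR_LIST, cv2.CHAIN_APPROX_SIMPLE)
    best = None
    for cnt in contours:
        approx = cv2.approxPolyDP(cnt, 0.01 * cv2.arcLength(cnt, True), True)
        if best is None or cv2.contourArea(approx) > cv2.contourArea(best):
            best = approx
    return best
```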
 In step S5, it is determined whether the area of the trimming region Rt is larger than a predetermined area. If it is larger, the process proceeds to step S6; if it is smaller, the process ends.
 In step S6, the vertex coordinates for keystone correction are calculated from the vertex coordinates that constitute the trimming region Rt obtained in step S4. FIG. 5 is a schematic diagram for explaining the method of determining the vertex coordinates for keystone correction. As shown in FIG. 5, the trimming region Rt is composed of a plurality of line segments connecting the vertices r1 to r10 in order. For each pair of these line segments, the angle formed by the two segments is obtained. If the obtained angle is equal to or greater than a predetermined angle (for example, 70 degrees), the coordinates of the intersection of the two straight lines obtained by extending the two segments are computed and taken as an intersection candidate. If the coordinates of an intersection candidate fall outside the area of the captured image, that candidate is excluded. This intersection-candidate computation is performed for every pair of line segments (brute force). Of the obtained intersection candidates, the four intersections whose coordinates lie outermost are taken as the vertex coordinates P1 to P4 for keystone correction.
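The intersection-candidate search can be sketched as follows, assuming NumPy, that the segments are the consecutive vertex pairs r1-r2, r2-r3, and so on of the trimming region Rt, and the 70-degree example threshold from the text; picking the four outermost candidates afterwards proceeds as described above.

```python
import itertools
import numpy as np

def intersection_candidates(segments, img_w, img_h, min_angle_deg=70.0):
    """segments: list of (p, q) endpoint pairs taken from the trimming region Rt.
    Returns intersections of the extended lines of every pair of segments whose
    mutual angle is at least min_angle_deg and whose intersection lies inside the image."""
    candidates = []
    for (p1, p2), (p3, p4) in itertools.combinations(segments, 2):
        d1 = np.asarray(p2, float) - np.asarray(p1, float)
        d2 = np.asarray(p4, float) - np.asarray(p3, float)
        cos = abs(np.dot(d1, d2)) / (np.linalg.norm(d1) * np.linalg.norm(d2) + 1e-9)
        angle = np.degrees(np.arccos(np.clip(cos, -1.0, 1.0)))
        if angle < min_angle_deg:
            continue                      # the two segments are too close to parallel
        denom = d1[0] * d2[1] - d1[1] * d2[0]
        if abs(denom) < 1e-9:
            continue                      # exactly parallel lines never intersect
        diff = np.asarray(p3, float) - np.asarray(p1, float)
        t = (diff[0] * d2[1] - diff[1] * d2[0]) / denom
        x, y = np.asarray(p1, float) + t * d1
        if 0 <= x < img_w and 0 <= y < img_h:
            candidates.append((float(x), float(y)))
    return candidates
```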
 In step S7, the user fine-tunes the positions of the vertex coordinates P1 to P4 on the display screen. The mobile terminal device 1 displays, superimposed on the display unit 4, the keystone-correction vertex coordinates P1 to P4 obtained in step S6, the coordinates indicating the trimming region Rt, and the captured image showing the region Rn to be captured, and records the fine-tuned positions of the vertex coordinates P1 to P4 based on the user's input from the touch panel 5.
 In step S8, the vertex coordinates after keystone correction are calculated from the keystone-correction vertex coordinates P1 to P4 determined in step S6 or S7. FIG. 6 is a schematic diagram for explaining the method of determining the vertex coordinates after keystone correction. First, for each of the two pairs of opposite sides (two straight lines) of the trapezoid defined by the vertex coordinates P1 to P4, the angle formed by the pair is obtained, and the pair of sides with the smaller inclination is selected. This is the process that determines the direction of the perspective; in this embodiment, since the perspective is in the Y-axis direction shown in FIG. 4, the pair of sides P1-P4 and P2-P3 is selected as the pair with the smaller inclination. Next, for the longer of these two sides (side P2-P3), the inclinations α1 and α2 with respect to its two neighboring sides are obtained. The average of the two inclinations α1 and α2 is then taken as the trapezoid inclination αav, which serves as an index of the degree of perspective in the captured image.
 Next, the aspect ratio of the image after keystone correction is obtained from the computed trapezoid inclination αav and an aspect-ratio correspondence table for keystone correction prepared in advance. The correspondence table relates the trapezoid inclination αav to the aspect ratio of the image after keystone correction, and is created beforehand by actual measurement using the imaging unit 3 of the mobile terminal device 1. Table 1 shows an example of the correspondence table. In Table 1, the constant k is the lateral magnification of the short side (P1-P4) when the long side of the trapezoid (side P2-P3 in this embodiment) is taken as 1.
[Table 1: aspect-ratio correspondence table for keystone correction; reproduced only as an image in the original publication.]
From the positions of the vertex coordinates P1 to P4 and the value of the lateral magnification k obtained from the correspondence table, the positions of the four corrected vertices P1x to P4x are calculated and recorded. In this embodiment, side P2-P3 is the long side of the trapezoid and side P1-P4 the short side, so the corrected positions P1x and P4x are calculated from the positions of points P1 and P4 of the short side P1-P4 and the lateral magnification k.
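The geometry of step S8 can be sketched as below. The sketch assumes that the vertices are ordered so that P2-P3 is the long low-slope side and P1-P4 the short one, that the correspondence table is keyed directly by the inclination angle, and that the corrected short side is stretched about its own midpoint while P2 and P3 stay fixed; the table values and these constructional details are illustrative assumptions, since the actual table is measured per device and is not reproduced here.

```python
import numpy as np

# Hypothetical stand-in for Table 1: inclination alpha_av (degrees) -> lateral magnification k.
ASPECT_TABLE = [(90.0, 1.00), (85.0, 1.06), (80.0, 1.13), (75.0, 1.21)]  # assumed values

def angle_at(b, a, c):
    """Angle in degrees at vertex b between segments b-a and b-c."""
    u = np.asarray(a, float) - np.asarray(b, float)
    v = np.asarray(c, float) - np.asarray(b, float)
    cos = np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v) + 1e-9)
    return float(np.degrees(np.arccos(np.clip(cos, -1.0, 1.0))))

def corrected_vertices(P1, P2, P3, P4):
    """Return P1x, P2x, P3x, P4x for one plausible reading of step S8."""
    alpha1 = angle_at(P2, P1, P3)        # inclination of long side P2-P3 against side P2-P1
    alpha2 = angle_at(P3, P4, P2)        # inclination of long side P2-P3 against side P3-P4
    alpha_av = (alpha1 + alpha2) / 2.0   # index of the degree of perspective
    k = min(ASPECT_TABLE, key=lambda row: abs(row[0] - alpha_av))[1]
    p1, p4 = np.asarray(P1, float), np.asarray(P4, float)
    long_len = np.linalg.norm(np.asarray(P3, float) - np.asarray(P2, float))
    mid = (p1 + p4) / 2.0
    unit = (p4 - p1) / (np.linalg.norm(p4 - p1) + 1e-9)
    half = k * long_len / 2.0
    return mid - half * unit, np.asarray(P2, float), np.asarray(P3, float), mid + half * unit
```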
 In step S9, keystone correction of the captured image is performed, producing a distortion-free, front-view captured image from the captured image distorted by perspective. Since the vertex coordinates P1 to P4 before keystone correction and the vertex coordinates P1x to P4x after keystone correction have already been obtained on the extracted image, perspective projection transformation is performed using these pre- and post-correction vertex coordinates P1 to P4 and P1x to P4x, thereby applying keystone correction to the captured image with perspective. The captured image after keystone correction is a perspective-free, front-view captured image. Perspective projection transformation is well known, so a detailed description is omitted here.
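A minimal OpenCV sketch of this perspective projection transformation, assuming the source points are P1 to P4 and the destination points P1x to P4x in the same order; keeping the output canvas the same size as the input is an arbitrary choice.

```python
import cv2
import numpy as np

def keystone_correct_image(captured_bgr, src_pts, dst_pts):
    """src_pts: P1..P4 before correction; dst_pts: P1x..P4x after correction (4x2 each).
    Returns the front-view image and the homography M for reuse on coordinates."""
    M = cv2.getPerspectiveTransform(np.float32(src_pts), np.float32(dst_pts))
    h, w = captured_bgr.shape[:2]
    return cv2.warpPerspective(captured_bgr, M, (w, h)), M
```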
 In step S10, keystone correction of the trimming region is performed, producing a distortion-free, front-view trimming region from the trimming region Rt distorted by perspective. Perspective projection transformation is performed using the pre- and post-correction vertex coordinates P1 to P4 and P1x to P4x, applying keystone correction to the coordinates of the trimming region Rt with perspective. The trimming region after keystone correction is a perspective-free, front-view trimming region.
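The same homography can be applied to the trimming-region coordinates rather than to pixels; a short sketch reusing the matrix M from the previous snippet.

```python
import cv2
import numpy as np

def keystone_correct_region(region_pts, M):
    """Transform the Nx2 trimming-region vertices with the homography M."""
    pts = np.float32(region_pts).reshape(-1, 1, 2)
    return cv2.perspectiveTransform(pts, M).reshape(-1, 2)
```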
 In step S11, the user fine-tunes the positions of the vertex coordinates of the trimming region on the display screen. The mobile terminal device 1 displays, superimposed on the display unit 4, the coordinates indicating the front-view trimming region obtained in step S10 and the front-view captured image obtained in step S9, and records the fine-tuned positions of the vertex coordinates of the front-view trimming region based on the user's input from the touch panel 5.
 In step S12, the captured image is trimmed. First, a mask image for the trimming process is created from the coordinates of the front-view trimming region created in step S10 or step S11. Next, using the created mask image, the front-view captured image created in step S9 is trimmed. Because both the vertex coordinate positions and the captured image are front-view data, the trimmed captured image is a perspective-free, front-view image. A known method is used for trimming an image with a mask image. Through the trimming process, the area of the captured image outside the trimming region is filled with a predetermined color (for example, white).
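A sketch of the mask-based trimming, assuming OpenCV and white as the predetermined fill color.

```python
import cv2
import numpy as np

def trim_with_mask(front_view_bgr, front_view_region):
    """Create a mask from the front-view trimming region and blank out everything outside it."""
    mask = np.zeros(front_view_bgr.shape[:2], dtype=np.uint8)
    cv2.fillPoly(mask, [np.int32(front_view_region)], 255)
    out = np.full_like(front_view_bgr, 255)          # predetermined color: white
    out[mask == 255] = front_view_bgr[mask == 255]
    return out, mask                                  # the mask can be reused later
```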
 In step S13, the image quality of the trimmed captured image is adjusted. For example, depending on the shooting environment of the captured image, the white balance (color temperature) of the trimmed image is adjusted, its histogram is equalized, and gamma correction is applied.
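A sketch of two of the named adjustments (histogram equalization on the luminance channel and a simple gamma curve); the gamma value is arbitrary and white-balance handling is omitted, since the patent leaves the concrete methods open.

```python
import cv2
import numpy as np

def adjust_quality(trimmed_bgr, gamma=1.2):
    """Equalize the luminance histogram, then apply gamma correction via a lookup table."""
    y, cr, cb = cv2.split(cv2.cvtColor(trimmed_bgr, cv2.COLOR_BGR2YCrCb))
    equalized = cv2.cvtColor(cv2.merge([cv2.equalizeHist(y), cr, cb]), cv2.COLOR_YCrCb2BGR)
    lut = np.clip(((np.arange(256) / 255.0) ** (1.0 / gamma)) * 255.0, 0, 255).astype(np.uint8)
    return cv2.LUT(equalized, lut)
```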
 In step S14, it is determined whether a double line was present in the captured image. Double-line detection is performed on the captured image with perspective acquired in step S1; if a double line is present, the process proceeds to step S15, and if not, the process ends. The double line is detected by a known processing method, for example boundary detection using a threshold.
 In step S15, the quality-adjusted trimmed image is stored in the storage unit 6, and the process ends.
 In this way, if the user encloses and marks the area of an article to be captured in a predetermined color with the marker pen M, the mobile terminal device 1 can trim the enclosed, marked area and store it as image information. Because the method is as simple as enclosing the area of the article to be trimmed with the marker pen M, image data can easily be trimmed even away from home or the office. Furthermore, since the stored trimmed image is a perspective-free, front-view image, newspaper articles, magazine clippings, and the like converted into image information are free of distortion, and the user can read the content of the article accurately.
 The present invention has been described above with reference to a specific embodiment, but the present invention is not limited to the embodiment described above.
 In the embodiment above, the mobile terminal device 1 applied both the trimming process and the keystone correction to the captured image. However, when the user judges that the distortion of the captured image is small and keystone correction is unnecessary, the mobile terminal device 1 may apply only the trimming process. In other words, keystone correction of the captured image is an optional process performed according to the user's wishes.
 In the embodiment above, the trimming region was determined in step S4 by extracting all contour lines, but the way the trimming region is obtained is not limited to this; for example, a region enclosed freehand by the user with the marker pen M may be used as the trimming region as it is.
 In the embodiment above, the region R to be captured is enclosed and marked with a double-line pattern using the marker pen M, whose colored tip is divided into two protrusions, but the pattern of the line enclosing the region R is not limited to this. For example, as shown in FIG. 7, a marker pen whose colored tip is divided into three protrusions may be used to enclose and mark the region R to be captured with a triple-line pattern. Furthermore, the region R to be captured may be enclosed with a pattern in which the three lines have different widths, using a marker pen whose three protrusions each have a different width. By treating the pattern of the line enclosing the region R as a specific pattern and detecting the presence of that specific pattern in the captured image, the use of anything other than the specific marker pen can be excluded. Note that the conversion of the double-line image into a single line in step S3 and the determination of the presence of a double line in step S14 are optional processes; for example, when the line enclosing the region R to be captured is a single line, these processes can be omitted. In that case, the captured image can be trimmed using the region enclosed by the single line as the trimming region.
 In the embodiment above, the marker pen uses ordinary dye-based or pigment-based ink, but an infrared ink that reflects or absorbs infrared light may be used instead. If the region R to be captured is enclosed and marked with infrared-reflective ink, infrared light is reflected or absorbed at the marked portion, so the mobile terminal device 1 may detect this to identify the region Rn to be captured.
 In the embodiment above, in step S7 the mobile terminal device 1 records the fine-tuned positions of the vertex coordinates P1 to P4, and in step S11 it records the fine-tuned positions of the vertex coordinates of the front-view trimming region; however, the processes of fine-tuning these coordinate positions or the trimming region are optional and may be omitted.
 In the embodiment above, it is determined in step S14 whether a double line was present in the captured image. Alternatively, the result of detecting a double line in the captured image in step S3 may be stored, for example as a Boolean flag, and in step S14 the presence of a double line in the captured image may be determined based on the stored flag.
 In the embodiment above, white balance adjustment, histogram equalization, and gamma correction are given as examples of adjusting the image quality of the trimmed captured image, but the adjustment is not limited to these; any image quality adjustment method suitable for viewing the trimmed image on the mobile terminal device 1 can be applied as appropriate.
 In the embodiment above, the area of the captured image outside the trimming region is filled with a predetermined color after the trimming process; however, this area may instead be made transparent, for example in the form of a transparent GIF.
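A sketch of the transparent-GIF alternative, assuming Pillow and that the mask from the trimming step (255 inside the region, 0 outside) is still available; the palette handling shown is one of several possible approaches.

```python
import numpy as np
from PIL import Image

def save_transparent_gif(trimmed_bgr, mask, path="clip.gif"):
    """Save the trimmed image as a GIF whose outside-of-region pixels are transparent."""
    rgb = Image.fromarray(np.ascontiguousarray(trimmed_bgr[:, :, ::-1]))   # BGR -> RGB
    pal = rgb.convert("P", palette=Image.ADAPTIVE, colors=255)             # keep index 255 free
    data = np.array(pal)
    data[mask == 0] = 255                                                   # transparent index
    out = Image.fromarray(data, mode="P")
    out.putpalette(pal.getpalette())
    out.save(path, transparency=255)
```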
 According to the present invention, a predetermined area of a captured image can be trimmed and saved even while away from home or the office. In addition, when an object is imaged with a portable information terminal device having an imaging unit, the captured image can be saved after its distortion has been corrected. Newspaper articles, magazine clippings, and the like captured as image information are free of distortion, so the user can read the content of the article accurately.
DESCRIPTION OF SYMBOLS
1 mobile terminal device
2 device main body
3 imaging unit
4 display unit
5 touch panel
6 storage unit
7 control unit
N newspaper article
M marker pen
Rn region to be captured
Rt trimming region

Claims (11)

  1. An information terminal device comprising an imaging unit and a control unit and trimming a predetermined area of a captured image acquired from the imaging unit, wherein
     a predetermined area of an object to be imaged is enclosed by a line having a predetermined color, and
     the control unit
     acquires, from the imaging unit, a captured image obtained by imaging the predetermined area of the object,
     generates an extracted image by extracting, from the captured image, only the predetermined color to be trimmed,
     extracts, in the extracted image, a plurality of contour lines corresponding to the predetermined area of the captured image and determines, as a trimming area, the region of largest area enclosed by the contour lines, and
     trims the area of the captured image specified by the trimming area.
  2. The information terminal device according to claim 1, wherein the control unit further,
     after determining the trimming area, obtains, from combinations of a plurality of straight lines obtained by extending the line segments constituting the trimming area, a plurality of intersection candidates each formed by two of the straight lines, and determines the four intersection candidates whose coordinates lie outermost as first vertex coordinates for keystone correction,
     calculates, from the four first vertex coordinates, an index of the distortion of the trapezoid defined by the four first vertex coordinates and, based on the calculated index, determines second vertex coordinates after keystone correction,
     performs perspective projection transformation of the captured image using the first vertex coordinates and the second vertex coordinates to generate the captured image in front view,
     performs perspective projection transformation of the trimming area using the first vertex coordinates and the second vertex coordinates to generate the trimming area in front view, and
     trims the area of the front-view captured image specified by the front-view trimming area.
  3. The information terminal device according to claim 1 or 2, wherein the control unit further applies, to the trimmed front-view captured image, an image quality adjustment selected from the group consisting of color temperature adjustment, histogram equalization, and gamma correction.
  4. The information terminal device according to any one of claims 1 to 3, wherein the predetermined area of the captured image is enclosed by a predetermined pattern, and
     the control unit further
     detects the presence of the predetermined pattern in the captured image, and
     records the trimmed front-view captured image when the presence of the predetermined pattern is detected in the captured image.
  5. The information terminal device according to claim 4, wherein the predetermined pattern is created using a writing instrument whose colored tip is divided into a plurality of protrusions.
  6. A captured image processing system comprising an information terminal device having an imaging unit and a control unit, and a writing instrument, the information terminal device trimming a predetermined area of a captured image acquired from the imaging unit, wherein
     a predetermined area of an object to be imaged is enclosed, using the writing instrument, by a line having a predetermined color, and
     the control unit
     acquires, from the imaging unit, a captured image obtained by imaging the predetermined area of the object,
     generates an extracted image by extracting, from the captured image, only the predetermined color to be trimmed,
     extracts, in the extracted image, a plurality of contour lines corresponding to the predetermined area of the captured image and determines, as a trimming area, the region of largest area enclosed by the contour lines, and
     trims the area of the captured image specified by the trimming area.
  7. The captured image processing system according to claim 6, wherein the control unit further,
     after determining the trimming area, obtains, from combinations of a plurality of straight lines obtained by extending the line segments constituting the trimming area, a plurality of intersection candidates each formed by two of the straight lines, and determines the four intersection candidates whose coordinates lie outermost as first vertex coordinates for keystone correction,
     calculates, from the four first vertex coordinates, an index of the distortion of the trapezoid defined by the four first vertex coordinates and, based on the calculated index, determines second vertex coordinates after keystone correction,
     performs perspective projection transformation of the captured image using the first vertex coordinates and the second vertex coordinates to generate the captured image in front view,
     performs perspective projection transformation of the trimming area using the first vertex coordinates and the second vertex coordinates to generate the trimming area in front view, and
     trims the area of the front-view captured image specified by the front-view trimming area.
  8. A method of trimming a predetermined area of a captured image acquired from an imaging unit in an information terminal device comprising the imaging unit and a control unit, wherein a predetermined area of an object to be imaged is enclosed by a line having a predetermined color, the method comprising, by the control unit, the steps of:
     acquiring, from the imaging unit, a captured image obtained by imaging the predetermined area of the object;
     generating an extracted image by extracting, from the captured image, only the predetermined color to be trimmed;
     extracting, in the extracted image, a plurality of contour lines corresponding to the predetermined area of the captured image and determining, as a trimming area, the region of largest area enclosed by the contour lines; and
     trimming the area of the captured image specified by the trimming area.
  9. The captured image processing method according to claim 8, further comprising, by the control unit, the steps of:
     after determining the trimming area, obtaining, from combinations of a plurality of straight lines obtained by extending the line segments constituting the trimming area, a plurality of intersection candidates each formed by two of the straight lines, and determining the four intersection candidates whose coordinates lie outermost as first vertex coordinates for keystone correction;
     calculating, from the four first vertex coordinates, an index of the distortion of the trapezoid defined by the four first vertex coordinates and, based on the calculated index, determining second vertex coordinates after keystone correction;
     performing perspective projection transformation of the captured image using the first vertex coordinates and the second vertex coordinates to generate the captured image in front view;
     performing perspective projection transformation of the trimming area using the first vertex coordinates and the second vertex coordinates to generate the trimming area in front view; and
     trimming the area of the front-view captured image specified by the front-view trimming area.
  10. A computer-readable recording medium recording a program for trimming, in an information terminal device comprising an imaging unit and a control unit, a predetermined area of a captured image acquired from the imaging unit, wherein a predetermined area of an object to be imaged is enclosed by a line having a predetermined color, the program causing a computer, as the control unit, to execute the steps of:
     acquiring, from the imaging unit, a captured image obtained by imaging the predetermined area of the object;
     generating an extracted image by extracting, from the captured image, only the predetermined color to be trimmed;
     extracting, in the extracted image, a plurality of contour lines corresponding to the predetermined area of the captured image and determining, as a trimming area, the region of largest area enclosed by the contour lines; and
     trimming the area of the captured image specified by the trimming area.
  11. The computer-readable recording medium according to claim 10, recording a captured image processing program that causes the computer to further execute the steps of:
     after determining the trimming area, obtaining, from combinations of a plurality of straight lines obtained by extending the line segments constituting the trimming area, a plurality of intersection candidates each formed by two of the straight lines, and determining the four intersection candidates whose coordinates lie outermost as first vertex coordinates for keystone correction;
     calculating, from the four first vertex coordinates, an index of the distortion of the trapezoid defined by the four first vertex coordinates and, based on the calculated index, determining second vertex coordinates after keystone correction;
     performing perspective projection transformation of the captured image using the first vertex coordinates and the second vertex coordinates to generate the captured image in front view;
     performing perspective projection transformation of the trimming area using the first vertex coordinates and the second vertex coordinates to generate the trimming area in front view; and
     trimming the area of the front-view captured image specified by the front-view trimming area.
PCT/JP2012/056327 2011-12-20 2012-03-12 Information terminal device, captured image processing system, method, and recording medium recording program WO2013094231A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
JP2011277948A JP2013131801A (en) 2011-12-20 2011-12-20 Information terminal device, picked up image processing system, method and program, and recording medium
JP2011-277948 2011-12-20

Publications (1)

Publication Number Publication Date
WO2013094231A1 true WO2013094231A1 (en) 2013-06-27

Family

ID=48668141

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/JP2012/056327 WO2013094231A1 (en) 2011-12-20 2012-03-12 Information terminal device, captured image processing system, method, and recording medium recording program

Country Status (2)

Country Link
JP (1) JP2013131801A (en)
WO (1) WO2013094231A1 (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP6543062B2 (en) * 2015-03-23 2019-07-10 キヤノン株式会社 Image processing apparatus, image processing method

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JPH0262671A (en) * 1988-08-30 1990-03-02 Toshiba Corp Color editing processor
JP2005267465A (en) * 2004-03-19 2005-09-29 Casio Comput Co Ltd Image processing device, pickup image projection apparatus, image processing method and program
JP2005303941A (en) * 2004-04-16 2005-10-27 Casio Comput Co Ltd Correction reference designation device and correction reference designation method
JP2009069213A (en) * 2007-09-10 2009-04-02 Omi:Kk Map dating determining device, map dating determination method and program

Family Cites Families (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP3255676B2 (en) * 1991-11-30 2002-02-12 株式会社リコー Digital copier
KR100860940B1 (en) * 2007-01-22 2008-09-29 광주과학기술원 Method of providing contents using a color marker and system for performing the same
JP3150079U (en) * 2009-01-29 2009-04-30 洸弥 平畑 Easily double pen


Also Published As

Publication number Publication date
JP2013131801A (en) 2013-07-04

Similar Documents

Publication Publication Date Title
US10318028B2 (en) Control device and storage medium
EP3547218B1 (en) File processing device and method, and graphical user interface
JP5451888B2 (en) Camera-based scanning
US20130027757A1 (en) Mobile fax machine with image stitching and degradation removal processing
WO2018214365A1 (en) Image correction method, apparatus, device, and system, camera device, and display device
US9697431B2 (en) Mobile document capture assist for optimized text recognition
US20130239050A1 (en) Display control device, display control method, and computer-readable recording medium
KR101450782B1 (en) Image processing device and program
KR101797260B1 (en) Information processing apparatus, information processing system and information processing method
JP2017058812A (en) Image display apparatus, image display method and program
US20140049678A1 (en) Mobile terminal and ineffective region setting method
CN111064895B (en) Virtual shooting method and electronic equipment
US9779323B2 (en) Paper sheet or presentation board such as white board with markers for assisting processing by digital cameras
CN113723136A (en) Bar code correction method, device, equipment and storage medium
WO2013136602A1 (en) Imaging device with projector and imaging control method therefor
WO2013094231A1 (en) Information terminal device, captured image processing system, method, and recording medium recording program
JP6067040B2 (en) Information processing apparatus, information processing method, and program
JP2015102915A (en) Information processing apparatus, control method, and computer program
JP2017058801A (en) Image display apparatus, image display method and program
US9521270B1 (en) Changing in real-time the perspective of objects captured in images
JP2013122641A (en) Image display system, portable terminal device, control method, and control program
KR20130118704A (en) The method and aparatue to process the digital image of document for electrical archiving and transmission
KR20220002372A (en) Recording Surface Boundary Markers for Computer Vision
JP4504140B2 (en) Writing surface reproduction method, writing surface reproduction device and writing surface reproduction program
CN117032492A (en) Touch identification method, touch calibration method, related device and equipment

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 12860515

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 12860515

Country of ref document: EP

Kind code of ref document: A1