WO2013094231A1 - Information terminal device, captured image processing system, method, and recording medium recording a program - Google Patents


Info

Publication number
WO2013094231A1
Authority
WO
WIPO (PCT)
Prior art keywords
captured image
trimming
area
region
vertex coordinates
Prior art date
Application number
PCT/JP2012/056327
Other languages
English (en)
Japanese (ja)
Inventor
浩明 金田
和久 中林
Original Assignee
ナカバヤシ株式会社 (Nakabayashi Co., Ltd.)
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by ナカバヤシ株式会社 (Nakabayashi Co., Ltd.)
Publication of WO2013094231A1

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T3/00 Geometric image transformation in the plane of the image
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00 Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/60 Control of cameras or camera modules
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/10 Segmentation; Edge detection
    • G06T7/12 Edge-based segmentation
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N1/00 Scanning, transmission or reproduction of documents or the like, e.g. facsimile transmission; Details thereof
    • H04N1/387 Composing, repositioning or otherwise geometrically modifying originals
    • H04N1/3872 Repositioning or masking
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N1/00 Scanning, transmission or reproduction of documents or the like, e.g. facsimile transmission; Details thereof
    • H04N1/46 Colour picture communication systems
    • H04N1/56 Processing of colour picture signals
    • H04N1/60 Colour correction or control
    • H04N1/62 Retouching, i.e. modification of isolated colours only or in isolated picture areas only
    • H04N1/626 Detection of non-electronic marks, e.g. fluorescent markers
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/10 Image acquisition modality
    • G06T2207/10004 Still image; Photographic image
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/10 Image acquisition modality
    • G06T2207/10024 Color image
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/20 Special algorithmic details
    • G06T2207/20112 Image segmentation details
    • G06T2207/20132 Image cropping
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/30 Subject of image; Context of image processing
    • G06T2207/30176 Document
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00 Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/60 Control of cameras or camera modules
    • H04N23/62 Control of parameters via user interfaces
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00 Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/60 Control of cameras or camera modules
    • H04N23/63 Control of cameras or camera modules by using electronic viewfinders
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N5/00 Details of television systems
    • H04N5/222 Studio circuitry; Studio devices; Studio equipment
    • H04N5/262 Studio circuits, e.g. for mixing, switching-over, change of character of image, other special effects; Cameras specially adapted for the electronic generation of special effects
    • H04N5/2621 Cameras specially adapted for the electronic generation of special effects during image pickup, e.g. digital cameras, camcorders, video cameras having integrated special effects capability

Definitions

  • The present invention relates to a captured image processing method, and more specifically to an information terminal device, a captured image processing system, a method, and a computer-readable recording medium recording a program, each capable of trimming and storing a predetermined area of a captured image even when the user is away from home.
  • There is a known technology for storing, via a network, image data captured by an imaging device included in a mobile terminal device such as a mobile phone or a smartphone (see, for example, Patent Document 1).
  • In a portable information device such as a camera-equipped PDA or a portable personal computer, image processing such as adjustment of brightness, color, and size is performed on the captured data.
  • Although the technique described in Patent Document 1 offers the convenience that image data collected while out can be saved via a network, trimming only the necessary portions after conversion to image data is very troublesome. Moreover, when trying to trim image data on a mobile phone held by a user who is out, the operability is insufficient. The development of a simple method that allows image data to be trimmed easily, even on a compact information device such as a mobile phone, has therefore been desired.
  • When an image is captured with a handheld device, the captured image is distorted (has perspective).
  • With a flatbed scanner, the plane containing the imaging object (the document page) is fixed at the position of the glass surface, so such distortion does not occur.
  • With a portable terminal, however, the imaging unit often captures an image from a direction that does not coincide with the normal direction of the plane containing the imaging target. The captured image is therefore distorted, and the content of the article is difficult to read even when the user views the captured image to check it.
  • The present invention has been made to solve the above-described problems. One object is to provide an information terminal device, a captured image processing system, a method, and a computer-readable recording medium recording a program, capable of trimming and storing a predetermined region of a captured image even when away from home.
  • Another object is to provide an information terminal device, a captured image processing system, a method, and a computer-readable recording medium recording a program, capable of correcting distortion before storing a captured image when the imaging target is imaged by a portable information terminal device having an imaging unit.
  • To achieve the above objects, an information terminal device according to the present invention includes an imaging unit and a control unit, and performs a trimming process on a predetermined area of a captured image acquired from the imaging unit.
  • A predetermined region of the imaging object is surrounded in advance by a line of a predetermined color.
  • The control unit acquires, from the imaging unit, a captured image in which the predetermined region of the imaging object is captured, generates from the captured image an extracted image in which only the predetermined color to be trimmed is extracted, extracts from the extracted image a plurality of contour lines corresponding to the predetermined region of the captured image, determines the region with the largest area surrounded by the contour lines as the trimming region, and trims the area of the captured image specified by the trimming region.
  • In the information terminal device according to the present invention, it is preferable that, after determining the trimming region, the control unit further obtains a plurality of intersection candidates, each formed by two straight lines, from combinations of the straight lines obtained by extending the line segments constituting the trimming region; determines, among the obtained intersection candidates, the four candidates whose coordinates lie outermost as the four first vertex coordinates for trapezoid correction; calculates from the four first vertex coordinates a trapezoidal distortion index defined by them and determines the second vertex coordinates after trapezoid correction based on the calculated index; performs perspective projection transformation of the captured image using the first vertex coordinates and the second vertex coordinates to generate the captured image in front view; performs perspective projection transformation of the trimming region using the first vertex coordinates and the second vertex coordinates to generate the trimming region in front view; and trims the area of the front-view captured image specified by the front-view trimming region.
  • A captured image processing system according to the present invention includes an information terminal device having an imaging unit and a control unit, and a writing instrument, the information terminal device performing a trimming process on a predetermined region of the captured image acquired from the imaging unit.
  • A predetermined region of the imaging target is surrounded by a line of a predetermined color using the writing instrument.
  • The control unit acquires from the imaging unit a captured image in which the predetermined region of the imaging target is captured, generates from the captured image an extracted image in which only the predetermined color to be trimmed is extracted, extracts from the extracted image a plurality of contour lines corresponding to the predetermined region of the captured image, determines the region with the largest area surrounded by the contour lines as the trimming region, and trims the area of the captured image specified by the trimming region.
  • In the captured image processing system according to the present invention, it is preferable that, after determining the trimming region, the control unit further obtains a plurality of intersection candidates, each formed by two straight lines, from combinations of the straight lines obtained by extending the line segments constituting the trimming region; determines, among the obtained intersection candidates, the four candidates whose coordinates lie outermost as the four first vertex coordinates for trapezoid correction; calculates from the four first vertex coordinates a trapezoidal distortion index defined by them and determines the second vertex coordinates after trapezoid correction based on the calculated index; performs perspective projection transformation of the captured image using the first vertex coordinates and the second vertex coordinates to generate the captured image in front view; performs perspective projection transformation of the trimming region using the first vertex coordinates and the second vertex coordinates to generate the trimming region in front view; and trims the area of the front-view captured image specified by the front-view trimming region.
  • A captured image processing method according to the present invention is a method of performing a trimming process on a predetermined region of a captured image acquired from the imaging unit in an information terminal device including an imaging unit and a control unit.
  • The method includes a step in which a predetermined region of the imaging object is surrounded by a line of a predetermined color, and a step in which the control unit acquires from the imaging unit a captured image in which the predetermined region of the imaging object is captured.
  • The method further includes a step of generating from the captured image an extracted image in which only the predetermined color to be trimmed is extracted, a step of extracting from the extracted image a plurality of contour lines corresponding to the predetermined region and determining the region with the largest enclosed area as the trimming region, and a step of trimming the area of the captured image specified by the trimming region.
  • In the captured image processing method according to the present invention, it is preferable that the method further includes, after the control unit determines the trimming region: a step of obtaining a plurality of intersection candidates, each formed by two straight lines, from combinations of the straight lines obtained by extending the line segments constituting the trimming region, and determining, among the obtained intersection candidates, the four candidates whose coordinates lie outermost as the four first vertex coordinates for trapezoid correction; a step of calculating from the four first vertex coordinates a trapezoidal distortion index defined by them and determining the second vertex coordinates after trapezoid correction based on the calculated index; a step of performing perspective projection transformation of the captured image using the first vertex coordinates and the second vertex coordinates to generate the captured image in front view; a step of performing perspective projection transformation of the trimming region using the first vertex coordinates and the second vertex coordinates to generate the trimming region in front view; and a step of trimming the area of the front-view captured image specified by the front-view trimming region.
  • A computer-readable recording medium recording a captured image processing program according to the present invention records a program that causes an information terminal device including an imaging unit and a control unit to perform a trimming process on a predetermined area of a captured image acquired from the imaging unit: the control unit acquires from the imaging unit a captured image in which a predetermined region of the imaging object, surrounded by a line of a predetermined color, is captured; generates from the captured image an extracted image in which only the predetermined color to be trimmed is extracted; extracts from the extracted image a plurality of contour lines corresponding to the predetermined region; determines the region with the largest area surrounded by the contour lines as the trimming region; and trims the area of the captured image specified by the trimming region.
  • In the computer-readable recording medium recording the captured image processing program according to the present invention, it is preferable that the program further causes the control unit to execute, after determining the trimming region: obtaining a plurality of intersection candidates, each formed by two straight lines, from combinations of the straight lines obtained by extending the line segments constituting the trimming region; determining, among the obtained intersection candidates, the four candidates whose coordinates lie outermost as the four first vertex coordinates for trapezoid correction; calculating from the four first vertex coordinates a trapezoidal distortion index defined by them and determining the second vertex coordinates after trapezoid correction based on the calculated index; performing perspective projection transformation of the captured image using the first vertex coordinates and the second vertex coordinates to generate the captured image in front view; performing perspective projection transformation of the trimming region using the first vertex coordinates and the second vertex coordinates to generate the trimming region in front view; and trimming the area of the front-view captured image specified by the front-view trimming region.
  • According to the present invention, a predetermined area of a captured image can be trimmed and saved even while the user is out.
  • Also, when the imaging target is imaged by a portable information terminal device having an imaging unit, the captured image can be stored after its distortion is corrected.
  • A newspaper article or magazine clipping captured and converted into image information therefore has no distortion, and the user can accurately read the contents of the article.
  • FIG. 1 is a schematic configuration diagram of a mobile terminal device according to an embodiment of the present invention.
  • In FIG. 1, (a) is a front view and (b) is a back view.
  • FIG. 2 is a block diagram of the mobile terminal device.
  • the mobile terminal device 1 includes a device main body 2, an imaging unit 3, a display unit 4, a touch panel 5, a storage unit 6, and a control unit 7.
  • the mobile terminal device 1 includes an antenna for wireless communication, a microphone and a speaker for voice calls, and the like (none of which are shown).
  • The portable terminal device 1 is not particularly limited; examples include a smartphone, a mobile phone, and a PDA.
  • The imaging unit 3 has a known configuration that captures a subject image incident through a lens, and can image a newspaper article or a magazine clipping as the imaging target.
  • As such an imaging unit 3, for example, a device can be used that includes an imaging element, such as a CCD (Charge-Coupled Device) or CMOS (Complementary Metal-Oxide-Semiconductor) sensor, which photoelectrically converts the subject image incident through the lens and outputs an analog electrical signal, and a converter that converts the analog electrical signal from the element into a digital electrical signal and outputs image data.
  • As the display unit 4, a known display device such as a liquid crystal display or an organic EL display can be used, and image information and the like can be displayed on its display screen.
  • the touch panel 5 is a known touch panel that can recognize the touch position when the user touches the surface, and is disposed on the display unit 4.
  • As the touch panel 5, a known type such as a resistive film type, a surface acoustic wave type, an electromagnetic induction type, or a capacitance type can be used.
  • The user can make various inputs by touching the touch panel 5; for example, instructions for imaging by the imaging unit 3 and for display on the display unit 4 can be entered.
  • the storage unit 6 includes a storage medium such as a known hard disk or semiconductor memory that stores programs and data for information processing.
  • the storage unit 6 can store image information captured by the imaging unit 3 (hereinafter also simply referred to as “captured image”).
  • The captured image is stored in various image formats such as JPEG and GIF, for example.
  • the control unit 7 includes a processor such as a known CPU that performs information processing based on programs and data.
  • the control unit 7 can control each of the above constituent elements by executing a program stored in the storage unit 6.
  • In the following description, processing described as being performed by the mobile terminal device 1 actually means processing performed by the control unit 7 of the mobile terminal device 1.
  • the control unit 7 temporarily stores necessary data (such as intermediate data during processing) using the storage unit 6 as a work area, and appropriately records data to be stored for a long period of time such as a calculation result in the storage unit 6.
  • A program used to perform the processing of steps S1 to S15 described below is prepared in executable form (for example, generated from a programming language such as C by a compiler), recorded in advance in the storage unit 6, and the mobile terminal device 1 performs the processing using the program recorded in the storage unit 6.
  • FIG. 3 is a flowchart showing the processing order of the image processing method performed by the mobile terminal device according to the embodiment of the present invention.
  • the processing order of the image processing method according to the embodiment of the present invention will be described in detail based on the flowchart shown in FIG.
  • In this specification, trapezoidal (keystone) correction means converting a captured image, taken with the imaging direction not coinciding with the normal direction of the plane containing the imaging object, into a distortion-free front-view image, as if the imaging direction coincided with that normal direction.
  • In step S1, a predetermined area of the newspaper article to be captured is imaged.
  • The user marks a predetermined region Rn of the newspaper article N by surrounding it with a line of a predetermined color (for example, red) using the marker pen M, and then images an area including the marked region Rn with the imaging unit 3.
  • The portable terminal device 1 records the captured image.
  • The captured image has perspective distortion in the Y-axis direction in the figure, and the capture target region Rn is likewise distorted in the Y-axis direction.
  • As the marker pen M, a pen whose colored tip is divided into two protrusions in a V or U shape is used, so that the capture target region Rn is surrounded by a double-line pattern.
  • In step S2, an image is generated by extracting only the color to be trimmed from the captured image.
  • The extracted image is image data used to recognize the trimming region Rt described later. Because its vertical and horizontal dimensions match those of the captured image, the extracted image functions as a so-called "layer" over the captured image.
  • The extracted image is generated by, for example, a known processing method that extracts pixels using a threshold value. Since the capture target region Rn of the newspaper article N is marked by being surrounded with the predetermined color, when only that color is extracted from the captured image, the largest region surrounded by the extracted color becomes the trimming region Rt described later.
  • The extracted image also has perspective distortion, and since the capture target region Rn is surrounded by a double-line pattern, the region surrounded by the extracted color is likewise bounded by a double line.
  • The layer of the extracted image is then converted into coordinates, and the extracted image is subsequently handled as coordinates instead of as a layer.
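The color-extraction step (S2) can be illustrated with a minimal pure-Python sketch. The patent only states that extraction uses "a threshold value", so the per-channel tolerance, the red target color, and the image representation (rows of RGB tuples) below are illustrative assumptions, not the patent's implementation:

```python
# Sketch of step S2: build a binary "extracted image" keeping only pixels
# near the marker color. Target color and tolerance are assumed values.
def extract_color(image, target=(255, 0, 0), tol=60):
    """Return a binary mask the same size as `image`; 1 marks pixels
    whose RGB value lies within `tol` of the target marker color."""
    return [
        [1 if all(abs(c - t) <= tol for c, t in zip(px, target)) else 0
         for px in row]
        for row in image
    ]

# Tiny synthetic captured image: one reddish (marker) pixel on white paper.
img = [
    [(250, 250, 250), (240, 10, 20)],
    [(255, 255, 255), (250, 245, 240)],
]
mask = extract_color(img)
# mask -> [[0, 1], [0, 0]]
```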
  • In step S3, the extracted double-line image is converted into a single line: the width of the extracted double line is expanded and then contracted so that the two strokes merge into one line.
  • The double line is detected by a known processing method that detects boundaries using a threshold value, for example.
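The expand-and-contract operation of step S3 is essentially a morphological closing (dilation followed by erosion). A minimal one-dimensional pure-Python sketch, with the structuring-element radius chosen for illustration only:

```python
# Sketch of step S3: dilate then erode a cross-section through the double
# line so that the two thin strokes merge into a single band.
def dilate(row, r=1):
    n = len(row)
    return [1 if any(row[j] for j in range(max(0, i - r), min(n, i + r + 1)))
            else 0 for i in range(n)]

def erode(row, r=1):
    n = len(row)
    return [1 if all(row[j] for j in range(max(0, i - r), min(n, i + r + 1)))
            else 0 for i in range(n)]

# Two 1-pixel-wide lines separated by a 1-pixel gap:
section = [0, 0, 1, 0, 1, 0, 0]
closed = erode(dilate(section))
# closed -> [0, 0, 1, 1, 1, 0, 0]  (the two strokes merged into one)
```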
  • In step S4, the coordinates for trimming are detected. Since the extracted image contains only the predetermined color (red), it is first binarized into monochrome data. Next, all contour lines are extracted from the binarized image data, and the extracted contour lines are approximated by straight lines. By combining the straight-line contour segments, the region with the maximum enclosed area is determined and set as the trimming region Rt. Contour extraction and straight-line approximation are performed by known processing methods. Since the coordinates of the extracted image carry the perspective distortion, the shape of the trimming region Rt is also distorted.
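Once each candidate contour has been approximated by a polygon, the "region with the maximum enclosed area" of step S4 can be selected with the shoelace formula. A small illustrative sketch (the candidate polygons here are invented):

```python
# Sketch of the max-area selection in step S4.
def polygon_area(pts):
    """Shoelace formula: absolute area of a polygon given as (x, y) vertices."""
    n = len(pts)
    s = sum(pts[i][0] * pts[(i + 1) % n][1] - pts[(i + 1) % n][0] * pts[i][1]
            for i in range(n))
    return abs(s) / 2.0

candidates = [
    [(0, 0), (4, 0), (4, 3), (0, 3)],   # rectangle, area 12
    [(0, 0), (2, 0), (1, 2)],           # triangle, area 2
]
trimming_region = max(candidates, key=polygon_area)
# trimming_region -> the 4-by-3 rectangle (area 12)
```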
  • In step S5, it is determined whether the area of the trimming region Rt is larger than a predetermined area. If it is larger, the process proceeds to step S6; if smaller, the process ends.
  • In step S6, the vertex coordinates for keystone correction are calculated from the vertex coordinates constituting the trimming region Rt obtained in step S4.
  • FIG. 5 is a schematic diagram for explaining a method of determining vertex coordinates for trapezoid correction.
  • The trimming region Rt is composed of a plurality of line segments connecting the vertices r1 to r10 in order. For each pair of line segments, the angle they form is computed. If the angle is equal to or greater than a predetermined angle (for example, 70 degrees), the coordinates of the intersection of the two straight lines obtained by extending the segments are computed and recorded as an intersection candidate; otherwise the pair is excluded. This process is repeated exhaustively for all pairs of line segments. Then, of the resulting intersection candidates, the four whose coordinates lie outermost are designated as the trapezoid-correction vertex coordinates P1 to P4.
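The intersection-candidate search described above can be sketched in pure Python. The 70-degree threshold follows the example in the text, while the helper names and the sample segments are our own illustrative assumptions:

```python
import math

def line_intersection(p1, p2, p3, p4):
    """Intersection of the infinite lines through segments p1-p2 and p3-p4,
    or None if the lines are (nearly) parallel."""
    x1, y1 = p1; x2, y2 = p2; x3, y3 = p3; x4, y4 = p4
    d = (x1 - x2) * (y3 - y4) - (y1 - y2) * (x3 - x4)
    if abs(d) < 1e-9:
        return None
    a = x1 * y2 - y1 * x2
    b = x3 * y4 - y3 * x4
    return ((a * (x3 - x4) - (x1 - x2) * b) / d,
            (a * (y3 - y4) - (y1 - y2) * b) / d)

def angle_between(p1, p2, p3, p4):
    """Angle in degrees between the directions of the two segments."""
    v1 = (p2[0] - p1[0], p2[1] - p1[1])
    v2 = (p4[0] - p3[0], p4[1] - p3[1])
    dot = v1[0] * v2[0] + v1[1] * v2[1]
    n = math.hypot(*v1) * math.hypot(*v2)
    return math.degrees(math.acos(max(-1.0, min(1.0, dot / n))))

# Two nearly perpendicular edges of the trimming region (invented numbers):
s1 = ((0, 0), (10, 1))    # roughly horizontal edge
s2 = ((9, 10), (10, 2))   # roughly vertical edge
candidate = None
if angle_between(*s1, *s2) >= 70:   # the text's example threshold
    candidate = line_intersection(*s1, *s2)
```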
  • In step S7, the user finely adjusts the coordinate positions of the vertex coordinates P1 to P4 on the display screen.
  • The mobile terminal device 1 displays, overlapping on the display unit 4, the trapezoid-correction vertex coordinates P1 to P4 obtained in step S6, the coordinates indicating the trimming region Rt, and the captured image showing the capture target region Rn, and records the coordinate positions of P1 to P4 as finely adjusted by user input from the touch panel 5.
  • In step S8, the vertex coordinates after keystone correction are calculated from the trapezoid-correction vertex coordinates P1 to P4 determined in step S6 or S7.
  • FIG. 6 is a schematic diagram for explaining the method of determining the vertex coordinates after trapezoid correction. First, the inclinations formed by the two pairs of opposite sides (two straight lines) of the trapezoid defined by the vertex coordinates P1 to P4 are compared, and the pair of sides with the smaller inclination is selected. This is a process for determining the direction of the perspective; in the present embodiment, the perspective lies along the Y-axis direction shown in the figure, so the corresponding pair of sides is selected.
  • Next, the inclinations θ1 and θ2 formed with the adjacent sides are obtained, and the average of the two is taken as the trapezoid inclination θav.
  • The obtained trapezoid inclination θav serves as an index representing the degree of perspective distortion of the captured image.
  • The aspect ratio correspondence table is a correspondence table between the trapezoid inclination θav and the aspect ratio of the image after trapezoid correction, prepared in advance by actual measurement using the imaging unit 3 of the mobile terminal device 1.
  • Table 1 shows an example of the aspect ratio correspondence table.
  • The constant k is the lateral magnification of the short side (P1-P4) when the long side of the trapezoid (side P2-P3 in this embodiment) is taken as 1.
  • Then the coordinate positions of the four vertices P1x to P4x after trapezoid correction are calculated and recorded.
  • In the present embodiment, side P2-P3 is the long side of the trapezoid and side P1-P4 is the short side, so the corrected coordinate positions P1x and P4x are calculated from the coordinate positions of the points P1 and P4 on the short side and the lateral magnification k.
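The inclination average and the short-side stretch of step S8 can be sketched as below. Note that the patent's aspect-ratio table is obtained by actual measurement with the device's imaging unit; the table values, the stretch-about-midpoint choice, and the sample coordinates here are invented placeholders:

```python
# Hedged sketch of step S8, assuming the perspective runs along the Y axis:
# the short (far) side P1-P4 is stretched horizontally by a factor k taken
# from a (hypothetical) aspect-ratio table, about its own midpoint.
ASPECT_TABLE = [(70, 1.10), (75, 1.20), (80, 1.35)]  # (theta_av deg, k)

def lookup_k(theta_av):
    """Nearest-entry lookup in the illustrative aspect-ratio table."""
    return min(ASPECT_TABLE, key=lambda row: abs(row[0] - theta_av))[1]

def correct_short_side(p1, p4, k):
    """Stretch the short side p1-p4 by factor k about its midpoint (X only)."""
    mx = (p1[0] + p4[0]) / 2.0
    return ((mx + (p1[0] - mx) * k, p1[1]),
            (mx + (p4[0] - mx) * k, p4[1]))

theta_av = (72 + 78) / 2.0     # average of the two side inclinations
p1x, p4x = correct_short_side((2, 0), (8, 0), lookup_k(theta_av))
# theta_av = 75.0, k = 1.20: the short side widens from 6 to 7.2 units
```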
  • In step S9, trapezoidal correction of the captured image is performed, creating a distortion-free front-view captured image from the perspective-distorted one. Since the vertex coordinates P1 to P4 before correction and P1x to P4x after correction have already been obtained on the extracted image, perspective projection transformation is performed using these before-and-after vertex coordinates to correct the perspective-distorted captured image.
  • The captured image after keystone correction is a front-view image without perspective. Since perspective projection transformation is well known, its detailed description is omitted in this specification.
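The perspective projection transformation mapping the four before-correction vertices P1 to P4 onto the four after-correction vertices P1x to P4x is a planar homography. A self-contained pure-Python sketch (the sample trapezoid and rectangle coordinates are illustrative):

```python
# Sketch of steps S9/S10: estimate the 3x3 homography from four point
# correspondences and apply it to points.
def solve(A, b):
    """Gaussian elimination with partial pivoting for a small linear system."""
    n = len(A)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(col + 1, n):
            f = M[r][col] / M[col][col]
            for c in range(col, n + 1):
                M[r][c] -= f * M[col][c]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][c] * x[c] for c in range(r + 1, n))) / M[r][r]
    return x

def homography(src, dst):
    """3x3 perspective transform mapping the 4 src points onto the 4 dst points."""
    A, b = [], []
    for (x, y), (u, v) in zip(src, dst):
        A.append([x, y, 1, 0, 0, 0, -u * x, -u * y]); b.append(u)
        A.append([0, 0, 0, x, y, 1, -v * x, -v * y]); b.append(v)
    h = solve(A, b) + [1.0]
    return [h[0:3], h[3:6], h[6:9]]

def apply_h(H, pt):
    x, y = pt
    w = H[2][0] * x + H[2][1] * y + H[2][2]
    return ((H[0][0] * x + H[0][1] * y + H[0][2]) / w,
            (H[1][0] * x + H[1][1] * y + H[1][2]) / w)

# Map a distorted trapezoid (P1..P4) onto a corrected rectangle (P1x..P4x):
src = [(2, 0), (0, 10), (10, 10), (8, 0)]
dst = [(0, 0), (0, 10), (10, 10), (10, 0)]
H = homography(src, dst)
```

The same transform is applied both to the image pixels and to the trimming-region coordinates, which is why the patent reuses the vertex pairs in steps S9 and S10.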
  • In step S10, trapezoidal correction of the trimming region is performed, creating a distortion-free front-view trimming region from the perspective-distorted trimming region Rt.
  • Perspective projection transformation is performed using the vertex coordinates P1 to P4 and P1x to P4x before and after correction to correct the coordinates of the perspective-distorted trimming region Rt.
  • The trimming region after keystone correction is a front-view region without perspective.
  • In step S11, the user finely adjusts the coordinate positions of the vertex coordinates of the trimming region on the display screen.
  • The mobile terminal device 1 displays on the display unit 4 the coordinates indicating the front-view trimming region obtained in step S10 together with the captured image obtained in step S9, and records the coordinate positions of the vertex coordinates of the front-view trimming region as finely adjusted by user input from the touch panel 5.
  • In step S12, the captured image is trimmed.
  • A mask image for the trimming process is created from the coordinates of the front-view trimming region created in step S10 or S11.
  • The front-view captured image created in step S9 is trimmed using the created mask image. Since the vertex coordinates and the captured image are both in front view, the trimmed captured image is a front-view image without perspective.
  • A known method is used for trimming an image with a mask image.
  • The region of the captured image outside the trimming region is filled with a predetermined color (for example, white).
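The mask-based trimming of step S12, keeping pixels inside the front-view trimming region and filling the outside with white, can be sketched with a ray-casting point-in-polygon test (one known method; the image and polygon below are illustrative):

```python
# Sketch of step S12: fill everything outside the trimming polygon.
def point_in_polygon(x, y, poly):
    """Ray-casting test: is point (x, y) inside the polygon?"""
    inside = False
    n = len(poly)
    for i in range(n):
        (x1, y1), (x2, y2) = poly[i], poly[(i + 1) % n]
        if (y1 > y) != (y2 > y):
            xi = x1 + (y - y1) * (x2 - x1) / (y2 - y1)
            if x < xi:
                inside = not inside
    return inside

def trim_with_mask(image, poly, fill=(255, 255, 255)):
    """Keep pixels inside `poly`; fill everything outside with `fill` (white)."""
    return [
        [px if point_in_polygon(x + 0.5, y + 0.5, poly) else fill
         for x, px in enumerate(row)]
        for y, row in enumerate(image)
    ]

img = [[(0, 0, 0)] * 4 for _ in range(4)]          # 4x4 all-black image
out = trim_with_mask(img, [(1, 1), (3, 1), (3, 3), (1, 3)])
# pixels inside the square stay black; the border becomes white
```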
  • In step S13, the image quality of the trimmed captured image is adjusted.
  • The white balance (color temperature) of the trimmed image is adjusted according to the shooting environment of the captured image, the histogram is equalized, and gamma correction is performed.
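Of the image-quality adjustments in step S13, gamma correction is the simplest to illustrate. A per-channel sketch with an assumed gamma of 2.2 (the patent does not specify the value):

```python
# Sketch of the gamma-correction part of step S13.
def gamma_correct(value, gamma=2.2):
    """Apply gamma correction to one 8-bit channel value (0-255)."""
    return round(255 * (value / 255) ** (1 / gamma))

# Mid-grey brightens under an assumed gamma of 2.2:
corrected = gamma_correct(128)
# corrected -> 186
```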
  • In step S14, it is determined whether a double line is present in the captured image.
  • Double-line detection is performed on the perspective-distorted captured image recorded in step S1; if a double line exists, the process of step S15 is performed, and if no double line exists, the process ends.
  • The double line is detected by a known processing method that detects boundaries using a threshold value, for example.
  • In step S15, the trimmed image whose image quality has been adjusted is stored in the storage unit 6, and the process ends.
  • As described above, the mobile terminal device 1 can trim the marked region of a captured image and save it as image information. Since the user simply surrounds the region of the article to be trimmed with the marker pen M, image data can easily be trimmed even when away from home. Further, since the stored trimmed image is a front-view image without perspective, a newspaper article or magazine clipping converted into image information has no distortion, and the user can accurately read the contents of the article.
  • In the example above, the mobile terminal device 1 applies both the trimming process and the trapezoidal correction to the captured image.
  • However, the mobile terminal device 1 may apply only the trimming process; the trapezoidal correction of the captured image is an optional process performed according to the user's preference.
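Trapezoidal correction of this kind is conventionally done with a projective (homography) transform computed from the four vertex coordinates. The following sketch solves for the 3x3 matrix with NumPy; it is an illustrative assumption about the technique, not the patent's implementation.

```python
import numpy as np

def homography(src, dst):
    """Solve the 3x3 projective transform mapping four src points to four dst points.

    Standard direct linear method with the bottom-right matrix entry fixed to 1."""
    A, b = [], []
    for (x, y), (u, v) in zip(src, dst):
        A.append([x, y, 1, 0, 0, 0, -u * x, -u * y])
        A.append([0, 0, 0, x, y, 1, -v * x, -v * y])
        b.extend([u, v])
    h = np.linalg.solve(np.array(A, dtype=float), np.array(b, dtype=float))
    return np.append(h, 1.0).reshape(3, 3)

def apply_h(H, pt):
    """Apply the homography to one (x, y) point."""
    x, y, w = H @ np.array([pt[0], pt[1], 1.0])
    return x / w, y / w
```

Mapping the four detected vertex coordinates of the tilted region to the corners of an upright rectangle, then resampling the image through the resulting matrix, yields the front-view image.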
  • In step S4, all the contour lines are extracted and the trimming area is determined.
  • However, the method for obtaining the trimming area is not limited to this.
  • For example, a region that the user encloses freehand with the marker pen M may be used directly as the trimming region.
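The marker-color extraction (step S2) that feeds this region search can be sketched as follows. As a simplification, this sketch selects the largest connected component of the marker color rather than the largest area enclosed by contour lines; the names and the tolerance value are illustrative assumptions.

```python
from collections import deque
import numpy as np

def color_mask(img, target, tol=30):
    """Keep only pixels within tol of the marker color on every channel."""
    diff = np.abs(img.astype(int) - np.asarray(target, dtype=int))
    return diff.max(axis=-1) <= tol

def largest_region(mask):
    """Mask of the largest 4-connected True region, found by BFS labeling."""
    h, w = mask.shape
    seen = np.zeros_like(mask)
    best, best_size = np.zeros_like(mask), 0
    for sy in range(h):
        for sx in range(w):
            if not mask[sy, sx] or seen[sy, sx]:
                continue
            queue, pixels = deque([(sy, sx)]), []
            seen[sy, sx] = True
            while queue:
                y, x = queue.popleft()
                pixels.append((y, x))
                for ny, nx in ((y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1)):
                    if 0 <= ny < h and 0 <= nx < w and mask[ny, nx] and not seen[ny, nx]:
                        seen[ny, nx] = True
                        queue.append((ny, nx))
            if len(pixels) > best_size:
                best_size = len(pixels)
                best = np.zeros_like(mask)
                for y, x in pixels:
                    best[y, x] = True
    return best
```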
  • The region R to be captured is marked in a double-line pattern using the marker pen M, whose colored tip is divided into two protrusions.
  • However, the pattern of the enclosing line is not limited to this.
  • For example, a marker pen whose colored tip is divided into three protrusions may be used to mark the region R to be captured in a triple-line pattern.
  • Alternatively, the region R to be captured may be enclosed by a pattern of three lines of different widths, using a marker pen whose three protrusions have different widths.
  • The process of converting the double-line image into a single line in step S3 and the process of determining the presence of a double line in step S14 are optional.
  • When the line surrounding the region R to be captured is a single line, these processes can be omitted, and the captured image can be trimmed using the region enclosed by the single line as the trimming region.
  • A normal dye-based or pigment-based ink is used for the marker pen, but an infrared ink that reflects or absorbs infrared light may be used instead. If the region R to be captured is marked with infrared-reflective ink, the infrared light reflected or absorbed at the marked portion can be detected on the mobile terminal device 1 side to identify the region R to be captured.
  • In step S7, the mobile terminal device 1 records the coordinate positions of the finely adjusted vertex coordinates P1 to P4, and in step S11 the trimming area of the front view is finely adjusted and the resulting vertex coordinates are recorded. However, fine adjustment of these coordinate positions and of the trimming area is an optional process and may be omitted.
  • In step S14, rather than detecting the double line again, the result of the double-line detection performed on the captured image in step S3 may be stored, for example as a Boolean flag, and whether a double line exists in the captured image may then be determined from the stored flag.
  • White balance adjustment, histogram equalization, and gamma correction are given as examples of adjusting the image quality of the trimmed captured image.
  • However, the present invention is not limited to these; any image-quality adjustment suitable for browsing the trimmed image on the mobile terminal device 1 can be applied as appropriate.
  • The area of the captured image outside the trimming area is filled with a predetermined color.
  • Alternatively, this area may be made transparent, for example in the form of a transparent GIF.
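Making the outside area transparent amounts to attaching an alpha channel before saving in a format that supports it. A minimal NumPy sketch, offered as an illustration rather than the patent's method:

```python
import numpy as np

def add_transparency(rgb, mask):
    """Attach an alpha channel: opaque inside the trimming mask, transparent outside."""
    alpha = np.where(mask, 255, 0).astype(np.uint8)
    return np.dstack([rgb, alpha])
```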
  • According to the present invention, it is possible to trim and save a predetermined area of a captured image even when on the go.
  • The imaging target is imaged by a portable information terminal device having an imaging unit, and the captured image can be stored after its distortion is corrected.
  • Newspaper articles and magazine clippings captured as image information are therefore free of distortion, and the user can read the contents of the articles accurately.

Abstract

An object of the present invention is to provide an information terminal device capable of trimming and storing a prescribed region of a captured image even when away from home, as well as a captured image processing system, a method, and a computer-readable recording medium recording a program. This portable information terminal device comprises an imaging unit and a control unit, and performs trimming processing on a prescribed region of a captured image acquired from the imaging unit. The prescribed region of an imaging subject is surrounded by a line of a prescribed color, and the control unit acquires from the imaging unit a captured image in which the prescribed region of the imaging subject is captured (S1). From the captured image, an extracted image is generated by extracting only the prescribed color that is the subject of trimming (S2). In the extracted image, multiple contour lines corresponding to the prescribed region of the captured image are extracted, and the trimming region is set to the region with the largest area enclosed by these contour lines (S4). The region of the captured image indicated by the trimming region is then trimmed (S12).
PCT/JP2012/056327 2011-12-20 2012-03-12 Information terminal device, captured image processing system, method, and recording medium recording a program WO2013094231A1 (fr)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
JP2011277948A JP2013131801A (ja) 2011-12-20 2011-12-20 Information terminal device, captured image processing system, method, program, and recording medium
JP2011-277948 2011-12-20

Publications (1)

Publication Number Publication Date
WO2013094231A1 (fr)

Family

ID=48668141

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/JP2012/056327 WO2013094231A1 (fr) 2011-12-20 2012-03-12 Information terminal device, captured image processing system, method, and recording medium recording a program

Country Status (2)

Country Link
JP (1) JP2013131801A (fr)
WO (1) WO2013094231A1 (fr)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP6543062B2 (ja) * 2015-03-23 2019-07-10 Canon Inc Image processing apparatus and image processing method

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JPH0262671A * 1988-08-30 1990-03-02 Toshiba Corp Color editing processing apparatus
JP2005267465A * 2004-03-19 2005-09-29 Casio Comput Co Ltd Image processing device, captured image projection device, image processing method, and program
JP2005303941A * 2004-04-16 2005-10-27 Casio Comput Co Ltd Correction reference designation device and correction reference designation method
JP2009069213A * 2007-09-10 2009-04-02 Omi:Kk Map age determination device, map age determination method, and program

Family Cites Families (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP3255676B2 * 1991-11-30 2002-02-12 Ricoh Co Ltd Digital copying machine
KR100860940B1 * 2007-01-22 2008-09-29 Gwangju Institute of Science and Technology Method for providing content using color markers and system for performing the same
JP3150079U * 2009-01-29 2009-04-30 洸弥 平畑 Easy double-line pen

Also Published As

Publication number Publication date
JP2013131801A (ja) 2013-07-04

Similar Documents

Publication Publication Date Title
US10318028B2 (en) Control device and storage medium
EP3547218B1 (fr) Dispositif et procédé de traitement de fichiers, et interface utilisateur graphique
JP5451888B2 (ja) カメラベースのスキャニング
US20130027757A1 (en) Mobile fax machine with image stitching and degradation removal processing
WO2018214365A1 (fr) Procédé, appareil, dispositif et système de correction d'image, dispositif de prise de vues et dispositif d'affichage
US9697431B2 (en) Mobile document capture assist for optimized text recognition
US20130239050A1 (en) Display control device, display control method, and computer-readable recording medium
US9491352B2 (en) Imaging device, signal processing method, and signal processing program
KR101450782B1 (ko) 화상 처리 장치 및 프로그램
KR101797260B1 (ko) 정보 처리 장치, 정보 처리 시스템 및 정보 처리 방법
JP2017058812A (ja) 画像表示装置、画像表示方法及びプログラム
US20140049678A1 (en) Mobile terminal and ineffective region setting method
CN111064895B (zh) 一种虚化拍摄方法和电子设备
US9779323B2 (en) Paper sheet or presentation board such as white board with markers for assisting processing by digital cameras
CN113723136A (zh) 条码矫正方法、装置、设备及存储介质
WO2013094231A1 (fr) Dispositif de terminal d'informations, système de traitement d'image capturée, procédé et support d'enregistrement enregistrant un programme
JP6067040B2 (ja) 情報処理装置、情報処理方法及びプログラム
JP2015102915A (ja) 情報処理装置、制御方法およびコンピュータプログラム
JP2017058801A (ja) 画像表示装置、画像表示方法及びプログラム
US9521270B1 (en) Changing in real-time the perspective of objects captured in images
JP2013122641A (ja) 画像表示システム、携帯端末装置、制御方法、及び、制御プログラム
KR20130118704A (ko) 디지털 카메라로 촬영된 문서의 화상으로부터 전기적 문서 보관 및 전송을 위한 화상 처리 방법 및 장치
JP6512829B2 (ja) 情報処理装置、情報処理方法及びプログラム
KR20220002372A (ko) 컴퓨터 비전을 위한 기록 표면 경계 마커
JP4504140B2 (ja) 筆記面再生方法、筆記面再生装置及び筆記面再生プログラム

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 12860515

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 12860515

Country of ref document: EP

Kind code of ref document: A1