WO2013111278A1 - Image recording device, image recording method, program for image recording, and information recording medium - Google Patents


Info

Publication number
WO2013111278A1
Authority
WO
WIPO (PCT)
Prior art keywords
imaging
information
image
image recording
recording apparatus
Prior art date
Application number
PCT/JP2012/051500
Other languages
French (fr)
Japanese (ja)
Inventor
SHIROTO Hisanori (白戸 久規)
Original Assignee
SHIROTO Hisanori
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by SHIROTO Hisanori filed Critical SHIROTO Hisanori
Priority to JP2013555041A priority Critical patent/JP5858388B2/en
Priority to PCT/JP2012/051500 priority patent/WO2013111278A1/en
Publication of WO2013111278A1 publication Critical patent/WO2013111278A1/en


Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N1/00 Scanning, transmission or reproduction of documents or the like, e.g. facsimile transmission; Details thereof
    • H04N1/00681 Detecting the presence, position or size of a sheet or correcting its position before scanning
    • H04N1/00684 Object of the detection
    • H04N1/00687 Presence or absence
    • H04N1/00689 Presence
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N1/00 Scanning, transmission or reproduction of documents or the like, e.g. facsimile transmission; Details thereof
    • H04N1/00681 Detecting the presence, position or size of a sheet or correcting its position before scanning
    • H04N1/00729 Detection means
    • H04N1/00734 Optical detectors
    • H04N1/00737 Optical detectors using the scanning elements as detectors
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N1/00 Scanning, transmission or reproduction of documents or the like, e.g. facsimile transmission; Details thereof
    • H04N1/04 Scanning arrangements, i.e. arrangements for the displacement of active reading or reproducing elements relative to the original or reproducing medium, or vice versa
    • H04N1/19 Scanning arrangements, i.e. arrangements for the displacement of active reading or reproducing elements relative to the original or reproducing medium, or vice versa using multi-element arrays
    • H04N1/195 Scanning arrangements, i.e. arrangements for the displacement of active reading or reproducing elements relative to the original or reproducing medium, or vice versa using multi-element arrays the array comprising a two-dimensional array or a combination of two-dimensional arrays
    • H04N1/19594 Scanning arrangements, i.e. arrangements for the displacement of active reading or reproducing elements relative to the original or reproducing medium, or vice versa using multi-element arrays the array comprising a two-dimensional array or a combination of two-dimensional arrays using a television camera or a still video camera
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N1/00 Scanning, transmission or reproduction of documents or the like, e.g. facsimile transmission; Details thereof
    • H04N1/387 Composing, repositioning or otherwise geometrically modifying originals
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N2201/00 Indexing scheme relating to scanning, transmission or reproduction of documents or the like, and to details thereof
    • H04N2201/04 Scanning arrangements
    • H04N2201/0402 Arrangements not specific to a particular one of the scanning methods covered by groups H04N1/04 - H04N1/207
    • H04N2201/0436 Scanning a picture-bearing surface lying face up on a support

Definitions

  • The present invention belongs to the technical fields of an image recording apparatus, an image recording method, an image recording program, and an information recording medium. More specifically, it belongs to the technical fields of an image recording apparatus provided with imaging means such as a camera, an image recording method executed in the image recording apparatus, an image recording program used in the image recording apparatus, and an information recording medium on which the image recording program is recorded.
  • In conventional systems, the read image data and the like are basically stored or accumulated in a personal computer.
  • In some cases, the read image data and the like are stored in a personal computer owned by a person outside the office or by another person.
  • As such prior art, the contents disclosed in the following Patent Document 1 and Patent Document 2 are known.
  • The present invention has been made in view of the above-mentioned demand, and an example of the problem to be solved is to enable information described on an imaging object such as a document to be imaged and recorded easily and with high image quality.
  • To solve the above problem, the invention according to claim 1 comprises: imaging means such as a camera whose relative position with respect to an object placement position, on which an imaging object such as a document is placed, is constant, which continuously images the imaging object placed at the object placement position or the object placement position on which no object is placed, and which outputs, for each imaging, imaging information corresponding to the object placement position or the imaging object; identification means such as a CPU for identifying, based on each piece of output imaging information, whether or not the imaging object is placed at the object placement position; and recording means such as a CPU for recording, when it is identified that the imaging object is placed, the imaging information output at the identified timing on a recording medium such as a ROM.
  • The invention according to claim 33 is an image recording method executed in an image recording apparatus comprising imaging means such as a camera whose relative position with respect to an object placement position, on which an imaging object such as a document is placed, is constant, which continuously images the imaging object placed at the object placement position or the object placement position on which no object is placed, and which outputs, for each imaging, imaging information corresponding to the object placement position or the imaging object. The method includes a step of identifying, based on each piece of output imaging information, whether or not the imaging object is placed at the object placement position, and a step of recording, when it is identified that the imaging object is placed, the imaging information output at the identified timing on a recording medium.
  • The invention according to claim 34 causes a computer included in an image recording apparatus, the apparatus comprising imaging means such as a camera whose relative position with respect to an object placement position is constant and which continuously images the object placement position or the imaging object placed on it and outputs imaging information for each imaging, to function as: identification means for identifying, based on each piece of output imaging information, whether or not the imaging object is placed at the object placement position; and recording means for recording, when it is identified that the imaging object is placed, the imaging information output at the identified timing on a recording medium.
  • The invention according to claim 35 is an information recording medium on which an image recording program causing such a computer to function as the above identification means and recording means is recorded so as to be readable by the computer.
  • According to these inventions, the object placement position or the imaging object placed on it is continuously imaged by imaging means whose relative position with respect to the object placement position is constant, and when it is identified, based on each piece of imaging information, that the imaging object is placed at the object placement position, the imaging information output at the timing of that identification is recorded. Therefore, the information described on the imaging object can be imaged and recorded easily and with high image quality, while preventing blurring caused by, for example, the user operating the imaging means in order to record or image.
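The record-at-identification behavior described above can be sketched as follows. This is a minimal illustration, not the patent's implementation: the frame sequence, the placement predicate, and the recorded list are all hypothetical stand-ins for the imaging means, the identification means, and the recording medium.

```python
def capture_loop(frames, is_object_placed):
    """Record the frame output at the instant placement is identified.

    `frames` stands in for the continuous output of the imaging means;
    `is_object_placed` stands in for the identification means.
    """
    recorded = []              # stands in for the recording medium (e.g. ROM)
    previously_placed = False
    for frame in frames:
        placed = is_object_placed(frame)
        # Record only on the transition "not placed -> placed",
        # i.e. the timing at which placement is identified.
        if placed and not previously_placed:
            recorded.append(frame)
        previously_placed = placed
    return recorded
```

For example, with frames `["empty", "empty", "doc1", "doc1", "empty", "doc2"]` and a predicate that treats anything other than `"empty"` as placed, only `"doc1"` and `"doc2"` are recorded, each exactly once.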
  • The invention according to claim 2 is the image recording apparatus according to claim 1, wherein the identification means is configured to identify that the imaging object is placed at the object placement position when a boundary between the imaging object and its surroundings is identified in the imaging information respectively output from the imaging means.
  • According to this, since placement is identified when the boundary delimiting the imaging object from its surroundings is identified in each piece of imaging information, the imaging object can be imaged more reliably and easily and the information described on it can be recorded.
  • The invention according to claim 3 is the image recording apparatus according to claim 2, further comprising display means such as a display unit that displays a boundary line indicating the identified boundary in association with the imaging object in the imaging information.
  • According to this, since the boundary line indicating the boundary is displayed in association with the imaging object, the user can move the imaging object so that the boundary line is displayed more accurately, thereby improving the recognition accuracy of the boundary and the success rate of the imaging itself.
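One simple way a boundary between a bright document and a darker background could be identified is a threshold-based bounding box; a sketch under that assumption (the grid representation and threshold value are illustrative, not from the patent):

```python
def find_document_boundary(gray, threshold=128):
    """Return the bounding box (top, left, bottom, right) of pixels
    brighter than `threshold`, or None if no such region exists.

    `gray` is a 2-D list of grayscale values; a white document on a
    darker desk yields a box marking the object/background boundary,
    which could then be drawn as the displayed boundary line.
    """
    rows = [r for r, row in enumerate(gray) if any(v > threshold for v in row)]
    cols = [c for c in range(len(gray[0]))
            if any(row[c] > threshold for row in gray)]
    if not rows or not cols:
        return None            # no boundary found: nothing placed
    return (rows[0], cols[0], rows[-1], cols[-1])
```

A production system would more likely use edge detection and quadrilateral fitting; the thresholded box is just the smallest illustration of "boundary found implies object placed".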
  • In the invention according to claim 4, the identification means is configured to identify that the imaging object is placed at the object placement position when, based on the imaging information respectively output from the imaging means, part or all of the imaging information changes by a predetermined amount in time series and it is then identified that the change has ceased.
  • According to this, since placement is identified when part or all of the imaging information changes by a predetermined amount in time series and the change then ceases, the imaging object can be imaged more reliably and easily and the information described on it can be recorded.
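The change-then-rest criterion can be sketched over a sequence of inter-frame difference amounts; the threshold and rest-frame count are illustrative parameters, not values from the patent:

```python
def placed_by_motion_then_rest(diffs, change_threshold, rest_frames):
    """Identify placement from inter-frame difference amounts: the
    scene must first change by at least `change_threshold` (the object
    being put down) and then stay below it for `rest_frames`
    consecutive frames (the change has ceased).

    Returns the index of the frame at which placement is identified,
    or None if the criterion is never met.
    """
    seen_change = False
    still = 0
    for i, d in enumerate(diffs):
        if d >= change_threshold:
            seen_change, still = True, 0   # motion: reset the rest counter
        elif seen_change:
            still += 1
            if still >= rest_frames:
                return i
    return None
```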
  • The invention according to claim 5 is the image recording apparatus according to claim 1, wherein the identification means is configured to identify that the imaging object is placed at the object placement position when, based on feature information indicating features of the portion of the mounting table corresponding to the object placement position, it is recognized that the object placement position of the mounting table is occluded.
  • According to this, since placement is identified when the object placement position of the mounting table is recognized as occluded, the imaging object can be imaged more reliably and easily and the information described on it can be recorded.
  • The invention according to claim 6 is the image recording apparatus according to claim 1, wherein the identification means is configured to identify that the imaging object is placed at the object placement position based on the difference between the imaging information corresponding to the object placement position on which no imaging object is placed and other imaging information.
  • According to this, since placement is identified based on the difference between the imaging information of the empty object placement position and other imaging information, the imaging object can be imaged more reliably and easily and the information described on it can be recorded.
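This empty-reference comparison is essentially background subtraction; a minimal sketch, with the per-pixel tolerance and changed-pixel ratio as illustrative parameters:

```python
def placed_by_background_difference(reference, frame, pixel_tol=10, ratio=0.2):
    """Compare `frame` against `reference`, imaging information of the
    empty object placement position. If more than `ratio` of the pixels
    differ by more than `pixel_tol`, the imaging object is identified
    as placed."""
    total = changed = 0
    for ref_row, row in zip(reference, frame):
        for ref_v, v in zip(ref_row, row):
            total += 1
            if abs(v - ref_v) > pixel_tol:
                changed += 1
    return changed / total > ratio
```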
  • The invention according to claim 7 is the image recording apparatus according to any one of claims 1 to 6, further comprising notifying means such as a display unit for sequentially notifying whether or not it is being recognized that the imaging object is placed.
  • Since the notification is performed sequentially, the recognition accuracy of the placement and the success rate of the imaging itself can be improved, for example when the user moves the imaging object based on the notification.
  • The notification in this case can use a visual method, an auditory method, or a method of vibrating the image recording apparatus.
  • The invention according to claim 8 is the image recording apparatus according to any one of claims 1 to 7, further comprising: determination means such as a CPU for determining whether at least the imaging object in the imaging information is stationary for a predetermined time after it has been identified that the imaging object is placed at the object placement position; and control means such as a CPU for controlling the recording means so as to record on the recording medium the imaging information output at the time of the determination that the imaging object has been stationary for the predetermined time.
  • According to this, in addition to the operation of the invention described in any one of claims 1 to 7, the imaging information output when at least the imaging object has been determined to be stationary for a preset time after placement was identified is recorded on the recording medium, so the imaging object is imaged while unchanging and can therefore be imaged with higher image quality.
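The stationary-for-a-predetermined-time gate can be sketched over inter-frame differences measured after placement was identified; the stillness threshold and hold duration are illustrative parameters:

```python
def record_after_stillness(diffs_after_placement, still_threshold, hold_frames):
    """After placement has been identified, record only once the
    inter-frame difference stays below `still_threshold` for
    `hold_frames` consecutive frames (the 'stationary for a
    predetermined time' determination).

    Returns the index of the frame to record, or None if the object
    never settles."""
    still = 0
    for i, d in enumerate(diffs_after_placement):
        still = still + 1 if d < still_threshold else 0  # reset on motion
        if still >= hold_frames:
            return i
    return None
```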
  • The invention according to claim 9 is the image recording apparatus according to any one of claims 1 to 8, further comprising: determination means such as a CPU for determining whether or not an object other than the imaging object is imaged within the range of the imaging object in the imaging information after it has been identified that the imaging object is placed at the object placement position; and control means such as a CPU for controlling the recording means so as to prohibit recording on the recording medium of the imaging information output at the timing of a determination that such another object is captured within the range of the imaging object.
  • According to this, when an object other than the imaging object, such as the user's hand, is captured within the range of the imaging object, recording of the imaging information output at that timing is prohibited, so imaging information in which an extraneous object is captured can be prevented from being recorded.
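Abstracting the recognizer away, the prohibit-then-record behavior reduces to skipping frames until only the imaging object is present. The per-frame label sets and the `"document"` label below are hypothetical stand-ins for the determination means' output:

```python
def first_clean_frame(frames_objects, target="document"):
    """Return the index of the first frame after placement in which no
    object other than the imaging object is captured within its range;
    frames also containing e.g. a hand are skipped (their recording is
    prohibited). Each element of `frames_objects` is the set of labels
    a recognizer reported for one frame."""
    for i, objs in enumerate(frames_objects):
        if target in objs and objs == {target}:
            return i
    return None
```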
  • The invention according to claim 10 is the image recording apparatus according to any one of claims 1 to 9, further comprising: comparison means such as a CPU for comparing, for each imaging, the imaging object in first imaging information already recorded on the recording medium with the imaging object in second imaging information not yet recorded; and control means such as a CPU for controlling the recording means so as to prohibit recording of the second imaging information when both are the same.
  • According to this, since the imaging object in the recorded first imaging information and that in the unrecorded second imaging information are compared for each imaging and recording of the second imaging information is prohibited when both are the same, imaging information including the same imaging object can be prevented from being recorded redundantly.
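One way such a same-object comparison could be done is with a compact image fingerprint; the average-hash scheme below is an illustrative choice, not the patent's method:

```python
def average_hash(gray):
    """A tiny average-hash: one bit per pixel, set when the pixel is
    brighter than the image mean. Equal hashes are treated as 'the
    same imaging object' for duplicate suppression."""
    flat = [v for row in gray for v in row]
    mean = sum(flat) / len(flat)
    return tuple(v > mean for v in flat)

def record_unless_duplicate(recorded_hashes, new_gray):
    """Prohibit recording of second imaging information whose object
    matches one already recorded; returns True only if recorded."""
    h = average_hash(new_gray)
    if h in recorded_hashes:
        return False           # same imaging object: recording prohibited
    recorded_hashes.add(h)
    return True
```

In practice a tolerant comparison (e.g. Hamming distance between hashes) would be used rather than exact equality, so that small lighting changes do not defeat the duplicate check.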
  • The invention according to claim 11 is the image recording apparatus according to any one of claims 1 to 10, further comprising projection means such as a projector that projects an image of a related object related to the imaging object onto the imaging object, and the recording means is controlled so as to record on the recording medium the imaging information output from the imaging means at the timing when the image is projected.
  • According to this, since an image of a related object is projected onto the imaging object and the imaging information output at the timing of the projection is recorded, the imaging object can be imaged and recorded while clarifying its relationship with the related object.
  • The invention according to claim 12 is the image recording apparatus according to claim 11, wherein the recording means is controlled so as to record on the recording medium both the imaging information corresponding to the imaging object onto which the image is projected and the imaging information corresponding to the imaging object while the projection of the image is temporarily interrupted.
  • According to this, since both the imaging information with the related image projected and the imaging information captured while the projection is temporarily interrupted are recorded on the recording medium, the imaging information corresponding to the imaging object can later be used selectively, with or without the projected image.
  • The invention according to claim 13 is the image recording apparatus according to any one of claims 1 to 12, further comprising combining means such as a CPU that combines a plurality of pieces of the imaging information corresponding to the same imaging object to generate combined imaging information, and the recording means is controlled so as to record the generated combined imaging information on the recording medium.
  • According to this, since a plurality of pieces of imaging information corresponding to the same imaging object are combined into combined imaging information and recorded, imaging information corresponding to a higher image quality or a wider range of the imaging object can be recorded.
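The simplest form of such combining, aligned frames of the same object averaged per pixel to reduce noise, can be sketched as follows; it illustrates the higher-image-quality case only (the wider-range, stitching case would need registration logic beyond this sketch):

```python
def composite_by_averaging(frames):
    """Combine several pieces of imaging information of the same
    object, already aligned, by per-pixel averaging: a simple way the
    composite can have less noise than any single frame."""
    h, w = len(frames[0]), len(frames[0][0])
    return [[round(sum(f[r][c] for f in frames) / len(frames))
             for c in range(w)] for r in range(h)]
```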
  • The invention according to claim 14 is the image recording apparatus according to claim 13, wherein the combining means generates the combined imaging information by combining a plurality of pieces of the imaging information that correspond to the same imaging object but differ in the imaging conditions of the imaging object.
  • According to this, since a plurality of pieces of imaging information with different imaging conditions are combined into the combined imaging information, the imaging object can, for example, be imaged stereoscopically, and imaging information corresponding to the imaging object with higher image quality and higher accuracy can be recorded.
  • The invention according to claim 15 is the image recording apparatus according to claim 13, wherein, based on a change in the relative position between the imaging means and the imaging object between one imaging and the next, the combining means uses a plurality of pieces of the imaging information from before and after the change, corresponding to the same imaging object, to generate combined imaging information corresponding to a combined image with a higher image quality or a wider range than the images corresponding to the individual pieces of imaging information.
  • According to this, combined imaging information with a higher image quality or a wider range than any single image of the imaging object can be recorded.
  • The invention according to claim 16 is the image recording apparatus according to any one of claims 1 to 15, wherein the imaging object is an imaging object on which personal information identifying a person is described.
  • The invention according to claim 17 is the image recording apparatus according to any one of claims 1 to 16, further comprising: event information generating means such as a CPU for generating, when the imaging information is recorded on the recording medium, event information that is associated with the recorded imaging information and specifies an event related to it; collation means such as a CPU for collating whether or not the generated event information corresponds to the same event as any event information already recorded in event information recording means; and transmission means such as a communication interface for recording the generated event information in the event information recording means when it does not correspond to any already-recorded event information.
  • According to this, event information specifying an event related to the imaging information is generated when the imaging information is recorded, and is recorded in the event information recording means only when it does not correspond to any event information already recorded there. The event information associated with the imaging object can therefore be generated and recorded easily, and thus easily maintained and managed.
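The collate-before-record behavior for event information can be sketched as a keyed store; the `(name, date)` key and dictionary representation are assumptions for illustration, not the patent's data model:

```python
def register_event(event_store, event):
    """Record generated event information only when it does not
    correspond to any event already in the event-information recording
    means; otherwise return the already-recorded entry, so imaging
    information can be associated with it instead (cf. claim 18)."""
    key = (event["name"], event["date"])   # illustrative identity of an event
    if key in event_store:
        return event_store[key]            # same event already recorded
    event_store[key] = event
    return event
```

The same collate-then-record pattern applies to the personal information and link information of the later claims, with the key being the individual, or the individual-and-event pair, respectively.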
  • The invention according to claim 18 is the image recording apparatus according to claim 17, further comprising association changing means such as a CPU that, when the generated event information corresponds to the same event as event information already recorded in the event information recording means, changes the association destination of the imaging information associated with the generated event information to the event information of that same event.
  • The invention according to claim 19 is the image recording apparatus according to any one of claims 1 to 18, further comprising: personal information generating means such as a CPU for generating, when the imaging information is recorded on the recording medium, personal information that is associated with the recorded imaging information and identifies an individual related to it; collation means such as a CPU for collating whether or not the generated personal information corresponds to an individual indicated by any personal information already recorded in personal information recording means; and transmission means such as a communication interface for recording the generated personal information in the personal information recording means when it does not correspond to any such individual.
  • According to this, personal information associated with the recorded imaging information is generated and is recorded in the personal information recording means only when it does not correspond to an individual indicated by personal information already recorded there, so the personal information related to the imaging object can be generated and recorded easily, and thus easily maintained.
  • The invention according to claim 20 is the image recording apparatus according to claim 19, further comprising association changing means such as a CPU that, when the generated personal information corresponds to an individual indicated by any personal information recorded in the personal information recording means, changes the association destination of the imaging information associated with the generated personal information to the personal information of that recorded individual.
  • The invention according to claim 21 is the image recording apparatus according to any one of claims 1 to 20, further comprising: link information generating means such as a CPU for generating, when the imaging information is recorded on the recording medium, link information that is associated with the recorded imaging information and identifies an individual and an event related to it; collation means such as a CPU for collating whether or not the generated link information identifies the same individual and event as any link information already recorded in link information recording means; and transmission means such as a communication interface for recording the generated link information in the link information recording means when it does not.
  • According to this, link information is generated when imaging information is recorded on the recording medium and is recorded in the link information recording means only when no already-recorded link information identifies the same individual and event, so the link information associated with the imaging information can be generated and recorded easily, and thus easily maintained.
  • The invention according to claim 22 is the image recording apparatus according to claim 21, further comprising association changing means such as a CPU that, when the generated link information identifies the same individual and event as link information already recorded in the link information recording means, changes the association destination of the imaging information associated with the generated link information to the recorded link information.
  • According to this, when the generated link information identifies the individual and event of already-recorded link information, the association destination of the imaging information is changed to the recorded link information, so a plurality of pieces of imaging information can be associated with the recorded link information and managed together.
  • The invention according to claim 23 is the image recording apparatus according to claim 17 or 18, further comprising position detecting means such as a CPU for detecting the position of the imaging means when the imaging information is recorded on the recording medium and generating position information indicating the detected position, and the event information generating means is configured to generate the event information so as to include the generated position information.
  • According to this, since the event information includes position information indicating the position of the imaging means at the time of recording, highly useful event information can be easily maintained and managed.
  • The invention according to claim 24 is the image recording apparatus according to any one of claims 1 to 23, wherein the imaging means is an imaging means provided in a portable information processing apparatus.
  • Since the imaging means is provided in a portable information processing apparatus, the imaging object can be imaged with a simple configuration.
  • The invention according to claim 25 is the image recording apparatus according to any one of claims 1 to 24, wherein the imaging means is an imaging means whose relative position is kept constant by the image recording apparatus being held in a portable holder.
  • Since the image recording apparatus is held in a portable holder so that the relative position of the imaging means is constant, the imaging object can be imaged with a simple configuration.
  • The invention according to claim 26 is the image recording apparatus according to claim 24 or claim 25, wherein the imaging means is an imaging means whose relative position is kept constant by the image recording apparatus being held in a holder, and the holder is one assembled by folding a foldable sheet-like material: it is unfolded into a sheet shape when carried, and folded and assembled when used so as to hold the image recording apparatus.
  • According to this, the image recording apparatus is held by the holder so that the relative position of the imaging means is constant, and because the holder is unfolded flat for carrying and folded into shape for use, the imaging object can be imaged stably with nothing more than an inexpensive, lightweight, and highly portable holder.
  • The invention according to claim 27 is the image recording apparatus according to any one of claims 1 to 26, further comprising image processing means such as a CPU for adding advertisement information having advertising content to the imaging information output at the timing when it is identified that the imaging object is placed at the object placement position, and the recording means is configured to record the imaging information with the advertisement information added on the recording medium.
  • Since the imaging information to which the advertisement information is added is recorded, the user can enjoy the service related to the present invention at a lower cost.
  • The invention according to claim 28 is the image recording apparatus according to claim 27, wherein the image processing means comprises: imaging information transmitting means such as a communication interface for transmitting at least part of the imaging information output at the timing when placement was identified to an external information processing apparatus; and imaging information receiving means such as a communication interface for receiving the imaging information to which the advertisement information has been added in the information processing apparatus based on the result of recognizing the content of the transmitted imaging information. The recording means is configured to record the received imaging information on the recording medium.
  • According to this, at least part of the imaging information is transmitted to an external information processing apparatus, its content is recognized there and advertisement information is added, and the imaging information with the advertisement added is received and recorded, thus reducing the processing load on the image recording apparatus.
  • The advertisement can thus be recorded together with the imaged information and referred to later.
The invention according to claim 29 is the image recording apparatus according to claim 27, wherein the image processing means comprises advertisement information recording means, such as a ROM, that records one or more pieces of the advertisement information in advance, recognition means, such as a CPU, for recognizing the content of the imaging information output corresponding to the timing at which the imaging object is identified as being placed at the object placement position, and reading means, such as a CPU, for reading out from the advertisement information recording means the advertisement information corresponding to the recognized content, the image processing means being configured to add the read advertisement information to the imaging information whose content has been recognized. Because the advertisement information is read from the advertisement information recording means and added to the imaging information, the advertisement can be recorded together with the imaged information and referred to later using a configuration completed entirely within the image recording apparatus.
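The flow described for claim 29 (recognize the content of the captured imaging information, then read a matching advertisement from the pre-recorded advertisement information) can be pictured with the following minimal sketch. The keyword table standing in for the ROM contents, the ad strings, and the plain substring match standing in for content recognition are all assumptions of this sketch, not part of the disclosed apparatus:

```python
# Hypothetical keyword-to-advertisement table standing in for the
# advertisement information recorded in advance in the ROM.
AD_TABLE = {
    "recipe": "Cookware sale at ExampleStore",
    "travel": "Discount flights from ExampleAir",
}

def select_ad(recognized_text, default_ad="General sponsor message"):
    """Pick the advertisement whose keyword appears in the recognized
    content of the captured image; fall back to a generic ad."""
    text = recognized_text.lower()
    for keyword, ad in AD_TABLE.items():
        if keyword in text:
            return ad
    return default_ad
```

The selected string would then be composited into (or stored alongside) the imaging information before recording.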
The invention according to claim 30 is the image recording apparatus according to claim 27, wherein the image processing means comprises recognition means, such as a CPU, for recognizing the content of the imaging information output corresponding to the timing identified as that at which the imaging object is placed at the object placement position, recognition result transmitting means, such as a communication interface, for transmitting the recognition result obtained by the recognition means to an external information processing apparatus, and advertisement information receiving means, such as a communication interface, for receiving the advertisement information transmitted from the information processing apparatus based on the transmitted recognition result, the image processing means being configured to add the received advertisement information to the imaging information whose content has been recognized. In addition to the action of the invention of claim 27, because the recognition result is transmitted to the external information processing apparatus and the advertisement information based on that result is received and added to the imaging information, the advertisement can be recorded together with the imaged information and referred to later.
The invention according to claim 31 is the image recording apparatus according to any one of claims 1 to 30, wherein the recording means embeds, as an image in the imaging information, at least one of identification information for identifying each piece of the imaging information, time information indicating when the imaging object corresponding to the imaging information was captured, and location information indicating the place where the imaging was performed, and then records the imaging information on the recording medium.
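Embedding the identification information "as an image" can take many forms (a printed string, a code pattern, and so on). Purely as an illustrative sketch, the helpers below stamp a numeric identifier into the top pixel row of a frame as a black/white bit strip, so the identifier survives inside the recorded image itself; the 16-bit width and the strip position are assumptions of the sketch, not derived from the claim:

```python
import numpy as np

BITS = 16  # width of the embedded identifier, an assumption for this sketch

def embed_id(image, ident):
    """Stamp `ident` into the top row as a strip of black/white pixels."""
    stamped = image.copy()
    for bit in range(BITS):
        stamped[0, bit] = 255 if (ident >> bit) & 1 else 0
    return stamped

def read_id(image):
    """Recover the identifier from the embedded pixel strip."""
    ident = 0
    for bit in range(BITS):
        if image[0, bit] > 127:
            ident |= 1 << bit
    return ident

page = np.full((8, 32), 200, dtype=np.uint8)  # stand-in for one captured frame
stamped = embed_id(page, 1234)
```

Time or location information could be embedded the same way, or rendered as visible text in a margin of the frame.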
The invention according to claim 32 is the image recording apparatus according to claim 31, wherein, when other identification information has already been recognized in previously captured imaging information, the recording means generates new identification information related to the recognized other identification information and embeds the generated new identification information in the imaging information recorded on the recording medium by the current imaging. As a result, pieces of imaging information obtained by multiple imagings can be reliably related to one another through the relevance of their identification information.
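One simple way to make the new identification information "related" to the one already recognized is to keep a common prefix and advance a counter. The "PREFIX-NNNN" format below is an assumption of this sketch, chosen only to illustrate the chaining described in claim 32:

```python
import re

def next_related_id(previous_id):
    """Derive a new identifier that stays linked to the one already
    embedded: the prefix is preserved so captures of the same document
    chain together, and the trailing counter records the capture order."""
    match = re.fullmatch(r"(.+)-(\d+)", previous_id)
    if match is None:
        return previous_id + "-0001"           # first follow-up capture
    prefix, counter = match.groups()
    return f"{prefix}-{int(counter) + 1:04d}"  # same chain, next in order
```

The generated string would then be embedded in the newly captured imaging information in the same manner as in claim 31.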
According to the present invention, the object placement position or the imaging object is continuously imaged by imaging means whose relative position with respect to the object placement position on which the imaging object is placed is constant, and when it is identified from the individual pieces of imaging information that the imaging object is placed at the object placement position, the imaging information output corresponding to the identified timing is automatically recorded on the recording medium. The information described on the imaging object can therefore be imaged and recorded easily while preventing blurring caused by the user operating the imaging means in order to record or image.
FIG. 4 is a diagram (I) illustrating the capture processing according to the first embodiment, in which (a) shows a first example and (b) a second example of the capture processing. FIG. 5 is a diagram (II) illustrating the capture processing according to the first embodiment, in which (a) to (e) show a third to a seventh example of the capture processing. FIGS. 6 and 7 are external perspective views (I) and (II) showing other examples of the stand included in the capture system according to the first embodiment. FIGS. 8 to 10 are external perspective views showing the schematic configuration of each example of the capture system according to the second embodiment. FIG. 11 is an external perspective view showing the schematic configuration of the capture system according to the third embodiment. FIG. 12 is a block diagram showing the schematic configuration of the smartphone included in the capture system according to the fourth embodiment. FIG. 13 is a flowchart showing the capture processing according to the fourth embodiment. FIGS. 14 to 16 are diagrams illustrating the position alignment processing according to the fourth embodiment, in which (a) illustrates the state before alignment and (b) and (c) illustrate states during the alignment processing.
Embodiments of the present invention will now be described with reference to the drawings. Each embodiment described below is a capture system that captures (i.e., records) images using a portable smartphone equipped with a digital camera (hereinafter simply referred to as a camera) capable of continuous shooting and moving image capturing. A user of the smartphone according to the present invention is hereinafter simply referred to as a "carrier".
FIG. 1 is an external perspective view showing the schematic configuration of the capture system according to the first embodiment, FIG. 2 is a block diagram showing the schematic configuration of the smartphone according to the first embodiment, and FIG. 3 is a flowchart showing the capture processing according to the first embodiment. FIGS. 4 and 5 are diagrams illustrating the capture processing according to the first embodiment, and FIGS. 6 and 7 are external perspective views illustrating other examples of the stand included in the capture system according to the first embodiment.
In the capture system according to the first embodiment, information described with, for example, letters, symbols, or figures on a document or a three-dimensional object (an example of an imaging object) is continuously shot, or captured as a moving image, by the smartphone camera from a position that is relatively fixed with respect to the position where the imaging object is placed. Thereafter, image data selected by the capture processing described later is recorded (captured) from among the image data corresponding to the captured images.

As shown in FIG. 1, the capture system CS includes a smartphone S provided with a camera 9, and a stand ST that supports the smartphone S such that the camera 9 faces a document P, which is an example of an imaging object. The smartphone S in this case corresponds to an example of the "image recording apparatus" according to the present invention. Because the smartphone S is supported by the stand ST, the position of the camera 9 (and its imaging range AR) is relatively constant with respect to the position where the document P is placed. When the document P placed on, for example, the desk D within the imaging range AR is imaged by the camera 9 of the smartphone S supported by the stand ST, the image obtained by the imaging includes the document P on the desk D, as shown for example in FIG. 1B. The example in which the document P placed on the desk D shown in FIG. 1B is the imaging object also applies to FIGS. 4 and 5 described later.

In this configuration, the smartphone S continuously shoots or captures a moving image of the document P using the camera 9, and records (captures) in the smartphone S the image data selected by the capture processing described later from among the image data corresponding to the captured images. The capture processing of the document P (in other words, of the information described on the document P) according to the first embodiment is thereby performed.
As shown in FIG. 2, the smartphone S included in the capture system CS comprises a CPU 1, a ROM (Read Only Memory) 2, a RAM (Random Access Memory) 3, an operation unit 4 comprising operation buttons, a touch panel, and the like, a display 5 comprising a liquid crystal display or the like on which the touch panel is disposed, a call control unit 6 to which a speaker 7 and a microphone 8 are connected, a camera 9 corresponding to an example of the "imaging means" according to the present invention, a communication interface 10 with an antenna ANT for connecting to a network such as a wireless LAN (Local Area Network), a dedicated line, the Internet, or a so-called 3G line, and a light 11. In the RAM 3, a current image buffer 32 is formed as a volatile storage area serving as a buffer necessary for executing the capture processing according to the first embodiment centered on the CPU 1.

In the above configuration, the CPU 1 corresponds to an example each of the "identification means", "recording means", "determination means", "control means", "comparison means", "event information generation means", "matching means", "personal information generation means", "position detection means", "image processing means", "reading means", and "link information generation means" according to the present invention. The communication interface 10 corresponds to an example each of the "transmission means", "imaging information transmitting means", "imaging information receiving means", "recognition result transmitting means", and "advertisement information receiving means". The display 5 corresponds to an example of the "display means" and of the "notification means" according to the present invention. The ROM 2 corresponds to an example each of the "recording medium", "event information recording means", "personal information recording means", "link information recording means", and "advertisement information recording means" according to the present invention.
In this configuration, the communication interface 10 controls transmission and reception of data to and from the network via the antenna ANT under the control of the CPU 1. The communication interface 10 can be configured not only to transmit and receive data wirelessly via the antenna ANT, but also to control data transmission and reception via a wired LAN or a so-called USB (Universal Serial Bus) connection. The call control unit 6 controls voice calls of the smartphone S using the microphone 8 and the speaker 7 under the control of the CPU 1. Furthermore, the operation unit 4 generates an operation signal corresponding to an operation by the carrier and outputs it to the CPU 1.
In the ROM 2, programs and the like for processing as the smartphone S, including the capture processing according to the first embodiment described later, are recorded in advance in a nonvolatile manner. The ROM 2 also includes a rewritable area, and the image data corresponding to images captured by the capture processing according to the first embodiment is recorded in this rewritable area. The CPU 1 controls processing as the smartphone S by reading and executing the programs and the like recorded in the ROM 2. In the ROM 2, data necessary for processing as the smartphone S, such as telephone number data and address data, is also recorded in a nonvolatile manner. The RAM 3 temporarily stores the data of the current image buffer 32 and further temporarily stores other data necessary for processing as the smartphone S. The display 5 displays to the user, under the control of the CPU 1, information necessary for the capture processing according to the first embodiment as well as other information necessary for processing as the smartphone S.
The camera 9 continuously shoots, or captures as a moving image, the information described on the document P or the like, and continuously outputs image data (digitized image data) corresponding to the captured images to the CPU 1. The CPU 1 temporarily stores the output image data in the current image buffer 32 in the RAM 3 and executes the capture processing according to the first embodiment using the image data stored there. The light 11, under the control of the CPU 1, illuminates part or all of the document P or the like imaged by the camera 9 so that the illuminance is suitable for the imaging. The current image buffer 32 in the RAM 3 stores the one frame of image data that is the target of the capture processing according to the first embodiment at that time; hereinafter, the image corresponding to one frame of image data (frame image data) is referred to as the "current image".
The capture processing according to the first embodiment is started, for example, when a predetermined operation by the carrier is performed on the operation unit 4. When the document P is placed and the capture processing is started, as shown in FIG. 3, the CPU 1 first activates the camera 9 to image the document P placed in the imaging range AR (step S1). A plurality of images corresponding to the document P (images corresponding to the current image) are thereby captured by continuous shooting or as a moving image. The CPU 1 then temporarily stores the image data of the current image input from the camera 9 with each imaging in the current image buffer 32 (step S2). Next, the CPU 1 identifies whether, in the image captured by the processing of step S1, the entire boundary between the document P shown in the image and its surroundings (in the case illustrated in FIGS. 1 and 4, the boundary BD between the document P and the desk D) is recognized (step S3).
When the entire boundary is recognized in the identification of step S3 (step S3; YES), the CPU 1 next determines whether the document P or the like shown in the image of the image data stored in the current image buffer 32 at that time has been stationary for a predetermined time set in advance (step S4). In this case, for example, when the boundary is recognized, the boundary BD illustrated in FIG. 4 may be displayed at a position corresponding to the recognized boundary, superimposed on the image of the document P. It is also possible to sequentially notify whether the document P is recognized by, for example, blinking the display 5 or the light 11, or emitting sound from the speaker 7.
If it is determined in step S4 that the document P or the like in the image has been stationary for the predetermined time (step S4; YES), the CPU 1 next compares the image of the image data stored in the current image buffer 32 at that time with the image of the image data captured immediately before and recorded in the ROM 2, and determines whether they are the same (step S5). If they are not the same (step S5; NO), the CPU 1 records (that is, captures) the image data stored in the current image buffer 32 at that time in the ROM 2 (step S6). Thereafter, the CPU 1 determines whether an operation for ending the capture processing according to the first embodiment has been performed by, for example, the carrier on the operation unit 4 (step S8); when the end operation has been performed (step S8; YES), the capture processing according to the first embodiment is terminated. When the end operation has not been performed (step S8; NO), the CPU 1 returns to step S2 and performs the next imaging.
On the other hand, if the entire boundary between the document P and its surroundings is not recognized in the image stored in the current image buffer 32 (step S3; NO), if it is determined that the document P or the like in the image has not been stationary for the predetermined time (step S4; NO), or if the image of the image data stored in the current image buffer 32 is the same as the image of the image data captured immediately before and recorded in the ROM 2 (step S5; YES), the CPU 1 discards the image data stored in the current image buffer 32 at that time (step S7) and then proceeds to the processing of step S8.
Through the above processing, only when the entire boundary is recognized (step S3; YES) and the image including the document P has been stationary for the predetermined time (step S4; YES) can the image data stored in the current image buffer 32 at that time become the target of capture. Therefore, for example, as shown in FIG. 4B, when a hand H holding a writing instrument for writing on the document P is over the document P, part of the boundary BD is not recognized in the image (step S3; NO), and the image including the document P on which the hand H is writing has not been stationary for the predetermined time (step S4; NO), so the image data stored in the current image buffer 32 at that time is discarded without being captured (step S7).
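The loop of FIG. 3 can be sketched compactly as follows. This is an illustrative sketch only, not the disclosed apparatus: the border-uniformity test standing in for boundary recognition (step S3), the fixed frame count standing in for the predetermined stillness time (step S4), and the difference thresholds are all assumptions:

```python
import numpy as np

STILL_FRAMES = 3   # frames the scene must stay unchanged ("predetermined time")
DIFF_EPS = 2.0     # mean absolute difference below this counts as "unchanged"

def boundary_visible(frame, margin=2):
    """Crude stand-in for step S3: the boundary counts as recognized when
    the frame border is uniformly desk-colored, i.e. no hand or pen
    crosses the edge of the imaging range."""
    border = np.concatenate([frame[:margin].ravel(), frame[-margin:].ravel(),
                             frame[:, :margin].ravel(), frame[:, -margin:].ravel()])
    return border.std() < 5.0

def frames_equal(a, b):
    return float(np.abs(a.astype(float) - b.astype(float)).mean()) < DIFF_EPS

def capture_loop(frames):
    """Steps S2-S7 of FIG. 3: buffer each frame, require a recognized
    boundary (S3), stillness over STILL_FRAMES frames (S4), and a change
    from the last captured image (S5) before recording (S6)."""
    captured = []
    last_recorded = None
    still_run = []
    for frame in frames:                       # step S2: current image buffer
        if not boundary_visible(frame):        # step S3; NO -> discard (S7)
            still_run = []
            continue
        if still_run and not frames_equal(frame, still_run[-1]):
            still_run = []                     # scene moved: restart stillness count
        still_run.append(frame)
        if len(still_run) < STILL_FRAMES:      # step S4; NO -> discard (S7)
            continue
        if last_recorded is not None and frames_equal(frame, last_recorded):
            continue                           # step S5; YES -> discard (S7)
        captured.append(frame)                 # step S6: record to ROM
        last_recorded = frame
    return captured
```

Feeding the loop a sequence of frames in which a hand briefly enters and leaves without writing reproduces the behavior of FIGS. 4 and 5: the hand frames and the unchanged page are discarded, and only genuinely new page states are recorded.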
Next, FIG. 5B shows the image in which the document P and the desk D shown in FIG. 5A are first captured (see step S6). Thereafter, as shown in FIG. 5C, when the hand H enters over the document P but nothing is written on the document P (that is, the document P is the same as at the previous capture), even if the predetermined time elapses after the hand H leaves the document P (step S4; YES), the document P is the same as at the previous capture shown in FIG. 5A (step S5; YES), so the image data stored in the current image buffer 32 at that time is discarded without being captured (step S7). On the other hand, when the hand H enters over the document P and, for example, a character L is written on the document P as illustrated in FIG. 5D (that is, the document P changes), once the predetermined time elapses after the hand H leaves the document P (step S4; YES), the document P is not the same as at the previous capture (step S5; NO), so the image data stored in the current image buffer 32 at that time is captured (step S6).
As described above, in the capture processing according to the first embodiment, the document P is continuously imaged by the camera 9 whose relative position with respect to the position where the document P is placed is constant, and when it is identified that the document P is placed there, the image data output corresponding to the identified timing is automatically captured. The information described on the document P can therefore be captured easily and with high image quality while preventing the camera 9 from shaking due to user operation, so the document P can be imaged more reliably and easily and the information described on it can be captured. Also, because the image data captured is that output corresponding to the determination that at least the document P in the image data has been stationary for the predetermined time set in advance, the document P can be imaged with higher accuracy. Furthermore, because the document P in the already captured image data and the document P in the not yet captured image data are compared at each imaging, and the capture of the image data is prohibited when both are the same, image data including the same document P can be prevented from being captured redundantly.
In addition, because the camera 9 is provided in the portable smartphone S and the relative position of the camera 9 is fixed by holding the smartphone S on a portable stand (see FIG. 7, for example), the document P can be imaged with a simple configuration while keeping the relative positions of the camera 9 and the document P constant. If the stand is one that can be assembled by folding a foldable sheet-like material (see FIG. 7 described later), the document P can be imaged with an even simpler configuration. When the boundary BD indicating the boundary between the document P and its surroundings is displayed in association with the image of the document P, the user can move the document P or the like so that the display becomes more accurate, which improves the boundary recognition accuracy and the success rate of the capture itself. Likewise, when whether the document P is recognized is notified, the user can move the document P based on the notification, which also improves the recognition accuracy and the success rate of the capture itself. In the capture processing according to the first embodiment, the image data of the image whose suitability as the current image to be captured has been determined by the processing of steps S3 to S5 is captured; however, the image data to be captured is not limited to the image data itself used for the recognition (identification) of the document P. For example, the camera 9 may be switched to a still image mode after the timing at which the recognition (identification) of the document P is completed, and the image data of a newly captured high-quality still image may be captured instead (see step S6).
In the capture system CS1 illustrated in FIG. 6, the imaging target PP placed on the mounting table B is covered with a transparent plate TB of acrylic, glass, or the like so that the imaging target PP is fixed to the mounting table B; the light LT supported by the support portion BS1 fixed to the mounting table B illuminates the transparent plate TB; and the image is taken (see step S1) by the camera 9 of the smartphone S placed, facing the imaging target PP, on the holding table PT supported at a fixed position with respect to the imaging target PP by the support portion BS2 fixed to the mounting table B. In this case, the transparent plate TB may be openable and closable so that the imaging target PP can easily be taken in and out. The bottom plate of the mounting table B may be movable in the vertical direction in FIG. 6 so that its distance from the transparent plate TB can be adjusted according to the height of the imaging target PP. The mounting table B may also have a structure in which the imaging target PP can be taken in and out without opening the transparent plate TB. The support portion BS2 is preferably a mechanism that can adjust the position and angle of the camera 9 with respect to the imaging target PP. The holding table PT can be configured as a tray with a recess, a detachable holder shaped to match the external shape of the smartphone S, or the like, so that the smartphone S can be held stationary. It is also convenient if the light LT is turned on automatically when the smartphone S is placed on the holding table PT.
In the capture system CS1 illustrated in FIG. 6, when the imaging target is a thin document or a film, the image may be captured using a light that illuminates the imaging target from behind. Alternatively, the smartphone S may be installed with the camera 9 facing upward, and the document P may be placed on the upper side of the transparent plate TB (that is, above the smartphone S with the camera 9 facing upward) so that it is imaged facing downward. Furthermore, the capture system CS1 can also be used as a so-called OHP (Over Head Projector) by outputting the captured image data from the smartphone S to a projector (not shown) by wire or wirelessly and projecting it. The holding table PT may be provided with a drive unit that is driven under the control of the CPU 1 of the smartphone S and moves the imaging range of the camera 9 over the imaging target PP. An optical mechanism such as a predetermined lens or a polarizing filter may also be provided near the camera 9 or the light LT by the support portion BS1.
In addition, as illustrated in FIG. 7A, the smartphone S may be held with the camera 9 facing the document P by a foldable stand ST1 cut out from a sheet-like material (for example, corrugated cardboard). The stand ST1 shown in FIG. 7A comprises a holding base PT1 and support portions PT2 and PT3 formed from the single sheet-like material. In FIG. 7A, the solid lines indicate cuts at the joint between the holding base PT1 and the support portion PT2 and at the joint between the support portions PT2 and PT3, and the alternate long and short dash lines indicate the line portions to be valley-folded when assembling the stand ST1. FIG. 7B shows an external perspective view of the capture system in a state where the stand ST1 has actually been assembled and the smartphone S has been placed on the holding base PT1 so that it can image a document (not shown in FIG. 7B) placed below the smartphone S. In this case as well, the CPU 1 can be configured to identify that the document P is placed at the corresponding position.
FIGS. 8 to 10 are external perspective views respectively showing the schematic configuration of each example of the capture system according to the second embodiment. In the hardware configuration of the smartphone according to the second embodiment, the same members as those of the smartphone S according to the first embodiment are described using the same reference numerals. As shown in FIG. 8, in the capture system CS2-1 according to the second embodiment, the document P placed in the imaging range AR is imaged by the camera 9 of the smartphone S supported by a stand ST similar to that of the capture system CS according to the first embodiment.
Here, the smartphone S according to the second embodiment includes a projector 20 in addition to the camera 9. When the document P is imaged, the projector 20 projects onto the document P, for example by a laser method or another optical method, other projection information PJ to be captured together with the document P. The projector 20 corresponds to an example of the "projection means" according to the present invention. The image projected onto the document P as the projection information PJ is, for example, an image including ruled lines used when writing on the document P, a form in which the portions to be filled in are left blank, or other figures, characters, and the like to be referred to at the time of writing. As the projection information PJ, information stored in advance in the ROM 2 of the smartphone S according to the second embodiment may be read out and projected, or electronically generated image data or non-image electronic data may be acquired from outside via a recording medium or the communication interface 10 and projected. At the time of imaging, both the document P and the projection information PJ can be captured at the timing when the projection information PJ is projected onto the document P (see step S6 in FIG. 3). Alternatively, after the projection of the projection information PJ has ended or been temporarily interrupted, only what was written on the document P while the projection information PJ was projected can be captured separately (see step S6 in FIG. 3). In this case, the information written on the document P can be captured separately from the content of the projection information PJ referred to during writing.
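Separating the hand-written content from everything else can be sketched as a frame difference. The sketch below assumes, purely for illustration, that one frame of the blank document was captured before writing (projection switched off) and one after writing (projection also off), and that ink only darkens pixels; the threshold is an assumption:

```python
import numpy as np

INK_DROP = 40  # how much darker a pixel must get to count as ink (assumed)

def extract_writing(blank_page, written_page):
    """Keep only the strokes added by hand: compare the page captured
    before writing with the page captured after, and blank out every
    pixel that did not darken."""
    darkened = blank_page.astype(int) - written_page.astype(int) > INK_DROP
    writing = np.full_like(written_page, 255)   # white background
    writing[darkened] = written_page[darkened]  # copy only the new strokes
    return writing
```

The result contains the written information alone, which can then be recorded separately from the projection information PJ.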
Instead of projecting the projection information PJ from the front of the document P, the projection information PJ may be projected from a projector 20 arranged on the back side of the document P, as in the capture system CS2-2 shown in FIG. 9. In this case, if the document P is thin enough that the content of the projection information PJ can be seen from the surface of the document P (the surface to be imaged), the same effect as with the capture system shown in FIG. 8 can be obtained. Furthermore, as shown in FIG. 10, the projection information PJ may be projected from the back side of the document P via a transparent sheet TS using a display device DD arranged on the back side of the document P. In this case as well, if the document P is thin enough that the content of the projection information PJ can be seen from its surface, the same effect as with the capture system shown in FIG. 8 can be obtained. As the display device DD, for example, a liquid crystal panel or a so-called tablet computer can be used. With any of these configurations, the document P can be imaged and recorded while clarifying its relationship with the projection information PJ.
FIG. 11 is an external perspective view showing the schematic configuration of the capture system according to the third embodiment. In the hardware configuration of the smartphone according to the third embodiment, the same members as those of the smartphone S according to the first embodiment are described using the same reference numerals. Whereas the document P is captured using the single camera 9 in the first embodiment, the imaging object is captured using a plurality of cameras in the third embodiment.
As shown in FIG. 11, in the capture system CS3 according to the third embodiment, the imaging target PP is imaged within the imaging range AR1 by the camera 9A of the smartphone S3 supported by the same stand ST as in the capture system CS according to the first embodiment, and within the imaging range AR2 by the camera 9B further provided in the same smartphone S3. The image captured by the camera 9A and the image captured by the camera 9B are then combined by the CPU 1 to generate and capture (that is, record in the ROM 2) one composite image of the imaging target PP. In this way, the imaging target PP can be captured with higher image quality or over a wider range by combining images captured separately by the separate cameras 9A and 9B. In this case, the camera 9A and the camera 9B can be configured to differ in, for example, imaging angle, focus, zoom degree, or the illuminance of a light (not shown) provided for each of them. When the imaging target PP is three-dimensional, as illustrated in FIG. 11, a composite image with improved accuracy can be generated by recognizing its three-dimensional shape and arrangement and then performing geometric correction of the imaging target PP. More specifically, for example, a high-accuracy composite image can be generated by correcting the curvature of the pages when a thick book serving as the imaging target PP is opened.
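The combination of the frames from the cameras 9A and 9B into one composite image can be pictured with the minimal sketch below. It assumes, purely for illustration, that the imaging ranges AR1 and AR2 overlap by a known number of pixel columns and that geometric correction has already been applied; the overlap is averaged to smooth exposure differences between the two cameras:

```python
import numpy as np

def composite(left, right, overlap):
    """Join two horizontally adjacent frames that share `overlap`
    columns, blending the shared columns by averaging."""
    assert right.shape[0] == left.shape[0], "frames must share the same height"
    blended = (left[:, -overlap:].astype(int) + right[:, :overlap].astype(int)) // 2
    return np.hstack([left[:, :-overlap],
                      blended.astype(left.dtype),
                      right[:, overlap:]])
```

In practice the overlap would be found by feature matching rather than assumed, but the sketch shows how two partial views widen the effective imaging range.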
Furthermore, the CPU 1 can compare the images captured by the cameras 9A and 9B (for example, images captured at successive times) and, when it detects based on the comparison result that the cameras 9A and 9B have moved (blurred) due to, for example, vibration of the smartphone S3 itself, prohibit capture at that timing. As described above, with the capture system CS3 according to the third embodiment, in addition to the operational effects of the capture system CS according to the first embodiment, a composite image is generated by combining a plurality of pieces of image data that correspond to the same imaging target PP but differ in imaging conditions. Therefore, for example, the imaging target PP can be imaged three-dimensionally, and image data corresponding to the imaging target PP can be recorded with higher image quality and higher accuracy.
  • FIG. 12 is a block diagram illustrating a schematic configuration of the smartphone included in the capture system according to the fourth embodiment, FIG. 13 is a flowchart illustrating capture according to the fourth embodiment, and FIGS. 14 to 16 are diagrams illustrating the alignment processing according to the fourth embodiment. Furthermore, in the hardware configuration of the smartphone according to the fourth embodiment, members identical to those of the smartphone S according to the first embodiment are described using the same member numbers.
  • so far, a single camera 9 has been used to capture a document P whose size fits within the imaging range AR (see FIG. 1(a)).
  • in the fourth embodiment, by contrast, one camera 9 is used to capture a document P whose size does not fit within the imaging range AR (capture according to step S6 in FIG. 3). In this case, each part of the document P (a part sized to fit within the imaging range AR) is captured in turn, and the original large document P is then captured by combining the captured parts.
  • capturing of the document P can be performed by generating a composite image by repeatedly capturing the image of each part over the entire document P.
  • the fourth embodiment can also be applied to improving the image quality of the document P as a whole.
  • processes other than the capture according to step S6 are basically the same as the capture processing according to the first embodiment (see FIG. 3), so their detailed description is omitted.
  • the smartphone S4 according to the fourth embodiment includes, like the smartphone S according to the first embodiment, the CPU 1, the ROM 2, the operation unit 4, the display 5, the communication control unit 6, the speaker 7, the microphone 8, the camera 9, a communication interface 10 including an antenna ANT, and a light 11.
  • furthermore, in the RAM 3 of the smartphone S4, besides the current image buffer 32 of the smartphone S according to the first embodiment, a composite image buffer 31 and an aligned current image buffer 33 are formed as volatile storage areas; these serve as buffers necessary for executing the capture processing according to the fourth embodiment centered on the CPU 1.
  • next, each buffer in the RAM 3 other than the current image buffer 32 will be specifically described.
  • the composite image buffer 31 sequentially stores, as the composition processing progresses, image data corresponding to the high-image-quality / wide-range composite image formed by the composition processing in the capture according to the fourth embodiment.
  • the aligned current image buffer 33 stores one frame of image data that becomes the target of the composition processing after the alignment processing in the image processing according to the fourth embodiment. Note that the current image stored in the current image buffer 32 according to the fourth embodiment is, at that time, the target of the alignment processing described later.
  • the composite image buffer 31 of the smartphone S4 is initialized to “zero” once at the start of the entire capture shown in FIG. 3 including the processing of the fourth embodiment.
  • then, when the suitability of the image data stored in the current image buffer 32 at that time has been determined by the same processing as steps S3 to S5 shown in FIG. 3, the CPU 1 starts the alignment processing according to the fourth embodiment shown in FIG. 13.
  • first, the CPU 1 performs the non-rigid alignment processing according to the fourth embodiment using the image data stored in the current image buffer 32 (step S21). The non-rigid alignment processing at this point compares the portion of the document P captured as the current image with the other portions of the document P captured earlier and stored in the composite image buffer 31, and aligns the current image so that the overlapping portions of the two are joined together without discontinuity as an image. The alignment processing is described in more detail later with reference to FIGS. 14 to 16.
  • next, the CPU 1 stores the image data of the current image after the alignment processing in the aligned current image buffer 33 (step S22).
  • next, in step S23, the CPU 1 adds the image data in the aligned current image buffer 33 to the image data in the composite image buffer 31; where image areas overlap, the image quality is improved by using, for example, the average of the pixel values (step S23). In other words, in step S23 the current image is added to the composite image synthesized so far, so that the proportion of the entire document P covered by the composite image is enlarged, or the image quality of the corresponding part is improved.
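The overlap handling of step S23 (averaging pixel values where the new image overlaps the composite) can be illustrated with a minimal sketch. The buffer layout, names, and running-average formulation below are illustrative assumptions, not the patent's implementation.

```python
def merge_into_composite(composite, counts, tile, top, left):
    """Add an aligned tile into the composite image buffer; pixels that
    overlap already-composited data are combined by a running average
    of pixel values, improving image quality in the overlap."""
    for dy, row in enumerate(tile):
        for dx, value in enumerate(row):
            y, x = top + dy, left + dx
            counts[y][x] += 1
            n = counts[y][x]
            # running mean: m_n = m_{n-1} + (v - m_{n-1}) / n
            composite[y][x] += (value - composite[y][x]) / n

composite = [[0.0] * 4]  # 1x4 composite image buffer, initialized to zero
counts = [[0] * 4]       # per-pixel contribution counts
merge_into_composite(composite, counts, [[10, 20]], 0, 0)
merge_into_composite(composite, counts, [[30, 40]], 0, 1)  # overlaps at x=1
print(composite)  # [[10.0, 25.0, 40.0, 0.0]]
```

The overlapping pixel (originally 20, then 30) ends up as their average, 25.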
  • thereafter, in step S24, the CPU 1 confirms whether or not to end the generation of the composite image currently being synthesized according to the fourth embodiment. This end confirmation can be configured, for example, to check whether the current image has been captured a preset number of times and to end when that number of captures has been completed; alternatively, it may be ended by a predetermined end operation.
  • when, in the confirmation in step S24, generation of the same composite image is to continue (step S24; NO), the CPU 1 ends the capture processing according to the fourth embodiment for the current image and proceeds to the same processing as step S8 shown in FIG. 3.
  • on the other hand, when generation of the composite image currently being synthesized is to end in the confirmation in step S24 (step S24; YES), the CPU 1 records (that is, captures) in the ROM 2 the image data corresponding to the composite image stored in the composite image buffer 31 at that time (step S25), initializes the composite image buffer 31 (step S26), and then proceeds to step S8 shown in FIG. 3.
  • next, the non-rigid alignment processing according to step S21 will be specifically described with reference to FIGS. 14 to 16. The alignment processing according to step S21 compares the portion of the document P captured as the current image with the composite image (the image data stored in the composite image buffer 31), and aligns (deforms) the current image so that the overlapping portions of the two are joined together without discontinuity as an image.
  • in the following description, it is assumed that the composite image GA (the image data stored in the composite image buffer 31) shown on the left of FIG. 14(a) is used as the reference image, and that the non-rigid alignment processing is performed on the current image GT shown on the right of FIG. 14(a).
  • the CPU 1 first divides the current image GT into a predetermined number of parts, as illustrated in FIG. 14(b). In the case illustrated on the right of FIG. 14(b), the current image GT is divided into four divided images GTa to GTd; however, a larger number of divisions is preferable for obtaining higher image quality. Next, the CPU 1 focuses on one divided image, as illustrated in FIG. 14(c); in the illustrated case, the CPU 1 focuses on the divided image GTa.
  • the CPU 1 superimposes the divided image GTa of interest on the synthesized image GA synthesized up to that point.
  • the coordinate axes in FIG. 15 take as their origin (0, 0) the upper left corner of the area corresponding to each divided image in the composite image GA and the upper left corner of each divided image; the rightward direction in FIG. 15 is the positive direction of the x coordinate axis, and the downward direction is the positive direction of the y coordinate axis. In the initial state, the offset is (0, 0).
  • thereafter, the CPU 1 searches in the composite image GA for the position (offset) at which the content of the divided image GTa best matches. As the search (matching) method, for example, a method using mutual information (Mutual Information), or a method using the sum of absolute luminance differences (SAD) over the target region (in the case of FIG. 15(b), the region of the divided image GTa), can be used. In the case illustrated in FIG. 15, the CPU 1 obtains the coordinate data (−2, +3) as the offset.
  • thereafter, the CPU 1 similarly searches for the positions (offsets) at which the contents of the divided images GTb to GTd other than the divided image GTa illustrated in FIG. 15 best match. As a result, the CPU 1 obtains as the offsets the coordinate data (−2, +3) for the divided image GTa, (+2, +3) for the divided image GTb, (+4, −1) for the divided image GTc, and (−3, −1) for the divided image GTd.
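The offset search described for FIG. 15 can be sketched with a small SAD-based matcher. This is a hypothetical pure-Python rendering under assumed names and data layout; the patent also mentions mutual information as an alternative matching criterion, which is not shown here.

```python
def sad(region_a, region_b):
    """Sum of absolute luminance differences between equal-size regions."""
    return sum(abs(a - b) for ra, rb in zip(region_a, region_b)
               for a, b in zip(ra, rb))

def crop(image, top, left, h, w):
    """Extract an h-by-w region whose upper-left corner is (top, left)."""
    return [row[left:left + w] for row in image[top:top + h]]

def best_offset(reference, tile, base_top, base_left, search=4):
    """Scan offsets (dx, dy) around the tile's nominal position in the
    reference (composite) image and return the offset minimising SAD."""
    h, w = len(tile), len(tile[0])
    best = None
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            top, left = base_top + dy, base_left + dx
            if (top < 0 or left < 0 or
                    top + h > len(reference) or left + w > len(reference[0])):
                continue  # offset would fall outside the reference image
            score = sad(crop(reference, top, left, h, w), tile)
            if best is None or score < best[0]:
                best = (score, dx, dy)
    return best[1], best[2]

# Synthetic reference; the "divided image" is cut out at (3, 4) but its
# nominal position is (2, 2), so the search should report offset (+2, +1).
reference = [[(r * 7 + c * 13) % 50 for c in range(10)] for r in range(10)]
tile = crop(reference, 3, 4, 3, 3)
print(best_offset(reference, tile, 2, 2))  # (2, 1)
```

In practice the search window and the matching criterion would be chosen to balance speed against robustness to lighting changes.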
  • in this way, the amount by which the center point of each of the divided images GTa to GTd should be moved in the alignment processing according to step S21 is obtained as the offsets described above, as exemplified on the right of FIG. 16. This center point may generally be called an "anchor".
  • alignment processing can also be performed so as to deform the entire current image GT, as illustrated in FIG. 16, by so-called interpolation or extrapolation based on the amount of movement of the anchor of each of the divided images GTa to GTd. In this case, the content of the current image GT can be made to match the content of the composite image GA even more closely. Furthermore, movement or deformation (including rotation, enlargement/reduction, or trapezoidal deformation) of the entire current image GT, or of each of the divided images GTa to GTd as a whole, can also be used.
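The interpolation mentioned in the bullet above could, for instance, blend the four anchor offsets smoothly across the current image GT. The sketch below uses inverse-square-distance weighting and purely illustrative anchor coordinates; the patent does not fix a specific interpolation scheme, so both are assumptions.

```python
def interpolate_offset(x, y, anchors):
    """Blend the anchor offsets into a per-pixel displacement by
    inverse-square-distance weighting (one possible interpolation)."""
    num_x = num_y = den = 0.0
    for (ax, ay), (dx, dy) in anchors:
        d2 = (x - ax) ** 2 + (y - ay) ** 2
        if d2 == 0:
            return (float(dx), float(dy))  # exactly on an anchor
        w = 1.0 / d2
        num_x += w * dx
        num_y += w * dy
        den += w
    return (num_x / den, num_y / den)

# Anchors of GTa..GTd with the offsets found in FIG. 15 (positions assumed)
anchors = [((25, 25), (-2, +3)), ((75, 25), (+2, +3)),
           ((25, 75), (+4, -1)), ((75, 75), (-3, -1))]
print(interpolate_offset(25, 25, anchors))  # (-2.0, 3.0), on the GTa anchor
print(interpolate_offset(50, 50, anchors))  # approximately (0.25, 1.0)
```

At the image center all four anchors weigh equally, so the displacement is simply the mean of the four offsets.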
  • as described above, with the capture according to the fourth embodiment, in addition to the operational effects of the capture according to the first embodiment, a composite image is generated from a plurality of image data items corresponding to the same document P and then captured, so image data corresponding to a higher-quality image, or to a wider range of the document P, can be captured.
  • moreover, because the user shifts the document P between captures, even a document P too large to be captured entirely in one capture can be captured with high image quality. That is, when the relative position between the camera 9 and the document P is appropriately changed between successive imagings, a composite image with higher image quality or a wider range than the images corresponding to the individual image data items before and after the change is generated for the same document P, so a higher-quality composite image corresponding to the document P can be captured.

(V) Modifications
  • the present invention can be applied in various ways other than in the above-described embodiments.
  • first, the document P to be imaged may be a document P on which personal information identifying an individual is written. In this case, even if personal information is written on the document P, it does not flow out of the smartphone S, so the personal information can be protected easily and reliably, and the trouble of writing or entering it anew can be saved.
  • as another modification, event information identifying the event linked to captured image data may be generated automatically. Event information indicates an event associated with an image, and it is generally considered to include, for example, information indicating the date and place of the event.
  • it is then checked whether the generated event information corresponds to the same event as any event information already recorded in an external event information management server or in the ROM 2. When it does not correspond to the same event as any event information recorded in the event information management server or the like, the newly generated event information is recorded in the event information management server or the like.
  • in this case, the position of the camera 9 (smartphone S) at the time the image data was captured may be detected using, for example, GPS (Global Positioning System), and the event information may be generated so as to include position data indicating the detected position. In this way, highly valuable event information can easily be maintained.
  • as a third modification, personal information that is associated with captured image data and that specifies the individual related to that image data may be generated automatically, and it may then be checked whether the generated personal information corresponds to the individual indicated by any personal information already recorded in an external personal information management server or the like, or in the smartphone S itself.
  • when the personal information does not correspond to the individual indicated by any personal information recorded in the personal information management server or the like, or in the smartphone S itself, the newly generated personal information can be configured to be recorded in the personal information management server or the like. In this case, personal information related to the document P can be generated and recorded easily, so the personal information can be easily maintained.
  • further, in addition to the event information and personal information described above, link information for specifying the event and the individual indicated by them may be generated automatically.
  • for example, attendance information indicating that the individual specified by the personal information participated in the event indicated by the event information corresponds to such link information.
  • this attendance information is recorded/stored in, for example, an attendance information management server or the like that is separate from the event information management server and the personal information management server.
  • in this case, the presence or absence of attendance information is confirmed by collating the event information management server or the like and the personal information management server or the like with the attendance information management server or the like.
  • as a further modification, image data or character string data having advertisement content may, for example, be added to the image data output corresponding to the identified timing, or may replace and be inserted into part of that image data, and the image data to which it has been added may then be captured.
  • in this case, the advertisement can be recorded together with the imaged information and referred to later; furthermore, since the advertisement can be distributed as an image, the user can enjoy the service related to the present invention at a lower cost.
  • alternatively, an external image processing server or the like may add the image data having advertisement content, and the smartphone S may receive and capture the resulting image data. In this case, the advertisement can be recorded together with the imaged information and referred to later while reducing the processing load on the smartphone S.
  • alternatively, one or more items of image data having advertisement content may be recorded in advance in the ROM 2 of the smartphone S and added to the image data output corresponding to the timing at which the document P is identified as being placed at the predetermined position. At that time, the content of the output image data may be recognized by the CPU 1, and image data having advertisement content corresponding to the recognized content may be read from the ROM 2 and added. In this case, with a configuration completed within the smartphone S, the advertisement can be recorded together with the imaged information and referred to later.
  • alternatively, the CPU 1 may recognize the content of the image data output corresponding to the timing at which the document P is identified as being placed at the predetermined position, transmit the recognition result to an external image processing server or the like, and receive and add image data having advertisement content transmitted from the image processing server or the like based on that recognition result. In this case as well, the advertisement can be recorded together with the imaged information and referred to later.
  • the stand ST according to each embodiment can be configured so that its angle and height are adjustable.
  • the system can also be configured to image the document P from an oblique direction and to geometrically correct the captured image into a general rectangular document image. At this time, the angle of the camera 9 with respect to the document P can be detected using an attitude sensor (an acceleration sensor or a gyro sensor) provided in the smartphone S and used for the geometric correction.
  • the stand ST and the document (a notebook or the like) can be physically fixed together with a clip or the like. In this case, if the document is moved by hand, the stand moves with it.
  • the camera 9 can be configured to automatically adjust its zoom or swing angle. It is also possible to recognize and capture not only one document P but a plurality of imaging objects (for example, small documents such as business cards) simultaneously.
  • the imaging object according to the present invention is not limited to the document P described in each embodiment, that is, paper; the system can also be configured to image, for example, content displayed on a portable display device such as a slate personal computer or the display device of a so-called electronic book.
  • further, the system can be configured to record image data after embedding in it, as an image, at least one of identification information (a character string such as a serial number) identifying each item of image data at the time of capture, time information indicating when the image data was captured, and location information indicating the place where the capture was performed. In this case, since at least one of the identification information, time information, and location information is embedded as an image and recorded when the image data is captured, the recorded image data can easily be identified.
  • as a specific example of such identification information, a serial number or the like (for example, the character string "1234") on a captured printed material may be recognized by, for example, a character recognition function taking the image as input, and a related character string such as "1234-2" (as opposed to an unrelated one such as "3456") may be embedded in newly captured image data before recording.
  • the program corresponding to the flowcharts shown in FIGS. 3 and 13 may be acquired via a network such as the Internet, or recorded on an information recording medium such as an optical disk, and read and executed by, for example, a general-purpose microcomputer. The microcomputer in this case executes the same processing as the CPU 1 according to each embodiment.
  • the present invention can be used in the field of image recording apparatuses; a particularly remarkable effect is obtained when it is applied to the field of image recording apparatuses that capture images taken by the camera 9. In addition, as described above, high-quality or wide-range image synthesis can be carried out easily by anyone, anytime, anywhere, while keeping personal privacy and work-related information security at the highest level and without troublesome effort. This is an epoch-making value not available in previous devices.

Abstract

Provided is an image recording device able to image and record, simply and with high resolution, information in a subject to be imaged, such as a document. A smartphone (S) is internally provided with: a camera (9) the relative position of which to a prescribed position where a document (P) is to be placed is fixed, said camera continuously acquiring images of the prescribed position if no document (P) is placed thereon or images of a document (P) if placed on the prescribed position and outputting corresponding image data for each image acquired; and a central processing unit (CPU) that, on the basis of the output image data, identifies whether the document (P) is placed at the prescribed position and, when the document (P) is identified to be currently placed, captures output image data corresponding to the time of the identification.

Description

Image recording apparatus, image recording method, image recording program, and information recording medium
 The present invention belongs to the technical fields of image recording apparatuses, image recording methods, image recording programs, and information recording media. More specifically, it belongs to the technical fields of an image recording apparatus provided with imaging means such as a camera, an image recording method executed in that image recording apparatus, an image recording program used in that image recording apparatus, and an information recording medium on which that image recording program is recorded.
 In the information-oriented society of recent years, there remains strong demand, even in the daily lives of individuals outside of work, to digitally image information written as characters, symbols, or figures on documents (paper) and the like with high image quality and high resolution, or over a wide range. For such imaging, it is common to use a so-called scanner; a so-called flatbed scanner, for example, is typical in this case.
 However, owning such a scanner for personal, non-work use is often inconvenient in terms of cost, installation space, and complexity of handling. In recent years, many individuals also do not own a personal computer, so environments in which no personal computer exists to serve as the host to which a scanner would be connected are increasing in private homes and the like. Furthermore, imaging information printed in books, magazines, or on large paper, which generally requires a large flatbed scanner, is often even more difficult for the reasons described above.
 On the other hand, so-called digital cameras have become common as imaging devices that individuals can handle easily. However, even when a currently available digital camera is used in place of a scanner, imaging often fails when the camera is handled by an ordinary individual, for example because of camera shake during imaging or inappropriate ambient lighting. In addition, when performing multiple imagings (scans), the troublesome task of re-setting the document to be imaged and picking up the digital camera again must be repeated for each imaging.
 In the case of handwritten documents, the order in which entries were made on the document often becomes important later; however, simply imaging the final document after handwriting is complete generally does not allow the order in which the content was handwritten to be recovered. To preserve such a handwriting order as information, the very troublesome effort of interrupting the handwriting work and imaging the handwritten content each time one item is handwritten would be required.
 Further, when imaging is performed by connecting a scanner to a personal computer, the read image data is basically stored or accumulated in that personal computer. In this case, if a scanner at an outside location or owned by another person is used, for example, the read image data is stored in the personal computer at that location or owned by that person; as a result, usability is often poor, for example because of confidentiality concerns or the effort needed to move the image data to one's own personal computer afterwards.
 On the other hand, recently everyone has come to carry portable information devices with built-in cameras, such as so-called smartphones and tablet information terminals, and these are often worn at all times. Moreover, the imaging capability and image processing capability of the cameras provided in such information devices, as well as their network connection capability, have all improved greatly in recent years, and rapid further improvement is expected.
 As prior art related to the scanner or imaging processing described above, the contents disclosed in, for example, Patent Document 1 and Patent Document 2 below are known.
Patent Document 1: US Patent No. 6,061,478
Patent Document 2: US Patent No. 7,949,370 B1
 As described above, despite the strong demand for easily imaging information written on documents and the like, no effective method currently exists by which an individual can readily achieve this. The establishment of such an imaging method is therefore desired.
 The present invention has been made in view of the above demand. One example of its object is to provide an image recording apparatus capable of easily imaging and recording, with high image quality, information written on an imaging object such as a document, an image recording method executed in that image recording apparatus, an image recording program used in that image recording apparatus, and an information recording medium on which that image recording program is recorded.
 In order to solve the above problem, the invention according to claim 1 comprises: imaging means, such as a camera, whose position relative to an object placement position on which an imaging object such as a document is placed is fixed, and which continuously images the object placement position when no imaging object is placed there, or the imaging object placed at that position, and outputs imaging information corresponding to the object placement position or the imaging object for each imaging; identification means, such as a CPU, for identifying, based on each item of output imaging information, whether or not the imaging object is placed at the object placement position; and recording means, such as a CPU, for recording, on a recording medium such as a ROM, the imaging information output corresponding to the identified timing when it is identified that the imaging object is placed.
 In order to solve the above problem, the invention according to claim 33 is an image recording method executed in an image recording apparatus comprising imaging means, such as a camera, whose position relative to an object placement position on which an imaging object such as a document is placed is fixed, and which continuously images the object placement position when no imaging object is placed there, or the imaging object placed at that position, and outputs imaging information corresponding to the object placement position or the imaging object for each imaging, the method including: an identification step of identifying, based on each item of output imaging information, whether or not the imaging object is placed at the object placement position; and a recording step of recording, on a recording medium, the imaging information output corresponding to the identified timing when it is identified that the imaging object is placed.
 In order to solve the above problem, the invention according to claim 34 causes a computer included in an image recording apparatus comprising imaging means, such as a camera, whose position relative to an object placement position on which an imaging object such as a document is placed is fixed, and which continuously images the object placement position when no imaging object is placed there, or the imaging object placed at that position, and outputs imaging information corresponding to the object placement position or the imaging object for each imaging, to function as: identification means for identifying, based on each item of output imaging information, whether or not the imaging object is placed at the object placement position; and recording means for recording, on a recording medium, the imaging information output corresponding to the identified timing when it is identified that the imaging object is placed.
 In order to solve the above problem, on the information recording medium of the invention according to claim 35, an image recording program is recorded so as to be readable by a computer included in an image recording apparatus comprising imaging means, such as a camera, whose position relative to an object placement position on which an imaging object such as a document is placed is fixed, and which continuously images the object placement position when no imaging object is placed there, or the imaging object placed at that position, and outputs imaging information corresponding to the object placement position or the imaging object for each imaging; the program causes the computer to function as: identification means for identifying, based on each item of output imaging information, whether or not the imaging object is placed at the object placement position; and recording means for recording, on a recording medium, the imaging information output corresponding to the identified timing when it is identified that the imaging object is placed.
 According to the invention described in claim 1 or in any one of claims 33 to 35, the object placement position or the imaging object is continuously imaged by imaging means whose position relative to the object placement position is fixed, and when it is identified, based on each item of imaging information, that the imaging object is placed at the object placement position, the imaging information output corresponding to the timing of that identification is automatically recorded on a recording medium. Therefore, the information written on the imaging object can be imaged and recorded easily while preventing, for example, blurring caused by the user's operation of the imaging means for recording or imaging.
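The claim-1 behaviour (continuous imaging, identification, automatic recording) reduces to a very small control loop. The sketch below is a deliberately abstract illustration; the frame representation and the callbacks are placeholders, not the patent's implementation.

```python
def capture_loop(frames, is_document_placed, record):
    """For each item of imaging information output by the camera, identify
    whether the imaging object is placed; if so, record that frame."""
    for frame in frames:
        if is_document_placed(frame):
            record(frame)

recorded = []
frames = ["empty", "doc-A", "empty", "doc-B"]  # stand-ins for image data
capture_loop(frames, lambda f: f.startswith("doc"), recorded.append)
print(recorded)  # ['doc-A', 'doc-B']
```

The key property is that recording is triggered by the identification result alone, with no shutter operation by the user.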
 In order to solve the above problem, the invention according to claim 2 is the image recording apparatus according to claim 1, wherein the identification means is configured to identify that the imaging object is placed at the object placement position when, based on the imaging information output from the imaging means, a boundary separating the imaging object from its surroundings is identified in each piece of imaging information.

 According to the invention described in claim 2, in addition to the effect of the invention described in claim 1, it is identified that the imaging object is placed at the object placement position when a boundary separating the imaging object from its surroundings is identified in each piece of imaging information output from the imaging means, so the imaging object can be captured, and the information written on it recorded, more reliably and simply.
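 The boundary-based identification of claim 2 could be realized in many ways; the following is only a minimal illustrative sketch, not the patented implementation. It assumes grayscale frames represented as 2D lists of ints and a bright document against a darker background; the function name and threshold are hypothetical:

```python
def find_document_boundary(frame, threshold=100):
    """Return the bounding box (top, left, bottom, right) of a bright
    region (the document) against a darker background, or None if no
    clear boundary is found. `frame` is a 2D list of grayscale values."""
    # Rows that contain at least one pixel brighter than the background.
    rows = [r for r, row in enumerate(frame) if max(row) > threshold]
    if not rows:
        return None  # no boundary -> no object placed
    cols = [c for c in range(len(frame[0]))
            if max(frame[r][c] for r in rows) > threshold]
    return (rows[0], cols[0], rows[-1], cols[-1])
```

 In this sketch, a non-None result would correspond to "a boundary was identified, so the imaging object is placed"; the returned box could also drive the boundary-line display of claim 3.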
 In order to solve the above problem, the invention according to claim 3 is the image recording apparatus according to claim 2, further comprising display means, such as a display unit, that, when the boundary is identified, displays a boundary line indicating the identified boundary in association with the imaging object in the imaging information.

 According to the invention described in claim 3, in addition to the effect of the invention described in claim 2, when the boundary between the imaging object and its surroundings is identified, a boundary line indicating that boundary is displayed in association with the imaging object. By, for example, moving the imaging object so that this display becomes more accurate, the user can improve both the recognition accuracy of the boundary and the success rate of the capture itself.
 In order to solve the above problem, the invention according to claim 4 is the image recording apparatus according to claim 1, wherein the identification means is configured to identify that the imaging object is placed at the object placement position when, based on the imaging information output from the imaging means, part or all of each piece of imaging information is identified as having changed by a predetermined amount over time and the change is then identified as having ceased.

 According to the invention described in claim 4, in addition to the effect of the invention described in claim 1, it is identified that the imaging object is placed at the object placement position when part or all of the imaging information changes by a predetermined amount over time and the change then ceases, so the imaging object can be captured, and the information written on it recorded, more reliably and simply.
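 The "change followed by no change" criterion of claim 4 can be sketched as a scan over consecutive frame differences; this is a hypothetical illustration under assumed thresholds, with frames as 2D lists of grayscale values:

```python
def detect_placement(frames, change_thresh=50, still_frames=3):
    """Return the index of the frame at which an object is deemed placed:
    a large inter-frame change (the object entering the view) must be
    followed by `still_frames` consecutive effectively unchanged frames."""
    def diff(a, b):
        # Sum of absolute per-pixel differences between two frames.
        return sum(abs(x - y) for ra, rb in zip(a, b) for x, y in zip(ra, rb))

    changed_at = None  # index of the last large change, if any
    still = 0          # unchanged frames seen since that change
    for i in range(1, len(frames)):
        if diff(frames[i - 1], frames[i]) > change_thresh:
            changed_at, still = i, 0
        elif changed_at is not None:
            still += 1
            if still >= still_frames:
                return i
    return None
```

 A sequence that never changes yields None (nothing was placed), matching the claim's requirement that a change must occur first.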
 In order to solve the above problem, the invention according to claim 5 is the image recording apparatus according to claim 1, wherein the identification means is configured to identify that the imaging object is placed at the object placement position when, based on feature information indicating the features of the portion of the mounting table corresponding to the object placement position, it is recognized that the object placement position on the mounting table has been obscured.

 According to the invention described in claim 5, in addition to the effect of the invention described in claim 1, it is identified that the imaging object is placed at the object placement position when, based on feature information indicating the features of the portion of the mounting table on which the imaging object is placed corresponding to that position, it is recognized that the object placement position on the mounting table has been obscured, so the imaging object can be captured, and the information written on it recorded, more reliably and simply.
 In order to solve the above problem, the invention according to claim 6 is the image recording apparatus according to claim 1, wherein the identification means is configured to identify that the imaging object is placed at the object placement position based on the difference between imaging information corresponding to the object placement position with no imaging object on it and other imaging information.

 According to the invention described in claim 6, in addition to the effect of the invention described in claim 1, placement of the imaging object is identified based on the difference between the imaging information corresponding to the empty object placement position and other imaging information, so the imaging object can be captured, and the information written on it recorded, more reliably and simply.
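 The difference-based identification of claim 6 is essentially background subtraction against a reference capture of the empty placement position. A minimal sketch, assuming grayscale 2D-list frames and hypothetical thresholds:

```python
def object_present(empty_ref, frame, pixel_thresh=30, ratio=0.05):
    """Compare `frame` against `empty_ref`, a capture of the empty
    placement position; report presence when the fraction of pixels that
    differ by more than `pixel_thresh` reaches `ratio`."""
    total = len(frame) * len(frame[0])
    differing = sum(1 for ra, rb in zip(empty_ref, frame)
                    for x, y in zip(ra, rb) if abs(x - y) > pixel_thresh)
    return differing / total >= ratio
```

 The ratio threshold is a guard against sensor noise: a handful of flickering pixels in an otherwise empty view does not count as a placed object.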
 In order to solve the above problem, the invention according to claim 7 is the image recording apparatus according to any one of claims 1 to 6, further comprising notification means, such as a display unit, that continually announces whether or not the imaging object is recognized as being placed.

 According to the invention described in claim 7, in addition to the effect of the invention described in any one of claims 1 to 6, whether or not the imaging object is recognized as being placed is announced continually, so that the user can, for example, move the imaging object in response to the notification, improving both the recognition accuracy of the placement and the success rate of the capture itself. The notification in this case may be given visually, audibly, or by vibrating the image recording apparatus.
 In order to solve the above problem, the invention according to claim 8 is the image recording apparatus according to any one of claims 1 to 7, comprising: determination means, such as a CPU, for determining, after it has been identified that the imaging object is placed at the object placement position, whether or not at least the imaging object in the imaging information has remained still for a preset time; and control means, such as a CPU, for controlling the recording means to record on the recording medium the imaging information output in response to the determination by the determination means that the imaging object has remained still for the preset time.

 According to the invention described in claim 8, in addition to the effect of the invention described in any one of claims 1 to 7, after it is identified that the imaging object is placed, the imaging information output in response to the determination that at least the imaging object has remained still for a preset time is recorded on the recording medium. Capturing in a state in which at least the imaging object is unchanged allows the imaging object to be captured at higher image quality.
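 The stillness condition of claim 8 can be sketched as a stillness clock over timestamped frames: any motion restarts the clock, and the first frame that has been stable for the preset time is the one to record. This is an illustrative assumption, not the claimed implementation; frames are 2D lists and the tolerance is hypothetical:

```python
def frame_to_record(timestamped_frames, still_seconds=1.0, tol=10):
    """Given (timestamp, frame) pairs after placement has been identified,
    return the first frame that has remained effectively unchanged for
    `still_seconds`, or None if stillness is never reached."""
    def diff(a, b):
        # Largest per-pixel change between two frames.
        return max(abs(x - y) for ra, rb in zip(a, b) for x, y in zip(ra, rb))

    since = None  # time at which the current still period began
    for i, (t, f) in enumerate(timestamped_frames):
        if i == 0 or diff(timestamped_frames[i - 1][1], f) > tol:
            since = t  # motion detected: restart the stillness clock
        elif t - since >= still_seconds:
            return f
    return None
```

 Returning None maps onto the claim's behavior of simply not recording until the object has settled.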
 In order to solve the above problem, the invention according to claim 9 is the image recording apparatus according to any one of claims 1 to 8, comprising: determination means, such as a CPU, for determining, after it has been identified that the imaging object is placed at the object placement position, whether or not something other than the imaging object has been captured within the range of the imaging object in the imaging information; and control means, such as a CPU, for controlling the recording means to prohibit recording on the recording medium of the imaging information output at the timing of the determination when it is determined that such another object has been captured within that range.

 According to the invention described in claim 9, in addition to the effect of the invention described in any one of claims 1 to 8, when it is determined, after the imaging object is identified as being placed, that another object has been captured within the range of the imaging object, recording of the imaging information output at that timing is prohibited, which prevents imaging information in which something other than the imaging object appears from being recorded.
 In order to solve the above problem, the invention according to claim 10 is the image recording apparatus according to any one of claims 1 to 9, comprising: comparison means, such as a CPU, for comparing, for each capture, the imaging object in first imaging information already recorded on the recording medium with the imaging object in second imaging information output from the imaging means after the timing at which the first imaging information was recorded; and control means, such as a CPU, for controlling the recording means to prohibit recording of the second imaging information on the recording medium when the imaging object in the first imaging information and the imaging object in the second imaging information are identical.

 According to the invention described in claim 10, in addition to the effect of the invention described in any one of claims 1 to 9, the imaging object in the recorded first imaging information is compared, for each capture, with the imaging object in the not-yet-recorded second imaging information, and recording of the second imaging information is prohibited when the two are identical, which prevents imaging information containing the same imaging object from being recorded in duplicate.
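 One common way to decide whether two captures show the same object, shown here only as a hypothetical stand-in for the comparison means of claim 10, is a tiny average hash compared bit-by-bit:

```python
def avg_hash(frame):
    """A tiny average-hash: each pixel becomes one bit depending on
    whether it exceeds the frame's mean intensity."""
    flat = [v for row in frame for v in row]
    mean = sum(flat) / len(flat)
    return tuple(v > mean for v in flat)

def is_duplicate(recorded_frame, new_frame, max_bit_diff=2):
    """Treat the new capture as a duplicate of the recorded one when
    their hashes differ in at most `max_bit_diff` positions."""
    h1, h2 = avg_hash(recorded_frame), avg_hash(new_frame)
    return sum(a != b for a, b in zip(h1, h2)) <= max_bit_diff
```

 The small bit-difference allowance makes the check robust to minor lighting or positioning differences between the first and second capture of the same document.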
 In order to solve the above problem, the invention according to claim 11 is the image recording apparatus according to any one of claims 1 to 10, comprising projection means, such as a projector, for projecting an image of a related object associated with the imaging object onto the imaging object, the apparatus being configured to control the recording means to record on the recording medium the imaging information output from the imaging means at the timing at which the image is being projected.

 According to the invention described in claim 11, in addition to the effect of the invention described in any one of claims 1 to 10, an image of a related object associated with the imaging object is projected onto the imaging object and the imaging information output while the image is being projected is recorded, so that the imaging object can be captured and recorded with its relationship to the related object made clear.
 In order to solve the above problem, the invention according to claim 12 is the image recording apparatus according to claim 11, configured to control the recording means to record on the recording medium both the imaging information corresponding to the imaging object onto which the image is being projected and the imaging information corresponding to the imaging object while projection of the image is temporarily interrupted.

 According to the invention described in claim 12, in addition to the effect of the invention described in claim 11, both the imaging information corresponding to the imaging object with the image of the related object projected onto it and the imaging information corresponding to the imaging object while projection of that image is temporarily interrupted are recorded on the recording medium, so that imaging information corresponding to the imaging object free of the projected image can be recorded selectively.
 In order to solve the above problem, the invention according to claim 13 is the image recording apparatus according to any one of claims 1 to 12, further comprising synthesis means, such as a CPU, for combining a plurality of pieces of imaging information corresponding to the same imaging object to generate composite imaging information, the apparatus being configured to control the recording means to record the generated composite imaging information on the recording medium.

 According to the invention described in claim 13, in addition to the effect of the invention described in any one of claims 1 to 12, a plurality of pieces of imaging information corresponding to the same imaging object are combined into composite imaging information and recorded, so that imaging information corresponding to the imaging object at higher image quality or over a wider range can be recorded.
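 As one hypothetical example of the higher-quality compositing of claim 13 (not the claimed method itself), several captures of the same stationary object can be averaged per pixel to suppress sensor noise:

```python
def composite_average(frames):
    """Combine several same-sized captures of the same object by
    per-pixel averaging, a simple way to reduce random sensor noise
    in the composite."""
    n = len(frames)
    return [[round(sum(f[r][c] for f in frames) / n)
             for c in range(len(frames[0][0]))]
            for r in range(len(frames[0]))]
```

 Averaging is only one option; median stacking or exposure fusion would serve the same claim language equally well.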
 In order to solve the above problem, the invention according to claim 14 is the image recording apparatus according to claim 13, wherein the synthesis means is configured to generate the composite imaging information by combining a plurality of pieces of imaging information that correspond to the same imaging object but were captured under mutually different imaging conditions.

 According to the invention described in claim 14, in addition to the effect of the invention described in claim 13, composite imaging information is generated by combining a plurality of pieces of imaging information that correspond to the same imaging object but differ in imaging conditions, so that, for example, the imaging object can be captured stereoscopically and imaging information corresponding to the imaging object at higher image quality and higher accuracy can be recorded.
 In order to solve the above problem, the invention according to claim 15 is the image recording apparatus according to claim 13, wherein, based on a change in the relative position between the imaging means and the imaging object between successive captures, the synthesis means is configured to use a plurality of pieces of imaging information of the same imaging object from before and after the change to generate composite imaging information corresponding to a composite image of higher image quality or wider range than the image corresponding to any individual piece of imaging information.

 According to the invention described in claim 15, in addition to the effect of the invention described in claim 13, composite imaging information corresponding to a composite image of higher image quality or wider range than the images corresponding to the individual pieces of imaging information from before and after the change is generated, based on a change in the relative position between the imaging means and the imaging object between successive captures, so that composite imaging information of higher image quality or wider range corresponding to the imaging object can be recorded.
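 The wider-range compositing of claim 15 amounts to stitching captures taken before and after a relative shift. A toy sketch, assuming the horizontal overlap between the two captures is already known (real stitching would estimate it by registration):

```python
def stitch_horizontal(left, right, overlap):
    """Form a wider composite from two same-height captures that overlap
    by `overlap` columns, averaging the shared region."""
    width_l = len(left[0])
    out = []
    for rl, rr in zip(left, right):
        # Blend the overlapping columns, then append the new columns.
        shared = [round((rl[width_l - overlap + i] + rr[i]) / 2)
                  for i in range(overlap)]
        out.append(rl[:width_l - overlap] + shared + rr[overlap:])
    return out
```

 The same before/after pairing could instead feed a super-resolution step to obtain the "higher image quality" branch of the claim.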
 In order to solve the above problem, the invention according to claim 16 is the image recording apparatus according to any one of claims 1 to 15, wherein the imaging object is an imaging object on which personal information identifying an individual is written.

 According to the invention described in claim 16, in addition to the effect of the invention described in any one of claims 1 to 15, personal information is written on the imaging object, which saves the trouble of writing down or entering the personal information again.
 In order to solve the above problem, the invention according to claim 17 is the image recording apparatus according to any one of claims 1 to 16, further comprising: event information generation means, such as a CPU, for generating, when the imaging information is recorded on the recording medium, event information that is associated with the recorded imaging information and identifies an event related to it; collation means, such as a CPU, for checking whether or not the generated event information corresponds to the same event as any event information already recorded in event information recording means; and transmission means, such as a communication interface, for causing the event information recording means to record the generated event information when it corresponds to the same event as none of the event information recorded there.

 According to the invention described in claim 17, in addition to the effect of the invention described in any one of claims 1 to 16, when imaging information is recorded on the recording medium, event information associated with the recorded imaging information and identifying a related event is generated, and when it corresponds to the same event as none of the event information recorded in the event information recording means, that means is made to record it. Generating and recording, in this simple way, event information that is associated with the imaging information and identifies a related event allows the event information to be maintained easily.
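 The record-if-new, link-if-existing logic of claims 17 and 18 can be sketched as follows. The dict fields and the matching key (date plus location) are purely illustrative assumptions; the claims leave open how two pieces of event information are judged to describe the same event:

```python
def register_event(event_store, new_event):
    """Record `new_event` only if no stored event matches it; otherwise
    return the matching stored event, so that the new capture can be
    associated with the already-recorded event instead."""
    key = (new_event["date"], new_event["location"])
    for ev in event_store:
        if (ev["date"], ev["location"]) == key:
            return ev  # same event already recorded: reuse it
    event_store.append(new_event)  # genuinely new event: record it
    return new_event
```

 The analogous claims for personal information (claims 19 and 20) and link information (claims 21 and 22) follow the same pattern with a different matching key.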
 In order to solve the above problem, the invention according to claim 18 is the image recording apparatus according to claim 17, further comprising association changing means, such as a CPU, for changing, when the generated event information corresponds to the same event as event information already recorded in the event information recording means, the association target of the imaging information associated with the generated event information to the event information of that same event.

 According to the invention described in claim 18, in addition to the effect of the invention described in claim 17, when the generated event information corresponds to the same event as recorded event information, the association target of the imaging information associated with the generated event information is changed to the event information of that same event, so that multiple pieces of imaging information can be managed in association with already-recorded event information.
 In order to solve the above problem, the invention according to claim 19 is the image recording apparatus according to any one of claims 1 to 18, further comprising: personal information generation means, such as a CPU, for generating, when the imaging information is recorded on the recording medium, personal information that is associated with the recorded imaging information and identifies an individual related to it; collation means, such as a CPU, for checking whether or not the generated personal information corresponds to an individual indicated by any personal information already recorded in personal information recording means; and transmission means, such as a communication interface, for causing the personal information recording means to record the generated personal information when it corresponds to an individual indicated by none of the personal information recorded there.

 According to the invention described in claim 19, in addition to the effect of the invention described in any one of claims 1 to 18, when imaging information is recorded on the recording medium, personal information associated with the recorded imaging information is generated, and when it corresponds to an individual indicated by none of the personal information recorded in the personal information recording means, that means is made to record it. Generating and recording personal information related to the imaging object in this simple way allows the personal information to be maintained easily.
 In order to solve the above problem, the invention according to claim 20 is the image recording apparatus according to claim 19, further comprising association changing means, such as a CPU, for changing, when the generated personal information corresponds to an individual indicated by personal information already recorded in the personal information recording means, the association target of the imaging information associated with the generated personal information to the recorded personal information of that individual.

 According to the invention described in claim 20, in addition to the effect of the invention described in claim 19, when the generated personal information corresponds to an individual indicated by recorded personal information, the association target of the imaging information associated with the generated personal information is changed to that individual's recorded personal information, so that multiple pieces of imaging information can be managed in association with already-recorded personal information.
 In order to solve the above problem, the invention according to claim 21 is the image recording apparatus according to any one of claims 1 to 20, further comprising: link information generation means, such as a CPU, for generating, when the imaging information is recorded on the recording medium, link information that is associated with the recorded imaging information and identifies an individual and an event related to it; collation means, such as a CPU, for checking whether or not the generated link information identifies the same individual and event as any link information already recorded in link information recording means; and transmission means, such as a communication interface, for causing the link information recording means to record the generated link information when it identifies an individual and event identified by none of the link information recorded there.

 According to the invention described in claim 21, in addition to the effect of the invention described in any one of claims 1 to 20, link information is generated when imaging information is recorded on the recording medium, and when it identifies an individual and event identified by none of the link information recorded in the link information recording means, that means is made to record it. Generating and recording link information associated with the imaging information in this simple way allows the link information to be maintained easily.
 In order to solve the above problem, the invention according to claim 22 is the image recording apparatus according to claim 21, further comprising association changing means, such as a CPU, for changing, when the generated link information identifies the individual and the event identified by link information already recorded in the link information recording means, the association target of the imaging information associated with the generated link information to the recorded link information.

 According to the invention described in claim 22, in addition to the effect of the invention described in claim 21, when the generated link information identifies the individual and event identified by recorded link information, the association target of the imaging information associated with the generated link information is changed to the recorded link information, so that multiple pieces of imaging information can be managed in association with already-recorded link information.
 In order to solve the above problem, the invention according to claim 23 is the image recording apparatus according to claim 17 or claim 18, further comprising position detection means, such as a CPU, for detecting the position of the imaging means at the time the imaging information is recorded on the recording medium and generating position information indicating the detected position, wherein the event information generation means is configured to generate the event information so as to include the generated position information.

 According to the invention described in claim 23, in addition to the effect of the invention described in claim 17 or claim 18, position information indicating the position of the imaging means at the time the imaging information was recorded is included in the event information, so that event information of high utility can be maintained easily.
 In order to solve the above problem, the invention according to claim 24 is the image recording apparatus according to any one of claims 1 to 23, wherein the imaging means is imaging means provided in a portable information processing apparatus.
 According to the invention of claim 24, in addition to the effect of the invention of any one of claims 1 to 23, since the imaging means is provided in a portable information processing apparatus, the imaging object can be imaged with a simple configuration.
 In order to solve the above problem, the invention according to claim 25 is the image recording apparatus according to any one of claims 1 to 24, wherein the imaging means is imaging means whose relative position is kept constant by the image recording apparatus being held in a portable holder.
 According to the invention of claim 25, in addition to the effect of the invention of any one of claims 1 to 24, the relative position of the imaging means is kept constant by the image recording apparatus being held in a portable holder, so that the imaging object can be imaged with a simple configuration.
 In order to solve the above problem, the invention according to claim 26 is the image recording apparatus according to claim 24 or 25, wherein the imaging means is imaging means whose relative position is kept constant by the image recording apparatus being held in a holder, and the holder is a holder assembled by folding a foldable sheet-like material, which is unfolded flat into a sheet when carried and, when used, is folded and assembled so as to be able to hold the image recording apparatus.
 According to the invention of claim 26, in addition to the effect of the invention of claim 24 or 25, the relative position of the imaging means is kept constant by the image recording apparatus being held in the holder, and the holder is assembled by folding a foldable sheet-like material, unfolded into a sheet when carried and folded and assembled to hold the image recording apparatus when used; the imaging object can therefore be imaged stably with nothing more than an inexpensive, lightweight, and highly portable holder.
 In order to solve the above problem, the invention according to claim 27 is the image recording apparatus according to any one of claims 1 to 26, further comprising image processing means, such as a CPU, which, when it is identified that the imaging object is placed at the object placement position, adds advertisement information having advertisement content to the imaging information output in correspondence with the identified timing, wherein the recording means records the imaging information to which the advertisement information has been added on the recording medium.
 According to the invention of claim 27, in addition to the effect of the invention of any one of claims 1 to 26, imaging information to which advertisement information has been added is recorded; by distributing advertisements as images in this way, the user can enjoy services related to the present invention at lower cost.
 In order to solve the above problem, the invention according to claim 28 is the image recording apparatus according to claim 27, wherein the image processing means comprises: imaging information transmitting means, such as a communication interface, which transmits, to an external information processing apparatus, at least part of the imaging information output in correspondence with the timing at which the imaging object was identified as being placed at the object placement position; and imaging information receiving means, such as a communication interface, which receives the imaging information to which the advertisement information has been added in the information processing apparatus on the basis of a result of recognition, in the information processing apparatus, of the content of the transmitted imaging information; and wherein the recording means records the received imaging information on the recording medium.
 According to the invention of claim 28, in addition to the effect of the invention of claim 27, at least part of the imaging information output in correspondence with the timing at which the imaging object was identified as being placed is transmitted to an external information processing apparatus, which recognizes its content and adds the advertisement information, and the imaging information with the added advertisement information is received and recorded; thus the advertisement can be recorded together with the imaged information and referred to later, while the processing load on the image recording apparatus is reduced.
 In order to solve the above problem, the invention according to claim 29 is the image recording apparatus according to claim 27, wherein the image processing means comprises: advertisement information recording means, such as a ROM, which records one or more pieces of the advertisement information in advance; recognition means, such as a CPU, which recognizes the content of the imaging information output in correspondence with the timing at which the imaging object was identified as being placed at the object placement position; and reading means, such as a CPU, which reads the advertisement information corresponding to the recognized content from the advertisement information recording means; and wherein the image processing means adds the read advertisement information to the imaging information whose content was recognized.
 According to the invention of claim 29, in addition to the effect of the invention of claim 27, advertisement information corresponding to the content of the imaging information output in correspondence with the timing at which the imaging object was identified as being placed is read from the advertisement information recording means and added to the imaging information; with a configuration that is complete within the image recording apparatus, the advertisement can thus be recorded together with the imaged information and referred to later.
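The claim requires only pre-stored advertisement information, recognition means, and reading means; as a purely illustrative sketch (the table contents, keyword matching, and all names are hypothetical), the self-contained flow might look like this:

```python
# Hypothetical sketch of the claim-29 flow: advertisement information is held
# in advance (in the claim, in a ROM or the like); the recognized content of
# the imaging information selects which advertisement to add, so the whole
# flow completes inside the image recording apparatus.

AD_TABLE = {  # stands in for the advertisement information recording means
    'recipe': 'ad: kitchenware sale',
    'timetable': 'ad: rail pass discount',
}
DEFAULT_AD = 'ad: generic banner'

def recognize_content(imaging_info):
    """Stand-in for the recognition means: here, a trivial keyword check."""
    for keyword in AD_TABLE:
        if keyword in imaging_info['text']:
            return keyword
    return None

def add_advertisement(imaging_info):
    content = recognize_content(imaging_info)
    ad = AD_TABLE.get(content, DEFAULT_AD)  # stands in for the reading means
    imaging_info['ad'] = ad                 # image processing means adds it
    return imaging_info
```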
 In order to solve the above problem, the invention according to claim 30 is the image recording apparatus according to claim 27, wherein the image processing means comprises: recognition means, such as a CPU, which recognizes the content of the imaging information output in correspondence with the timing at which the imaging object was identified as being placed at the object placement position; recognition result transmitting means, such as a communication interface, which transmits the recognition result of the recognition means to an external information processing apparatus; and advertisement information receiving means, such as a communication interface, which receives the advertisement information transmitted from the information processing apparatus on the basis of the transmitted recognition result; and wherein the image processing means adds the received advertisement information to the imaging information whose content was recognized.
 According to the invention of claim 30, in addition to the effect of the invention of claim 27, the result of recognizing the content of the imaging information output in correspondence with the timing at which the imaging object was identified as being placed is transmitted to an external information processing apparatus, and advertisement information based on that recognition result is received and added to the imaging information; the advertisement can thus be recorded together with the imaged information and referred to later.
 In order to solve the above problem, the invention according to claim 31 is the image recording apparatus according to any one of claims 1 to 30, wherein, when recording the imaging information on the recording medium, the recording means embeds, as an image within the imaging information, at least one of identification information for identifying each piece of imaging information, time information indicating when the imaging object corresponding to the imaging information was imaged, and location information indicating the place where the imaging was performed, and then records the imaging information on the recording medium.
 According to the invention of claim 31, in addition to the effect of the invention of any one of claims 1 to 30, when imaging information is recorded, at least one of the identification information, the time information, and the location information is embedded as an image within the imaging information before recording, so that the recorded imaging information can be easily identified afterward.
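As a non-normative illustration of the claim-31 embedding (the claim does not prescribe how the stamp is rendered), the following sketch models an image as a list of equal-length text rows and "renders" the stamp by overwriting the bottom row; the row representation and all names are hypothetical.

```python
# Hypothetical sketch of the claim-31 embedding: identification, time, and
# location information are burned into the picture itself (here an image is
# modelled as a list of equal-length text rows, and rendering the stamp
# means overwriting the bottom row), so the information travels with the
# image data rather than living only in separate metadata.

def embed_stamp(image_rows, image_id, time_text, place_text):
    width = len(image_rows[0])
    stamp = ('%s %s %s' % (image_id, time_text, place_text))[:width]
    stamp = stamp.ljust(width)          # pad to the image width
    return image_rows[:-1] + [stamp]    # bottom row now carries the stamp
```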
 In order to solve the above problem, the invention according to claim 32 is the image recording apparatus according to claim 31, wherein, when the recording means recognizes that other identification information is already contained in the captured imaging information, it generates new identification information related to the recognized other identification information, embeds the generated new identification information in the imaging information to be recorded on the recording medium by that imaging, and records it.
 According to the invention of claim 32, in addition to the effect of the invention of claim 31, when it is recognized that other identification information is already contained in the captured imaging information, new identification information related to that other identification information is generated, embedded in the imaging information, and recorded; the relationship among the pieces of identification information thus makes it possible to reliably identify imaging information obtained through multiple rounds of imaging.
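The "related new identification information" of claim 32 admits many encodings; one hypothetical choice (not taken from the specification) is a generation suffix appended to, or incremented on, the identifier recognized inside the image, as sketched below.

```python
# Hypothetical sketch of the claim-32 chaining: when a captured image already
# carries an identification stamp (e.g. the image is a re-capture of an
# earlier stamped printout), a new identifier is derived from the recognized
# one, so successive generations of captures stay linked.

import re

def derive_related_id(found_id):
    """found_id is the identification information recognized inside the image,
    e.g. 'IMG-0001' or 'IMG-0001.2'. Append or bump a generation suffix."""
    m = re.match(r'^(.*)\.(\d+)$', found_id)
    if m:
        return '%s.%d' % (m.group(1), int(m.group(2)) + 1)
    return found_id + '.2'
```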
 According to the present invention, the object placement position or the imaging object is continuously imaged by imaging means whose position relative to the object placement position on which the imaging object is placed is kept constant, and when it is identified, on the basis of each piece of imaging information, that the imaging object is placed at the object placement position, the imaging information output in correspondence with the identified timing is automatically recorded on the recording medium.
 Therefore, the information written on the imaging object can be imaged and recorded easily while preventing, for example, blurring caused by the user's operation of the imaging means for recording or imaging.
BRIEF DESCRIPTION OF THE DRAWINGS
FIG. 1 is an external perspective view and related diagrams showing the schematic configuration of the capture system according to the first embodiment, in which (a) is the external perspective view and (b) is a diagram illustrating a captured image.
FIG. 2 is a block diagram showing the schematic configuration of the smartphone according to the first embodiment.
FIG. 3 is a flowchart showing the capture processing according to the first embodiment.
FIG. 4 is a diagram (I) illustrating the capture processing according to the first embodiment, in which (a) shows a first example and (b) shows a second example of the capture processing.
FIG. 5 is a diagram (II) illustrating the capture processing according to the first embodiment, in which (a) through (e) show a third through a seventh example of the capture processing.
FIG. 6 is an external perspective view (I) showing another example of the stand included in the capture system according to the first embodiment.
FIG. 7 is an external perspective view (II) showing another example of the stand included in the capture system according to the first embodiment, in which (a) is a front view of the stand before assembly and (b) is an external perspective view showing an example of use of the stand after assembly.
FIG. 8 is an external perspective view showing the schematic configuration of a first example of the capture system according to the second embodiment.
FIG. 9 is an external perspective view showing the schematic configuration of a second example of the capture system according to the second embodiment.
FIG. 10 is an external perspective view showing the schematic configuration of a third example of the capture system according to the second embodiment.
FIG. 11 is an external perspective view showing the schematic configuration of the capture system according to the third embodiment.
FIG. 12 is a block diagram showing the schematic configuration of the smartphone included in the capture system according to the fourth embodiment.
FIG. 13 is a flowchart showing the capture processing according to the fourth embodiment.
FIG. 14 is a diagram (i) illustrating the alignment processing according to the fourth embodiment, in which (a) illustrates the state before alignment and (b) and (c) illustrate the alignment processing in progress.
FIG. 15 is a diagram (ii) illustrating the alignment processing according to the fourth embodiment, in which (a) through (d) illustrate the alignment processing in progress.
FIG. 16 is a diagram (iii) illustrating the alignment processing according to the fourth embodiment.
 Next, modes for carrying out the present invention will be described with reference to the drawings. Each of the embodiments described below applies the present invention to a capture system that captures (that is, records) images using a portable smartphone equipped with a digital camera (hereinafter simply referred to as a "camera") capable of continuous shooting and moving-image capture. The present invention may also be applied to a capture system that captures images using a standalone digital camera rather than the camera built into the smartphone. The person carrying the smartphone according to the present invention is hereinafter simply referred to as the "carrier".
(I) First Embodiment
 First, a first embodiment according to the present invention will be described with reference to FIGS. 1 to 7. FIG. 1 is an external perspective view and related diagrams showing the schematic configuration of the capture system according to the first embodiment, FIG. 2 is a block diagram showing the schematic configuration of the smartphone according to the first embodiment, and FIG. 3 is a flowchart showing the capture processing according to the first embodiment. FIGS. 4 and 5 illustrate examples of the capture processing according to the first embodiment, and FIGS. 6 and 7 are external perspective views showing other examples of the stand included in the capture system according to the first embodiment.
 In the capture system according to the first embodiment, information written with characters, symbols, figures, or the like on, for example, a document or a three-dimensional object as an example of the imaging object is captured by continuous shooting or moving-image capture using the smartphone camera described above, which is held at a fixed position relative to the position where the imaging object is placed. Thereafter, from among the image data corresponding to the captured images, the image data selected by the capture processing described later is recorded (captured).
 That is, as shown in FIG. 1(a), the capture system CS according to the first embodiment comprises a smartphone S equipped with a camera 9, and a stand ST that supports the smartphone S so that the camera 9 faces a document P, which is an example of the imaging object. The smartphone S in this case corresponds to an example of the "image recording apparatus" according to the present invention. Because the stand ST supports the smartphone S so that it does not move, the position of the camera 9 (and of its imaging range AR) is kept constant relative to the position where the document P is placed. When the camera 9 of the smartphone S supported by the stand ST images the document P placed, for example, on a desk D within the imaging range AR, the image obtained by the imaging includes the document P on the desk D as illustrated in FIG. 1(b). The example of FIG. 1(b), in which the document P placed on the desk D is the imaging object, also applies to FIGS. 4 and 5 described later.
 In the state shown in FIG. 1, the smartphone S captures the document P by continuous shooting or moving-image capture using the camera 9, and records (captures) within the smartphone S the image data selected, by the capture processing described later, from among the image data corresponding to the captured images. In this way, the capture processing of the document P (in other words, of the information written on the document P) according to the first embodiment is performed.
 Next, details of the configuration and operation of the capture system CS according to the first embodiment will be described with reference to FIGS. 2 to 5.
 First, as shown in FIG. 2, the smartphone S included in the capture system CS according to the first embodiment comprises: a CPU 1; a ROM (Read Only Memory) 2; a RAM (Random Access Memory) 3; an operation unit 4 consisting of operation buttons, a touch panel, and the like; a display 5 consisting of, for example, a liquid crystal display on whose surface the touch panel is arranged; a call control unit 6 to which a speaker 7 and a microphone 8 are connected; the camera 9 described above, as an example of the "imaging means" according to the present invention; a communication interface 10 equipped with an antenna ANT for connecting to an external network not shown in FIG. 1 (a network such as a wireless LAN (Local Area Network), a dedicated line, the Internet, or a so-called 3G line); and a light 11 that, at the time of imaging, illuminates part or all of the imaging object to be imaged by the camera 9. In the RAM 3, a current image buffer 32 is formed as a volatile storage area, serving as a buffer required for executing the capture processing according to the first embodiment, centered on the CPU 1.
 Of the above configuration, the CPU 1 corresponds to examples of the "identifying means", "recording means", "determining means", "control means", "comparing means", "combining means", "event information generating means", "collating means", "personal information generating means", "position detecting means", "image processing means", "reading means", and "link information generating means" according to the present invention. The communication interface 10 corresponds to examples of the "transmitting means", "imaging information transmitting means", "imaging information receiving means", "recognition result transmitting means", and "advertisement information receiving means". Further, the display 5 corresponds to examples of the "display means" and the "notifying means" according to the present invention. Furthermore, the ROM 2 corresponds to examples of the "recording medium", "event information recording means", "personal information recording means", "link information recording means", and "advertisement information recording means" according to the present invention.
 In this configuration, the communication interface 10 controls the exchange of data with the above network via the antenna ANT under the control of the CPU 1. The communication interface 10 may also be configured to control not only wireless data exchange via the antenna ANT but also wired data exchange via, for example, a wired LAN or a so-called USB (Universal Serial Bus) connection.
 The call control unit 6 controls voice calls made with the smartphone S using the microphone 8 and the speaker 7 under the control of the CPU 1. The operation unit 4 generates, on the basis of an operation by the user of the smartphone S, an operation signal corresponding to that operation and outputs it to the CPU 1. The CPU 1 then controls the entire smartphone S on the basis of the operation signal.
 Meanwhile, programs for the processing performed by the smartphone S, including the capture processing according to the first embodiment described later, are recorded in advance in the ROM 2 in a nonvolatile manner. The ROM 2 includes a rewritable area, in which the image data corresponding to images captured by the capture processing according to the first embodiment is recorded. The CPU 1 controls the processing of the smartphone S by reading and executing the programs recorded in the ROM 2. In addition, data required for the processing of the smartphone S, such as telephone number data and address data, is also recorded in the rewritable area of the ROM 2 in a nonvolatile manner.
 The RAM 3 temporarily stores the data required for the current image buffer 32 and, in addition, temporarily stores other data required for the processing of the smartphone S. The display 5, under the control of the CPU 1, displays to the carrier the information required for the processing of the smartphone S in addition to the information required for the capture processing according to the first embodiment.
 The camera 9, under the control of the CPU 1, captures the information on the document P or the like by continuous shooting or moving-image capture, and outputs image data (digitized image data) corresponding to the captured images to the CPU 1, at each shot in the case of continuous shooting or continuously in the case of moving-image capture. The CPU 1 temporarily stores the output image data in the current image buffer 32 in the RAM 3, and executes the capture processing according to the first embodiment using the image data stored in the current image buffer 32. At this time, the light 11, under the control of the CPU 1, illuminates part or all of the document P or the like imaged by the camera 9 so that the illuminance at the time of imaging is suitable for that imaging.
 The current image buffer 32 in the RAM 3 stores the image data for the one frame that is currently the target of the capture processing according to the first embodiment. The image corresponding to this one frame of image data (frame image data) is referred to as the "current image".
 Next, the capture processing according to the first embodiment will be described specifically with reference to FIGS. 3 to 5. The capture processing according to the first embodiment is started, for example, when the carrier performs a predetermined operation on the operation unit 4.
When the document P has been placed and the capture processing is started, as shown in FIG. 3, the CPU 1 first activates the camera 9 so as to image the document P placed within the imaging range AR (step S1). A plurality of images corresponding to the document P (images corresponding to the current image) are thereby taken by continuous shooting or as a moving image. The CPU 1 then temporarily stores the image data of the current image input from the camera 9 in the course of this imaging into the current image buffer 32 (step S2). Next, the CPU 1 identifies whether or not the entire boundary between the document P shown in the image taken by the processing of step S1 and its surroundings (in the example of FIGS. 1 and 4, the boundary BD between the document P and the desk D illustrated on the right of FIG. 4(a)) is recognized (step S3). When the entire boundary is recognized in one of the images in the identification of step S3 (step S3: YES), the CPU 1 next determines whether or not the document P and the like shown in the image data stored at that time in the current image buffer 32 have remained stationary for a preset predetermined time (step S4). In this case, when a boundary is recognized, the boundary BD illustrated in FIG. 4 may, for example, be displayed at the position corresponding to the recognized boundary so as to be superimposed on the image of the document P. It may also be configured so that whether or not the document P is recognized as being placed is announced continuously, for example by blinking the display 5 or the light 11, or by emitting sound from the speaker 7.
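The boundary identification of step S3 can be illustrated with a minimal sketch. The function below is a hypothetical stand-in for the recognition performed by the CPU 1, not the actual implementation: it assumes the white document P photographs brighter than the desk D (the threshold value is an assumed tuning parameter) and treats the boundary BD as fully visible only when no document pixel touches the edge of the frame.

```python
import numpy as np

def boundary_fully_visible(frame: np.ndarray, doc_threshold: int = 200) -> bool:
    """Stand-in for step S3: True only if the bright document region lies
    entirely inside the frame, i.e. the boundary BD with the darker desk D
    is visible on all four sides."""
    doc = frame >= doc_threshold          # white paper vs. darker desk
    if not doc.any():                     # no document detected at all
        return False
    # If any document pixel touches the frame edge, part of the
    # boundary falls outside the imaging range AR.
    return not (doc[0, :].any() or doc[-1, :].any()
                or doc[:, 0].any() or doc[:, -1].any())

# A 10x10 "desk" with a 4x4 "document" fully inside the frame
frame = np.zeros((10, 10), dtype=np.uint8)
frame[3:7, 3:7] = 255

# The same document, but running off the top edge of the frame
frame_cut = np.zeros((10, 10), dtype=np.uint8)
frame_cut[0:4, 3:7] = 255
```

With these two test frames, `boundary_fully_visible(frame)` holds while `boundary_fully_visible(frame_cut)` does not, mirroring the YES/NO branch of step S3.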
When it is determined in step S4 that the document P and the like in the image have been stationary for the predetermined time (step S4: YES), the CPU 1 next compares the image of the image data stored at that time in the current image buffer 32 with the image of the image data captured immediately before and recorded in the ROM 2, and determines whether or not the two are identical (step S5). When the two are not identical in the determination of step S5 (step S5: NO), the CPU 1 records (that is, captures) the image data stored at that time in the current image buffer 32 into the ROM 2 (step S6). Thereafter, the CPU 1 determines whether or not an operation to end the capture processing according to the first embodiment has been performed on the operation unit 4, for example by the user (step S8); when the end operation has been performed (step S8: YES), the capture processing according to the first embodiment is ended. On the other hand, when the end operation has not been performed in the determination of step S8 (step S8: NO), the CPU 1 returns to step S2 and performs the next imaging.
On the other hand, when the entire boundary between the document P and its surroundings is not recognized in the image stored in the current image buffer 32 in the identification of step S3 (step S3: NO), when it is determined in step S4 that the document P and the like in the image of the image data stored in the current image buffer 32 have not been stationary for the predetermined time (step S4: NO), or when the image of the image data stored in the current image buffer 32 is identical to the image of the image data captured immediately before and recorded in the ROM 2 in the determination of step S5 (step S5: YES), the CPU 1 regards the image data stored at that time in the current image buffer 32 as unsuitable for capture, discards it (step S7), and then proceeds to the processing of step S8.
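The branching of steps S2 to S8 described above amounts to a single loop. The sketch below is illustrative only: the callbacks `boundary_ok` and `is_still` stand in for the processing of steps S3 and S4, and a simple inequality stands in for the image comparison of step S5 (in the actual system this is a comparison of image content, not object identity).

```python
def capture_loop(frames, boundary_ok, is_still, rom):
    """One pass of the loop per image in the current image buffer 32
    (step S2); captured frames accumulate in `rom` (the ROM 2)."""
    last_captured = None
    for frame in frames:                      # imaging repeats until the end operation (step S8)
        if (boundary_ok(frame)                # step S3: entire boundary BD recognized?
                and is_still(frame)           # step S4: stationary for the set time?
                and frame != last_captured):  # step S5: changed since the last capture?
            rom.append(frame)                 # step S6: capture into ROM 2
            last_captured = frame
        # otherwise the buffered frame is discarded (step S7)
    return rom

# Frames "A", "A", "B": the duplicate "A" is discarded (step S7),
# so only one "A" and one "B" are captured.
captured = capture_loop(["A", "A", "B"],
                        boundary_ok=lambda f: True,
                        is_still=lambda f: True,
                        rom=[])
```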
When the capture processing according to the first embodiment described above is executed, only when the entire boundary BD between the document P and the desk D is recognized, as shown for example in FIG. 4(a) (see step S3: YES above), and the image including the document P has been stationary for the predetermined time (see step S4: YES above), can the image data stored at that time in the current image buffer 32 become the target of capture. Accordingly, when, as shown for example in FIG. 4(b), a hand H holding a writing instrument for writing on the document P lies on the document P so that part of the boundary BD is not recognized in the image (see step S3: NO above), or when the image including the document P has not been stationary for the predetermined time because of writing by the hand H (see step S4: NO above), the image data stored at that time in the current image buffer 32 is discarded without becoming the target of capture (see step S7 above).
Further, as another example shown in FIG. 5, suppose that the image showing the document P and the desk D of FIG. 5(a) is first captured (see step S6 above), after which, as illustrated in FIG. 5(b), the hand H enters onto the document P but nothing is written on it (that is, the document P remains the same as at the immediately preceding capture). In this case, even if the predetermined time elapses after the hand H leaves the document P (see step S4: YES above), the document P is, as illustrated in FIG. 5(c), the same as at the immediately preceding capture (see FIG. 5(a)) (see step S5: YES above), so the image data stored at that time in the current image buffer 32 is discarded without becoming the target of capture (see step S7 above). By contrast, when the hand H enters onto the document P as illustrated in FIG. 5(b) and, for example, a character L is then written on the document P as illustrated in FIG. 5(d) (that is, the document P is no longer the same as at the immediately preceding capture), then once the predetermined time has elapsed after the hand H leaves the document P (see step S4: YES above), the document P is not the same as at the immediately preceding capture (see step S5: NO above), so the image data stored at that time in the current image buffer 32 is captured (see step S6 above).
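The identity test of step S5 illustrated by FIG. 5 can be sketched as a tolerance on the pixel difference between the last captured image and the buffered current image (exact equality would be defeated by sensor noise). The threshold `tol` is an assumed tuning value, not taken from the patent.

```python
import numpy as np

def same_document(prev, cur, tol: float = 2.0) -> bool:
    """Stand-in for step S5: treat the buffered current image and the last
    captured image as 'the same' when their mean absolute pixel difference
    stays below a small tolerance."""
    if prev is None:
        return False                      # nothing captured yet, so not a duplicate
    diff = np.abs(prev.astype(np.int16) - cur.astype(np.int16))
    return float(diff.mean()) < tol

page = np.full((8, 8), 255, dtype=np.uint8)      # blank document P
noisy = page.copy()
noisy[0, 0] = 254                                 # sensor noise only: still "same"
written = page.copy()
written[2:6, 2:6] = 0                             # a character L written on P: not "same"
```

Here `same_document(page, noisy)` is true (discard, step S7) while `same_document(page, written)` is false (capture, step S6), matching the behaviour of FIGS. 5(c) and 5(d).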
As described above, according to the operation of the capture system CS of the first embodiment, the document P is imaged continuously by the camera 9, whose position relative to the position where the document P is placed is kept fixed, and when it is identified on the basis of each item of image data that the document P is placed at that position, the image data output at the timing corresponding to that identification is captured automatically. The information written on the document P can therefore be captured easily and at high image quality while, for example, blur caused by the user operating the camera 9 for recording or imaging is prevented.
Further, since it is identified that the document P is placed at the predetermined position when the boundary BD delimiting the document P from its surroundings is identified in each item of image data output from the camera 9, the document P can be imaged, and the information written on it captured, more reliably and simply.
Furthermore, since image data output in correspondence with a determination that at least the document P in the image data has been stationary for the preset predetermined time is captured after the document P has been identified as placed, imaging the document P while it is stationary for that predetermined time allows it to be imaged with higher accuracy.
Moreover, since the capture of image data output at the corresponding timing is prohibited when it is determined, after the document P has been identified as placed, that another object such as the hand H is being imaged within the area of the document P, the recording of image data in which an object other than the document P is imaged can be prevented.
In addition, since the document P in already-captured image data and the document P in not-yet-captured image data are compared for each imaging, and capture of the image data is prohibited when the two are identical, duplicate capture of image data including the document P can be prevented when the document P is the same and nothing new has been written on it.
Further, since the camera 9 is provided in the portable smartphone S and the relative position of the camera 9 is kept fixed, for example by holding the smartphone S on a portable stand (see FIG. 7), the document P can be imaged with a simple configuration while the relative position of the camera 9 and the document P is kept constant.
When the stand is one assembled by folding a foldable sheet-like material (see FIG. 7 described later), the document P can be imaged with an even simpler configuration.
Further, when the boundary between the document P and its surroundings is identified and the boundary BD indicating that boundary is displayed in association with the image of the document P, the user can, for example, move the document P so that the display becomes more accurate, which improves the recognition accuracy of the boundary and the success rate of the capture itself.
Furthermore, when whether or not the document P is recognized as being placed is announced continuously by display or the like, the user can, for example, move the document P on the basis of that announcement, which improves the recognition of the placement and the success rate of the imaging itself.
In the capture processing according to the first embodiment described above, the image data of the image whose suitability as the current image to be captured was determined by the processing of steps S3 to S5 is captured. However, the image data to be captured is not limited to the image data of the image used for the recognition (identification) of the document P itself; for example, it may be configured so that the image data of an image newly taken after the timing at which the recognition (identification) of the document P is completed (for example, a high-quality still image taken with the camera 9 switched to still-image mode) is captured (see step S6 above).
Further, the method of setting up the smartphone S according to the first embodiment with respect to the document P and the like is not limited to the method using the stand ST illustrated in FIG. 1(a).
For example, as in the capture system CS1 illustrated in FIG. 6, it may be configured so that the imaging object PP placed on the mounting table B is covered with a transparent plate TB of, for example, acrylic or glass; the imaging object PP is illuminated through the transparent plate TB by a light LT supported by a support portion BS1 fixed to the mounting table B; and the imaging object PP is imaged (see step S1 above) by the camera 9 of the smartphone S, which is placed, with the camera 9 facing the imaging object PP, on a holding table PT supported at a fixed position with respect to the imaging object PP by a support portion BS2 fixed to the mounting table B.
The transparent plate TB in this case may be made openable and closable so that the imaging object PP can be put in and taken out easily. The bottom plate of the mounting table B may be made movable in the vertical direction in FIG. 6 so that its distance from the transparent plate TB can be adjusted according to the height and the like of the imaging object PP. The mounting table B may further have a structure that allows the imaging object PP to be put in and taken out without opening the transparent plate TB. The support portion BS2 is preferably a mechanism that can adjust the position and angle of the camera 9 with respect to the imaging object PP. The holding table PT may be configured as a recessed tray or as a detachable holder shaped to match the external form of the smartphone S, so that the smartphone S can be held stationary. It is also convenient to configure the light LT so that it turns on automatically when the smartphone S is placed on the holding table PT.
As applications of the capture system CS1 illustrated in FIG. 6, when the imaging object is, for example, a thin document or a film, it may be imaged using a light that illuminates it from behind. Conversely to the case illustrated in FIG. 6, the smartphone S may be set up with the camera 9 facing upward, and the document P placed face down on the upper side of the transparent plate TB (that is, above the smartphone S placed with its camera 9 facing upward). Furthermore, by outputting the captured image data from the smartphone S by wire or wirelessly to a projector (not shown) for projection, the capture system CS1 can also be used as a so-called OHP (overhead projector). The holding table PT may also include a drive unit that is driven under the control of the CPU 1 of the smartphone S and moves the imaging range of the camera 9 over the imaging object PP. In addition, an optical mechanism such as a predetermined lens or a polarizing filter may be provided by the support portion BS1 in the vicinity of the camera 9 or the light LT.
As another example, as shown in FIG. 7(a), it may be configured so that the smartphone S is held, with the camera 9 facing the document P, by a foldable stand ST1 cut out of a single sheet-like material (for example, corrugated cardboard).
The stand ST1 shown in FIG. 7(a) consists of a holding table PT1 and support portions PT2 and PT3, formed from the single sheet-like material. In FIG. 7(a), at the joint between the holding table PT1 and the support portion PT2 and the joint between the support portions PT2 and PT3, the solid lines indicate cuts, and the dash-dot lines indicate the portions to be valley-folded when the stand ST1 is assembled. FIG. 7(b) is an external perspective view of the capture system in a state in which the stand ST1 has actually been assembled and the smartphone S has been placed on the holding table PT1 so that the camera 9 faces a document which, although not shown in FIG. 7(b), is placed below the smartphone S at the time of imaging, making imaging possible.
The configurations illustrated in FIGS. 6 and 7 above can also achieve the same effects as the capture system CS according to the first embodiment.
In the capture processing according to the first embodiment described above, if it is configured so that capture of the image data output at the corresponding timing is prohibited when movement of the camera 9 caused by vibration or the like is detected after the document P has been identified as placed, the capture of image data blurred by the movement of the camera 9 can be prevented.
It may also be configured so that, using feature data indicating features such as a reference design or pattern formed in advance at the position where the document P is placed (for example, on a document table), the CPU 1 identifies that the document P is placed at that position when it is recognized that the position has been covered (by the document P).
Furthermore, it may be configured so that the document P is identified as being placed at the position on the basis of the difference between image data corresponding to the position (for example, on a document table) when the document P is not placed there and other image data (for example, the image data taken before and after it).
(II) Second Embodiment
Next, a second embodiment, which is another embodiment according to the present invention, will be described with reference to FIGS. 8 to 10. FIGS. 8 to 10 are external perspective views showing the schematic configuration of each example of the capture system according to the second embodiment. In the hardware configuration of the smartphone according to the second embodiment, members identical to those of the smartphone S according to the first embodiment are given the same member numbers in describing their configuration, functions, and the like.
The capture processing of the capture system CS according to the first embodiment described above was configured to capture only the document P. By contrast, in each example of the capture system according to the second embodiment described below, the document P is captured together with other information projected onto it.
That is, as shown in FIG. 8, as a first example, the capture system CS2-1 according to the second embodiment images the document P placed within the imaging range AR by the camera 9 of a smartphone S supported by a stand ST similar to that of the capture system CS according to the first embodiment.
Here the smartphone S according to the second embodiment includes a projector 20 in addition to the camera 9. The projector 20 projects, for example by a laser method or another optical method, other projection information PJ to be captured together with the document P onto the document P at the time the document P is imaged. This projector 20 corresponds to an example of the "projection means" in the present invention. Images projected onto the document P as the projection information PJ in this case include, for example, ruled lines used when writing on the document P, a form in which the portions to be filled in are left blank, or an image including other figures, characters, and the like to be referred to while writing. The projection information PJ may, for example, be read out from data stored in advance in the ROM 2 of the smartphone S according to the second embodiment and projected, or image or non-image electronic data generated externally may be acquired from outside via a recording medium or the communication interface 10 and projected.
Further, at the time of imaging, it may be configured so that the document P and the projection information PJ are captured together at a timing when the projection information PJ is being projected onto the document P (see step S6 in FIG. 3). In this case, only the information to be written by hand needs to be left on the document P, while the captured image combines the content of the projection information PJ with the information written on the document P on the basis of it. Alternatively, it may be configured so that only the document P, on which writing was done while the projection information PJ was projected, is captured separately from the projection information PJ after the projection of the projection information PJ has ended or while it is temporarily interrupted (see step S6 in FIG. 3). In this case, the information written on the document P can be captured separately from the content of the projection information PJ to be referred to.
As a second example of the capture system according to the second embodiment, the projection information PJ may be projected from a projector 20 arranged behind the document P, as in the capture system CS2-2 shown in FIG. 9. In this case, if the document P is thin enough that the content of the projection information PJ can be seen through it from the front surface of the document P (the surface to be imaged), the same effects as the capture system CS2-1 shown in FIG. 8 can be obtained.
Furthermore, as a third example, as in the capture system CS2-3 shown in FIG. 10, the projection information PJ may be projected from the back side of the document P, through a transparent sheet TS, using a display device DD arranged behind the document P. In this case too, if the document P is thin enough that the content of the projection information PJ can be seen through it from the front surface, the same effects as the capture system CS2-1 shown in FIG. 8 can be obtained. As the display device DD in this case, for example, a liquid crystal panel or a so-called tablet computer can be used.
As described above, according to the operation of each of the capture systems CS2-1 to CS2-3 of the second embodiment, in addition to the effects obtained by the operation of the capture system CS according to the first embodiment, an image of the projection information PJ read out from the ROM 2 is projected onto the document P, and the image data output at a timing when that image is being projected is captured, so the document P can be imaged and recorded while its relationship to the projection information PJ is kept clear.
Further, since image data corresponding to the document P onto which the projection information PJ is projected and image data corresponding to the document P while that projection is temporarily interrupted are both captured, image data corresponding to the document P with the projection ended can be recorded selectively.
(III) Third Embodiment
Next, a third embodiment, which is another embodiment according to the present invention, will be described with reference to FIG. 11. FIG. 11 is an external perspective view showing the schematic configuration of the capture system according to the third embodiment. In the hardware configuration of the smartphone according to the third embodiment, members identical to those of the smartphone S according to the first embodiment are given the same member numbers in describing their configuration, functions, and the like.
For example, the capture processing of the capture system CS according to the first embodiment described above was configured to capture the document P using the single camera 9. By contrast, in the capture system according to the third embodiment described below, the imaging object is captured using a plurality of cameras.
That is, as shown in FIG. 11, the capture system CS3 according to the third embodiment images the imaging object PP placed within an imaging range AR1 by a camera 9A of a smartphone S3 supported by a stand ST similar to that of the capture system CS according to the first embodiment, and in addition images the same imaging object PP, placed within an imaging range AR2, by a camera 9B further provided in the same smartphone S3. In the smartphone S3 according to the third embodiment, the CPU 1 then combines the image taken by the camera 9A and the image taken by the camera 9B to generate one composite image in which the imaging object PP is imaged, and captures it (that is, records it in the ROM 2). In this case, by combining images taken separately by the separate cameras 9A and 9B, the imaging object PP can be captured at higher image quality or over a wider range.
In this case, it may be configured so that, for example, the imaging angle, focus, zoom level, or the illuminance of a light (not shown) provided for each of the cameras 9A and 9B is varied between the camera 9A and the camera 9B. As a result, even when the imaging object PP is a three-dimensional one such as that illustrated in FIG. 11, its three-dimensional shape, arrangement, and the like can be recognized, and the accuracy of the geometric correction of the imaging object PP improved, before the composite image is generated. More specifically, a high-accuracy composite image can be generated by correcting, for example, the curvature of the surface when the imaging object PP is a thick book lying open.
Furthermore, it may be configured so that the CPU 1 compares the images taken by the cameras 9A and 9B (for example, images taken at successive times) and, when it detects on the basis of the comparison result that the cameras 9A and 9B have moved (shaken), for example because of vibration of the smartphone S3 itself, prohibits capture at that timing.
As described above, according to the operation of the capture system CS3 of the third embodiment, in addition to the effects obtained by the operation of the capture system CS according to the first embodiment, a composite image is generated by combining a plurality of items of image data that correspond to the same imaging object PP but were taken under mutually different imaging conditions, so the imaging object PP can, for example, be imaged three-dimensionally, and image data corresponding to the imaging object PP can be recorded at higher image quality and higher accuracy.
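One concrete way to combine shots taken under different imaging conditions, as in the composite-image generation above, is per-pixel focus stacking. The sketch below is a minimal illustration for exposition only, not the compositing method of the embodiment: it scores local sharpness as each pixel's distance from its 4-neighbour mean and keeps, per pixel, the value from the sharpest shot.

```python
import numpy as np

def sharpness(img):
    """Crude high-pass: distance of each pixel from its 4-neighbour mean
    (edges wrap around, which is acceptable for this illustration)."""
    nbr = (np.roll(img, 1, 0) + np.roll(img, -1, 0)
           + np.roll(img, 1, 1) + np.roll(img, -1, 1)) / 4.0
    return np.abs(img - nbr)

def focus_stack(shots):
    """For each pixel, keep the value from the shot that is locally sharpest."""
    stack = np.stack([s.astype(np.float64) for s in shots])
    score = np.stack([sharpness(s) for s in stack])
    best = score.argmax(axis=0)               # index of sharpest shot per pixel
    rows, cols = np.indices(best.shape)
    return stack[best, rows, cols]

# Two synthetic shots: each is sharp (checkered) in one half and
# defocused (flat grey) in the other.
checker = (np.indices((8, 4)).sum(axis=0) % 2) * 255.0
shot_a = np.full((8, 8), 128.0)
shot_a[:, :4] = checker                       # sharp on the left only
shot_b = np.full((8, 8), 128.0)
shot_b[:, 4:] = checker                       # sharp on the right only
fused = focus_stack([shot_a, shot_b])
```

The fused result retains the checkered detail in both halves, i.e. every region comes from the shot in which it was in focus.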
By combining a plurality of items of image data with different focus depths as the imaging condition, it is also possible to capture an image that is in focus in every part of the entire imaging object PP. Moreover, using the plurality of cameras 9A and 9B as in the third embodiment is not strictly necessary for obtaining a plurality of items of image data with different imaging conditions. Specifically, for example, it may be configured so that a plurality of items of image data are taken with a single camera 9 while its focus depth is varied, and these are combined to capture an image that is in focus in every part of the entire imaging object PP.
(IV) Fourth Embodiment
Finally, a fourth embodiment, which is another embodiment according to the present invention, will be described with reference to FIGS. 12 to 16. FIG. 12 is a block diagram showing the schematic configuration of the smartphone included in the capture system according to the fourth embodiment, and FIG. 13 is a flowchart showing the capture processing according to the fourth embodiment. FIGS. 14 to 16 are diagrams each illustrating the alignment processing according to the fourth embodiment. In the hardware configuration of the smartphone according to the fourth embodiment, members identical to those of the smartphone S according to the first embodiment are given the same member numbers in describing their configuration, functions, and the like.
 For example, in the capture process of the capture system CS according to the first embodiment described above, a single camera 9 was used to capture a document P small enough to fit within its imaging range AR (see FIG. 1(a)). In contrast, in the capture system according to the fourth embodiment described below, a document P too large to fit within the imaging range AR is captured (the capture of step S6 in FIG. 3) using a single camera 9 by capturing each portion of the document P (each portion small enough to fit within the imaging range AR) and then combining these portions to capture the original large document P. In addition, even for a document P that does fit within the imaging range AR, the fourth embodiment can be applied to raise the overall image quality of the capture by repeatedly imaging individual portions across the entire document P and generating a composite image. In the capture process according to the fourth embodiment, processes other than the capture of step S6 are basically the same as those of the capture process according to the first embodiment (see FIG. 3), so their detailed description is omitted.
 First, the configuration of the smartphone according to the fourth embodiment will be described with reference to FIG. 12. As shown in FIG. 12, the smartphone S4 according to the fourth embodiment comprises the same CPU 1, ROM 2, operation unit 4, display 5, communication control unit 6, speaker 7, microphone 8, camera 9, communication interface 10 with antenna ANT, and light 11 as the smartphone S according to the first embodiment. In addition, in the RAM 3 of the smartphone S4, as buffers needed to execute the capture process according to the fourth embodiment centered on the CPU 1, a composite image buffer 31 and an aligned current image buffer 33 are formed as volatile storage areas alongside the same current image buffer 32 as in the smartphone S according to the first embodiment.
 Here, each buffer in the RAM 3 other than the current image buffer 32 will be described in detail.
 First, the composite image buffer 31 sequentially stores, as the composition process progresses, image data corresponding to the composite image formed with higher image quality or over a wider area by the composition process in the capture according to the fourth embodiment. Next, the aligned current image buffer 33 stores one frame of image data that has undergone the alignment process in the image processing according to the fourth embodiment and is about to become the target of the composition process. Note that the current image stored in the current image buffer 32 according to the fourth embodiment is, at any given time, the target of the alignment process described later.
 Next, the capture according to the fourth embodiment (corresponding to step S6 in the capture process according to the first embodiment (FIG. 3)) will be described in detail with reference to FIGS. 12 to 16. In the fourth embodiment, the composite image buffer 31 of the smartphone S4 is initialized to "zero" once at the start of the entire capture shown in FIG. 3, which includes the processing of the fourth embodiment.
 When the suitability of the image data stored at that time in the current image buffer 32 (see step S2 in FIG. 3) as the current image to be captured has been determined by the same processing as steps S3 to S5 shown in FIG. 3, the CPU 1 then starts the alignment process according to the fourth embodiment shown in FIG. 13.
 That is, as shown in FIG. 13, the CPU 1 performs the non-rigid alignment process according to the fourth embodiment using the image data stored in the current image buffer 32 (step S21). This non-rigid alignment process compares the portion of the document P captured as the current image with the other portions of the document P captured earlier and stored in the composite image buffer 31, and aligns the current image so that the two are stitched together without their overlapping region becoming discontinuous as an image. The alignment process is described in more detail later with reference to FIGS. 14 to 16.
 When the non-rigid alignment process for the image data corresponding to the current image is complete (step S21), the CPU 1 stores the image data of the aligned current image in the aligned current image buffer 33 (step S22).
 Next, the CPU 1 adds the image data in the aligned current image buffer 33 to the image data in the composite image buffer 31; where image regions overlap, image quality is improved using, for example, the average of the pixel values (step S23). Through this step S23, the current image is added to the composite image synthesized so far. As a result, the proportion of the entire document P covered by the composite image grows, or the image quality of the corresponding portion improves. The CPU 1 then checks whether to finish generating the composite image currently being synthesized according to the fourth embodiment (step S24). Concretely, this completion check can be configured, for example, to finish once a preset number of current-image captures has been completed, by checking whether that number of captures is done. Alternatively, generation may be ended by a predetermined end operation, for example.
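The accumulation of step S23 can be sketched as follows; this is a hypothetical illustration, not the patented code. Each aligned frame is added into a running buffer at its offset, together with a per-pixel contribution count, so overlapping regions end up averaged over every contribution.

```python
import numpy as np

class CompositeBuffer:
    """Running composite in the spirit of step S23: aligned frames are
    accumulated at their offsets, and overlapping pixels are averaged
    over all contributions."""
    def __init__(self, height, width):
        self.acc = np.zeros((height, width))   # sum of pixel contributions
        self.cnt = np.zeros((height, width))   # number of contributions

    def add(self, frame, top, left):
        frame = np.asarray(frame, dtype=float)
        h, w = frame.shape
        self.acc[top:top + h, left:left + w] += frame
        self.cnt[top:top + h, left:left + w] += 1

    def image(self):
        out = np.zeros_like(self.acc)
        covered = self.cnt > 0
        out[covered] = self.acc[covered] / self.cnt[covered]
        return out
```

Averaging suppresses sensor noise in overlaps, which is one way the "image quality of the corresponding portion" improves as captures accumulate.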
 If the same composite image is to continue being generated according to the check in step S24 (step S24: NO), the CPU 1 temporarily ends the capture according to the fourth embodiment and proceeds to the same processing as step S8 shown in FIG. 3. If, on the other hand, generation of the composite image currently being synthesized is to finish (step S24: YES), the CPU 1 records (i.e., captures) in the ROM 2 the image data corresponding to the composite image stored in the composite image buffer 31 at that time (step S25), then initializes the composite image buffer 31 (step S26), and proceeds to the same processing as step S8 shown in FIG. 3.
 Next, the non-rigid alignment process of step S21 will be described in detail with reference to FIGS. 14 to 16.
 As described above, the alignment process of step S21 compares the portion of the document P captured as the current image with the composite image (the image data stored in the composite image buffer 31), and aligns (deforms) the current image so that the two are stitched together without their overlapping region becoming discontinuous as an image. In the examples shown in FIGS. 14 to 16, the composite image GA (the image data stored in the composite image buffer 31) shown on the left of FIG. 14(a) is used as the reference image, and the non-rigid alignment process is performed on the current image GT shown on the right of FIG. 14(a).
 As the alignment process of step S21, the CPU 1 first divides the current image GT into a predetermined number of pieces, as illustrated in FIG. 14(b). In the example shown on the right of FIG. 14(b), the current image GT is divided into four divided images GTa to GTd, but a larger number of divisions is better for obtaining higher image quality. The CPU 1 then focuses on one divided image, as illustrated in FIG. 14(c); in the example of FIG. 14(c), the CPU 1 focuses on the divided image GTa.
 Next, as illustrated in FIG. 15(a), the CPU 1 superimposes the divided image GTa of interest on the composite image GA synthesized up to that point. The coordinate axes in FIG. 15 take as their origin (0, 0) the upper-left corner of the region of the composite image GA corresponding to each divided image and the upper-left corner of each divided image; in FIG. 15, rightward is the positive direction of the x-axis and downward is the positive direction of the y-axis. When the divided image GTa is first superimposed on the composite image GA in FIG. 15(a), its offset is (0, 0).
 The CPU 1 then searches within the composite image GA for the position (offset) at which the content of the divided image GTa matches best, as illustrated in FIG. 15(b). To quantify the agreement between image contents, methods such as mutual information, or the sum of absolute differences (SAD) of luminance over the target region (in the example of FIG. 15(b), the region of the divided image GTa), are suitable. In the example of FIG. 15(b), the CPU 1 obtains the coordinate data (-2, +3) as this offset.
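A minimal sketch of the SAD-based offset search (one of the two matching measures named above; mutual information would slot into the same loop), assuming grayscale numpy arrays and a small search radius around the tile's nominal position:

```python
import numpy as np

def best_offset(composite, tile, top, left, radius=4):
    """Search around the nominal position (top, left) of a divided image
    `tile` for the offset (dx, dy) whose sum of absolute luminance
    differences (SAD) against the composite image is smallest."""
    h, w = tile.shape
    best, best_sad = (0, 0), float("inf")
    for dy in range(-radius, radius + 1):
        for dx in range(-radius, radius + 1):
            y, x = top + dy, left + dx
            # skip candidate windows that fall outside the composite
            if y < 0 or x < 0 or y + h > composite.shape[0] or x + w > composite.shape[1]:
                continue
            sad = np.abs(composite[y:y + h, x:x + w] - tile).sum()
            if sad < best_sad:
                best, best_sad = (dx, dy), sad
    return best
```

An exhaustive search like this is O(radius² · tile area); a production version would typically search coarse-to-fine over an image pyramid.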
 The CPU 1 then similarly searches within the composite image GA for the best-matching positions (offsets) of the divided images GTb to GTd other than the divided image GTa illustrated in FIG. 15. In the example of FIG. 15(c), the CPU 1 obtains coordinate data (-2, +3) as the offset for the divided image GTa, (+2, +3) for the divided image GTb, (+4, -1) for the divided image GTc, and (-3, -1) for the divided image GTd.
 Through the above series of processes, the amount by which the center point of each of the divided images GTa to GTd should be moved in the alignment process of step S21 is obtained as the offset, as illustrated, for example, on the right of FIG. 15(d). This center point is also commonly called an "anchor".
 In addition, the alignment process may deform the entire current image GT, as illustrated in FIG. 16, by so-called interpolation or extrapolation based on the movement amounts of the anchors of the divided images GTa to GTd. In this case, the content of the current image GT can be made to agree even more closely with the content of the composite image GA. Furthermore, movement or deformation of the entire current image GT, or of the divided images GTa to GTd as a whole (including rotation, enlargement/reduction, trapezoidal deformation, and the like), can also be used.
 As described above, the operation of the capture system according to the fourth embodiment provides, in addition to the effects of the capture system CS according to the first embodiment, the following: because multiple sets of image data corresponding to the same document P are combined to generate and capture a composite image, image data corresponding to the document P can be captured with higher image quality or over a wider area.
 Note that by having the user, for example, shift the document P between captures, even a document P too large to be captured entirely in a single capture can be captured with high image quality. That is, by appropriately changing the relative position of the camera 9 and the document P between successive captures, a composite image is generated that has higher image quality or covers a wider area than the images corresponding to the individual sets of image data, before and after the change, that correspond to the same document P; a higher-quality composite image corresponding to the document P can therefore be captured.
(V) Modifications
 The present invention admits various applications beyond the embodiments described above.
(a) First Modification
 First, as a first modification, the document P to be imaged may be a document P on which personal information that personally identifies a person is written. In this case, even if personal information is written on the document P, it never leaves the smartphone S, so the personal information can be protected simply and reliably, and the effort of re-writing or re-entering it can be saved.
(b) Second Modification
 As a second modification, when one set of image data is captured, event information identifying the event associated with the captured image data is automatically generated (i.e., event information indicating an event associated with the image, such as a meeting, a match, or a mountain climb; it is also generally conceivable for this event information to include, for example, information indicating the associated date, time, and place; the same applies below), and the system checks whether this generated event information corresponds to the same event as any event information already recorded in, for example, an external event information management server or the ROM 2. When the event information does not correspond to the same event as any event information recorded in the event information management server or the like, the system can be configured to record the newly generated event information in the event information management server or the like. In this case, by simply generating and recording event information associated with the document P, that event information can be maintained and managed easily.
 In the case of the second modification, the position of the camera 9 (smartphone S) when the image data was captured may be detected using, for example, GPS (Global Positioning System), and the event information may be generated so as to include position data indicating the detected position. In this case, event information of high utility can be maintained and managed easily.
(c) Third Modification
 Next, as a third modification, when one set of image data is captured, personal information that is associated with the captured image data and identifies an individual related to the captured image data is automatically generated, and the system checks whether this generated personal information corresponds to an individual indicated by any personal information already recorded in, for example, an external personal information management server or in the smartphone S itself. When the personal information does not correspond to the individual indicated by any personal information recorded in the personal information management server or the like or in the smartphone S itself, the system can be configured to transmit the newly generated personal information to the personal information management server or the like for recording. In this case, by simply generating and recording personal information related to the document P, that personal information can be maintained and managed easily.
 In the second and third modifications, when one set of image data is captured, link information identifying the event and the individual indicated by the event information and the personal information may additionally be generated automatically. Concretely, this link information corresponds, for example, to attendance information stating that the individual identified by the personal information participates in the event indicated by the event information. This attendance information is then recorded and accumulated in, for example, an attendance information management server separate from the event information management server and the personal information management server. Whether event information, personal information, and attendance information have been recorded is checked against the event information management server, the personal information management server, and the attendance information management server, respectively. Accordingly, if new event information (or personal information, or attendance information) is not recorded in the event information management server (or personal information management server, or attendance information management server), the new information is newly recorded there, keeping its association with the image data as-is. If, on the other hand, the new event information (or personal information, or attendance information) is already recorded in the corresponding server, the association of the captured image data is changed to the already-recorded information. In this way, multiple sets of image data can be associated with one piece of event information (or personal information, or attendance information); as a result, it becomes possible to manage, for example, the association of multiple sets of image data with one piece of personal information, the association of multiple sets of image data with one piece of event information, multiple attendees of one event, and one individual's attendance at multiple events.
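The dedup-and-link policy described for the second and third modifications can be sketched against a hypothetical in-memory store (the server names, dict layout, and `key` field here are all illustrative assumptions, not part of the patent): if an equivalent event record already exists, the capture is re-pointed at it; otherwise the new record is registered and the original association kept.

```python
def link_capture(store, capture_id, event):
    """Associate a captured image with an event record, deduplicating by
    the event's identifying key. `store` is a dict with 'events' (key ->
    event record) and 'links' (capture id -> event key)."""
    if event["key"] not in store["events"]:
        store["events"][event["key"]] = event      # new event: record it as-is
    # either way, the capture ends up linked to the one canonical record
    store["links"][capture_id] = event["key"]
    return store["links"][capture_id]
```

Because every capture of the same event resolves to one canonical record, queries such as "all images from this meeting" or "all events this person attended" become simple lookups.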
(d) Fourth Modification
 Further, as a fourth modification, when it is identified that the document P has been placed at the predetermined position, image data or character-string data having, for example, advertising content may be added to, or inserted in place of part of, the image data output at the identified timing, and the image data with that image data or character-string data added may then be captured. In this case, an advertisement can be recorded together with the imaged information and referred to later, and by distributing advertisements as images, the user can enjoy services related to the present invention at a lower cost.
In the case of the second variation and the third variation, when one image data is captured, in addition to the event information and the personal information, a link for specifying the event and the individual indicated by them Information may be generated automatically. Specifically, for example, the attendance information indicating the participation when the individual specified by the personal information participates in the event indicated by the event information corresponds to the link information. The attendance information is recorded / stored in, for example, an attendance information management server or the like that is separate from the event information management server and the personal information management server. In addition to event information and personal information, the presence / absence of attendance information is collated with the event information management server, etc., the personal information management server, etc. and the attendance information management server, etc. It is configured so that the presence or absence is confirmed. As a result, if new event information (personal information or attendance information) is not recorded in the event information management server or the like (personal information management server or attendance information management server or the like), the association with the image data remains as it is. The new event information (personal information or attendance information) is newly recorded in an event information management server or the like (personal information management server or attendance information management server or the like). In contrast, if new event information (personal information or attendance information) is recorded in an event information management server or the like (personal information management server or attendance information management server or the like), the captured image data is associated. 
, Change to the recorded event information (personal information or attendance information). In this way, it is possible to associate a plurality of image data with one event information (personal information or attendance information). As a result, for example, associating a plurality of image data with one personal information, Information management of association of a plurality of image data to one event information, a plurality of attendees to one event, and attendance to a plurality of events of one individual becomes possible.
(D) Fourth Modification As a fourth modification , when it is identified that the document P is placed at a predetermined position, the image data output corresponding to the identified timing is For example, image data or character string data having the contents of advertisements may be added, or a part of the image data may be replaced and inserted, and image data to which the image data or character string data is added may be captured. In this case, the advertisement can be recorded together with the imaged information and can be referred to later, and further, the advertisement can be distributed as an image, so that the user can enjoy the service related to the present invention at a lower cost. .
 In this case, at least part of the image data output at the timing when the document P was identified as being placed at the predetermined position may be transmitted to, for example, an external image processing server; based on that server's recognition of the content of the transmitted image data, image data to which the image processing server has added image data having advertising content may then be received and captured. In this case, the advertisement can be recorded together with the imaged information and referred to later while reducing the processing load on the smartphone S.
 Alternatively, one or more sets of image data having advertising content may be recorded in advance in the ROM 2 of the smartphone S; the CPU 1 may recognize the content of the image data output at the timing when the document P was identified as being placed at the predetermined position, and image data having advertising content corresponding to the recognized content may be read from the ROM 2 and added. In this case, with a configuration completed entirely within the smartphone S, the advertisement can be recorded together with the imaged information and referred to later.
 On the other hand, the CPU 1 may recognize the content of the image data output at the timing when the document P was identified as being placed at the predetermined position, transmit the recognition result to, for example, an external image processing server, and receive and add image data having advertising content transmitted from that server based on the recognition result. In this case as well, the advertisement can be recorded together with the imaged information and referred to later.
 Furthermore, the present invention admits various modifications such as the following.
・The stand ST according to each embodiment can be configured so that its angle and height are adjustable.
・The system can be configured to supplement the illumination with the light 11 of the smartphone S when imaging.
・The captured image can be geometrically corrected into a standard rectangular document image. In this case, the angle of the camera 9 relative to the document P can be detected using an orientation sensor (an accelerometer or gyro sensor) provided in the smartphone S and used for the geometric correction.
・The stand ST and the document (a notebook or the like) can be physically fixed together with a clip or the like. In this case, when the document is moved by hand, the stand moves along with it.
・The system may be configured to notify the user at the moment a capture is performed automatically by a means other than display, such as sound, a light emitter such as an LED, or vibration.
・When deciding where to place the document P, the recommended placement position can be indicated by light or a laser beam from the smartphone S or the stand ST, or on the display; at the time of the actual capture, the zoom or pan angle of the camera 9 can be adjusted automatically so as to be optimal for the position of the document P.
・The system can be configured to recognize and capture not just one document P but multiple imaging targets (for example, small documents such as business cards) simultaneously.
・The imaging target according to the present invention is not limited to the document P described in each embodiment, i.e., paper; the system can be configured to image, for example, an image displayed on a portable display device such as a slate personal computer or a so-called e-book display device.
・Furthermore, at the time of capture, at least one of identification information for identifying each set of image data (a character string such as a serial number), time information indicating when the image data was captured, and place information indicating where the capture was performed can be embedded as an image within the image data before recording. In this case, because at least one of the identification information, time information, and place information is embedded as an image before the image data is recorded, the recorded image data can be identified easily. Also, for example, when image data captured in the past with an embedded character string "1234" (for example, a serial number) is printed, new writing is added to the printout, and it is captured again, the serial number ("1234") of the captured printout may be recognized by, for example, a character recognition function that takes the image as input, and a related character string such as "1234-2" (a related string, not an unrelated one such as "3456") may be embedded in the newly captured image data before recording. In this case, when it is recognized that another character string or the like is already contained in the captured image data, a new character string or the like related to it is generated and newly embedded in the image data before recording, so that the relatedness of the character strings makes it possible to reliably identify image data obtained through multiple rounds of imaging.
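The geometric correction mentioned above (mapping the keystoned camera view of the document P back to a standard rectangle) reduces to estimating a planar homography from the document's four detected corners. A minimal numpy sketch of the direct linear transform, not the patent's implementation:

```python
import numpy as np

def homography(src, dst):
    """Direct linear transform: 3x3 matrix H mapping four source points
    to four destination points (each an (x, y) pair)."""
    rows = []
    for (x, y), (u, v) in zip(src, dst):
        rows.append([-x, -y, -1, 0, 0, 0, u * x, u * y, u])
        rows.append([0, 0, 0, -x, -y, -1, v * x, v * y, v])
    # H (up to scale) is the null vector of the 8x9 system
    _, _, vt = np.linalg.svd(np.asarray(rows, dtype=float))
    return vt[-1].reshape(3, 3)

def apply_h(h, pt):
    """Apply H to a point in homogeneous coordinates."""
    u, v, w = h @ np.array([pt[0], pt[1], 1.0])
    return u / w, v / w
```

With H in hand, every pixel of the skewed capture can be resampled into the rectified document image; the orientation-sensor angle mentioned in the bullet could seed or constrain this estimate.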
The programs corresponding to the flowcharts shown in FIGS. 3 and 13 may be acquired via a network such as the Internet, or acquired as recorded on an information recording medium such as an optical disc, and may be configured to be read out and executed by, for example, a general-purpose microcomputer. The microcomputer in this case executes the same processing as the CPU 1 according to each embodiment.
As described above, the present invention can be used in the field of image recording apparatuses, and a particularly remarkable effect is obtained when it is applied to image recording apparatuses that capture images taken by the camera 9. Moreover, as described above, anyone, anytime, anywhere can easily perform high-quality or wide-range image composition without effort, while securing privacy for individuals and the security of related information for work, each at the highest level. This is an epoch-making value not found in previous apparatuses.
1  CPU
2  ROM
3  RAM
4  Operation unit
5  Display
6  Call control unit
7  Speaker
8  Microphone
9, 9A, 9B  Camera
10  Communication interface
20  Projector
31  Composite image buffer
32  Current image buffer
33  Aligned current image buffer
4A, 4B, 4C  Operation buttons
S, S3, S4  Smartphone
P  Document
D  Desk
H  Hand
L  Character
BD  Boundary
DD  Display device
TS  Transparent sheet
PJ  Projection information
PT, PT1  Holding stand
ST, ST2  Stand
B  Mounting table
PP  Imaging object
TB  Transparent plate
BS1, BS2, PT2, PT3  Support parts
LT  Light
AR, AR1, AR2  Imaging range
CS, CS1, CS2-1, CS2-2, CS2-3, CS3  Capture system
ANT  Antenna
GA  Composite image
GT  Current image
GTa, GTb, GTc, GTd  Divided images

Claims (35)

  1.  An image recording apparatus comprising:
     imaging means whose position relative to an object placement position, on which an imaging object is placed, is fixed, the imaging means continuously imaging the object placement position with no imaging object placed thereon or the imaging object placed at the object placement position, and outputting, for each imaging, imaging information corresponding to the object placement position or the imaging object;
     identification means for identifying, based on each piece of the output imaging information, whether or not the imaging object is placed at the object placement position; and
     recording means for recording, on a recording medium, the imaging information output at the timing when the imaging object is identified as being placed.
  2.  The image recording apparatus according to claim 1, wherein the identification means identifies that the imaging object is placed at the object placement position when, based on the imaging information output from the imaging means, a boundary separating the imaging object from its surroundings is identified in each piece of the imaging information.
  3.  The image recording apparatus according to claim 2, further comprising display means for displaying, when the boundary is identified, a boundary line indicating the identified boundary in association with the imaging object in the imaging information.
  4.  The image recording apparatus according to claim 1, wherein the identification means identifies that the imaging object is placed at the object placement position when, based on the imaging information output from the imaging means, part or all of the imaging information is identified as having changed by a predetermined amount in time series and the change is identified as having subsequently ceased.
  5.  The image recording apparatus according to claim 1, wherein the identification means identifies that the imaging object is placed at the object placement position when it is recognized, based on feature information indicating a feature of the portion of a placement table that corresponds to the object placement position on which the imaging object is placed, that the object placement position of the placement table is obstructed.
  6.  The image recording apparatus according to claim 1, wherein the identification means identifies that the imaging object is placed at the object placement position based on a difference between the imaging information corresponding to the object placement position with no imaging object placed thereon and other imaging information.
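A minimal way to realize the difference test of claim 6 can be sketched as follows, with grayscale frames represented as nested lists of pixel values; the two thresholds (`pixel_tol`, `min_changed_ratio`) are hypothetical tuning parameters, not values taken from the specification:

```python
def object_placed(empty_frame, current_frame, pixel_tol=10, min_changed_ratio=0.05):
    """Identify placement by differencing the current frame against a
    reference frame of the empty placement position (claim 6).

    Counts pixels whose absolute difference exceeds pixel_tol and reports
    placement when a sufficient fraction of the frame has changed.
    """
    changed = total = 0
    for row_ref, row_cur in zip(empty_frame, current_frame):
        for p_ref, p_cur in zip(row_ref, row_cur):
            total += 1
            if abs(p_ref - p_cur) > pixel_tol:
                changed += 1
    return total > 0 and changed / total >= min_changed_ratio

# Example: a 4x4 view of an empty desk vs. a frame with a bright
# "document" covering one corner of the placement position
empty = [[50] * 4 for _ in range(4)]
placed = [row[:] for row in empty]
for r in range(2):
    for c in range(2):
        placed[r][c] = 200
```

A production system would instead operate on camera buffers (for example via a library such as OpenCV), but the decision logic is the same: difference against the stored empty-position reference, then threshold.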
  7.  The image recording apparatus according to any one of claims 1 to 6, further comprising notification means for sequentially notifying whether or not the imaging object is recognized as being placed.
  8.  The image recording apparatus according to any one of claims 1 to 7, further comprising:
     determination means for determining, after the imaging object is identified as being placed at the object placement position, whether or not at least the imaging object in the imaging information has remained stationary for a preset predetermined time; and
     control means for controlling the recording means so as to record, on the recording medium, the imaging information output in response to the determination by the determination means that the imaging object has remained stationary for the predetermined time.
  9.  The image recording apparatus according to any one of claims 1 to 8, further comprising:
     determination means for determining, after the imaging object is identified as being placed at the object placement position, whether or not an object other than the imaging object is imaged within the range of the imaging object in the imaging information; and
     control means for controlling the recording means so as to prohibit, when it is determined that the other object is imaged within the range of the imaging object in the imaging information, recording on the recording medium of the imaging information output at the timing of that determination.
  10.  The image recording apparatus according to any one of claims 1 to 9, further comprising:
     comparison means for comparing, for each imaging, the imaging object in first imaging information already recorded on the recording medium with the imaging object in second imaging information output from the imaging means after the timing at which the first imaging information was recorded on the recording medium; and
     control means for controlling the recording means so as to prohibit recording of the second imaging information on the recording medium when the imaging object in the first imaging information and the imaging object in the second imaging information are identical.
  11.  The image recording apparatus according to any one of claims 1 to 10, further comprising projection means for projecting, onto the imaging object, an image of a related object related to the imaging object, wherein the recording means is controlled so as to record, on the recording medium, the imaging information output from the imaging means at the timing when the image is projected.
  12.  The image recording apparatus according to claim 11, wherein the recording means is controlled so as to record on the recording medium both the imaging information corresponding to the imaging object onto which the image is projected and the imaging information corresponding to the imaging object while projection of the image is temporarily interrupted.
  13.  The image recording apparatus according to any one of claims 1 to 12, further comprising composition means for combining a plurality of pieces of the imaging information corresponding to the same imaging object to generate composite imaging information, wherein the recording means is controlled so as to record the generated composite imaging information on the recording medium.
  14.  The image recording apparatus according to claim 13, wherein the composition means generates the composite imaging information by combining a plurality of pieces of the imaging information corresponding to the same imaging object and captured under mutually different imaging conditions of the imaging object.
  15.  The image recording apparatus according to claim 13, wherein, based on a change in the relative position between the imaging means and the imaging object between successive imagings, the composition means uses a plurality of pieces of the imaging information corresponding to the same imaging object, from before and after the change, to generate composite imaging information corresponding to a composite image of higher image quality or a wider range than the images corresponding to the individual pieces of imaging information.
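A drastically simplified instance of the synthesis of claims 13 to 15 is averaging several already-aligned captures of the same object, which reduces sensor noise and so raises image quality; the alignment between captures that claim 15 relies on (registering frames taken before and after the relative-position change) is assumed to have been done beforehand and is omitted here:

```python
def composite(frames):
    """Average several aligned grayscale captures of the same object into
    one composite frame (a simplified stand-in for the higher-quality
    synthesis of claims 13-15; registration between captures is omitted).
    """
    n = len(frames)
    return [[round(sum(f[r][c] for f in frames) / n)
             for c in range(len(frames[0][0]))]
            for r in range(len(frames[0]))]
```

Wide-range composition (stitching overlapping captures into a larger image) follows the same pattern but blends only the overlapping regions after registration.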
  16.  The image recording apparatus according to any one of claims 1 to 15, wherein the imaging object is an imaging object on which personal information personally identifying a person is described.
  17.  The image recording apparatus according to any one of claims 1 to 16, further comprising:
     event information generation means for generating, when the imaging information is recorded on the recording medium, event information which is associated with the recorded imaging information and which specifies an event related to the recorded imaging information;
     collation means for checking whether or not the generated event information corresponds to the same event as any event information already recorded in event information recording means; and
     transmission means for causing the event information recording means to record the generated event information when the generated event information does not correspond to the same event as any event information recorded in the event information recording means.
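The collation-and-record flow of claim 17, together with the re-association of claim 18, can be sketched with a dictionary as the event information recording means; the tuple-shaped `event_key` (for example, date plus location) and the store layout are illustrative assumptions:

```python
def register_capture(event_key, capture_id, event_store, associations):
    """Collate newly generated event information against the event store
    (claims 17-18): record the event only if no matching event already
    exists (transmission means), then associate the capture with the
    recorded event (association / association-change means).
    """
    if event_key not in event_store:
        # No recorded event matches: record the generated event information
        event_store[event_key] = {"captures": []}
    # Either way, the capture ends up associated with the stored event
    event_store[event_key]["captures"].append(capture_id)
    associations[capture_id] = event_key
    return event_store[event_key]
```

Two captures generated at the same event thus collapse onto a single recorded event entry rather than producing duplicate event records.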
  18.  The image recording apparatus according to claim 17, further comprising association change means for changing, when the generated event information corresponds to the same event as any event information recorded in the event information recording means, the association destination of the imaging information associated with the generated event information to the event information of that same event.
  19.  The image recording apparatus according to any one of claims 1 to 18, further comprising:
     personal information generation means for generating, when the imaging information is recorded on the recording medium, personal information which is associated with the recorded imaging information and which specifies an individual related to the recorded imaging information;
     collation means for checking whether or not the generated personal information corresponds to an individual indicated by any personal information already recorded in personal information recording means; and
     transmission means for causing the personal information recording means to record the generated personal information when the generated personal information does not correspond to an individual indicated by any personal information recorded in the personal information recording means.
  20.  The image recording apparatus according to claim 19, further comprising association change means for changing, when the generated personal information corresponds to an individual indicated by any personal information recorded in the personal information recording means, the association destination of the imaging information associated with the generated personal information to the personal information of the recorded individual.
  21.  The image recording apparatus according to any one of claims 1 to 20, further comprising:
     link information generation means for generating, when the imaging information is recorded on the recording medium, link information which is associated with the recorded imaging information and which specifies an individual and an event related to the recorded imaging information;
     collation means for checking whether or not the generated link information specifies the same individual and event as any link information already recorded in link information recording means; and
     transmission means for causing the link information recording means to record the generated link information when the generated link information does not specify an individual and an event specified by any link information recorded in the link information recording means.
  22.  The image recording apparatus according to claim 21, further comprising association change means for changing, when the generated link information specifies the individual and the event specified by any link information recorded in the link information recording means, the association destination of the imaging information associated with the generated link information to the recorded link information.
  23.  The image recording apparatus according to claim 17 or 18, further comprising position detection means for detecting the position of the imaging means when the imaging information is recorded on the recording medium and generating position information indicating the detected position, wherein the event information generation means generates the event information so as to include the generated position information.
  24.  The image recording apparatus according to any one of claims 1 to 23, wherein the imaging means is imaging means provided in a portable information processing apparatus.
  25.  The image recording apparatus according to any one of claims 1 to 24, wherein the imaging means is imaging means whose relative position is fixed by the image recording apparatus being held by a portable holder.
  26.  The image recording apparatus according to claim 24 or 25, wherein the imaging means is imaging means whose relative position is fixed by the image recording apparatus being held by a holder, and the holder is a holder assembled by folding a foldable sheet-like material, the holder being unfolded into a sheet when carried and folded and assembled when used so as to be able to hold the image recording apparatus.
  27.  The image recording apparatus according to any one of claims 1 to 26, further comprising image processing means for adding, when the imaging object is identified as being placed at the object placement position, advertisement information having advertisement content to the imaging information output at the identified timing, wherein the recording means records, on the recording medium, the imaging information to which the advertisement information has been added.
  28.  The image recording apparatus according to claim 27, wherein the image processing means comprises:
     imaging information transmission means for transmitting, to an external information processing apparatus, at least part of the imaging information output at the timing when the imaging object is identified as being placed at the object placement position; and
     imaging information reception means for receiving the imaging information to which the advertisement information has been added in the information processing apparatus based on a result of recognition, in the information processing apparatus, of the content of the transmitted imaging information,
     and wherein the recording means records the received imaging information on the recording medium.
  29.  The image recording apparatus according to claim 27, wherein the image processing means comprises:
     advertisement information recording means for recording one or more pieces of the advertisement information in advance;
     recognition means for recognizing the content of the imaging information output at the timing when the imaging object is identified as being placed at the object placement position; and
     reading means for reading, from the advertisement information recording means, the advertisement information corresponding to the recognized content,
     and wherein the image processing means adds the read advertisement information to the imaging information whose content was recognized.
  30.  The image recording apparatus according to claim 27, wherein the image processing means comprises:
     recognition means for recognizing the content of the imaging information output at the timing when the imaging object is identified as being placed at the object placement position;
     recognition result transmission means for transmitting the recognition result of the recognition means to an external information processing apparatus; and
     advertisement information reception means for receiving the advertisement information transmitted from the information processing apparatus based on the transmitted recognition result,
     and wherein the image processing means adds the received advertisement information to the imaging information whose content was recognized.
  31.  The image recording apparatus according to any one of claims 1 to 30, wherein, when recording the imaging information on the recording medium, the recording means embeds, as an image within the imaging information, at least one of identification information for identifying each piece of imaging information, time information indicating when the imaging object corresponding to the imaging information was imaged, and location information indicating the place where the imaging was performed, and then records the imaging information on the recording medium.
  32.  The image recording apparatus according to claim 31, wherein, when the recording means recognizes that other identification information is already contained in the captured imaging information, the recording means generates new identification information related to the recognized other identification information and records the imaging information to be recorded on the recording medium by that imaging with the generated new identification information embedded therein.
  33.  An image recording method executed in an image recording apparatus comprising imaging means whose position relative to an object placement position, on which an imaging object is placed, is fixed, the imaging means continuously imaging the object placement position with no imaging object placed thereon or the imaging object placed at the object placement position and outputting, for each imaging, imaging information corresponding to the object placement position or the imaging object, the method comprising:
     an identification step of identifying, based on each piece of the output imaging information, whether or not the imaging object is placed at the object placement position; and
     a recording step of recording, on a recording medium, the imaging information output at the timing when the imaging object is identified as being placed.
  34.  An image recording program for causing a computer included in an image recording apparatus comprising imaging means whose position relative to an object placement position, on which an imaging object is placed, is fixed, the imaging means continuously imaging the object placement position with no imaging object placed thereon or the imaging object placed at the object placement position and outputting, for each imaging, imaging information corresponding to the object placement position or the imaging object, to function as:
     identification means for identifying, based on each piece of the output imaging information, whether or not the imaging object is placed at the object placement position; and
     recording means for recording, on a recording medium, the imaging information output at the timing when the imaging object is identified as being placed.
  35.  An information recording medium on which an image recording program is recorded so as to be readable by a computer included in an image recording apparatus comprising imaging means whose position relative to an object placement position, on which an imaging object is placed, is fixed, the imaging means continuously imaging the object placement position with no imaging object placed thereon or the imaging object placed at the object placement position and outputting, for each imaging, imaging information corresponding to the object placement position or the imaging object, the program causing the computer to function as:
     identification means for identifying, based on each piece of the output imaging information, whether or not the imaging object is placed at the object placement position; and
     recording means for recording, on a recording medium, the imaging information output at the timing when the imaging object is identified as being placed.
PCT/JP2012/051500 2012-01-25 2012-01-25 Image recording device, image recording method, program for image recording, and information recording medium WO2013111278A1 (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
JP2013555041A JP5858388B2 (en) 2012-01-25 2012-01-25 Image recording apparatus, image recording method, image recording program, and information recording medium
PCT/JP2012/051500 WO2013111278A1 (en) 2012-01-25 2012-01-25 Image recording device, image recording method, program for image recording, and information recording medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/JP2012/051500 WO2013111278A1 (en) 2012-01-25 2012-01-25 Image recording device, image recording method, program for image recording, and information recording medium

Publications (1)

Publication Number Publication Date
WO2013111278A1 true WO2013111278A1 (en) 2013-08-01

Family

ID=48873050

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/JP2012/051500 WO2013111278A1 (en) 2012-01-25 2012-01-25 Image recording device, image recording method, program for image recording, and information recording medium

Country Status (2)

Country Link
JP (1) JP5858388B2 (en)
WO (1) WO2013111278A1 (en)

Cited By (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2014115429A (en) * 2012-12-07 2014-06-26 Pfu Ltd Placing table for imaging and imaging system
JP5698823B1 (en) * 2013-10-31 2015-04-08 株式会社Pfu LIGHTING DEVICE, IMAGING SYSTEM, AND LIGHTING CONTROL METHOD
US9137430B1 (en) 2014-03-18 2015-09-15 Pfu Limited Image capturing system
JP7438736B2 (en) 2019-12-09 2024-02-27 キヤノン株式会社 Image processing device, image processing method, and program
JP7451159B2 (en) 2019-12-09 2024-03-18 キヤノン株式会社 Image processing device, image processing method, and program

Citations (8)

Publication number Priority date Publication date Assignee Title
JPH089318A (en) * 1994-06-22 1996-01-12 Sony Corp Document image picking-up and filing device
JP2004356984A (en) * 2003-05-29 2004-12-16 Casio Comput Co Ltd Photographed image processor and program
JP2006048626A (en) * 2004-07-06 2006-02-16 Casio Comput Co Ltd Photography device, image processing method of photographed image and program
JP2006235498A (en) * 2005-02-28 2006-09-07 Ricoh Co Ltd Photographic pod for camera
JP2007208821A (en) * 2006-02-03 2007-08-16 Casio Comput Co Ltd Document photographing apparatus, and method and program for detecting stillness of document
JP2008072388A (en) * 2006-09-13 2008-03-27 Ricoh Co Ltd Image processing apparatus and method, and program
JP2010056771A (en) * 2008-08-27 2010-03-11 Ricoh Co Ltd Device and method for reading image, program, and storage medium
WO2011132733A1 (en) * 2010-04-22 2011-10-27 ADC Technology Inc. Storage device

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2002112008A (en) * 2000-09-29 2002-04-12 Minolta Co Ltd Image processing system and recording medium recording image processing program

Cited By (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2014115429A (en) * 2012-12-07 2014-06-26 Pfu Ltd Placing table for imaging and imaging system
JP5698823B1 (en) * 2013-10-31 2015-04-08 PFU Limited Lighting device, imaging system, and lighting control method
CN104597692A (en) * 2013-10-31 2015-05-06 株式会社Pfu Lighting device, image capturing system, and lighting control method
US9280036B2 (en) 2013-10-31 2016-03-08 Pfu Limited Lighting device, image capturing system, and lighting control method
US9137430B1 (en) 2014-03-18 2015-09-15 Pfu Limited Image capturing system
JP7438736B2 (en) 2019-12-09 2024-02-27 Canon Inc. Image processing device, image processing method, and program
JP7451159B2 (en) 2019-12-09 2024-03-18 Canon Inc. Image processing device, image processing method, and program

Also Published As

Publication number Publication date
JP5858388B2 (en) 2016-02-10
JPWO2013111278A1 (en) 2015-05-11

Similar Documents

Publication Publication Date Title
CN107026973B (en) Image processing device, image processing method and photographic auxiliary equipment
JP5858388B2 (en) Image recording apparatus, image recording method, image recording program, and information recording medium
CN109891871A (en) The information processing terminal
CN110168606B (en) Method and system for generating composite image of physical object
US20160173840A1 (en) Information output control device
CN104428815B (en) Anamorphose device and its method of controlling operation
US20150269782A1 (en) Augmented reality display system, augmented reality information generating apparatus, augmented reality display apparatus, and server
JP2013255166A (en) Image reader and program
JP6098784B2 (en) Image processing apparatus and program
CN108920113A (en) Video frame images Method of printing, device and computer readable storage medium
JP5831764B2 (en) Image display apparatus and program
JP2013182211A (en) Portable terminal, handwriting support method, and program
JP2005010512A (en) Autonomous photographing device
US20090051941A1 (en) Method and apparatus for printing images
JP5796747B2 (en) Information processing apparatus and program
JP6450604B2 (en) Image acquisition apparatus and image acquisition method
JP5987136B2 (en) Printing apparatus, information processing apparatus, printing method, information processing program, and information recording medium
JP2013070218A (en) Projection apparatus
TW201203129A (en) Image pickup system
JP2011113196A (en) Face direction specification device and imaging device
WO2013175550A1 (en) Image capturing system, image capturing method, image capturing program and information recording medium
JP2014178977A (en) Display device and control program of display device
TW201005679A (en) Display device and image zooming method thereof
US10652472B2 (en) Enhanced automatic perspective and horizon correction
JP6025031B2 (en) Image processing apparatus and program

Legal Events

Date Code Title Description
121 EP: the EPO has been informed by WIPO that EP was designated in this application
    Ref document number: 12866841
    Country of ref document: EP
    Kind code of ref document: A1

ENP Entry into the national phase
    Ref document number: 2013555041
    Country of ref document: JP
    Kind code of ref document: A

NENP Non-entry into the national phase
    Ref country code: DE

122 EP: PCT application non-entry in European phase
    Ref document number: 12866841
    Country of ref document: EP
    Kind code of ref document: A1