WO2016157406A1 - Image acquisition device, image file generation method, and image file generation program - Google Patents


Info

Publication number
WO2016157406A1
Authority
WO
WIPO (PCT)
Prior art keywords
subject
position information
image
acquisition device
image acquisition
Prior art date
Application number
PCT/JP2015/060106
Other languages
French (fr)
Japanese (ja)
Inventor
清水 宏
鈴木 基之
橋本 康宣
西島 英男
荒井 郁也
Original Assignee
Hitachi Maxell, Ltd. (日立マクセル株式会社)
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Hitachi Maxell, Ltd. (日立マクセル株式会社)
Priority to PCT/JP2015/060106 priority Critical patent/WO2016157406A1/en
Publication of WO2016157406A1 publication Critical patent/WO2016157406A1/en

Classifications

    • GPHYSICS
    • G03PHOTOGRAPHY; CINEMATOGRAPHY; ANALOGOUS TECHNIQUES USING WAVES OTHER THAN OPTICAL WAVES; ELECTROGRAPHY; HOLOGRAPHY
    • G03BAPPARATUS OR ARRANGEMENTS FOR TAKING PHOTOGRAPHS OR FOR PROJECTING OR VIEWING THEM; APPARATUS OR ARRANGEMENTS EMPLOYING ANALOGOUS TECHNIQUES USING WAVES OTHER THAN OPTICAL WAVES; ACCESSORIES THEREFOR
    • G03B17/00Details of cameras or camera bodies; Accessories therefor
    • G03B17/02Bodies
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N5/00Details of television systems
    • H04N5/76Television signal recording
    • H04N5/91Television signal processing therefor

Definitions

  • The present invention relates to an image acquisition device that captures and stores images, and in particular to a device having a function of storing, together with the captured image, the position information of the image acquisition device at the time of shooting.
  • Digital image acquisition devices, which obtain a two-dimensional shot image by projecting the image of a subject through a lens onto a camera sensor (an assembly of many pixels composed of semiconductors) and measuring the amount of light striking each pixel, are in widespread use.
  • The shot image data produced by such an image acquisition device is compressed by a predetermined image compression method to reduce the size of the image file, and attribute information in the Exif (Exchangeable image file format) format is added to the compressed image file.
  • This attribute information includes, in addition to information on the shooting conditions of the image data (the image acquisition device and lens used, the focal length and aperture value of the lens, the shutter speed, the sensor sensitivity, and so on), the position information of the image acquisition device at the time of shooting; the attribute information is stored as part of the image file.
  • Patent Document 1 discloses a technique in which a camera incorporates a GPS position acquisition function and acquires, in addition to the position information at the time of shooting, the locus of the user's movement while carrying the camera.
  • With GPS, satellite information is received, particularly at the initial activation of the position acquisition function, and the position information can be calculated according to the operation schedule of the satellites at the current time. Camera position information can therefore be obtained at any position on the earth where radio waves from the satellites can be received.
  • However, the position information that can be acquired in this way is only that of the camera, that is, of the photographer; the position information of the subject cannot be acquired.
  • An object of the present invention is to provide a technique that allows the subject position information of a subject to be added to an image file together with the current position information of the image acquisition device at the time the subject is photographed.
  • To this end, an image acquisition device according to the present invention includes: a unit that acquires the current position information of the image acquisition device; a subject position information acquisition unit that identifies the subject by collating the shooting region shot by the image acquisition device with subject electronic information including the coordinates constituting the subject, and acquires the subject position information of the identified subject; and an image file generation unit that generates an image file including the current position information and the subject position information acquired by the subject position information acquisition unit.
  • According to the present invention, the subject position information of the subject can be added to the image file together with the current position information of the image acquisition device at the time the subject is photographed.
  • FIG. 1 is a configuration diagram of a communication system including an image acquisition device having a position information acquisition function according to Embodiment 1.
  • FIG. 2 is a hardware configuration diagram of the image acquisition device according to Embodiment 1.
  • FIG. 3 is a diagram illustrating an example of a viewfinder image, that is, the screen seen when a subject is observed through the image acquisition device according to Embodiment 1.
  • FIG. 4 is a software configuration diagram of the image acquisition device according to Embodiment 1.
  • FIG. 5 is an explanatory diagram of the shooting axis of the image acquisition device according to Embodiment 1.
  • FIG. 6 is an explanatory diagram showing the contents of the image file handled by the image acquisition device according to Embodiment 1.
  • FIG. 7 is an explanatory diagram illustrating an operation example of the image acquisition device according to Embodiment 1.
  • FIG. 8 is an explanatory diagram illustrating an operation example of the image acquisition device according to Embodiment 1.
  • FIGS. 9A to 9C are explanatory diagrams illustrating an operation example of the image acquisition device according to Embodiment 1.
  • FIG. 10 is an explanatory diagram illustrating an example of data in which the maximum distance to a subject selected by the lock-on mark is set for each subject in the image acquisition device according to Embodiment 1.
  • FIG. 11 is a diagram showing an overview of the overall processing of the image acquisition device according to Embodiment 1.
  • FIGS. 12A to 12C are explanatory diagrams showing a method according to Embodiment 2 for selecting a subject that is not near the center of the finder.
  • FIGS. 13A to 13C are diagrams showing an overview of the method according to Embodiment 2.
  • FIG. 14A is a diagram schematically showing an outline of a camera configuration example, and FIG. 14B is a schematic diagram of a configuration example of an optical-finder camera.
  • Embodiment 1 of the present invention will now be described in detail with reference to the drawings. (Embodiment 1) <System configuration>
  • FIG. 1 is a configuration diagram of a communication system including an image acquisition apparatus 1000 having a position information acquisition function according to the first embodiment.
  • the communication system includes an image acquisition device 1000, a wireless router 1050, and a database 1010 connected to the image acquisition device 1000 via the wireless router 1050.
  • The image acquisition device 1000 is a digital camera: the image of the subject 1020 is focused onto an image sensor by an optical lens, the luminance and color of the projected subject image are detected by the plurality of pixels constituting the sensor, and a digital image composed of many pixels is shot. The present invention is not limited to such a digital camera, however; it can also be applied to a conventional film camera, provided the camera is equipped with a storage medium that stores position information as electronic information corresponding to each frame of the film being shot.
  • the image acquisition apparatus 1000 is equipped with a GPS unit that acquires current position information indicating the position taken by the image acquisition apparatus 1000.
  • the GPS unit receives radio waves from the GPS satellite 1040 and acquires current position information of the camera.
  • The GPS unit 3050 can receive radio waves from a plurality of GPS satellites; it receives radio waves from at least three satellites and calculates the current position from the received information. The basic information obtained from each satellite is the exact time and the coordinates of that satellite at that time (satellite orbit information). The current position is then calculated from the distance to each satellite, which is obtained by multiplying the propagation time of the radio wave by its propagation speed of about 300,000 km/sec.
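  • The position calculation described above can be sketched as an intersection of spheres centered on the satellites. The sketch below is only an illustrative outline, not part of the disclosure: the closed-form three-sphere trilateration, the satellite coordinates, and all function names are assumptions (a real GPS receiver also uses a fourth satellite to solve for the receiver clock error).

```python
import math

C_KM_PER_S = 300_000.0  # propagation speed used in the description above

def _sub(a, b): return [x - y for x, y in zip(a, b)]
def _dot(a, b): return sum(x * y for x, y in zip(a, b))
def _norm(a): return math.sqrt(_dot(a, a))

def distance_from_delay(delta_t_s):
    """Distance to a satellite from the signal travel time, as in the text."""
    return C_KM_PER_S * delta_t_s

def trilaterate(p1, p2, p3, r1, r2, r3):
    """Closed-form intersection of three spheres (satellite positions p1..p3,
    measured distances r1..r3); returns the two candidate positions."""
    ex = [x / _norm(_sub(p2, p1)) for x in _sub(p2, p1)]      # local x axis
    i = _dot(ex, _sub(p3, p1))
    ey_raw = [a - i * b for a, b in zip(_sub(p3, p1), ex)]
    ey = [x / _norm(ey_raw) for x in ey_raw]                  # local y axis
    ez = [ex[1]*ey[2] - ex[2]*ey[1],                          # local z axis
          ex[2]*ey[0] - ex[0]*ey[2],
          ex[0]*ey[1] - ex[1]*ey[0]]
    d = _norm(_sub(p2, p1))
    j = _dot(ey, _sub(p3, p1))
    x = (r1**2 - r2**2 + d**2) / (2 * d)
    y = (r1**2 - r3**2 + i**2 + j**2) / (2 * j) - (i / j) * x
    z = math.sqrt(max(r1**2 - x**2 - y**2, 0.0))
    base = [p + x * e1 + y * e2 for p, e1, e2 in zip(p1, ex, ey)]
    return ([b + z * e for b, e in zip(base, ez)],
            [b - z * e for b, e in zip(base, ez)])
```

The two candidate positions arise from a sign ambiguity perpendicular to the plane of the three satellites; a receiver near the Earth's surface resolves it by choosing the lower solution or by adding a fourth measurement.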
  • The image acquisition device 1000 can also determine its current position approximately using radio waves from the wireless router 1050 or from a mobile-phone base station (not shown). Furthermore, the image acquisition device 1000 can receive, via short-range wireless communication such as Bluetooth (registered trademark), the current position obtained by a nearby portable information terminal capable of GPS positioning, and use the received position of the terminal as the current position information at the time of shooting.
  • When the subject 1020 is shot using the image acquisition device 1000 and position information is to be stored, there are cases in which it is actually more useful to store not the position of the image acquisition device but the location of the subject 1020 (subject position information 1002).
  • One approach is to acquire the subject position information 1002 near the subject and transmit it to the image acquisition device 1000, so that the shot image and the subject position information 1002 can be stored in association with each other.
  • With this approach, however, a device separate from the image acquisition device 1000 must be placed near the subject, and the photographer must approach the subject 1020 at least once. When shooting a distant subject (such as a mountain), it therefore becomes difficult to acquire the position of the subject 1020 at the time of shooting.
  • the image acquisition apparatus 1000 can photograph a predetermined range as the photographing region 1060 around the photographing axis 1030 that coincides with the center line of the lens.
  • The subject 1020 on the shooting axis 1030 is treated as the main shooting target in the shooting region 1060, and the subject position information 1002 of that subject 1020 is acquired.
  • the imaging axis 1030 is a line connecting the center of the lens of the image acquisition apparatus 1000 and the center of the imaging area 1060.
  • the database 1010 stores subject electronic information in which the name of the subject 1020, subject position information 1002, and three-dimensional image data of the subject 1020 are associated with each other.
  • the three-dimensional image data is data for drawing the subject 1020 and is composed of coordinates for specifying the position of each point constituting the subject 1020 in the three-dimensional space.
  • the database 1010 may be included in the image acquisition apparatus 1000.
  • the image acquisition apparatus 1000 may be configured to be able to be updated to the latest data by periodically acquiring data stored in the database 1010 from an external server (database).
  • The image acquisition device 1000 receives the three-dimensional image data of the subject 1020 on the shooting axis 1030 and the subject position information 1002 from the database 1010 via the wireless router 1050. The image acquisition device 1000 then determines whether the received three-dimensional image data matches the shape of the subject 1020 in the shooting region 1060, and generates an image file based on the subject position information 1002, the name of the subject 1020, and other data of the matched subject. Note that the image acquisition device 1000 may also receive the three-dimensional image data, the subject position information 1002, and so on directly from the database 1010 without passing through the wireless router 1050. <Hardware configuration of image acquisition device>
  • FIG. 2 is a hardware configuration diagram of the image acquisition apparatus 1000 according to the first embodiment.
  • The image acquisition device 1000 includes a CPU 3000, which is the central processing unit, a shutter button 3010 that is pressed during shooting, a sensor (camera sensor) 3020, a signal-processing DSP 3030, and an encoder/decoder 3040. That is, the image acquisition device 1000 is basically configured as a computer system.
  • Sensor 3020 is an image sensor that converts an optical image collected by a lens (not shown) into an electrical signal.
  • the signal processing DSP 3030 performs signal processing of the sensor 3020.
  • The sensor 3020, the signal-processing DSP 3030, and the encoder/decoder 3040 are not only connected to the bus: the output signal from the sensor 3020 may also be sent directly to the signal-processing DSP 3030 and the encoder/decoder 3040 for video-signal processing. Because the large video signal then does not pass through the bus 3001, the bus is not occupied by the image signal, and the camera can perform other operations while compressing a shot image.
  • the encoder / decoder 3040 compresses the video signal composed of RGB obtained by the signal processing DSP 3030 using a compression method such as discrete cosine transform or Huffman coding. Note that the encoder / decoder 3040 may have a function of compressing not only a captured still image but also a moving image.
  • the GPS unit 3050 acquires position information indicating the current position of the image acquisition apparatus 1000.
  • The G sensor 3060 measures the elevation angle of the image acquisition device 1000 from the orientation of the device and the acceleration generated when the device is moved.
  • the geomagnetic sensor 3070 measures the azimuth angle of the image acquisition device 1000, for example.
  • the wireless LAN 3080 performs wireless communication between the camera and an external device such as a portable information terminal, or obtains the current position using a signal of a wireless communication base station.
  • the flash memory 3090 stores a program for controlling the entire camera and basic constants.
  • the SD-RAM 3100 is a work memory for program execution, and stores GPS satellite orbit information that is sequentially updated, position information that is acquired by GPS, and the like.
  • the clock 3110 is used for attaching a time code to image information stored at the time of photographing or measuring the position information by the GPS.
  • the operation switch 3130 accepts various operations of the image acquisition apparatus 1000 such as changing the setting contents of the image acquisition apparatus 1000, for example.
  • the infrared light receiving unit 3151 receives an instruction from the outside such as a shutter operation of the image acquisition apparatus 1000 by an infrared remote controller or the like.
  • the remote control I / F 3150 converts the output signal output from the infrared light receiving unit 3151 into digital data for use as a control signal for the image acquisition apparatus 1000.
  • the short-range wireless communication unit 3160 performs communication between the image acquisition apparatus 1000 and an external device such as a portable information terminal via short-range wireless (for example, Bluetooth (registered trademark)).
  • the EVF / LCD (display) 3120 displays a finder image (described later, FIG. 3) of the subject received by the sensor 3020 during shooting.
  • the EVF / LCD (display) 3120 is used for visually confirming image data that has already been taken and stored in an external memory 3141 to be described later.
  • the EVF / LCD (display) 3120 is used for confirming / changing the setting contents of the image acquisition apparatus 1000.
  • a finder image displayed on the EVF / LCD 3120 will be described with reference to FIG.
  • the EVF / LCD displays a viewfinder image 7000 that is a screen when the photographer observes the subject with the camera.
  • In the viewfinder image 7000, the subject candidates are displayed: a first subject 4030, a second subject 4040, and a third subject 4050.
  • a lock-on mark 7010 is displayed on the viewfinder image 7000, and this lock-on mark 7010 is usually displayed at a position where the subject can be most easily captured, that is, near the center of the screen of the viewfinder image 7000.
  • the photographer aligns the lock-on mark 7010 with a target subject. For example, the photographer selects the second subject 4040, presses the shutter button halfway, and aligns AF (Auto Focus) with the second subject 4040.
  • In this way, the second subject 4040 is selected as the subject whose position information the photographer wants to record, and the shot image data is acquired by pressing the shutter button. Thereafter, an image file in which shooting information including the current position information and the subject position information is added to the image data compressed by the encoder/decoder is stored in the external memory.
  • FIG. 4 is a software configuration diagram of the image acquisition apparatus 1000 according to the first embodiment.
  • the image acquisition apparatus 1000 includes an image data acquisition unit 210, a subject position information acquisition unit 220, a matching processing unit 230, and an image file generation unit 240.
  • The subject position information acquisition unit 220 identifies the subject by collating the shooting region shot by the image acquisition device 1000 with the three-dimensional image data stored in the database, which is composed of coordinates specifying, in three-dimensional space, the position of each point constituting the subject; it then acquires the subject position information of the identified subject.
  • the subject position information acquisition unit 220 requests all the subject position information and three-dimensional image data included in the imaging region specified by the image data acquisition unit 210 from the database.
  • the database transmits subject position information and three-dimensional image data to the image acquisition apparatus 1000 in response to a request.
  • the image acquisition apparatus 1000 receives subject position information and three-dimensional image data transmitted from the database.
  • the subject position information acquisition unit 220 acquires subject position information and three-dimensional image data transmitted from the database.
  • the subject position information acquisition unit 220 calculates a shooting axis line that connects the center of the lens of the image acquisition apparatus 1000 and the center of the shooting area.
  • the imaging axis will be described in detail with reference to FIG.
  • FIG. 5 is a diagram showing a vector of the imaging axis 1030.
  • FIG. 5 uses a coordinate system in which the X axis 6010 points north, the Y axis 6020 points east, and the Z axis 6030 points upward.
  • In this X-Y-Z space, the vector of the shooting axis 1030 can be expressed by two angles: a direction angle 6060 with respect to the X axis 6010 and an elevation angle 6050.
  • the imaging axis 1030 is based on the current position information (origin 6000) of the image acquisition device 1000 acquired by the GPS unit, the direction angle 6060 measured by the geomagnetic sensor, and the elevation angle 6050 acquired by the G sensor. Calculated.
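  • The conversion from the two measured angles to a shooting-axis direction vector in the coordinate frame of FIG. 5 can be sketched as follows (an illustrative outline; the function name and the degree-based interface are assumptions):

```python
import math

def shooting_axis_vector(direction_deg, elevation_deg):
    """Unit vector of the shooting axis in the frame of FIG. 5:
    X = north, Y = east, Z = up."""
    az = math.radians(direction_deg)   # direction angle 6060, from the X axis
    el = math.radians(elevation_deg)   # elevation angle 6050
    return (math.cos(el) * math.cos(az),   # X (north) component
            math.cos(el) * math.sin(az),   # Y (east) component
            math.sin(el))                  # Z (up) component
```

A direction angle of 0 degrees with zero elevation yields the unit vector pointing due north along the X axis 6010.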
  • Whether a subject is the target subject is determined from the relationship between the shooting axis 1030 and the constituent surfaces of the subject's three-dimensional image data (that is, whether the axis has an intersection with a constituent surface).
  • The subject position information acquisition unit 220 calculates the shooting axis based on the current position information of the image acquisition device 1000, the elevation angle, and the direction angle. It then identifies a subject that intersects the shooting axis (a subject containing an intersection with the shooting axis). Specifically, from the sets of three-dimensional image data transmitted from the database, the subject position information acquisition unit 220 identifies the set that intersects the calculated shooting axis: for example, when any of the coordinates constituting the shooting axis coincides with any of the coordinates constituting a set of three-dimensional image data, that set is identified as the three-dimensional image data of the subject.
  • Depending on the type of the subject electronic information, the subject position information acquisition unit 220 acquires, as the subject position information, either the position information associated with subject electronic information that matches the image data of the subject included in the shooting region, or the position information of a subject that overlaps a predesignated range around the shooting axis connecting the center of the lens of the image acquisition device and the center of the shooting region.
  • the matching processing unit 230 converts the 3D image data specified by the subject position information acquisition unit 220 into 2D image data.
  • The matching processing unit 230 extracts the image data of the subject to be shot from the shot image data: for example, it extracts the image data of a subject imaged within a predetermined range from the center of the shooting region. The matching processing unit 230 then determines whether the image based on the converted two-dimensional image data and the image based on the data extracted from the shot image data match or approximate each other.
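  • The description does not fix a particular match-or-approximate measure; one common choice for comparing the rendered two-dimensional image with the patch extracted from the shot image is normalized cross-correlation, sketched below under that assumption (grayscale patches as nested lists; all names are illustrative):

```python
import math

def ncc(a, b):
    """Normalized cross-correlation of two equal-size grayscale patches.
    Returns a score in [-1, 1]; values near +1 indicate a match."""
    n = len(a) * len(a[0])
    fa = [p for row in a for p in row]      # flatten both patches
    fb = [p for row in b for p in row]
    ma, mb = sum(fa) / n, sum(fb) / n       # patch means
    num = sum((x - ma) * (y - mb) for x, y in zip(fa, fb))
    da = math.sqrt(sum((x - ma) ** 2 for x in fa))
    db = math.sqrt(sum((y - mb) ** 2 for y in fb))
    if da == 0 or db == 0:                  # flat patch: correlation undefined
        return 0.0
    return num / (da * db)
```

Under this assumption, the matching processing unit 230 could accept a candidate whose score exceeds a threshold as "matching or approximating".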
  • the image data acquisition unit 210 obtains an imaging region to be imaged by the image acquisition device 1000 based on the current position information acquired by the GPS unit, the elevation angle measured by the G sensor, and the azimuth angle measured by the geomagnetic sensor. Identify.
  • the image data acquisition unit 210 acquires captured image data for drawing an image in the imaging region. Then, the image data acquisition unit 210 generates a thumbnail image based on the captured image data. The captured image data is compressed by an encoder / decoder to generate compressed image data.
  • The image file generation unit 240 generates shooting information B, including the current position information and the subject position information, as well as shooting information A. The image file generation unit 240 then generates an image file (described later with FIG. 6) including the generated shooting information A, shooting information B, the thumbnail image, and the compressed image data, and stores the generated image file in the external memory. <Image information file>
  • FIG. 6 is an explanatory diagram showing the contents of the image file 2010 handled by the image acquisition apparatus 1000 according to the first embodiment of the present invention.
  • the image file 2010 includes shooting information A2020, shooting information B2030, a thumbnail image 2040, and compressed image data 2050.
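  • The four-part layout of the image file 2010 can be modeled as a simple container, as sketched below. This is only an illustrative sketch of the logical structure, not the actual on-disk Exif byte layout, and the class and field names are assumptions:

```python
from dataclasses import dataclass

@dataclass
class ImageFile2010:
    shooting_info_a: dict    # 2020: types of information recorded for the image
    shooting_info_b: dict    # 2030: recorded values (dates, positions, subject name)
    thumbnail: bytes         # 2040: reduced image of the shot image 2000
    compressed_image: bytes  # 2050: DCT/Huffman-compressed image data
```

Because the four parts travel in one object, copying the file to another device (as described below) moves them all together.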
  • Shooting information A2020 indicates the type of information related to the shot image 2000 stored in the image file 2010.
  • the thumbnail image 2040 is a reduced image of the captured image 2000.
  • The compressed image data 2050 is obtained by compressing the information of the shot image 2000 with transform/encoding methods such as the discrete cosine transform and Huffman coding, thereby reducing the data amount and increasing storage and read-out efficiency.
  • The information about the shot image 2000 recorded in shooting information B 2030 includes, for example, the shooting date and time, the storage date and time, the name of the camera used for shooting, the name of the lens used for shooting, the shutter speed, the aperture value, the film mode (for example, reversal mode or black-and-white mode), the ISO sensitivity indicating the gain applied to the sensor output at the time of shooting, the current position information indicating the position from which the image acquisition device 1000 shot the image 2000, the subject position information, and the subject name indicating the name of the subject.
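  • The items above can be gathered into a single shooting-information-B record; the field names below are purely illustrative and are not the actual Exif tag names:

```python
def build_shooting_info_b(shot_at, camera, lens, shutter_s, aperture, iso,
                          current_pos, subject_pos, subject_name):
    """Collect the shooting information B items into one record.
    Positions are (latitude, longitude, altitude) tuples."""
    return {
        "shooting_datetime": shot_at,
        "camera_name": camera,
        "lens_name": lens,
        "shutter_speed_s": shutter_s,
        "aperture_value": aperture,
        "iso_sensitivity": iso,
        "current_position": current_pos,  # where the image acquisition device was
        "subject_position": subject_pos,  # where the subject is
        "subject_name": subject_name,
    }
```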
  • The shooting information A 2020, the shooting information B 2030, the thumbnail image 2040, and the compressed image data 2050 are handled collectively as one image file 2010, so that all four can be copied as a unit from the image acquisition device 1000 to another device. Because the related information is handled together, the image file 2010 can be handled without losing the shooting information B 2030.
  • Conversely, it is also possible to extract, rather than the entire image file, only the information corresponding to the shooting information B 2030 of the present embodiment: at what date and time, from which position, and from which position the subject was viewed. <Camera operation>
  • FIGS. 7 and 8 are explanatory diagrams illustrating an operation example of the image acquisition device 1000 according to Embodiment 1. FIG. 7 shows the layout of the image acquisition device 1000, the first subject 4030, and the second subject 4040 observed from the side, and FIG. 8 shows the same layout observed from an oblique direction.
  • As shown in these figures, the image acquisition device 1000 is set up so that it can acquire an image of the first subject 4030 and the second subject 4040, with the shooting axis 1030 extending in the lens direction.
  • the current position 4001 of the image acquisition apparatus 1000 can be indicated by coordinate data represented by three numerical values of latitude, longitude, and altitude on the earth.
  • The three-dimensional image 4010 drawn from the three-dimensional image data has a rectangular-parallelepiped shape and consists of six constituent surfaces 4011 in total.
  • the subject position information acquisition unit 220 calculates the presence / absence of an intersection between the configuration surface 4011 constituting the three-dimensional image 4010 and the imaging axis 1030. Then, the subject position information acquisition unit 220 identifies the first subject 4030 and the second subject 4040 where the intersection is present as subject candidates.
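  • The presence/absence of an intersection between the shooting axis and the six constituent surfaces of such a rectangular parallelepiped can be sketched with the standard slab method, treating the axis as a ray from the camera position. The axis-aligned-box simplification and all names are assumptions for illustration:

```python
def ray_intersects_box(origin, direction, box_min, box_max):
    """Distance along the ray to an axis-aligned box, or None on a miss."""
    t_near, t_far = 0.0, float("inf")
    for o, d, lo, hi in zip(origin, direction, box_min, box_max):
        if abs(d) < 1e-12:                 # ray parallel to this pair of faces
            if o < lo or o > hi:
                return None
        else:
            t1, t2 = (lo - o) / d, (hi - o) / d
            if t1 > t2:
                t1, t2 = t2, t1
            t_near, t_far = max(t_near, t1), min(t_far, t2)
            if t_near > t_far:             # slab intervals no longer overlap
                return None
    return t_near
```

A return value of None means the shooting axis misses every constituent surface; otherwise the returned distance is where the axis first enters the box, which also orders candidates by proximity to the camera.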
  • The matching processing unit 230 converts the three-dimensional image data for drawing the first subject 4030 identified by the subject position information acquisition unit 220 and the three-dimensional image data for drawing the second subject 4040 into two-dimensional image data. The matching processing unit 230 then performs a matching check between the image of each subject as seen along the shooting axis 1030 of the image acquisition device 1000 and the image of the corresponding building included in the image shot by the photographer, to determine whether they match or approximate each other.
  • The photographer mainly shoots the main subject, that is, the subject to which the subject position information is to be added; examples of such a subject include the AF-locked target and a target set by the lock-on mark described later.
  • the matching processing unit 230 selects a subject that matches or approximates as a result of the matching check. By performing the matching check, it can be selected whether the main subject is the first subject 4030 or the second subject 4040.
  • the subject position information acquisition unit 220 acquires subject position information corresponding to three-dimensional image data that matches or approximates the image data of the subject included in the imaging region. As a result, an appropriate one of the first subject position 4031 that is the position of the first subject 4030 and the second subject position 4041 that is the position of the second subject 4040 can be acquired as subject position information. Then, more appropriate subject position information can be added to the image file.
  • In this example, the first subject 4030 is the subject candidate closest to the image acquisition device 1000, and the second subject 4040 is the next-closest candidate. Instead of the subject determined by the matching check to match or approximate, the subject position information acquisition unit 220 may select as the subject the first subject 4030, whose intersection with the shooting axis is closest to the image acquisition device 1000.
  • FIGS. 9A to 9C are explanatory diagrams illustrating an operation example of the image acquisition apparatus 1000 according to the first embodiment.
  • the imaging area 1060 includes a first subject 4030, a second subject 4040, and a third subject 4050 in front of the image acquisition apparatus 1000.
  • the first subject 4030, the second subject 4040, and the third subject 4050 are included in the range of the lock-on mark 7010.
  • The subject position information acquisition unit specifies, as a subject for which subject position information is to be acquired, any subject that partially or entirely overlaps a predetermined range (for example, the range in which the lock-on mark 7010 is displayed) centered on the shooting axis connecting the center of the lens of the image acquisition device and the center of the shooting region.
  • the subject position information acquisition unit calculates whether or not a part of or all of the lock-on mark 7010 overlaps the three-dimensional configuration surface of each subject. Then, the subject position information acquisition unit identifies the first subject 4030, the second subject 4040, and the third subject 4050, which are partially or entirely overlapped with the lock-on mark 7010, as subject candidates.
  • the subject position information acquisition unit specifies a subject having a short distance from the image acquisition device as a target for acquiring subject position information.
  • The subject position information acquisition unit calculates the distance from the lens of the image acquisition apparatus 1000 to the first subject 4030, the distance to the second subject 4040, and the distance to the third subject 4050. It then selects as the subject the first subject 4030, which has the smallest distance value (closest distance).
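As an illustrative sketch of the nearest-candidate rule described above (the function and data names here are hypothetical, not part of the embodiment), the distance from the lens to each candidate overlapping the lock-on mark can be computed and the minimum taken:

```python
import math

def select_nearest_subject(camera_pos, candidates):
    """Among subjects overlapping the lock-on mark, pick the one
    whose position is closest to the lens (smallest distance value)."""
    def dist(p):
        return math.sqrt(sum((a - b) ** 2 for a, b in zip(camera_pos, p)))
    return min(candidates, key=lambda c: dist(c["position"]))

# Example: the first subject is nearest, so it is selected.
camera = (0.0, 0.0, 0.0)
candidates = [
    {"name": "first_subject_4030", "position": (10.0, 0.0, 0.0)},
    {"name": "second_subject_4040", "position": (25.0, 5.0, 0.0)},
    {"name": "third_subject_4050", "position": (40.0, -3.0, 0.0)},
]
print(select_nearest_subject(camera, candidates)["name"])
```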
  • In FIG. 9B, as in FIG. 9A, all subjects that partially overlap the lock-on mark 7010 are selected as candidates.
  • A wire frame image 4042 (an image based on the subject electronic information, that is, the three-dimensional image data of the subject) is superimposed and displayed on the image of the selected subject (the subject from which subject position information is acquired at the time of shooting).
  • the photographer can easily see which subject is selected when the shutter button is pressed.
  • the layout of the imaging region 1060, the imaging axis 1030, the lock-on mark 7010, the first subject 4030, and the second subject 4040 of the image acquisition apparatus 1000 is as shown in FIG. 9A.
  • the height of the first subject 4030 is lower than the height of the second subject 4040.
  • Either the first subject 4030 or the second subject 4040 may be selected at the moment the shutter button is pressed, owing to slight vertical movement of the image acquisition apparatus 1000 during shooting. In the example shown in FIG. 9B, the second subject 4040, which is closest to the lens of the image acquisition device 1000 and partially or entirely overlaps the lock-on mark 7010, is selected as the subject.
  • The selection result is displayed on the EVF/LCD.
  • By displaying the wire frame image 4042 on the image of the selected subject, it can be shown to the photographer more clearly that the subject has been selected than by merely placing the lock-on mark on the subject.
  • Thus, the photographer can select and shoot the intended subject with greater certainty.
  • As in FIG. 9B, even when the first subject 4030 and the second subject 4040 are close to each other and the shooting axis 1030 lies on the boundary between their positions, it can be clearly shown to the photographer that the intended subject is definitely selected.
  • FIG. 9C is a view of the state in which the wire frame image 4042 is superimposed and displayed on the image of the selected subject, as in FIG. 9B, as viewed from above.
  • The example shown in FIG. 9C differs in that the subject position information of the captured subject is generated by the image acquisition apparatus 1000 instead of using the subject position information stored in the database. This is effective when the position information of the subject stored in the database may contain a large error, for example when the subject is not a very large facility, or when one of a group of subjects composed of a plurality of buildings is selected.
  • By calculating the centroid point 9000 of the two-dimensional figure obtained by projecting, from above, the three-dimensional shape collated with the subject to be photographed, and acquiring the calculated centroid point 9000 as the coordinates of the subject, it is possible to generate subject position information far more consistent with the photographed subject than the representative position of a large subject or of an attached facility.
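The centroid calculation described above can be sketched with the standard shoelace formula applied to the top-view outline of the matched three-dimensional shape (a hypothetical sketch; the embodiment does not specify the exact algorithm):

```python
def polygon_centroid(points):
    """Centroid of a 2D polygon given as a list of (x, y) vertices
    (the top-view outline of the matched 3D shape), via the shoelace formula."""
    a = cx = cy = 0.0
    n = len(points)
    for i in range(n):
        x0, y0 = points[i]
        x1, y1 = points[(i + 1) % n]
        cross = x0 * y1 - x1 * y0
        a += cross
        cx += (x0 + x1) * cross
        cy += (y0 + y1) * cross
    a *= 0.5
    return (cx / (6.0 * a), cy / (6.0 * a))

# Unit square: centroid is (0.5, 0.5)
print(polygon_centroid([(0, 0), (1, 0), (1, 1), (0, 1)]))
```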
  • The lock-on mark 7010 has so far been described as a circle, but for calculating the positional relationship with the three-dimensional data, a rectangular parallelepiped or square shape is simpler to compute, and such a rectangular shape has little adverse effect of making it harder for the photographer to shoot. Furthermore, in selecting subjects, it is not necessary to treat all subjects within the range of the lock-on mark 7010 as selection candidates; depending on the focal length of the lens, the distance to the farthest object to be treated as a candidate can be limited according to, for example, whether the subject to be photographed is a general house or a mountain or lake, so that unnecessary extraction calculations can be avoided.
  • The three-dimensional image data of the subject, the subject position information and subject name related thereto, and the map information that aggregates them may be downloaded to the image acquisition device 1000 each time they are acquired from the database, with the processing performed in the image acquisition device 1000; alternatively, an external computer may perform the processing, or, conversely, all of the above three-dimensional data may be stored in the built-in memory of the image acquisition apparatus 1000 and all processing performed within the image acquisition apparatus 1000.
  • In that case, the current position information, the azimuth angle, and the elevation or depression angle may be stored in association with one another.
  • The captured image data may be transmitted from the image acquisition device 1000 to the computer (image file generation device) in association with the current position information of the image acquisition device 1000 for specifying the vector of the imaging axis, the azimuth angle measured by the geomagnetic sensor, and the elevation angle acquired by the G sensor.
  • Subject selection and addition of the subject position information of the subject may then be performed as post-processing.
  • the image file generation device includes a subject position information acquisition unit, a matching processing unit, and an image file generation unit.
  • the image file generation device does not have a configuration such as a shutter button, a sensor (camera sensor), a GPS unit, a G sensor, or a geomagnetic sensor.
  • The current position information of the image acquisition device that captured the subject, the elevation angle of the image acquisition device, and the azimuth angle of the image acquisition device are input to the image file generation device via an external memory. The subject position information acquisition unit of the image file generation device then identifies the imaging region and the imaging axis based on the input current position information, elevation angle, and azimuth angle of the image acquisition device.
  • The subject position information acquisition unit of the image file generation device identifies a subject by collating the shooting area shot by the image acquisition device, which it has calculated itself, with the three-dimensional image data composed of coordinates for specifying, in three-dimensional space, the position of each point constituting the subject stored in the database, and acquires the subject position information of the identified subject.
  • the image file generation unit of the image file generation device generates an image file including the current position information and the subject position information.
  • The size of the subject in the frame is determined to some extent by the focal length of the lens, and a subject that is too small or too large relative to the entire EVF/LCD becomes difficult to present on the EVF/LCD. Therefore, it is desirable to set the range in which the subject position information acquisition unit 220 of the image acquisition apparatus 1000 acquires subject position coordinates according to the size of the subject. Depending on the subject, subject position information may not be stored in the database, or the subject position information stored in the database may be inaccurate. It is therefore also desirable to change the method of specifying the subject position coordinates according to the size of the subject.
  • The subject position information acquisition unit changes the distance from the image acquisition device within which subject position information is acquired according to the type of the subject electronic information. The following description assumes that a standard lens (such as a 50 mm lens on a 35 mm full-frame format) is used.
  • The subject position information acquisition unit acquires subject position coordinates only for subjects whose distance from the current position of the image acquisition apparatus 1000 is within 100 m.
  • subject position information corresponding to a three-dimensional image may not be stored in the database.
  • In that case, the subject position information acquisition unit acquires as subject position information the coordinates near the centroid of the two-dimensional plan view of the building (such as an ordinary house or condominium) viewed from above.
  • the subject position information acquisition unit acquires only the subject position coordinates of a subject whose distance from the current position of the image acquisition apparatus 1000 is within 500 m.
  • the subject position information acquisition unit acquires subject position information stored in the database in addition to the coordinates near the center of gravity position.
  • The subject position information acquisition unit acquires subject position coordinates only for subjects whose distance from the current position of the image acquisition apparatus 1000 is within 5 km. In addition to the subject position information stored in the database, the subject position information acquisition unit may acquire, as subject coordinates, the intersection (lock-on position) between the plane of the three-dimensional image data facing the camera and the shooting axis.
  • The subject position information acquisition unit acquires subject position coordinates only for subjects whose distance from the current position of the image acquisition apparatus 1000 is within 20 km.
  • The subject position information acquisition unit may acquire, as subject position information, for example the summit in the case of a mountain, the vicinity of the centroid in the case of a lake, or the center of the island in the case of an island, in addition to the lock-on position, the position near the centroid, or the subject position information stored in the database.
  • In these cases the shape of the subject is not a rectangular parallelepiped; the subject may therefore be specified by calculating the intersection coordinates directly, using as they are the plurality of small surface elements used to generate the shape of the mountain or island.
  • The acquisition distances and position information setting locations shown in FIG. 10 differ depending on the focal length of the lens, and can be changed as appropriate based on customized data individually set by the photographer.
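As a hedged illustration of the distance limits described above, a lookup of the maximum acquisition distance per subject type might look as follows (the type labels and table structure are assumptions for illustration; the actual values would come from FIG. 10 and the photographer's customized data):

```python
# Hypothetical thresholds following the text: the maximum acquisition
# distance depends on the type of subject.
ACQUISITION_LIMIT_M = {
    "house": 100,       # ordinary houses, condominiums: within 100 m
    "building": 500,    # larger buildings: within 500 m
    "landmark": 5_000,  # large facilities: within 5 km
    "terrain": 20_000,  # mountains, lakes, islands: within 20 km
}

def within_acquisition_range(subject_type, distance_m):
    """Return True if subject position coordinates should be acquired
    for a subject of this type at this distance."""
    return distance_m <= ACQUISITION_LIMIT_M[subject_type]

print(within_acquisition_range("house", 80))       # True
print(within_acquisition_range("landmark", 8000))  # False
```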
  • Since the captured image data is digital information, the acquired current position information and subject position information may be added to and stored in the captured image data as they are, or may be stored as separate data associated with the captured image data (in the same device or another device). Even when shooting is performed on film, the information may be stored as digital data that can be associated with the film.
  • In that case, a sensor (camera sensor) captures an electronic image of the subject in order to collate it with the three-dimensional data. The subject is specified based on the sensor image and the three-dimensional data, and subject position information is calculated. The calculated subject position information is stored in an electronic information storage medium separate from the film, such as a memory in the film case.
  • The sensor may have its own optical system, independent of the optical system that exposes the film; if it is installed so as to capture substantially the same imaging region, the operation shown in the present invention can be performed.
  • A portable information communication terminal, such as a camera-equipped mobile phone, that is equipped with a camera, can capture and store images, has functions and capabilities for referencing and uploading external data through communication, and can further perform image processing within the main unit, is one of the devices suitable for implementing the present invention.
<Overall processing>
  • FIG. 11 is a diagram showing an overview of the overall processing of the image acquisition apparatus 1000 according to the first embodiment. Note that the entire process is started when the image acquisition apparatus 1000 starts shooting the captured image 2000.
  • the GPS unit 3050 acquires the current position information of the image acquisition apparatus 1000.
  • the G sensor 3060 measures the elevation angle of the image acquisition apparatus 1000.
  • the geomagnetic sensor 3070 measures the azimuth angle of the image acquisition device 1000.
  • the image data acquisition unit 210 identifies the imaging region 1060 based on the current position information acquired in S1101, the elevation angle measured in S1102, and the azimuth angle.
  • the subject position information acquisition unit 220 requests the database 1010 for all subject position information and three-dimensional image data included in the imaging region 1060 specified in S1103. As a result, the subject position information acquisition unit 220 acquires subject position information and three-dimensional image data transmitted from the database 1010.
  • the image data acquisition unit 210 acquires captured image data for rendering an image in the imaging region 1060. Further, the image data acquisition unit 210 generates a thumbnail image 2040 based on the captured image data. The captured image data is compressed by an encoder / decoder 3040 to generate compressed image data 2050.
  • The subject position information acquisition unit 220 calculates the shooting axis 1030 that connects the center of the lens of the image acquisition apparatus 1000 and the center of the shooting area 1060 specified in S1103. Specifically, the subject position information acquisition unit 220 calculates the shooting axis 1030 based on the current position information of the image acquisition apparatus 1000 acquired by the GPS unit in S1101, the azimuth angle measured by the geomagnetic sensor in S1102, and the elevation angle 6050 acquired by the G sensor in S1102.
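For illustration, the direction vector of the shooting axis can be derived from the azimuth and elevation angles as follows (a sketch assuming an east-north-up coordinate frame and an azimuth measured clockwise from north; the embodiment does not prescribe a specific frame):

```python
import math

def shooting_axis_vector(azimuth_deg, elevation_deg):
    """Unit direction of the shooting axis from the azimuth (clockwise
    from north, as from a geomagnetic sensor) and the elevation angle
    (as from a G sensor). Returns (east, north, up) components."""
    az = math.radians(azimuth_deg)
    el = math.radians(elevation_deg)
    return (math.cos(el) * math.sin(az),
            math.cos(el) * math.cos(az),
            math.sin(el))

# Facing due north with the camera held level:
print(shooting_axis_vector(0.0, 0.0))
```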
  • The subject position information acquisition unit 220 identifies a subject that intersects the shooting axis 1030. Specifically, the subject position information acquisition unit 220 specifies, from the three-dimensional image data acquired in S1104, the three-dimensional image data that intersects the shooting axis 1030 calculated in S1107. For example, the subject position information acquisition unit 220 specifies the three-dimensional image data containing a coordinate that matches any of the coordinates constituting the shooting axis 1030.
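One simple way to test whether the shooting axis intersects a subject's three-dimensional image data is a ray-versus-bounding-box (slab) test, sketched below (the embodiment describes coordinate matching; the bounding-box test here is an illustrative simplification, and the function name is hypothetical):

```python
def axis_hits_box(origin, direction, box_min, box_max):
    """Slab test: does the shooting axis (a ray from `origin` along
    `direction`) intersect the axis-aligned bounding box of a subject's
    three-dimensional image data?"""
    tmin, tmax = 0.0, float("inf")
    for o, d, lo, hi in zip(origin, direction, box_min, box_max):
        if abs(d) < 1e-12:
            # Axis is parallel to this slab; must start inside it.
            if o < lo or o > hi:
                return False
        else:
            t1, t2 = (lo - o) / d, (hi - o) / d
            if t1 > t2:
                t1, t2 = t2, t1
            tmin, tmax = max(tmin, t1), min(tmax, t2)
            if tmin > tmax:
                return False
    return True

# A box 5-10 m ahead on the x axis is hit by an axis pointing along +x.
print(axis_hits_box((0, 0, 0), (1, 0, 0), (5, -1, -1), (10, 1, 1)))
```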
  • the subject position information acquisition unit 220 extracts subject position information of the subject corresponding to the three-dimensional image data specified in S1108 from each subject position information acquired in S1104.
  • the matching processing unit 230 converts the 3D image data specified in S1108 into 2D image data.
  • The matching processing unit 230 extracts image data of the subject to be photographed from the photographed image data acquired in S1106. For example, the matching processing unit 230 extracts image data of a subject imaged within a predetermined range from the center of the imaging region 1060.
  • The matching processing unit 230 determines whether the image based on the two-dimensional image data converted in S1110 matches the image based on the image data extracted in S1111.
  • When the matching processing unit 230 determines that the image based on the two-dimensional image data converted in S1110 and the image based on the image data extracted in S1111 do not match or approximate each other (S1112: No), the process proceeds to S1116.
  • When the matching processing unit 230 determines that the image based on the two-dimensional image data converted in S1110 matches the image based on the image data extracted in S1111 (S1112: Yes), the process proceeds to S1113.
  • The image file generation unit 240 generates shooting information B2030, including the current position information acquired in S1101 and the subject position information extracted in S1109, and shooting information A2020.
  • The image file generation unit 240 generates an image file 2010 including the shooting information A2020 generated in S1113, the shooting information B2030, the thumbnail image 2040 generated in S1106, and the compressed image data 2050.
  • the image file generation unit 240 stores the image file 2010 generated in S1114 in the external memory 3141, and ends the entire process.
  • If No in S1112, the image file generation unit 240 generates, in S1116, shooting information B2030 including the current position information acquired in S1101, and shooting information A2020. The subject position information field of the shooting information B2030 stores information indicating that subject position information has not been acquired. Alternatively, the image file generation unit 240 may generate shooting information B2030 that does not include subject position information.
  • The image file generation unit 240 generates an image file 2010 including the shooting information A2020, the shooting information B2030 generated in S1116, the thumbnail image 2040 generated in S1106, and the compressed image data 2050.
  • the image file generation unit 240 stores the image file 2010 generated in S1117 in the external memory 3141, and ends the entire process.
  • The subject position information acquisition unit 220 may specify the subject by performing pattern matching between the two-dimensional image data obtained by converting the three-dimensional image data and the image in the shooting region 1060, and extracting the image that matches or approximates.
  • Since the shape of a small house, condominium, office building, or the like as seen from the camera is clear, the calculation can be performed using only the shape as seen from the camera even when there are many objects, which reduces the amount of calculation.
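A minimal sketch of such pattern matching, using a brute-force sum-of-absolute-differences search of the converted two-dimensional view over the captured image (the embodiment does not fix a particular matching algorithm; this exhaustive version is for illustration only):

```python
def best_match_offset(image, template):
    """Exhaustive 2D template match by sum of absolute differences:
    slide the rendered 2D view of the 3D data over the captured image
    (both as row-major grids of intensities) and return the (row, col)
    offset with the smallest difference."""
    ih, iw = len(image), len(image[0])
    th, tw = len(template), len(template[0])
    best_offset, best_sad = None, float("inf")
    for dy in range(ih - th + 1):
        for dx in range(iw - tw + 1):
            sad = sum(abs(image[dy + y][dx + x] - template[y][x])
                      for y in range(th) for x in range(tw))
            if sad < best_sad:
                best_offset, best_sad = (dy, dx), sad
    return best_offset

img = [[0, 0, 0, 0],
       [0, 9, 9, 0],
       [0, 9, 9, 0],
       [0, 0, 0, 0]]
tpl = [[9, 9],
       [9, 9]]
print(best_match_offset(img, tpl))  # (1, 1)
```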
  • the subject position information of the subject can be added to the image file together with the current position information of the image acquisition device when the subject is photographed.
  • In Embodiment 1, the subject position information and captured image data of the subject displayed near the center of the finder image when the shutter button is pressed are acquired and stored in the external memory.
  • In Embodiment 2, the subject position information of the subject displayed near the center of the finder image when the shutter button is half-pressed, and the captured image data taken when the shutter button is fully pressed, are acquired and stored in the external memory. Accordingly, it is possible to acquire subject position information of a subject that is not at the center of the finder image 7000 when the shutter button 3010 is pressed.
  • the second embodiment of the present invention will be described below with reference to FIGS. 12 (a) to 12 (c), mainly with respect to differences from the first embodiment.
  • FIGS. 12A to 12C illustrate a method for selecting a subject when the subject to be selected is not near the center of the viewfinder, where the lock-on mark is located, while the photographer performs framing (specifies the shooting range) with the viewfinder.
  • the subject that the photographer wants to photograph is the first subject 4030.
  • The finder image 7000 to be framed is the image shown in FIG. 12A; the image acquisition apparatus has been manually swung leftward relative to the framing of the scene to be photographed.
  • a lock-on mark 7010 is superimposed on the first subject 4030 selected by the photographer.
  • In this state of the viewfinder image 7000, the shutter button 3010 is half-pressed.
  • Half-pressing the shutter button 3010 is an operation conventionally used to perform exposure metering and AF and to lock the exposure amount and focus position of the image acquisition apparatus 1000 in that state; here it may be an operation for setting the exposure or AF on the first subject 4030. Simultaneously with this operation, the first subject 4030 is selected as the target for acquiring position information.
  • FIG. 12C shows a state in which the image acquisition apparatus 1000, once swung to the left to perform the lock-on, has been manually returned, and the scene to be photographed is framed.
  • the viewfinder image 7000 includes a first subject 4030 and a second subject 4040.
  • The first subject 4030 selected in the state shown in FIG. 12B is displayed on the left side of the screen.
  • The first subject 4030 remains selected as the subject. Therefore, when a mark equivalent to the lock-on mark 7010 can be superimposed and displayed in the finder image 7000, such as an EVF (Electronic View Finder), a subject tracking mark 7020 is displayed as shown in the figure. The subject tracking mark 7020 moves within the viewfinder image 7000 while tracking the first subject 4030.
  • the photographer can recognize that the first subject 4030 is selected as the subject, so the photographer can change the orientation of the image acquisition apparatus 1000 with confidence.
  • As a result, the intended finder image 7000 can be taken, and a subject that is not at the center of the finder image 7000 when the shutter button 3010 is pressed can be selected.
  • The subject position information of the subject selected when the shutter button 3010 is pressed, the image taken when the shutter button 3010 is pressed, and the current position information of the image acquisition device 1000 are stored in the external memory in association with one another.
  • FIGS. 13A to 13C are diagrams showing an outline of a configuration example of a portable information terminal (for example, a camera-equipped mobile phone or a camera-equipped tablet terminal) that is an image acquisition apparatus according to the third embodiment.
  • FIG. 13A shows the portable information terminal 1200 observed from the side, and FIGS. 13B and 13C show it observed from the front.
  • The portable information terminal 1200 includes a display 1230 that is provided on the front surface side of the portable information terminal 1200 and used as a finder as in a normal camera, a rear camera 1210 provided on the side opposite the display 1230, and a front camera 1220 provided on the front surface side of the portable information terminal 1200.
  • the front camera 1220 photographs the display 1230 side, for example, the photographer and the background of the photographer.
  • the front camera 1220 is used when a portable information terminal is used as a videophone.
  • FIG. 13B shows a state in which the subject is photographed using the rear camera 1210.
  • the display 1230 displays a subject photographed by the rear camera 1210 as a real-time moving image, and displays a lock-on mark 1240 superimposed on the subject.
  • the portable information terminal 1200 acquires the subject position coordinates of the subject at the position of the lock-on mark 1240.
  • The portable information terminal 1200 recognizes the subject at the location where the lock-on mark 1240 was present as the subject to be photographed, and continues to display the monitor screen in real time with the lock-on mark 1240 placed on that subject. Thereafter, when the shutter button 1260 is pressed, the portable information terminal 1200 acquires the subject image and the subject position information of the subject indicated by the lock-on mark 1240.
  • FIG. 13C shows an example of a state in which when a photographer is photographed using the front camera 1220, the photographing result is displayed on the display 1230 of the portable information terminal 1200.
  • the display 1230 displays a subject 1270 together with the photographer on the captured image.
  • an additional information display area 1280 is displayed so as to be superimposed on the screen.
  • The subject name of the subject 1270 and the subject position information of the subject, expressed in latitude and longitude, are displayed there.
  • In this case, since the front camera 1220 is used, the real-time monitor display at the time of shooting is more convenient for the photographer when it is reversed left-right and displayed as a mirror image.
  • Accordingly, the three-dimensional data stored in the database is collated with the captured image data before the data is horizontally reversed for display on the display unit, or alternatively the three-dimensional image data is reversed left-right and then collated with the subject image.
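The left-right reversal mentioned here can be sketched as a simple row-wise flip of the image data (an illustrative helper, not part of the embodiment):

```python
def mirror_horizontal(image):
    """Left-right reverse a row-major image (a list of rows), so that
    either the mirror-image monitor display or the rendered 3D view can
    be brought into the same orientation before collation."""
    return [list(reversed(row)) for row in image]

print(mirror_horizontal([[1, 2, 3],
                         [4, 5, 6]]))  # [[3, 2, 1], [6, 5, 4]]
```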
  • FIG. 14A is a diagram schematically showing an outline of a configuration example of a single-lens reflex camera 1400 that is an image acquisition device according to the fourth embodiment, and FIG. 14B is a diagram schematically showing an outline of a configuration example of an optical viewfinder camera 1500 according to the fourth embodiment.
  • Light from the subject entering along the photographing optical axis 1410 passes through the lens, is reflected upward 90 degrees by the mirror 1430, and is then focused on the focus glass 1420.
  • The subject image formed on the focus glass 1420 is reflected repeatedly by the pentaprism 1460 and then guided to the viewfinder 1440, so that the photographer can visually observe the optical image of the subject.
  • At the time of shooting, the mirror 1430 rises to the vicinity of the focus glass 1420 so that it does not block the light passing along the photographing optical axis 1410, which is focused on the fourth camera sensor 1454; exposure is performed at the shutter speed by opening and closing the focal plane shutter 1470 in front of the sensor.
  • Although a camera sensor is shown in this figure, a film camera may be used instead of the camera sensor.
  • In this configuration, the subject light is not constantly focused on the fourth camera sensor 1454 as in a general digital camera, so the fourth camera sensor 1454 cannot by itself be used to collate the shape of the subject with the three-dimensional data.
  • Possible methods include acquiring the shape of the subject by making part of the mirror 1430 a half mirror so that light passes through it and is imaged by the sub mirror 1431 on the fourth camera sensor 1454 at the camera floor position, imaging the light on a camera sensor in the viewfinder 1440, and installing the third camera sensor 1453 parallel to the photographing optical axis 1410, completely separate from the photographing optical system.
  • FIG. 14B shows a case where the image acquisition device is an optical viewfinder camera 1500 provided with an optical viewfinder separately from the photographing lens. Similarly to FIG. 14A, exposure is performed at the shutter speed at the time of shooting by opening and closing the focal plane shutter 1520 in front of the first camera sensor 1510.
  • Here too a camera sensor is used, but a film camera may be used instead of the camera sensor.
  • a method is shown in which a mirror is provided in the optical viewfinder in the same manner as in FIG. 14A and light is focused on the second camera sensor 1530 on the viewfinder.
  • In this way, even with these camera configurations, the shape of the subject can be acquired and collated with the three-dimensional data to obtain the subject position information.
  • Although the invention made by the present inventors has been specifically described above based on the embodiments, it goes without saying that the present invention is not limited to the embodiments and can be variously modified without departing from the gist of the invention.
  • a part of the configuration of one embodiment may be replaced with the configuration of another embodiment.
  • the configuration of another embodiment may be added to the configuration of a certain embodiment. These all belong to the category of the present invention.
  • The present invention can also be applied to a device having a function of acquiring both device position information and subject position information and storing the position information.
  • A portable information communication terminal, such as a camera-equipped mobile phone, that is equipped with a camera, can capture and store images, has functions for referencing and uploading external data through communication, and can perform image processing inside the main unit, is one of the devices suitable for implementing the present invention.
  • 210 ... image data acquisition unit, 220 ... subject position information acquisition unit, 230 ... matching processing unit, 240 ... image file generation unit, 1000 ... image acquisition device, 1010 ... database, 1030 ... shooting axis, 1400 ... single-lens reflex camera, 1500 ... optical viewfinder camera, 2010 ... image file, 2030 ... shooting information B, 7010 ... lock-on mark.


Abstract

An image acquisition device that acquires current position information of the image acquisition device comprises: a subject position information acquisition unit that determines a subject by comparing an image capture area, in which the image acquisition device captures an image, with subject electronic information including the coordinates constituting the subject, and that then acquires subject position information of the determined subject; and an image file generation unit that generates an image file including the current position information and the subject position information acquired by the subject position information acquisition unit.

Description

Image acquisition apparatus, image file generation method, and image file generation program
The present apparatus relates to an image acquisition apparatus that captures and stores an image, and particularly relates to an apparatus having a function of storing the position information of the image acquisition apparatus at the time of shooting together with the captured image.
A digital image acquisition device that obtains a two-dimensional captured image by projecting the image of a subject through a lens onto a camera sensor, an assembly of many pixels composed of semiconductors, and measuring the amount of light irradiated to each pixel, is in widespread use.
The captured image data of an image taken by such an image acquisition device is compressed by a predetermined image compression method so that the size of the image file becomes small, and attribute information called Exif (exchangeable image file format) is added to the compressed image file.
This attribute information includes, in addition to information on the shooting conditions of the image data (the image acquisition apparatus and lens used, the focal length and aperture value of the lens, the shutter speed, the sensor sensitivity, and so on), positional information of the image acquisition apparatus at the time of shooting; these items of attribute information can be stored embedded in the image file.
A method using GPS (Global Positioning System) is the most common way to acquire the position of a device. Japanese Patent Laid-Open No. 2006-33186 (Patent Document 1) discloses a technique in which a camera is equipped with a GPS position acquisition function and, in addition to the position at the time of shooting, acquires the movement locus of the user carrying the camera.
JP 2006-33186 A
According to the technique disclosed in Patent Document 1, information from the satellites is received, particularly at the initial start-up of the GPS position acquisition function, and the position can be calculated from the satellites' orbital schedules at the current time. Moreover, at any point on the earth where radio waves from the satellites can be received, the position of the camera can be calculated, so accurate camera position information can be obtained easily without relying on maps, scenery, or other information.
When a subject is photographed with an image acquisition apparatus and positional information is stored, there are cases where it is actually more useful to store the location of the subject (subject position information) rather than the position of the image acquisition apparatus.
According to the technique of Patent Document 1, since the position acquisition function is mounted on the camera, the only position information that can be acquired is that of the camera, that is, of the photographer; subject position information cannot be acquired.
An object of the present invention is to provide a technique that makes it possible to add, to an image file, the subject position information of a photographed subject together with the current position information of the image acquisition apparatus at the time of shooting.
The above problem is solved, for example, by the techniques described in the claims.
To give one example, an image acquisition apparatus that acquires its own current position information comprises: a subject position information acquisition unit that identifies a subject by collating the shooting area captured by the image acquisition apparatus with subject electronic information including the coordinates constituting the subject, and acquires subject position information of the identified subject; and an image file generation unit that generates an image file including the current position information and the subject position information acquired by the subject position information acquisition unit.
According to the present invention, the subject position information of a photographed subject can be added to an image file together with the current position information of the image acquisition apparatus at the time of shooting.
FIG. 1 is a configuration diagram of a communication system including an image acquisition apparatus having a position information acquisition function according to Embodiment 1.
FIG. 2 is a hardware configuration diagram of the image acquisition apparatus according to Embodiment 1.
FIG. 3 is a diagram showing an example of a finder image, the screen seen when a subject is observed with the image acquisition apparatus according to Embodiment 1.
FIG. 4 is a software configuration diagram of the image acquisition apparatus according to Embodiment 1.
FIG. 5 is an explanatory diagram of the shooting axis of the image acquisition apparatus according to Embodiment 1.
FIG. 6 is an explanatory diagram showing the contents of an image file handled by the image acquisition apparatus according to the first example.
FIG. 7 is an explanatory diagram showing an operation example of the image acquisition apparatus according to Embodiment 1.
FIG. 8 is an explanatory diagram showing an operation example of the image acquisition apparatus according to Embodiment 1.
FIGS. 9(a) to 9(c) are explanatory diagrams showing an operation example of the image acquisition apparatus according to Embodiment 1.
FIG. 10 is an explanatory diagram showing an example of data in which the maximum distance to a subject selectable by the lock-on mark is set for each subject, in the image acquisition apparatus according to Embodiment 1.
FIG. 11 is a diagram showing an overview of the overall processing of the image acquisition apparatus according to Embodiment 1.
FIGS. 12(a) to 12(c) are explanatory diagrams showing a method according to Embodiment 2 for selecting a subject that is not near the center of the finder.
FIGS. 13(a) to 13(c) are diagrams showing an outline of a configuration example of a portable information terminal serving as the image acquisition apparatus according to Embodiment 3.
FIG. 14(a) is a diagram schematically showing an outline of a configuration example of a single-lens reflex camera serving as the image acquisition apparatus according to Embodiment 4, and FIG. 14(b) schematically shows an outline of a configuration example of an optical finder camera.
Hereinafter, Embodiment 1 of the present invention will be described in detail with reference to FIGS. 1 to 11.

(Embodiment 1)

<System configuration>
FIG. 1 is a configuration diagram of a communication system including an image acquisition apparatus 1000 having a position information acquisition function according to Embodiment 1.
As shown in FIG. 1, the communication system comprises the image acquisition apparatus 1000, a wireless router 1050, and a database 1010 connected to the image acquisition apparatus 1000 via the wireless router 1050.
The image acquisition apparatus 1000 is not limited to a digital camera (a camera that focuses the image of a subject 1020 onto an image sensor through an optical lens and, by having the pixels constituting the sensor detect the brightness and color of the projected subject image, captures a digital image composed of many pixels); the invention can also be applied to a conventional film camera, provided it has a storage medium that stores, as electronic information, position information corresponding to each frame of the film being shot.
The image acquisition apparatus 1000 is equipped with a GPS unit that acquires current position information indicating where the apparatus took the picture. The GPS unit receives radio waves from GPS satellites 1040 and acquires the current position of the camera. Specifically, the GPS unit 3050 can receive radio waves from a plurality of GPS satellites. It receives signals from at least three satellites and calculates the current position from two kinds of basic information obtained from the received data, namely the exact time given by each satellite's atomic clock and the coordinates of each satellite at that time (its orbital information), together with the distance to each satellite derived from the signal's travel time (radio waves propagate at the speed of light, about 300,000 km/s).
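The position calculation described above can be sketched numerically. The following is a minimal illustration under simplifying assumptions, not the patent's implementation: it converts signal travel times into ranges at the speed of light and then solves the sphere equations by linearizing them against the first satellite. The receiver clock bias, which real GPS receivers must also estimate, is ignored, and all function names are hypothetical.

```python
import math

C = 299_792_458.0  # speed of light in m/s (the text rounds this to 300,000 km/s)

def ranges_from_times(travel_times_s):
    """Distance to each satellite = propagation speed x travel time."""
    return [C * t for t in travel_times_s]

def _det3(m):
    """Determinant of a 3x3 matrix given as nested lists."""
    return (m[0][0] * (m[1][1] * m[2][2] - m[1][2] * m[2][1])
          - m[0][1] * (m[1][0] * m[2][2] - m[1][2] * m[2][0])
          + m[0][2] * (m[1][0] * m[2][1] - m[1][1] * m[2][0]))

def trilaterate(sats, ranges):
    """Solve |x - s_i|^2 = r_i^2 for the receiver position x.
    Subtracting the first sphere equation from the next three yields a
    3x3 linear system, solved here by Cramer's rule."""
    s1, r1 = sats[0], ranges[0]
    A, b = [], []
    for s, r in zip(sats[1:4], ranges[1:4]):
        A.append([2.0 * (s[k] - s1[k]) for k in range(3)])
        b.append(r1 * r1 - r * r
                 + sum(c * c for c in s) - sum(c * c for c in s1))
    d = _det3(A)
    pos = []
    for col in range(3):
        m = [row[:] for row in A]
        for i in range(3):
            m[i][col] = b[i]
        pos.append(_det3(m) / d)
    return tuple(pos)
```

With exact ranges from four satellites in a non-degenerate geometry, the linear system has a unique solution, which is why this toy version needs no iteration.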
The image acquisition apparatus 1000 can also roughly determine its current position using radio waves from the wireless router 1050 or from a mobile phone base station (not shown). Furthermore, the image acquisition apparatus 1000 can receive, via short-range wireless communication such as Bluetooth (registered trademark), the current position obtained by a nearby portable information terminal capable of GPS positioning, and use the received position of that terminal as the current position information at the time of shooting.
When the subject 1020 is photographed with the image acquisition apparatus 1000 and position information is to be stored, it is often more useful to store the location of the subject 1020 (subject position information 1002) rather than the position of the apparatus itself. One way to meet this need would be to place a device separate from the image acquisition apparatus 1000 near the subject 1020, acquire the subject position information 1002 there, and transmit it to the image acquisition apparatus 1000 so that the captured image and the subject position information 1002 can be stored in association with each other. However, this requires placing such a separate device near the subject, so the photographer must first approach the subject 1020. Consequently, when photographing a distant subject (such as a mountain), it becomes difficult to obtain the subject's position at the time of shooting.
In the present invention, the image acquisition apparatus 1000 captures a predetermined range as a shooting area 1060 centered on a shooting axis 1030 that coincides with the center line of its lens. When the subject 1020 lying on the shooting axis 1030 is the main shooting target within the shooting area 1060, the apparatus acquires the subject position information 1002 of that subject. The shooting axis 1030 is the line connecting the center of the lens of the image acquisition apparatus 1000 with the center of the shooting area 1060.
The database 1010 stores subject electronic information in which the name of a subject 1020, its subject position information 1002, and three-dimensional image data of the subject 1020 are associated with one another. The three-dimensional image data is data for rendering the subject 1020 and consists of coordinates that specify the position in three-dimensional space of each point constituting the subject 1020.
The database 1010 may instead be held by the image acquisition apparatus 1000. In that case, the image acquisition apparatus 1000 may be configured to keep the data up to date by periodically fetching the data stored in the database 1010 from an external server (database).
The image acquisition apparatus 1000 receives the three-dimensional image data and the subject position information 1002 of the subject 1020 lying on the shooting axis 1030 from the database 1010 via the wireless router 1050. The image acquisition apparatus 1000 then judges whether the received three-dimensional image data matches the shape of the subject 1020 within the shooting area 1060, and generates an image file based on the subject position information 1002, the name, and other attributes of the matched subject 1020. Note that the image acquisition apparatus 1000 may also receive the three-dimensional image data, the subject position information 1002, and so on directly from the database 1010 without going through the wireless router 1050.

<Hardware configuration of the image acquisition apparatus>
FIG. 2 is a hardware configuration diagram of the image acquisition apparatus 1000 according to Embodiment 1. As shown in FIG. 2, the image acquisition apparatus 1000 comprises a CPU 3000 serving as the central processing unit; a shutter button 3010 pressed at the time of shooting; a sensor (camera sensor) 3020; a signal processing DSP 3030; an encoder/decoder 3040; a GPS unit 3050; a G sensor 3060; a geomagnetic sensor 3070; a wireless LAN 3080; a flash memory 3090; an SD-RAM 3100; a clock 3110; an EVF/LCD 3120; operation switches 3130; an external memory I/F 3140; an external memory 3141; a remote control I/F 3150; an infrared receiver 3151; a short-range wireless communication unit 3160; and a bus 3001 interconnecting these units. In this example, the image acquisition apparatus 1000 is basically configured as a computer system.
The sensor 3020 is an image sensor that converts an optical image focused by a lens (not shown) into electrical signals. The signal processing DSP 3030 processes the signals from the sensor 3020. The sensor 3020, the signal processing DSP 3030, and the encoder/decoder 3040 are not only connected to the bus; the output signal of the sensor 3020 may also be sent directly to the signal processing DSP 3030 and the encoder/decoder 3040 for video signal processing. In that case, the large video signal does not pass through the bus 3001, so the bus is not occupied by image data and the camera can perform other operations while carrying out capture and compression.
The encoder/decoder 3040 compresses the RGB video signal produced by the signal processing DSP 3030 using compression methods such as the discrete cosine transform and Huffman coding. The encoder/decoder 3040 may also have a function of compressing moving images in addition to captured still images.
The GPS unit 3050 acquires position information indicating the current position of the image acquisition apparatus 1000. The G sensor 3060 measures, for example, the elevation/depression angle of the image acquisition apparatus 1000, based on its orientation and on the acceleration generated when it is moved. The geomagnetic sensor 3070 measures, for example, the azimuth angle of the image acquisition apparatus 1000.
The wireless LAN 3080 performs wireless communication between the camera and external devices such as portable information terminals, and can also acquire the current position using signals from wireless base stations.
The flash memory 3090 stores the program that controls the entire camera and its basic constants. The SD-RAM 3100 is the working memory for program execution; it stores the continually updated GPS satellite orbit information, the position information acquired by GPS, and so on.
The clock 3110 is used to attach a time code to the image information stored at the time of shooting, and for the GPS position measurement described above.
The operation switches 3130 accept various operations on the image acquisition apparatus 1000, such as changing its settings.
The external memory I/F 3140 connects the external memory 3141, in which compressed image data is stored, to the image acquisition apparatus 1000.
The infrared receiver 3151 receives external instructions, such as a shutter operation of the image acquisition apparatus 1000, from an infrared remote controller or the like.
The remote control I/F 3150 converts the output signal of the infrared receiver 3151 into digital data to be used as control signals for the image acquisition apparatus 1000.
The short-range wireless communication unit 3160 communicates between the image acquisition apparatus 1000 and external devices such as portable information terminals via short-range wireless (for example, Bluetooth (registered trademark)).
The EVF/LCD (display) 3120 displays a finder image (FIG. 3, described later) of the subject received by the sensor 3020 at the time of shooting. The EVF/LCD 3120 is also used to visually check image data that has already been captured and stored in the external memory 3141, and to check and change the settings of the image acquisition apparatus 1000. The finder image displayed on the EVF/LCD 3120 is described below with reference to FIG. 3.
As shown in FIG. 3, the EVF/LCD displays a finder image 7000, the screen seen when the photographer observes subjects through the camera. Within the finder image 7000, candidate subjects are displayed: a first subject 4030, a second subject 4040, and a third subject 4050.
A lock-on mark 7010 is also displayed on the finder image 7000; normally it appears at the position where a subject is most easily captured, namely near the center of the finder image 7000. The photographer places the lock-on mark 7010 on the intended subject. For example, the photographer selects the second subject 4040, presses the shutter button halfway, and lets AF (Auto Focus) lock onto it; by this operation the second subject 4040 is selected as the subject whose position information the photographer wants to record. When the shutter button is then pressed, captured image data is acquired. Thereafter, an image file in which shooting information including the current position information and the subject position information is added to the compressed image data produced by the encoder/decoder is stored in the external memory.

<Software configuration of the image acquisition apparatus>
FIG. 4 is a software configuration diagram of the image acquisition apparatus 1000 according to Embodiment 1. As shown in FIG. 4, the image acquisition apparatus 1000 comprises an image data acquisition unit 210, a subject position information acquisition unit 220, a matching processing unit 230, and an image file generation unit 240.
The subject position information acquisition unit 220 identifies a subject by collating the shooting area captured by the image acquisition apparatus 1000 with the three-dimensional image data stored in the database, which consists of the coordinates specifying the position in three-dimensional space of each point constituting a subject, and acquires the subject position information of the identified subject.
The subject position information acquisition unit 220 requests from the database all the subject position information and three-dimensional image data contained in the shooting area specified by the image data acquisition unit 210. In response, the database transmits the subject position information and three-dimensional image data to the image acquisition apparatus 1000, which receives them; the subject position information acquisition unit 220 thereby obtains the subject position information and three-dimensional image data sent from the database.
The subject position information acquisition unit 220 also calculates the shooting axis connecting the center of the lens of the image acquisition apparatus 1000 with the center of the shooting area. The shooting axis is described in detail below with reference to FIG. 5.
FIG. 5 is a diagram showing the vector of the shooting axis 1030. In the coordinate system shown in FIG. 5, the X axis 6010 points north, the Y axis 6020 points east, the Z axis 6030 points upward, and the current position of the image acquisition apparatus 1000 is taken as the origin 6000. In this system the vector of the shooting axis 1030 can be expressed by two angles measured from the X axis 6010 in X-Y-Z space: the direction angle 6060 and the elevation angle 6050. That is, the shooting axis 1030 is calculated from the current position information of the image acquisition apparatus 1000 (origin 6000) acquired by the GPS unit, the direction angle 6060 measured by the geomagnetic sensor, and the elevation angle 6050 obtained by the G sensor.
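As a small illustration of this geometry (not code from the patent), the unit vector of the shooting axis in the frame of FIG. 5 (X = north, Y = east, Z = up) follows from the two angles by elementary spherical-to-Cartesian conversion. The function name is hypothetical:

```python
import math

def shooting_axis(direction_deg, elevation_deg):
    """Unit vector of the shooting axis 1030 in the patent's frame:
    X = north, Y = east, Z = up.  The direction angle is measured from
    the X (north) axis toward Y (east); the elevation angle is measured
    up from the horizontal plane."""
    az = math.radians(direction_deg)
    el = math.radians(elevation_deg)
    return (math.cos(el) * math.cos(az),   # northward component
            math.cos(el) * math.sin(az),   # eastward component
            math.sin(el))                  # upward component
```

For example, a camera aimed due north and level gives (1, 0, 0), while aiming straight up gives (0, 0, 1).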
Then, by determining the relationship between the shooting axis 1030 and the surfaces constituting the subject's three-dimensional image data (whether an intersection point lies within a surface), it can be judged whether a given subject is the target subject.
Referring again to FIG. 4, the subject position information acquisition unit 220 calculates the shooting axis from the current position information of the image acquisition apparatus 1000, the elevation angle, and the direction angle. It then identifies the subject intersected by the shooting axis (the subject containing an intersection point with the axis). Specifically, from the pieces of three-dimensional image data sent from the database, the subject position information acquisition unit 220 identifies the one that the calculated shooting axis intersects. For example, when some coordinate on the shooting axis coincides with some coordinate constituting a piece of three-dimensional image data, that piece of three-dimensional image data is identified as the subject's three-dimensional image data.
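The patent describes this intersection test abstractly, as coordinates coinciding. In practice, if the three-dimensional image data is assumed to be a triangle mesh, a standard ray/triangle intersection test (here Moller-Trumbore) can decide whether the shooting-axis ray passes through a subject's surface. The sketch below uses that assumption; the function and data names are hypothetical:

```python
def _dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def _cross(a, b):
    return (a[1] * b[2] - a[2] * b[1],
            a[2] * b[0] - a[0] * b[2],
            a[0] * b[1] - a[1] * b[0])

def ray_hits_triangle(origin, direction, v0, v1, v2, eps=1e-9):
    """Moller-Trumbore: does the shooting-axis ray cross the triangle?"""
    e1 = tuple(v1[i] - v0[i] for i in range(3))
    e2 = tuple(v2[i] - v0[i] for i in range(3))
    p = _cross(direction, e2)
    det = _dot(e1, p)
    if abs(det) < eps:              # ray parallel to the triangle plane
        return False
    inv = 1.0 / det
    t0 = tuple(origin[i] - v0[i] for i in range(3))
    u = _dot(t0, p) * inv
    if u < 0.0 or u > 1.0:
        return False
    q = _cross(t0, e1)
    v = _dot(direction, q) * inv
    if v < 0.0 or u + v > 1.0:
        return False
    return _dot(e2, q) * inv > eps  # intersection lies in front of the camera

def subject_on_axis(origin, axis, subjects):
    """Return the name of the first subject whose mesh the axis crosses.
    `subjects` is a list of (name, list-of-triangles) pairs."""
    for name, triangles in subjects:
        if any(ray_hits_triangle(origin, axis, *tri) for tri in triangles):
            return name
    return None
```

A real implementation would also pick the nearest hit when several subjects lie on the axis; this sketch simply returns the first match.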
The subject position information acquisition unit 220 also obtains, from the pieces of subject position information sent from the database, the subject position information corresponding to the identified three-dimensional image data.
Furthermore, depending on the type of subject electronic information, the subject position information acquisition unit 220 acquires as the subject position information: position information attached to the subject electronic information; or the subject position information corresponding to subject electronic information that matches or approximates the image data of the subject contained in the shooting area; or the subject position information of a subject that overlaps, within a predesignated range, the shooting axis connecting the center of the lens of the image acquisition apparatus with the center of the shooting area.
The matching processing unit 230 converts the three-dimensional image data identified by the subject position information acquisition unit 220 into two-dimensional image data. The matching processing unit 230 also extracts the image data of the subject to be photographed from the captured image data; for example, it extracts the image data of a subject imaged within a predetermined range from the center of the shooting area. The matching processing unit 230 then determines whether the image based on the converted two-dimensional image data matches or approximates the image based on the image data extracted from the captured image data.
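The patent does not specify the match criterion. One simple stand-in, purely illustrative and not the patent's algorithm, is to project the model's vertices through a pinhole camera looking along +X (the shooting axis in the frame of FIG. 5) and compare the projected silhouette's bounding box against that of the region extracted from the captured image:

```python
def project(points3d, focal=1.0):
    """Pinhole projection onto the image plane; the camera looks along +X,
    so Y maps to image x and Z to image y.  Points behind the camera are
    discarded."""
    return [(focal * y / x, focal * z / x) for (x, y, z) in points3d if x > 0]

def bbox(points2d):
    xs = [p[0] for p in points2d]
    ys = [p[1] for p in points2d]
    return (min(xs), min(ys), max(xs), max(ys))

def bbox_iou(a, b):
    """Intersection-over-union of two (x0, y0, x1, y1) boxes."""
    ix0, iy0 = max(a[0], b[0]), max(a[1], b[1])
    ix1, iy1 = min(a[2], b[2]), min(a[3], b[3])
    if ix1 <= ix0 or iy1 <= iy0:
        return 0.0
    inter = (ix1 - ix0) * (iy1 - iy0)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter)

def shapes_match(model_pts3d, extracted_pts2d, thresh=0.5):
    """Crude match test: do the projected model and the extracted subject
    region overlap sufficiently on the image plane?"""
    return bbox_iou(bbox(project(model_pts3d)), bbox(extracted_pts2d)) >= thresh
```

An actual matcher would compare full silhouettes or feature points rather than bounding boxes; this sketch only shows where the 3D-to-2D conversion and the approximation threshold fit in the pipeline.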
The image data acquisition unit 210 specifies the shooting area captured by the image acquisition apparatus 1000 based on the current position information acquired by the GPS unit, the elevation angle measured by the G sensor, and the azimuth angle measured by the geomagnetic sensor.
When the shutter button is pressed, the image data acquisition unit 210 acquires the captured image data for rendering the image in the shooting area, and generates a thumbnail image from that data. The captured image data is also compressed by the encoder/decoder to generate compressed image data.
The image file generation unit 240 generates shooting information B, which includes the current position information and the subject position information, and shooting information A. The image file generation unit 240 then generates an image file (FIG. 6, described later) composed of the generated shooting information A, shooting information B, the thumbnail image, and the compressed image data, and stores the generated image file in the external memory.

<Image information file>
FIG. 6 is an explanatory diagram showing the contents of an image file 2010 handled by the image acquisition apparatus 1000 according to the first example of the present invention.
The image file 2010 is composed of shooting information A 2020, shooting information B 2030, a thumbnail image 2040, and compressed image data 2050.
 Shooting information A2020 indicates the types of information about the captured image 2000 stored in the image file 2010. The thumbnail image 2040 is a reduced version of the captured image 2000. The compressed image data 2050 is produced by compressing the captured image 2000 with a combination of transform and coding methods such as the discrete cosine transform and Huffman coding, reducing the data volume and improving storage and read-out efficiency. Shooting information B2030 stores the actual data for each item of information indicated by shooting information A2020.
 The information about the captured image 2000 held in shooting information B2030 includes, for example, the shooting date and time, the storage date and time, the name of the camera used for shooting, the name of the lens used for shooting, the shutter speed, the aperture value, the film mode (for example, reversal mode or black-and-white mode), the ISO sensitivity representing the gain applied to the sensor output at the time of shooting, current position information indicating the position at which the image acquisition device 1000 captured the image 2000, subject position information, and a subject name indicating the name of the subject.
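The file layout and the fields just listed can be sketched as simple data containers; every Python name below is an illustrative stand-in, not the patent's actual byte format:

```python
from dataclasses import dataclass

@dataclass
class ShootingInfoB:
    # Actual data items of shooting information B2030; the attribute
    # names are illustrative stand-ins for the patent's fields.
    shot_at: str
    stored_at: str
    camera_name: str
    lens_name: str
    shutter_speed: str
    aperture: float
    film_mode: str
    iso: int
    current_position: tuple  # (latitude, longitude, altitude) of the camera
    subject_position: tuple  # (latitude, longitude, altitude) of the subject
    subject_name: str

@dataclass
class ImageFile:
    info_a: list           # shooting information A2020: the types of stored items
    info_b: ShootingInfoB  # shooting information B2030: the actual values
    thumbnail: bytes       # thumbnail image 2040
    compressed: bytes      # compressed image data 2050
```

Because all four parts travel in one `ImageFile` object, copying the file to another device carries the position metadata along with the image, which is exactly the property the text emphasizes next.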
 By bundling shooting information A2020, shooting information B2030, the thumbnail image 2040, and the compressed image data 2050 into a single image file 2010 in this way, all four can be copied together from the image acquisition device 1000 to another device. Because the related information is handled as one unit, the image file 2010 can be handled without losing shooting information B2030 in particular. Also, as described above with reference to FIG. 1, instead of the entire image file of this embodiment, it is possible to obtain only the information corresponding to shooting information B2030, namely "at what date and time," "from which location," and "which location's subject was viewed."
<Camera operation>
 FIGS. 7 and 8 are explanatory diagrams showing an operation example of the image acquisition device 1000 according to the first embodiment. FIG. 7 shows the layout of the image acquisition device 1000, a first subject 4030, and a second subject 4040 observed from the side, and FIG. 8 shows the same layout observed from an oblique direction.
 As shown in FIGS. 7 and 8, the image acquisition device 1000 is set up pointing along the shooting axis 1030 extending in its lens direction, in a state in which it can acquire images of the first subject 4030 and the second subject 4040. The current position 4001 of the image acquisition device 1000 can be expressed as coordinate data consisting of three values: latitude, longitude, and altitude on the earth.
 When the three-dimensional image data of the subject described in FIG. 1 represents a building of common shape, the three-dimensional image 4010 drawn from that data is a rectangular parallelepiped composed of six constituent surfaces 4011 in total.
 The subject position information acquisition unit 220 calculates whether intersections exist between the constituent surfaces 4011 of the three-dimensional images 4010 and the shooting axis 1030. The subject position information acquisition unit 220 then identifies the first subject 4030 and the second subject 4040, for which intersections exist, as subject candidates.
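For a rectangular-parallelepiped subject, the intersection test against the six constituent surfaces reduces to a ray/axis-aligned-box test. The patent does not prescribe a particular algorithm; a sketch using the standard slab method:

```python
def ray_hits_box(origin, direction, box_min, box_max):
    """Return the entry distance along the ray if it hits the box, else None.

    Standard slab method for an axis-aligned box; an illustrative stand-in
    for the intersection test against the six constituent surfaces 4011.
    """
    t_near, t_far = float("-inf"), float("inf")
    for o, d, lo, hi in zip(origin, direction, box_min, box_max):
        if abs(d) < 1e-12:
            if o < lo or o > hi:  # ray parallel to this slab and outside it
                return None
            continue
        t1, t2 = (lo - o) / d, (hi - o) / d
        if t1 > t2:
            t1, t2 = t2, t1
        t_near, t_far = max(t_near, t1), min(t_far, t2)
        if t_near > t_far:        # slabs do not overlap: no intersection
            return None
    return t_near if t_far >= 0 else None

# A box spanning 10-20 m ahead along the x axis is hit at distance 10.
print(ray_hits_box((0, 0, 0), (1, 0, 0), (10, -5, 0), (20, 5, 15)))
```

Subjects whose boxes return a non-None distance become the candidates; the distance itself is reused later when the closest candidate must be chosen.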
 Next, the matching processing unit 230 converts the three-dimensional image data for drawing the first subject 4030 and the three-dimensional image data for drawing the second subject 4040, both identified by the subject position information acquisition unit 220, into two-dimensional image data. The matching processing unit 230 then performs a matching check to determine whether the image of each subject as seen along the shooting axis 1030 of the image acquisition device 1000 matches or approximates the image of the corresponding building or other structure contained in the image captured by the photographer.
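The conversion of three-dimensional image data into a two-dimensional view for the matching check can be illustrated with a minimal pinhole projection; the fixed look-along-+y orientation and omission of camera rotation are simplifying assumptions, not part of the patent:

```python
def project_points(points, cam_pos, focal=1.0):
    """Project 3-D points onto a 2-D image plane with a pinhole model.

    A minimal stand-in for rendering a subject's 3-D image data as the
    2-D image used in the matching check; assumes the camera at cam_pos
    looks along +y and every point is in front of it (relative y > 0).
    """
    out = []
    for x, y, z in points:
        rx, ry, rz = x - cam_pos[0], y - cam_pos[1], z - cam_pos[2]
        out.append((focal * rx / ry, focal * rz / ry))  # perspective divide
    return out

# Two corners of a facade 10 m ahead project symmetrically about the centre.
print(project_points([(2, 10, 4), (-2, 10, 4)], (0, 0, 0)))
```

The projected outline of each candidate can then be compared against the corresponding region of the captured frame to decide which candidate the photographer actually framed.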
 In the image captured by the photographer, the photographer mainly frames the main subject, that is, the subject to which subject position information should be added. The main subject typically corresponds to, for example, an AF-locked target or a target set with the lock-on mark described later.
 As a result of the matching check, the matching processing unit 230 selects the subject that matches or approximates the captured image. The matching check makes it possible to determine whether the main subject is the first subject 4030 or the second subject 4040.
 The subject position information acquisition unit 220 acquires the subject position information corresponding to the three-dimensional image data that matches or approximates the image data of the subject contained in the shooting region. As a result, the appropriate one of the first subject position 4031, which is the position of the first subject 4030, and the second subject position 4041, which is the position of the second subject 4040, can be acquired as subject position information, and more appropriate subject position information can be added to the image file.
 The first subject 4030 is the subject candidate closest to the image acquisition device 1000, and the second subject 4040 is the next closest. In such a case, instead of the subject judged to match or approximate in the matching check, the subject position information acquisition unit 220 may select as the subject the first subject 4030, that is, the subject whose intersection is closest to the image acquisition device 1000.
 Here, a problem arises that becomes conspicuous when shooting with a telephoto lens: it is difficult to keep a single center point accurately aligned with the desired subject, and even a slight hand shake during framing moves the center of the lock-on mark onto a neighboring subject, making it difficult to select the subject by half-pressing the shutter. In the examples described so far, the subject is selected at the single point at the center of the lock-on mark; the example shown below differs in that it selects any subject that partially overlaps the lock-on mark.
 FIGS. 9(a) to 9(c) are explanatory diagrams showing operation examples of the image acquisition device 1000 according to the first embodiment. As shown in FIG. 9(a), the first subject 4030, the second subject 4040, and a third subject 4050 in front of the image acquisition device 1000 are contained in the shooting region 1060. The first subject 4030, the second subject 4040, and the third subject 4050 also fall within the range of a lock-on mark 7010.
 The subject position information acquisition unit identifies, as targets for subject position information acquisition, subjects that partially or entirely overlap a predesignated range (for example, the range in which the lock-on mark 7010 is displayed) centered on the shooting axis connecting the center of the lens of the image acquisition device and the center of the shooting region.
 In the example shown in FIG. 9(a), the subject position information acquisition unit calculates whether the constituent surfaces making up the three-dimensional image of each subject overlap part or all of the lock-on mark 7010. It then identifies the first subject 4030, the second subject 4040, and the third subject 4050, which overlap part or all of the lock-on mark 7010, as subject candidates.
 Next, when the shutter button is pressed, the subject position information acquisition unit identifies the subject closest to the image acquisition device as the target for acquiring subject position information.
 In the example shown in FIG. 9(a), the subject position information acquisition unit calculates the distance from the lens of the image acquisition device 1000 to the first subject 4030, to the second subject 4040, and to the third subject 4050. It then selects the first subject 4030, which has the smallest distance value (is closest), as the subject. Because matching against the subjects' three-dimensional images and the distance calculations are completed before shooting, shooting and termination processing can be performed quickly; with this method alone, however, the problem arises that it is not certain which subject was selected at the moment the shutter button was pressed.
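The distance comparison among the candidates overlapping the lock-on mark can be sketched as follows; the candidate structure and the names are illustrative, not the patent's data model:

```python
import math

def pick_subject(camera_pos, candidates):
    """From the candidates overlapping the lock-on mark, pick the closest.

    candidates maps an illustrative subject name to the 3-D point where
    the shooting axis (or mark region) meets that subject's surface.
    """
    return min(candidates,
               key=lambda name: math.dist(camera_pos, candidates[name]))

candidates = {
    "first_subject_4030": (0.0, 40.0, 10.0),
    "second_subject_4040": (5.0, 90.0, 20.0),
    "third_subject_4050": (-8.0, 150.0, 30.0),
}
print(pick_subject((0.0, 0.0, 0.0), candidates))
```

Here the first subject wins simply because its intersection point lies nearest the lens, mirroring the smallest-distance rule described above.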
 Like FIG. 9(a), FIG. 9(b) selects as candidates all subjects that partially overlap the lock-on mark 7010. The example shown in FIG. 9(b) differs, however, in that a wire frame image (an image based on subject electronic information) 4042 derived from the subject's three-dimensional image data is superimposed on the image of the currently selected subject (the subject whose position information will be acquired at the time of shooting). This allows the photographer to see at a glance which subject will be selected when the shutter button is pressed.
 In FIG. 9(b), the layout of the shooting region 1060 of the image acquisition device 1000, the shooting axis 1030, the lock-on mark 7010, the first subject 4030, and the second subject 4040 is the same as in FIG. 9(a), but the first subject 4030 is lower than the second subject 4040.
 When part or all of the lock-on mark 7010 overlaps both the first subject 4030 and the second subject 4040, matching against the three-dimensional image data of both is performed during framing. Afterward, a slight vertical shake of the image acquisition device 1000 at the time of shooting may change whether the first subject 4030 or the second subject 4040 is selected at the moment the shutter button is pressed. Therefore, for example, as shown in FIG. 9(b), to indicate that the second subject 4040, which is closest to the lens of the image acquisition device 1000 and partially or entirely overlaps the lock-on mark 7010, has been selected as the subject, the EVF/LCD superimposes the wire frame image 4042 of the second subject 4040 on the image of the selected subject.
 Displaying the wire frame image 4042 on the image of the selected subject tells the photographer that this subject has been selected far more clearly than merely having the lock-on mark on it, so the photographer can select and shoot the intended subject with greater certainty. In particular, as shown in FIG. 9(b), when the first subject 4030 and the second subject 4040 are close together and the shooting axis 1030 sits right on the boundary between their positions, it can be shown unmistakably that the subject the photographer wants is the one that was selected.
 FIG. 9(c) is a view, observed from above, of the state in which the wire frame image 4042 is superimposed on the image of the selected subject, as in FIG. 9(b).
 The example shown in FIG. 9(c) differs in that the subject position information of the captured subject is generated by the image acquisition device 1000 rather than taken from the subject position information stored in the database. This is effective when the subject is not a very large facility and the stored position information carries a large error, for example when one building is chosen from a subject composed of several buildings.
 Specifically, by calculating the centroid point 9000 of the two-dimensional figure obtained by projecting, as seen from above, the three-dimensional shape matched against the subject being photographed, and acquiring the calculated centroid point 9000 as the coordinates of the subject, subject position information can be generated that fits the photographed subject far better than the representative position of an attached facility or of a large subject.
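The centroid of the projected two-dimensional outline can be computed with the standard shoelace-based area centroid; the patent does not fix a particular formula, so this is one plausible sketch:

```python
def plan_view_centroid(polygon):
    """Centroid of a 2-D polygon (the building footprint seen from above).

    Standard shoelace-based area centroid over ordered (x, y) vertices;
    an illustrative way to compute centroid point 9000.
    """
    a = cx = cy = 0.0
    n = len(polygon)
    for i in range(n):
        x0, y0 = polygon[i]
        x1, y1 = polygon[(i + 1) % n]
        cross = x0 * y1 - x1 * y0  # twice the signed area of this edge triangle
        a += cross
        cx += (x0 + x1) * cross
        cy += (y0 + y1) * cross
    a *= 0.5
    return (cx / (6 * a), cy / (6 * a))

# A 4 m x 2 m rectangular footprint: centroid at its middle, (2.0, 1.0).
print(plan_view_centroid([(0, 0), (4, 0), (4, 2), (0, 2)]))
```

Unlike the simple average of the vertices, the area centroid remains correct for L-shaped or otherwise irregular footprints such as buildings with extensions.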
 The lock-on mark 7010 has so far been described as circular, but for calculating the positional relationship with the three-dimensional data, a rectangular or square shape makes the calculation simpler, and a rectangular mark rarely has the adverse effect of making shooting harder for the photographer. Also, when selecting subjects, it is not necessary to pick every subject within the range of the lock-on mark 7010 as a candidate; although it depends on the focal length of the lens, limiting the distance to the farthest subject picked as a candidate, according to whether the subject being photographed is an ordinary house, a high-rise building, or a mountain or lake, eliminates the need for extra extraction calculations.
 Furthermore, for the subjects' three-dimensional image data, the associated subject position information and subject names, and the map information that brings them together, there are several possible arrangements. Besides downloading the data to the image acquisition device 1000 each time it is obtained from the database and performing the subject selection calculations on the image acquisition device 1000, the framing image to be shot may be uploaded to a computer (image file generation device) having a subject selection engine (not shown) so that all computationally heavy processing such as subject selection is performed by the external computer; conversely, the image acquisition device 1000 may store all of the above data, including the three-dimensional image data, in its built-in memory and perform all processing within the image acquisition device 1000.
 After shooting is finished, the captured image data may be stored in association with the current position information of the image acquisition device 1000, the azimuth angle measured by the geomagnetic sensor, and the elevation angle acquired by the G sensor, which together specify the vector of the shooting axis. Alternatively, the captured image data, together with the current position information of the image acquisition device 1000, the azimuth angle measured by the geomagnetic sensor, and the elevation angle acquired by the G sensor for specifying the vector of the shooting axis, may be transmitted in association from the image acquisition device 1000 to a computer (image file generation device). Also, for example, after the image data is copied to a home computer via external memory (a memory card), subject selection and addition of that subject's position information may be performed as post-processing.
 Like the image acquisition device 1000, the image file generation device has a subject position information acquisition unit, a matching processing unit, and an image file generation unit. However, the image file generation device does not have components such as a shutter button, a sensor (camera sensor), a GPS unit, a G sensor, or a geomagnetic sensor.
 The current position information of the image acquisition device that photographed the subject, the elevation angle of the image acquisition device, and the azimuth angle of the image acquisition device are input to the image file generation device via the external memory. The subject position information acquisition unit of the image file generation device then calculates the shooting region and the shooting axis based on the input current position information, elevation angle, and azimuth angle of the image acquisition device.
 The subject position information acquisition unit of the image file generation device identifies the subject by collating the shooting region of the image acquisition device, which it calculated itself, against the three-dimensional image data stored in the database, which consists of the coordinates specifying the position in three-dimensional space of each point making up a subject, and acquires the subject position information of the identified subject.
 The image file generation unit of the image file generation device then generates an image file including the current position information and the subject position information.
 Here, an appropriate subject size is determined to some extent by the focal length of the lens; if the subject is much too small or much too large relative to the whole EVF/LCD, it becomes difficult to show the subject within the EVF/LCD. It is therefore desirable to set the range within which the subject position information acquisition unit 220 of the image acquisition device 1000 acquires subject position coordinates according to the size of the subject. Depending on the subject, subject position information may not be stored in the database, or the stored subject position information may be inaccurate, so it is also desirable to vary the method of determining the subject position coordinates according to the size of the subject. Accordingly, the subject position information acquisition unit varies, according to the type of subject electronic information, the distance from the image acquisition device to the subject within which subject position information is acquired. The following description assumes that a standard lens (such as a 50 mm lens on 35 mm full frame) is used.
 As shown in FIG. 10, for relatively small subjects such as ordinary houses and condominiums, the subject position information acquisition unit acquires only the subject position coordinates of subjects within 100 m of the current position of the image acquisition device 1000. In particular, for an ordinary house, subject position information corresponding to the three-dimensional image may not be stored in the database, and even when it is, the building shape may have become complicated through extensions so that the original subject position information is no longer useful. For relatively small subjects such as ordinary houses and condominiums, therefore, the subject position information acquisition unit acquires as subject position information the coordinates near the centroid of the two-dimensional plan view of the building (ordinary house, condominium, or the like) seen from above.
 For subjects such as medium-sized office buildings, the subject position information acquisition unit acquires only the subject position coordinates of subjects within 500 m of the current position of the image acquisition device 1000. Besides the coordinates near the centroid, the subject position information acquisition unit acquires the subject position information stored in the database.
 For subjects such as high-rise buildings and towers, the subject position information acquisition unit acquires only the subject position coordinates of subjects within 5 km of the current position of the image acquisition device 1000. In addition to the subject position information stored in the database, the subject position information acquisition unit may acquire as the subject coordinates the intersection (lock-on position) between the shooting axis and the camera-facing surface of the three-dimensional image data.
 Subjects such as mountains, lakes, and islands can be viewed from even farther away, so the subject position information acquisition unit acquires only the subject position coordinates of subjects within 20 km of the current position of the image acquisition device 1000. The subject position information acquisition unit may acquire, for example, the summit for a mountain, coordinates near the centroid for a lake, and, for an island, the center of the island, the lock-on position, a position near the centroid, or the subject position information stored in the database. When the subject is a mountain or an island, its shape is not a rectangular parallelepiped, so the subject may instead be identified by using as-is the many small surface elements from which the mountain or island shape is generated and calculating the intersection coordinates between each surface and the shooting axis.
 Because the acquisition distances and position information setting locations shown in FIG. 10 vary with the focal length of the lens, they can be changed as appropriate based on customization data set individually by the photographer.
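The per-category limits of FIG. 10 lend themselves to a data-driven sketch, which also makes the photographer's customization a matter of replacing entries; the keys and values below merely restate the figure's rules and are not a normative format:

```python
# Acquisition rules of FIG. 10 as data; distances in metres, keys illustrative.
ACQUISITION_RULES = {
    "house_or_condominium":    {"max_distance_m": 100,    "position": "footprint centroid"},
    "medium_office_building":  {"max_distance_m": 500,    "position": "centroid or database"},
    "high_rise_or_tower":      {"max_distance_m": 5_000,  "position": "database or lock-on point"},
    "mountain_lake_island":    {"max_distance_m": 20_000, "position": "summit/centroid/database"},
}

def within_acquisition_range(kind, distance_m, rules=ACQUISITION_RULES):
    """True if a subject of this kind is close enough for position acquisition."""
    return distance_m <= rules[kind]["max_distance_m"]

print(within_acquisition_range("house_or_condominium", 80))   # an 80 m house qualifies
print(within_acquisition_range("high_rise_or_tower", 7_500))  # a 7.5 km tower does not
```

Customization then amounts to overwriting `max_distance_m` per category (for example, widening the house limit when a telephoto lens is mounted).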
 When the captured image data is composed of digital information, the acquired current position information and subject position information may be added to and stored directly within the captured image data, or stored as separate data associated with the captured image data (on either the same device or another device); even when shooting is done on film, they may be stored as digital data that can be associated with that film.
 In the case of a film camera that shoots on film as well, a sensor (camera sensor) that captures an electronic image of the subject must be mounted for collation with the three-dimensional image data. The subject is identified from the sensor image and the three-dimensional image data, and the subject position information is calculated. The calculated subject position information is then stored in an electronic information storage medium such as a memory separate from the film, for example on the film case. Besides using the same optical system as the one that exposes the film, the sensor may have its own independent optical system; in that case too, the operation shown in the present invention can be performed if the sensor is installed so that its shooting axis is parallel to that of the optical system exposing the film. In particular, a portable information communication terminal such as a camera-equipped mobile phone, which carries a camera, can capture and store photographed images, and has the functions and capability to reference and upload external data over communication links and to perform image processing within the terminal itself, is one of the devices well suited to implementing the present invention.
<Overall processing>
 FIG. 11 is a diagram showing an overview of the overall processing of the image acquisition device 1000 according to the first embodiment. The overall processing starts when the image acquisition device 1000 begins shooting the captured image 2000.
 First, in S1101, the GPS unit 3050 acquires the current position information of the image acquisition device 1000.
 Next, in S1102, the G sensor 3060 measures the elevation angle of the image acquisition device 1000, and the geomagnetic sensor 3070 measures its azimuth angle.
 Next, in S1103, the image data acquisition unit 210 identifies the shooting region 1060 based on the current position information acquired in S1101 and the elevation angle and azimuth angle measured in S1102.
 Next, in S1104, the subject position information acquisition unit 220 requests from the database 1010 all subject position information and three-dimensional image data contained in the shooting region 1060 identified in S1103. The subject position information acquisition unit 220 thereby acquires the subject position information and three-dimensional image data transmitted from the database 1010.
 Next, in S1105, if the shutter button 3010 is not pressed (S1105-No), the process returns to S1101; if it is pressed (S1105-Yes), the process proceeds to S1106.
 Next, in S1106, the image data acquisition unit 210 acquires captured image data for rendering the image in the imaging region 1060 and generates a thumbnail image 2040 from it. The captured image data is also compressed by the encoder/decoder 3040 to generate compressed image data 2050.
 Next, in S1107, the subject position information acquisition unit 220 calculates the shooting axis 1030 connecting the center of the lens of the image acquisition apparatus 1000 and the center of the imaging region 1060 identified in S1103. Specifically, it calculates the shooting axis 1030 from the current position information of the image acquisition apparatus 1000 acquired by the GPS unit in S1101, the azimuth angle measured by the geomagnetic sensor in S1102, and the elevation angle 6050 measured by the G sensor in S1102.
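The calculation in S1107 amounts to converting the measured azimuth and elevation angles into a ray direction anchored at the camera position. A minimal sketch follows; the local east-north-up (ENU) coordinate frame and the function name are illustrative assumptions, not part of the specification:

```python
import math

def shooting_axis(camera_pos, azimuth_deg, elevation_deg):
    """Return (origin, unit direction) of the shooting axis in local
    east-north-up (ENU) coordinates.

    camera_pos    : (east, north, up) of the lens center
    azimuth_deg   : compass bearing, clockwise from north
    elevation_deg : angle above the horizon (negative = depression)
    """
    az = math.radians(azimuth_deg)
    el = math.radians(elevation_deg)
    direction = (
        math.cos(el) * math.sin(az),  # east component
        math.cos(el) * math.cos(az),  # north component
        math.sin(el),                 # up component
    )
    return camera_pos, direction

# A point on the axis at distance t is origin + t * direction.
origin, d = shooting_axis((0.0, 0.0, 1.5), azimuth_deg=90.0, elevation_deg=0.0)
```

Any point along the axis can then be tested against the three-dimensional image data acquired in S1104.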
 Next, in S1108, the subject position information acquisition unit 220 identifies a subject that intersects the shooting axis 1030. Specifically, from the pieces of three-dimensional image data acquired in S1104, it identifies the piece that intersects the shooting axis 1030 calculated in S1107. For example, when any coordinate constituting the shooting axis 1030 matches any coordinate constituting a piece of three-dimensional image data, the subject position information acquisition unit 220 identifies the three-dimensional image data containing that coordinate.
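The coordinate-match test of S1108 can be made robust in practice by intersecting the ray with a bounding volume of each piece of three-dimensional image data. The sketch below uses an axis-aligned bounding box and the standard slab method; the AABB representation is an assumption for illustration, since the specification only requires detecting an intersection:

```python
def ray_intersects_aabb(origin, direction, box_min, box_max):
    """Slab test: does the ray origin + t*direction (t >= 0) hit the
    axis-aligned box [box_min, box_max]?"""
    t_near, t_far = 0.0, float("inf")
    for o, d, lo, hi in zip(origin, direction, box_min, box_max):
        if abs(d) < 1e-12:
            # Ray parallel to this slab: miss if origin lies outside it.
            if o < lo or o > hi:
                return False
            continue
        t1, t2 = (lo - o) / d, (hi - o) / d
        if t1 > t2:
            t1, t2 = t2, t1
        t_near, t_far = max(t_near, t1), min(t_far, t2)
        if t_near > t_far:
            return False
    return True
```

Running this test over every candidate returned in S1104 yields the subjects crossed by the shooting axis 1030.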
 Next, in S1109, the subject position information acquisition unit 220 extracts, from the subject position information acquired in S1104, the subject position information corresponding to the three-dimensional image data identified in S1108.
 Next, in S1110, the matching processing unit 230 converts the three-dimensional image data identified in S1108 into two-dimensional image data.
 Next, in S1111, the matching processing unit 230 extracts the image data of the subject to be photographed from the captured image data acquired in S1106. For example, it extracts the image data of a subject imaged within a predetermined range from the center of the imaging region 1060.
 Next, in S1112, the matching processing unit 230 determines whether the image based on the two-dimensional image data converted in S1110 matches or approximates the image based on the image data extracted in S1111. If it determines that they do not match or approximate (S1112-No), the process proceeds to S1116; if it determines that they do (S1112-Yes), the process proceeds to S1113.
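The "matches or approximates" decision of S1112 can be sketched as a similarity score with a threshold. The specification does not name a metric, so the zero-mean normalized cross-correlation and the 0.8 threshold below are assumptions for illustration:

```python
def normalized_correlation(a, b):
    """Zero-mean normalized cross-correlation of two equal-size grayscale
    images given as flat lists of pixel values.
    Returns a value in [-1, 1]; 1.0 means identical up to brightness/contrast."""
    n = len(a)
    mean_a, mean_b = sum(a) / n, sum(b) / n
    da = [x - mean_a for x in a]
    db = [x - mean_b for x in b]
    num = sum(x * y for x, y in zip(da, db))
    den = (sum(x * x for x in da) * sum(y * y for y in db)) ** 0.5
    return num / den if den else 0.0

def images_match(rendered, captured, threshold=0.8):
    # S1112: treat the pair as "matching or approximating" above a threshold.
    return normalized_correlation(rendered, captured) >= threshold
```

Here `rendered` stands for the two-dimensional projection from S1110 and `captured` for the subject image extracted in S1111.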
 Next, in S1113, the image file generation unit 240 generates shooting information B2030, which includes the current position information acquired in S1101 and the subject position information extracted in S1109, together with shooting information A2020.
 Next, in S1114, the image file generation unit 240 generates an image file 2010 composed of the shooting information A2020 and shooting information B2030 generated in S1113, the thumbnail image 2040 generated in S1106, and the compressed image data 2050.
 Next, in S1115, the image file generation unit 240 stores the image file 2010 generated in S1114 in the external memory 3141 and ends the overall process.
 If the result of S1112 is No, then in S1116 the image file generation unit 240 generates shooting information B2030, which includes the current position information acquired in S1101, together with shooting information A2020. In this case, the subject position information field of the shooting information B2030 stores information indicating that no subject position information was acquired. Alternatively, the image file generation unit 240 may generate shooting information B2030 that does not include subject position information.
 Next, in S1117, the image file generation unit 240 generates an image file 2010 composed of the shooting information A2020 and shooting information B2030 generated in S1116, the thumbnail image 2040 generated in S1106, and the compressed image data 2050.
 Next, in S1118, the image file generation unit 240 stores the image file 2010 generated in S1117 in the external memory 3141 and ends the overall process.
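The assembly of the image file 2010 in S1113/S1114 (and its fallback in S1116/S1117) can be sketched as a small container type. The field layout below is an assumed in-memory representation, not the on-media format of the specification; `None` plays the role of the "not acquired" marker:

```python
from dataclasses import dataclass
from typing import Optional, Tuple

LatLon = Tuple[float, float]  # (latitude, longitude) in degrees

@dataclass
class ShootingInfoB:
    current_position: LatLon            # device position from S1101
    subject_position: Optional[LatLon]  # None = "not acquired" (S1116 case)

@dataclass
class ImageFile2010:
    shooting_info_a: dict               # camera settings, timestamps, ...
    shooting_info_b: ShootingInfoB
    thumbnail: bytes                    # thumbnail image 2040
    compressed_image: bytes             # compressed image data 2050

def generate_image_file(current_pos, subject_pos, info_a, thumbnail, jpeg):
    """Covers both branches: pass subject_pos=None when the match in
    S1112 failed, mirroring S1116/S1117."""
    return ImageFile2010(info_a, ShootingInfoB(current_pos, subject_pos),
                         thumbnail, jpeg)
```

A real implementation would serialize these fields into the file stored in the external memory 3141; this sketch only shows which pieces travel together.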
 In S1108, the subject position information acquisition unit 220 may alternatively pattern-match the two-dimensional image data converted from the three-dimensional image data against the image in the imaging region 1060 to extract a matching or approximating image, and identify that image as the subject when it intersects the shooting axis 1030. This makes it possible, when there are many subjects whose shapes as seen from the camera are clearly defined, such as small houses, condominiums, and office buildings, to perform the calculation using only the shapes as seen from the camera, which reduces the amount of computation.
<Effect of Embodiment 1>
 According to the first embodiment described above, the subject position information of the subject can be added to the image file together with the current position information of the image acquisition apparatus at the time the subject is photographed.
(Embodiment 2)
 In Embodiment 1, the subject position information of the subject displayed near the center of the finder image when the shutter button is pressed, together with the captured image data, is acquired and stored in the external memory. In Embodiment 2, by contrast, the subject position information of the subject displayed near the center of the finder image when the shutter button is half-pressed, and the captured image data taken when the shutter button is fully pressed, are acquired and stored in the external memory. This makes it possible to acquire the subject position information of a subject that is not at the center of the finder image 7000 when the shutter button 3010 is pressed. Embodiment 2 of the present invention is described below with reference to FIGS. 12(a) to 12(c), focusing mainly on the differences from Embodiment 1.
 FIGS. 12(a) to 12(c) show how to select a subject when, as the photographer frames the shot (specifies the shooting range) with the viewfinder, the subject to be selected is not near the center of the viewfinder where the lock-on mark is located. In the example of FIGS. 12(a) to 12(c), assume that the subject the photographer wants to shoot is the first subject 4030. The finder image 7000 framed at this time is the one shown in FIG. 12(a): relative to the state shown in FIG. 12(c), the camera has been manually panned to the left. By this framing, the lock-on mark 7010 is superimposed on the first subject 4030 selected by the photographer.
 Next, in FIG. 12(b), the shutter button 3010 is half-pressed. In general, half-pressing the shutter button 3010 performs exposure metering and AF, and is the operation used to lock the exposure amount and focus position of the image acquisition apparatus 1000 in that state; here it may serve to set the exposure and AF for the first subject 4030. Simultaneously with this operation, the first subject 4030 is selected as the target whose position information is to be acquired.
 FIG. 12(c) shows the state in which the image acquisition apparatus 1000, once panned to the left for the lock-on, has been manually returned and the scene to be photographed is framed again. As shown in FIG. 12(c), the finder image 7000 contains the first subject 4030 and the second subject 4040, with the first subject 4030 selected in the state of FIG. 12(a) now displayed on the left side of the screen.
 Here, in the state shown in FIG. 12(a), the first subject 4030 has already been selected as the subject by half-pressing the shutter button 3010. Therefore, when a mark equivalent to the lock-on mark 7010 can be superimposed in the finder image 7000, as with an EVF (Electronic View Finder), a subject tracking mark 7020 is displayed as shown in FIG. 12(c) and moves within the finder image 7000 while tracking the first subject 4030.
 Because the subject tracking mark 7020 tracks the first subject 4030, the photographer can see that the first subject 4030 is selected as the subject, and can therefore change the orientation of the image acquisition apparatus 1000 with confidence and shoot the intended finder image 7000; this allows shooting with a selected subject that is not at the center of the finder image 7000 when the shutter button 3010 is pressed. The subject position information of the subject selected when the shutter button 3010 was half-pressed, the image captured when the shutter button 3010 was fully pressed, and the current position information of the image acquisition apparatus 1000 are then associated with one another and stored in the external memory.
<Effect of Embodiment 2>
 According to the second embodiment described above, it is possible to shoot with a selected subject that is not at the center of the finder image 7000 when the shutter button 3010 is pressed.
(Embodiment 3)
 FIGS. 13(a) to 13(c) outline a configuration example of a portable information terminal (for example, a camera-equipped mobile phone or a camera-equipped tablet terminal), which is the image acquisition apparatus according to the third embodiment. FIG. 13(a) shows the portable information terminal 1200 viewed from the side, and FIGS. 13(b) and 13(c) show it viewed from the front.
 As shown in FIG. 13(a), the portable information terminal 1200 has a display 1230 provided on its front side and used as a finder, as on an ordinary camera; a rear camera 1210 provided on the side opposite the display 1230; and a front camera 1220 provided on the front side of the portable information terminal 1200.
 The front camera 1220 shoots toward the display 1230 side, capturing, for example, the photographer and the photographer's background. The front camera 1220 is also used when the portable information terminal serves as a videophone.
 FIG. 13(b) shows a state in which a subject is being photographed with the rear camera 1210. As shown in FIG. 13(b), the display 1230 shows the subject captured by the rear camera 1210 as a real-time moving image and superimposes a lock-on mark 1240 on the subject.
 When the shutter button 1260 is pressed, the portable information terminal 1200 acquires the subject position coordinates of the subject at the position of the lock-on mark 1240. When the lock-on button 1250 is pressed instead, the portable information terminal 1200 recognizes the subject at the location of the lock-on mark 1240 at that moment as the subject to be photographed, keeps the lock-on mark 1240 displayed on that subject, and continues the real-time display on the monitor screen. When the shutter button 1260 is subsequently pressed, the portable information terminal 1200 acquires the image of the subject together with the subject position information of the subject indicated by the lock-on mark 1240.
 FIG. 13(c) shows an example in which the photographer has been shot with the front camera 1220 and the result is displayed on the display 1230 of the portable information terminal 1200. The display 1230 shows the captured image with the subject 1270 together with the photographer; in this example, an additional information display area 1280 is superimposed on that screen.
 The additional information display area 1280 shows the subject name of the subject 1270 and the subject position information of the subject as latitude and longitude. Because the front camera 1220 is used to shoot such an image, the real-time display on the monitor screen during shooting is more convenient when shown left-right reversed as a mirror image.
 Since the image captured from the front camera 1220 is then displayed left-right reversed on the display 1230, the three-dimensional image data stored in the database is either collated with the captured image data before the reversal, or, when it is to be collated with the subject image after the left-right reversal for display, the three-dimensional image data is itself left-right reversed before being collated with the subject image.
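The two collation options above differ only in which side of the comparison is flipped. A minimal sketch, where the image is assumed to be a row-major grid of grayscale values and `collate` is a hypothetical stand-in for the pattern matching of Embodiment 1:

```python
def flip_horizontal(image):
    """Left-right reverse a grayscale image given as a list of rows,
    matching the mirror display used with the front camera."""
    return [list(reversed(row)) for row in image]

# Two equivalent ways to collate against the database projection:
#   collate(db_projection, captured_before_flip)            # use pre-flip image
#   collate(flip_horizontal(db_projection), mirrored_view)  # flip the model side
```

Flipping the projected model once per query avoids keeping a second, un-mirrored copy of the captured frame.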
 Because a portable information terminal inherently has a communication function, it can easily handle any communication-based approach, including downloading the three-dimensional image data from a database on the network and collating it each time a shot is taken, or uploading the captured image and performing the collation on the server that contains the database.
<Effect of Embodiment 3>
 According to the third embodiment described above, subject position information can be acquired even when the captured image data is a left-right reversed mirror image.
(Embodiment 4)
 FIG. 14(a) schematically outlines a configuration example of a single-lens reflex camera 1400, which is the image acquisition apparatus according to the fourth embodiment, and FIG. 14(b) schematically outlines a configuration example of an optical finder camera 1500, an image acquisition apparatus whose optical finder is provided separately from the taking lens.
 As shown in FIG. 14(a), light from the subject entering along the photographing optical axis 1410 passes through the lens, is reflected 90 degrees upward by the mirror 1430, and is focused on the focusing screen 1420. The subject image formed on the focusing screen 1420 is reflected repeatedly by the pentaprism 1460 and then guided to the finder 1440, so that the photographer can view the optical image of the subject directly.
 During exposure, the mirror 1430 swings up to the vicinity of the focusing screen 1420 so that it does not block the light passing along the photographing optical axis 1410, which is focused on the fourth camera sensor 1454; the focal plane shutter 1470 in front of the sensor opens and closes to expose at the shutter speed set for the shot. Although a camera sensor is shown in this figure, a camera using film instead of a camera sensor may also be applied.
 With the mirror 1430 down and the photographer viewing the optical image directly, the subject light is not focused on the fourth camera sensor 1454 as it is in an ordinary digital camera, so the shape-matching of the subject against the three-dimensional image data according to the present invention cannot be performed using the fourth camera sensor 1454. Possible methods here include: making part of the mirror 1430 a half mirror that transmits light, and using the sub mirror 1431 to form an image on the fourth camera sensor 1454 at the floor of the camera body to acquire the subject's shape; providing a mirror in the finder 1440 and forming the image on the fourth camera sensor 1454 on the finder; or, entirely separately from the photographing optical system, installing the third camera sensor 1453 parallel to the photographing optical axis 1410 and acquiring the subject's shape from that sensor.
 FIG. 14(b) shows the case in which the image acquisition apparatus is an optical finder camera 1500 whose optical finder is provided separately from the taking lens. As in FIG. 14(a), the focal plane shutter 1520 in front of the first camera sensor 1510 opens and closes to expose at the shutter speed set for the shot. Although a camera sensor is shown in this figure, a camera using film instead of a camera sensor may be used. Here, a mirror is provided in the optical finder, as in FIG. 14(a), and the light is focused on the second camera sensor 1530 on the finder.
<Effect of Embodiment 4>
 According to the fourth embodiment described above, even a camera whose structure does not allow the subject image to always be acquired by an electronic camera sensor can acquire the shape of the subject, collate it with the three-dimensional image data, and acquire the subject position information.
 The invention made by the present inventors has been described above in detail based on the embodiments, but the present invention is not limited to those embodiments, and it goes without saying that various modifications are possible without departing from the gist of the invention. For example, part of the configuration of one embodiment may be replaced with the configuration of another embodiment, and the configuration of another embodiment may be added to the configuration of a given embodiment. All of these belong to the scope of the present invention.
 The present invention is also applicable to the above-described film camera, and even to a device without a function of storing captured images, such as a telescope, provided the device has functions for acquiring both the position information of the device and the subject position information and for storing this position information. In particular, a portable information communication terminal such as a camera-equipped mobile phone, which carries a camera, can capture and store images, can reference and upload external data through communication, and can perform image processing within the main unit, is one of the devices suited to implementing the present invention.
210: Image data acquisition unit,
220: Subject position information acquisition unit,
230: Matching processing unit,
240: Image file generation unit,
1000: Image acquisition device,
1010: Database,
1030: Shooting axis,
1400: Single-lens reflex camera,
1500: Optical finder camera,
2010: Image file,
2030: Shooting information B,
7010: Lock-on mark.

Claims (11)

  1.  An image acquisition device that acquires current position information of the image acquisition device, comprising:
     a subject position information acquisition unit that identifies a subject by collating an imaging region photographed by the image acquisition device with subject electronic information including coordinates constituting the subject, and acquires subject position information of the identified subject; and
     an image file generation unit that generates an image file including the current position information and the subject position information acquired by the subject position information acquisition unit.
  2.  The image acquisition device according to claim 1, wherein
     the subject position information acquisition unit acquires the subject position information from an external database via wireless communication.
  3.  The image acquisition device according to claim 1, wherein
     the subject position information acquisition unit acquires the subject position information corresponding to the subject electronic information that matches or approximates image data of a subject included in the imaging region.
  4.  The image acquisition device according to claim 1, wherein
     the subject position information acquisition unit identifies, as the target for which the subject position information is acquired, a subject that intersects a shooting axis connecting the center of the lens of the image acquisition device and the center of the imaging region.
  5.  The image acquisition device according to claim 1, wherein
     the subject position information acquisition unit identifies, as the target for which the subject position information is acquired, a subject that overlaps within a pre-specified range centered on a shooting axis connecting the center of the lens of the image acquisition device and the center of the imaging region.
  6.  The image acquisition device according to claim 5, wherein
     the subject position information acquisition unit identifies a subject at a short distance from the image acquisition device as the target for which subject position information is acquired.
  7.  The image acquisition device according to claim 1, further comprising
     a finder for observing a subject, wherein
     the finder displays an image based on the subject electronic information superimposed on the image of the subject whose subject position information is to be acquired at the time of shooting.
  8.  The image acquisition device according to claim 4, wherein
     the subject position information acquisition unit changes, according to the type of the subject electronic information, the distance from the image acquisition device to the subject for which the subject position information is acquired.
  9.  The image acquisition device according to claim 2, wherein
     the subject position information acquisition unit acquires, as the subject position information and according to the type of the subject electronic information: position information attached to the subject electronic information; or the subject position information corresponding to the subject electronic information that matches or approximates image data of a subject included in the imaging region; or the subject position information of a subject that overlaps within a pre-specified range centered on a shooting axis connecting the center of the lens of the image acquisition device and the center of the imaging region.
  10.  An image file generation method in an image acquisition device that acquires current position information of the image acquisition device, comprising:
     a subject position information acquisition step in which a subject position information acquisition unit identifies a subject by collating an imaging region photographed by the image acquisition device with subject electronic information, stored in a database, including coordinates constituting the subject, and acquires subject position information of the identified subject; and
     an image file generation step in which an image file generation unit generates an image file including the current position information and the subject position information acquired in the subject position information acquisition step.
  11.  An image file generation program for causing a computer of an image file generation device having a subject position information acquisition unit and an image file generation unit to execute:
    a subject position information acquisition step in which the subject position information acquisition unit identifies a subject by collating a shooting region shot by an image acquisition device with subject electronic information, stored in a database, that includes the coordinates constituting the subject, and acquires the subject position information of the identified subject; and
    an image file generation step in which the image file generation unit generates an image file including current position information of the image acquisition device and the subject position information acquired in the subject position information acquisition step.
PCT/JP2015/060106 2015-03-31 2015-03-31 Image acquisition device, image file generation method, and image file generation program WO2016157406A1 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
PCT/JP2015/060106 WO2016157406A1 (en) 2015-03-31 2015-03-31 Image acquisition device, image file generation method, and image file generation program


Publications (1)

Publication Number Publication Date
WO2016157406A1 true WO2016157406A1 (en) 2016-10-06

Family

ID=57005317

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/JP2015/060106 WO2016157406A1 (en) 2015-03-31 2015-03-31 Image acquisition device, image file generation method, and image file generation program

Country Status (1)

Country Link
WO (1) WO2016157406A1 (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2019149791A (en) * 2018-11-08 2019-09-05 京セラ株式会社 Electronic device, control method, and program

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JPH1186035A (en) * 1997-07-11 1999-03-30 Nippon Telegr & Teleph Corp <Ntt> Distance reference type scenery labeling device and system therefor
JP2002344789A (en) * 2001-05-16 2002-11-29 Fuji Photo Film Co Ltd Imaging device and position information detection system
JP2009017540A (en) * 2007-05-31 2009-01-22 Panasonic Corp Image capturing device, additional information providing server, and additional information filtering system
JP4488233B2 (en) * 2003-04-21 2010-06-23 日本電気株式会社 Video object recognition device, video object recognition method, and video object recognition program
JP2012244562A (en) * 2011-05-24 2012-12-10 Nikon Corp Digital camera
JP2014224861A (en) * 2013-05-15 2014-12-04 オリンパスイメージング株式会社 Display device and imaging device


Similar Documents

Publication Publication Date Title
CN109064545B (en) Method and device for data acquisition and model generation of house
WO2017088678A1 (en) Long-exposure panoramic image shooting apparatus and method
WO2017221659A1 (en) Image capturing device, display device, and image capturing and displaying system
KR101720190B1 (en) Digital photographing apparatus and control method thereof
CN110022444B (en) Panoramic photographing method for unmanned aerial vehicle and unmanned aerial vehicle using panoramic photographing method
JP4396500B2 (en) Imaging apparatus, image orientation adjustment method, and program
WO2013069048A1 (en) Image generating device and image generating method
US8339477B2 (en) Digital camera capable of detecting name of captured landmark and method thereof
WO2015192547A1 (en) Method for taking three-dimensional picture based on mobile terminal, and mobile terminal
CN106791483B (en) Image transmission method and device and electronic equipment
CN104243800A (en) Control device and storage medium
JPWO2014141522A1 (en) Image determination apparatus, imaging apparatus, three-dimensional measurement apparatus, image determination method, and program
KR20120012201A (en) Method for photographing panorama picture
JP5750696B2 (en) Display device and display program
KR100943548B1 (en) Method and apparatus for pose guide of photographing device
JP2011058854A (en) Portable terminal
JP2019110434A (en) Image processing apparatus, image processing system, and program
JP6741498B2 (en) Imaging device, display device, and imaging display system
CN104169795B (en) Image display device, the camera and the method for displaying image that carry this image display device as viewfinder
JP5248951B2 (en) CAMERA DEVICE, IMAGE SHOOTING SUPPORT DEVICE, IMAGE SHOOTING SUPPORT METHOD, AND IMAGE SHOOTING SUPPORT PROGRAM
JP7306089B2 (en) Image processing system, imaging system, image processing device, imaging device, and program
WO2016157406A1 (en) Image acquisition device, image file generation method, and image file generation program
JP2009111827A (en) Photographing apparatus and image file providing system
JP2009060338A (en) Display device and electronic camera
JP2004088607A (en) Imaging apparatus, imaging method and program

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 15887559

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 15887559

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: JP