WO2017149852A1 - Information processing device, information processing method, and program


Info

Publication number: WO2017149852A1
Authority: WIPO (PCT)
Prior art keywords: distance, information, unit, imaging, derivation
Application number: PCT/JP2016/083994
Other languages: French (fr), Japanese (ja)
Inventor: 増田 智紀
Original Assignee: 富士フイルム株式会社 (FUJIFILM Corporation)
Application filed by: 富士フイルム株式会社 (FUJIFILM Corporation)
Publication: WO2017149852A1 (en)

Classifications

    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01S RADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
    • G01S 17/00 Systems using the reflection or reradiation of electromagnetic waves other than radio waves, e.g. lidar systems
    • G01S 17/02 Systems using the reflection of electromagnetic waves other than radio waves
    • G01S 17/06 Systems determining position data of a target
    • G01S 17/08 Systems determining position data of a target for measuring distance only
    • G PHYSICS
    • G02 OPTICS
    • G02B OPTICAL ELEMENTS, SYSTEMS OR APPARATUS
    • G02B 7/00 Mountings, adjusting means, or light-tight connections, for optical elements
    • G02B 7/28 Systems for automatic generation of focusing signals
    • G02B 7/40 Systems for automatic generation of focusing signals using time delay of the reflected waves, e.g. of ultrasonic waves

Definitions

  • The technology of the present disclosure relates to an information processing device, an information processing method, and a program.
  • Japanese Patent Application Laid-Open No. 2004-157386 discloses a technique for measuring the distance to a subject based on an image signal representing the subject.
  • International Publication No. 2008/155961 discloses a technique for measuring the distance to a subject based on a pair of captured images obtained by imaging with a pair of imaging units mounted a predetermined interval apart from each other.
  • Japanese Patent Application Laid-Open No. 2004-264827 discloses a technique for deriving the focal length of an imaging unit based on image data.
  • A technique is also known in which a distance measuring device receives an input of the length of a reference image included in a captured image and corrects the focal length of the imaging unit based on the input length.
  • With such a technique, however, setting the focal length with high accuracy takes time and effort, since the user must input the length of the reference image. The technology of the present disclosure makes it possible to set the focal length with high accuracy without this time and effort.
  • An information processing device according to a first aspect of the present invention comprises: an acquisition unit that acquires a measured distance, the measured distance being a distance measured by a measurement unit of a distance measuring device that includes an imaging unit having a focus lens and imaging a subject, and a measurement unit that measures the distance to the subject by emitting directional light toward the subject and receiving reflected light of the directional light; and a derivation unit that derives the focal length corresponding to the measured distance acquired by the acquisition unit, using correspondence information indicating a correspondence between measured distances and focal lengths of the focus lens.
  • Therefore, the information processing device according to the first aspect can derive the focal length with high accuracy and with less time and effort than in the case where the user inputs the length of a reference image included in the captured image in order to improve focal length accuracy.
  • In an information processing device according to a second aspect of the present invention, the correspondence information includes a correction value corresponding to the measured distance, and the derivation unit derives the focal length corresponding to the measured distance by correcting the measured distance with the correction value.
  • Therefore, the information processing device according to the second aspect can derive the focal length with high accuracy and with less time and effort than in the case where the user inputs the length of a reference image included in the captured image in order to improve focal length accuracy.
  • An information processing device according to a third aspect of the present invention is the information processing device according to the first aspect, wherein the imaging unit further includes a zoom lens, and the correspondence information indicates a correspondence among the measured distance, the position of the zoom lens in the optical axis direction in the imaging unit, and the focal length in the imaging unit. The acquisition unit further acquires position information indicating the position of the zoom lens in the optical axis direction in the imaging unit, and the derivation unit derives, using the correspondence information, the focal length corresponding to the measured distance and the position information acquired by the acquisition unit.
  • Therefore, even if the position of the zoom lens in the optical axis direction changes, the focal length can be derived with high accuracy and with less time and effort than in the case where the user inputs the length of a reference image included in the captured image in order to improve focal length accuracy.
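To make the multi-key correspondence concrete, here is a minimal Python sketch of a table keyed by measured distance and zoom lens position, resolved by nearest-grid lookup. All grid values, names, and the lookup policy are illustrative assumptions, not taken from the publication; the later aspects add temperature and lens attitude as further keys in the same manner.

```python
# Illustrative sketch only: a two-key correspondence table as described in the
# third aspect, keyed by measured distance and zoom lens position.
# All grid values below are hypothetical, not taken from the publication.

distances_m = [1.0, 2.0, 3.0, 5.0, 10.0]     # measured-distance grid (meters)
zoom_positions_mm = [0.0, 5.0, 10.0]          # zoom-position grid (millimeters)

# focal_length_mm[i][j] = focal length for distances_m[i], zoom_positions_mm[j]
focal_length_mm = [
    [7.0, 7.4, 7.9],
    [8.0, 8.5, 9.1],
    [10.0, 10.6, 11.3],
    [12.0, 12.7, 13.5],
    [14.0, 14.8, 15.7],
]

def nearest_index(grid, value):
    """Index of the grid entry closest to the value."""
    return min(range(len(grid)), key=lambda i: abs(grid[i] - value))

def derive_focal_length(measured_distance_m, zoom_position_mm):
    """Nearest-grid lookup of the focal length for the two acquired keys."""
    i = nearest_index(distances_m, measured_distance_m)
    j = nearest_index(zoom_positions_mm, zoom_position_mm)
    return focal_length_mm[i][j]

print(derive_focal_length(2.4, 4.0))  # -> 8.5
```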
  • In an information processing device according to a fourth aspect of the present invention, the correspondence information indicates a correspondence among the measured distance, the position of the zoom lens in the optical axis direction in the imaging unit, the temperature of a region that affects imaging by the imaging unit, and the focal length. The acquisition unit further acquires temperature information indicating the temperature of the region, and the derivation unit derives, using the correspondence information, the focal length corresponding to the measured distance, the position information, and the temperature information acquired by the acquisition unit.
  • Therefore, even if the position of the zoom lens in the optical axis direction changes and the temperature of the region that affects imaging changes, the focal length can be derived with high accuracy and with less time and effort than in the case where the user inputs the length of a reference image included in the captured image.
  • An information processing device according to a fifth aspect of the present invention is the information processing device according to the fourth aspect, wherein the correspondence information indicates a correspondence among the measured distance, the position of the zoom lens in the optical axis direction in the imaging unit, the temperature of a region that affects imaging by the imaging unit, the attitude of the focus lens with respect to the vertical direction, and the focal length. The acquisition unit further acquires focus lens attitude information indicating the attitude of the focus lens with respect to the vertical direction, and the derivation unit derives, using the correspondence information, the focal length corresponding to the measured distance, the position information, the temperature information, and the focus lens attitude information acquired by the acquisition unit.
  • Therefore, even if the position of the zoom lens in the optical axis direction changes, the temperature of the region that affects imaging changes, and the attitude of the focus lens changes, the focal length can be derived with high accuracy and with less time and effort than in the case where the user inputs the length of a reference image included in the captured image.
  • An information processing device according to a sixth aspect of the present invention is the information processing device according to the fifth aspect, wherein the correspondence information indicates a correspondence among the measured distance, the position of the zoom lens in the optical axis direction in the imaging unit, the temperature of a region that affects imaging by the imaging unit, the attitude of the focus lens with respect to the vertical direction, the attitude of the zoom lens with respect to the vertical direction, and the focal length. The acquisition unit further acquires zoom lens attitude information indicating the attitude of the zoom lens with respect to the vertical direction, and the derivation unit derives, using the correspondence information, the focal length corresponding to the measured distance, the position information, the temperature information, the focus lens attitude information, and the zoom lens attitude information acquired by the acquisition unit.
  • Therefore, even if the position of the zoom lens in the optical axis direction changes, the temperature of the region that affects imaging changes, the attitude of the focus lens changes, and the attitude of the zoom lens changes, the focal length can be derived with high accuracy and with less time and effort than in the case where the user inputs the length of a reference image included in the captured image.
  • An information processing device according to a seventh aspect of the present invention is the information processing device according to the third aspect, wherein the correspondence information indicates a correspondence among the measured distance, the position of the zoom lens in the optical axis direction in the imaging unit, the attitude of the zoom lens with respect to the vertical direction, and the focal length. The acquisition unit further acquires zoom lens attitude information indicating the attitude of the zoom lens with respect to the vertical direction, and the derivation unit derives, using the correspondence information, the focal length corresponding to the measured distance, the position information, and the zoom lens attitude information acquired by the acquisition unit.
  • Therefore, even if the position of the zoom lens in the optical axis direction changes and the attitude of the zoom lens changes, the focal length can be derived with high accuracy and with less time and effort than in the case where the user inputs the length of a reference image included in the captured image in order to improve focal length accuracy.
  • An information processing device according to an eighth aspect of the present invention is the information processing device according to the first aspect, wherein the imaging unit further includes a zoom lens, and the correspondence information indicates a correspondence among the measured distance, the attitude of the zoom lens with respect to the vertical direction, and the focal length. The acquisition unit further acquires zoom lens attitude information indicating the attitude of the zoom lens with respect to the vertical direction, and the derivation unit derives, using the correspondence information, the focal length corresponding to the measured distance and the zoom lens attitude information acquired by the acquisition unit.
  • Therefore, even if the attitude of the zoom lens changes, the focal length can be derived with high accuracy and with less time and effort than in the case where the user inputs the length of a reference image included in the captured image.
  • An information processing device according to a ninth aspect of the present invention is the information processing device according to the eighth aspect, wherein the correspondence information indicates a correspondence among the measured distance, the attitude of the zoom lens with respect to the vertical direction, the attitude of the focus lens with respect to the vertical direction, and the focal length. The acquisition unit further acquires focus lens attitude information indicating the attitude of the focus lens with respect to the vertical direction, and the derivation unit derives, using the correspondence information, the focal length corresponding to the measured distance, the zoom lens attitude information, and the focus lens attitude information acquired by the acquisition unit.
  • Therefore, even if the attitude of the zoom lens changes and the attitude of the focus lens changes, the focal length can be derived with high accuracy and with less time and effort than in the case where the user inputs the length of a reference image included in the captured image.
  • An information processing device according to a tenth aspect of the present invention is the information processing device according to the ninth aspect, wherein the correspondence information indicates a correspondence among the measured distance, the temperature of a region that affects imaging by the imaging unit, the attitude of the zoom lens with respect to the vertical direction, the attitude of the focus lens with respect to the vertical direction, and the focal length. The acquisition unit further acquires temperature information indicating the temperature of the region, and the derivation unit derives, using the correspondence information, the focal length corresponding to the measured distance, the temperature information, the zoom lens attitude information, and the focus lens attitude information acquired by the acquisition unit.
  • Therefore, even if the attitude of the zoom lens changes, the attitude of the focus lens changes, and the temperature of the region that affects imaging changes, the focal length can be derived with high accuracy and with less time and effort than in the case where the user inputs the length of a reference image included in the captured image.
  • An information processing device according to an eleventh aspect of the present invention is the information processing device according to the eighth aspect, wherein the correspondence information indicates a correspondence among the measured distance, the temperature of a region that affects imaging by the imaging unit, the attitude of the zoom lens with respect to the vertical direction, and the focal length. The acquisition unit further acquires temperature information indicating the temperature of the region, and the derivation unit derives, using the correspondence information, the focal length corresponding to the measured distance, the temperature information, and the zoom lens attitude information acquired by the acquisition unit.
  • Therefore, even if the attitude of the zoom lens changes and the temperature of the region that affects imaging changes, the focal length can be derived with high accuracy and with less time and effort than in the case where the user inputs the length of a reference image included in the captured image.
  • An information processing device according to a twelfth aspect of the present invention is the information processing device according to the third aspect, wherein the correspondence information indicates a correspondence among the measured distance, the position of the zoom lens in the optical axis direction in the imaging unit, the attitude of the focus lens with respect to the vertical direction, and the focal length. The acquisition unit further acquires focus lens attitude information indicating the attitude of the focus lens with respect to the vertical direction, and the derivation unit derives, using the correspondence information, the focal length corresponding to the measured distance, the position information, and the focus lens attitude information acquired by the acquisition unit.
  • Therefore, even if the position of the zoom lens in the optical axis direction changes and the attitude of the focus lens changes, the focal length can be derived with high accuracy and with less time and effort than in the case where the user inputs the length of a reference image included in the captured image in order to improve focal length accuracy.
  • An information processing device according to a thirteenth aspect of the present invention is the information processing device according to the twelfth aspect, wherein the correspondence information indicates a correspondence among the measured distance, the position of the zoom lens in the optical axis direction in the imaging unit, the attitude of the focus lens with respect to the vertical direction, the attitude of the zoom lens with respect to the vertical direction, and the focal length. The acquisition unit further acquires zoom lens attitude information indicating the attitude of the zoom lens with respect to the vertical direction, and the derivation unit derives, using the correspondence information, the focal length corresponding to the measured distance, the position information, the focus lens attitude information, and the zoom lens attitude information acquired by the acquisition unit.
  • Therefore, even if the position of the zoom lens in the optical axis direction changes, the attitude of the focus lens changes, and the attitude of the zoom lens changes, the focal length can be derived with high accuracy and with less time and effort than in the case where the user inputs the length of a reference image included in the captured image in order to improve focal length accuracy.
  • An information processing device according to a fourteenth aspect of the present invention is the information processing device according to the first aspect, wherein the correspondence information indicates a correspondence among the measured distance, the attitude of the focus lens with respect to the vertical direction, and the focal length. The acquisition unit further acquires focus lens attitude information indicating the attitude of the focus lens with respect to the vertical direction, and the derivation unit derives, using the correspondence information, the focal length corresponding to the measured distance and the focus lens attitude information acquired by the acquisition unit.
  • Therefore, even if the attitude of the focus lens changes, the focal length can be derived with high accuracy and with less time and effort than in the case where the user inputs the length of a reference image included in the captured image.
  • In an information processing device according to a fifteenth aspect of the present invention, the correspondence information indicates a correspondence among the measured distance, the temperature of a region that affects imaging by the imaging unit, and the focal length. The acquisition unit further acquires temperature information indicating the temperature of the region, and the derivation unit derives, using the correspondence information, the focal length corresponding to the measured distance and the temperature information acquired by the acquisition unit.
  • Therefore, even if the temperature of the region that affects imaging by the imaging unit changes, the focal length can be derived with high accuracy and with less time and effort than in the case where the user inputs the length of a reference image included in the captured image.
  • An information processing device according to a sixteenth aspect of the present invention is the information processing device according to the fifteenth aspect, wherein the correspondence information indicates a correspondence among the measured distance, the temperature of a region that affects imaging by the imaging unit, the attitude of the focus lens with respect to the vertical direction, and the focal length. The acquisition unit further acquires focus lens attitude information indicating the attitude of the focus lens with respect to the vertical direction, and the derivation unit derives, using the correspondence information, the focal length corresponding to the measured distance, the temperature information, and the focus lens attitude information acquired by the acquisition unit.
  • Therefore, even if the attitude of the focus lens changes and the temperature of the region that affects imaging by the imaging unit changes, the focal length can be derived with high accuracy and with less time and effort than in the case where the user inputs the length of a reference image included in the captured image.
  • In an information processing device according to a seventeenth aspect of the present invention, which is the information processing device according to any one of the first to sixteenth aspects, the acquisition unit acquires a first captured image obtained by imaging the subject from a first imaging position and a second captured image obtained by imaging the subject from a second imaging position different from the first imaging position, and also acquires the measured distance to the subject obtained by the measurement unit emitting the directional light toward the subject from a position corresponding to the first imaging position and receiving the reflected light of the directional light. The derivation unit derives the focal length using the correspondence information, and derives, based on the derived focal length, the imaging position distance, which is the distance between the first imaging position and the second imaging position.
  • Therefore, the imaging position distance can be derived with high accuracy and with less time and effort than in the case where the user inputs the length of a reference image included in the captured image in order to improve focal length accuracy.
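The publication does not spell out the formula at this point, but under a simple pinhole-camera model the quantities named above are related, and a hedged sketch of deriving a baseline (imaging position distance) from the measured distance, a pixel disparity between the two captured images, and the derived focal length is:

```python
# Hedged sketch, not the publication's algorithm: under a pinhole-camera model
# with parallel imaging positions, the baseline B between the two positions
# relates depth Z (the measured distance), pixel disparity d, pixel pitch p,
# and focal length f as  B = Z * d * p / f. All parameter values are
# hypothetical and chosen for illustration.

def imaging_position_distance(measured_distance_m: float,
                              disparity_px: float,
                              pixel_pitch_m: float,
                              focal_length_m: float) -> float:
    """Baseline between the first and second imaging positions (meters)."""
    return measured_distance_m * disparity_px * pixel_pitch_m / focal_length_m

# Example: Z = 5 m, disparity of 120 px, 4.8 um pixels, f = 12 mm -> B = 0.24 m
print(imaging_position_distance(5.0, 120.0, 4.8e-6, 12e-3))
```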
  • In an information processing device according to an eighteenth aspect of the present invention, which is the information processing device according to any one of the first to seventeenth aspects, the derivation unit derives the size of a real-space region corresponding to an interval between a plurality of pixels specified in an image obtained by the imaging unit imaging the imaging range irradiated with the directional light used in the measurement, based on the focal length derived using the correspondence information, the measured distance acquired by the acquisition unit, and the interval between the plurality of pixels.
  • Therefore, the dimensions of the real-space region can be derived with high accuracy and with less time and effort than in the case where the user inputs the length of a reference image included in the captured image in order to improve focal length accuracy.
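As a hedged illustration of the geometry behind this aspect (the publication defines the inputs and the output, not this code): under a pinhole model, an interval of n pixels at the measured distance Z corresponds to a real-space length n·p·Z/f, where p is the pixel pitch and f the derived focal length. The parameter values below are hypothetical.

```python
# Hedged sketch of the eighteenth aspect's geometry, not the publication's
# code: with a pinhole model, a span of n pixels at depth Z (the measured
# distance) corresponds to a real-space length  L = n * p * Z / f,
# where p is the pixel pitch and f the derived focal length.

def real_space_length(pixel_interval_px: float,
                      pixel_pitch_m: float,
                      measured_distance_m: float,
                      focal_length_m: float) -> float:
    """Real-space size corresponding to an interval between specified pixels."""
    return (pixel_interval_px * pixel_pitch_m
            * measured_distance_m / focal_length_m)

# Example: 500 px apart, 4.8 um pixels, Z = 10 m, f = 14 mm -> about 1.71 m
print(real_space_length(500, 4.8e-6, 10.0, 14e-3))
```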
  • An information processing method according to a nineteenth aspect of the present invention includes: acquiring a measured distance measured by a measurement unit of a distance measuring device that includes an imaging unit having a focus lens and imaging a subject, and a measurement unit that measures the distance to the subject by emitting directional light toward the subject and receiving reflected light of the directional light; and deriving, using correspondence information indicating a correspondence between measured distances and focal lengths of the focus lens, the focal length corresponding to the acquired measured distance.
  • Therefore, the focal length can be derived with high accuracy and with less time and effort than in the case where the user inputs the length of a reference image included in the captured image in order to improve focal length accuracy.
  • A program according to a twentieth aspect of the present invention causes a computer to execute processing comprising: acquiring the measured distance, which is the distance measured by the measurement unit of a distance measuring device that includes an imaging unit having a focus lens and imaging a subject, and a measurement unit that measures the distance to the subject by emitting directional light toward the subject and receiving reflected light of the directional light; and deriving, using correspondence information indicating a correspondence between measured distances and focal lengths of the focus lens in the imaging unit, the focal length corresponding to the acquired measured distance.
  • Therefore, the program according to the twentieth aspect enables the focal length to be derived with high accuracy and with less time and effort than in the case where the user inputs the length of a reference image included in the captured image in order to improve focal length accuracy.
  • According to the technology of the present disclosure, an effect is thus obtained that the focal length can be derived with high accuracy without taking time and effort, compared with the case where the user inputs the length of a reference image included in the captured image in order to improve focal length accuracy.
  • A front view showing an example of the appearance of the distance measuring device according to the first to sixth embodiments.
  • A block diagram illustrating an example of the hardware configuration of the distance measuring device according to the first to fifth embodiments.
  • A time chart showing an example of a measurement sequence performed by the distance measuring device according to the first to seventh embodiments.
  • A time chart showing an example of the laser trigger, light emission signal, light reception signal, and count signal required for one measurement by the distance measuring device according to the first to seventh embodiments.
  • A block diagram illustrating an example of the hardware configuration of the main control unit included in the distance measuring device according to the first to fifth embodiments.
  • A diagram showing an example of the structure of the focal length derivation table.
  • A block diagram showing an example of the main functions of the CPU according to the first to seventh embodiments.
  • A schematic plan view showing an example of the positional relationship between the distance measuring device according to the first to fifth embodiments and a subject.
  • A conceptual diagram illustrating an example of the positional relationship among part of a subject, a first captured image, a second captured image, the principal point of the imaging lens at a first imaging position, and the principal point of the imaging lens at a second imaging position.
  • A diagram for explaining a method of deriving the real-space coordinates of the irradiation position according to the first to seventh embodiments.
  • A diagram for explaining a method of deriving the pixel coordinates of the irradiation position according to the first to seventh embodiments.
  • A flowchart showing an example of the flow of the dimension derivation process according to the first embodiment.
  • A conceptual diagram illustrating an example of a subject included in the imaging range of the imaging device according to the first to seventh embodiments.
  • A screen diagram showing an example of a screen on which an irradiation position mark and a measured distance are superimposed on a captured image obtained by the imaging device according to the first embodiment.
  • A flowchart showing an example of the flow of the first derivation process included in the imaging position distance derivation process according to the first embodiment, continued from the flowchart shown in FIG. 22.
  • A flowchart showing an example of the flow of the second derivation process included in the imaging position distance derivation process according to the first embodiment, continued from the flowchart shown in FIG. 24.
  • A screen diagram showing an example of a screen displaying a first captured image acquired by the imaging device according to the first embodiment.
  • A screen diagram showing an example of a screen on which an irradiation position mark and a first measured distance are superimposed on the first captured image obtained by the imaging device according to the first embodiment.
  • A screen diagram showing an example of a screen displaying a first captured image in which a pixel of interest and first to third pixels are specified.
  • A screen diagram showing an example of a screen on which the imaging position distance is superimposed on a second captured image obtained by the imaging device according to the first embodiment.
  • A flowchart showing an example of the flow of the first derivation process included in the imaging position distance derivation process according to the second embodiment, continued from the flowchart shown in FIG. 22.
  • A flowchart showing an example of the flow of the second derivation process included in the imaging position distance derivation process according to the second embodiment, continued from the flowchart shown in FIG. 24.
  • A block diagram showing a modification of the structure of the focal length derivation table.
  • A flowchart showing an example of the flow of the first derivation process included in the imaging position distance derivation process according to the fifth embodiment, continued from the flowchart shown in FIG. 22.
  • A screen diagram showing an example of a screen on which the final imaging position distance is superimposed on a second captured image acquired by the imaging device according to the fifth embodiment.
  • A flowchart showing an example of the flow of the three-dimensional coordinate derivation process according to the fifth embodiment.
  • A front view showing a modification of the appearance of the distance measuring device according to the first to sixth embodiments.
  • A block diagram showing a modification of the configuration of the focal length derivation table.
  • Hereinafter, the distance from the distance measuring device 10A to the subject to be measured is also simply referred to as the “distance” or the “distance to the subject”.
  • Similarly, the angle of view with respect to the subject is also simply referred to as the “angle of view”.
  • The distance measuring device 10A, which is an example of the information processing device according to the technology of the present disclosure, includes a distance measurement unit 12 and an imaging device 14.
  • The distance measurement unit 12 and a distance measurement control unit 68 (see FIG. 2), described later, are an example of the measurement unit according to the technology of the present disclosure, and the imaging device 14 is an example of the imaging unit according to the technology of the present disclosure.
  • The imaging device 14 includes a lens unit 16 and an imaging device body 18, and the lens unit 16 is detachably attached to the imaging device body 18.
  • A hot shoe 20 is provided on the left side of the imaging device body 18 as viewed from the front, and the distance measurement unit 12 is detachably attached to the hot shoe 20.
  • The distance measuring device 10A has a distance measurement system function for performing distance measurement by causing the distance measurement unit 12 to emit a laser beam for distance measurement, and an imaging system function for obtaining a captured image by causing the imaging device 14 to image a subject.
  • Hereinafter, the captured image is also simply referred to as an “image”.
  • In the present embodiment, it is assumed that the optical axis L1 (see FIG. 2) of the laser beam emitted from the distance measurement unit 12 is at the same height as the optical axis L2 (see FIG. 2) of the lens unit 16.
  • The distance measuring device 10A performs one measurement sequence (see FIG. 3) in response to a single instruction by operating the distance measurement system function, and finally outputs one distance per measurement sequence.
  • The distance measuring device 10A has a still image capturing mode and a moving image capturing mode as operation modes of the imaging system function.
  • The still image capturing mode is an operation mode for capturing a still image, and the moving image capturing mode is an operation mode for capturing a moving image.
  • The still image capturing mode and the moving image capturing mode are selectively set according to a user instruction.
  • The distance measurement unit 12 includes an emission unit 22, a light receiving unit 24, and a connector 26.
  • The connector 26 can be connected to the hot shoe 20, and the distance measurement unit 12 operates under the control of the imaging device body 18 while the connector 26 is connected to the hot shoe 20.
  • The emission unit 22 includes an LD (laser diode) 30, a condenser lens (not shown), an objective lens 32, and an LD driver 34.
  • The condenser lens and the objective lens 32 are provided along the optical axis L1 of the laser beam emitted from the LD 30, and are arranged in the order of the condenser lens and the objective lens 32 from the LD 30 side.
  • The LD 30 emits a laser beam for distance measurement, which is an example of directional light according to the technology of the present disclosure.
  • The laser beam emitted by the LD 30 is a colored laser beam; within a range of, for example, several meters from the emission unit 22, the irradiation position of the laser beam can be visually recognized in real space and can also be visually recognized in a captured image obtained by the imaging device 14.
  • The condenser lens condenses the laser beam emitted by the LD 30 and passes the condensed laser beam.
  • The objective lens 32 faces the subject and emits the laser beam that has passed through the condenser lens toward the subject.
  • The LD driver 34 is connected to the connector 26 and the LD 30, and drives the LD 30 in accordance with instructions from the imaging device body 18 to emit the laser beam.
  • The light receiving unit 24 includes a PD (photodiode) 36, an objective lens 38, and a light reception signal processing circuit 40.
  • The objective lens 38 is disposed on the light receiving surface side of the PD 36, and the reflected laser light, that is, the laser beam emitted by the emission unit 22 and reflected by the subject, is incident on the objective lens 38.
  • The objective lens 38 passes the reflected laser light and guides it to the light receiving surface of the PD 36.
  • The PD 36 receives the reflected laser light that has passed through the objective lens 38 and outputs an analog signal corresponding to the amount of received light as a light reception signal.
  • The light reception signal processing circuit 40 is connected to the connector 26 and the PD 36; it amplifies the light reception signal input from the PD 36 with an amplifier (not shown), performs A/D (analog/digital) conversion on the amplified signal, and outputs the digitized light reception signal to the imaging device body 18.
  • The imaging device 14 includes mounts 42 and 44.
  • The mount 42 is provided on the imaging device body 18, and the mount 44 is provided on the lens unit 16.
  • The lens unit 16 is attached to the imaging device body 18 in an exchangeable manner by coupling the mount 44 to the mount 42.
  • The lens unit 16 includes a focus lens 50, a zoom lens 52, a focus lens moving mechanism 53, a zoom lens moving mechanism 54, and motors 56 and 57.
  • Subject light, which is light reflected from the subject, is incident on the focus lens 50.
  • The focus lens 50 passes the subject light and guides it to the zoom lens 52.
  • The focus lens 50 is attached to the focus lens moving mechanism 53 so as to be slidable along the optical axis L2.
  • The motor 57 is connected to the focus lens moving mechanism 53, and the focus lens moving mechanism 53 receives the power of the motor 57 to slide the focus lens 50 along the optical axis L2 direction.
  • The zoom lens 52 is attached to the zoom lens moving mechanism 54 so as to be slidable along the optical axis L2. The motor 56 is connected to the zoom lens moving mechanism 54, and the zoom lens moving mechanism 54 receives the power of the motor 56 to slide the zoom lens 52 along the optical axis L2 direction.
  • The motors 56 and 57 are connected to the imaging device body 18 via the mounts 42 and 44, and their driving is controlled in accordance with commands from the imaging device body 18.
  • Stepping motors are used as an example of the motors 56 and 57; the motors 56 and 57 therefore operate in synchronization with pulse power in accordance with commands from the imaging device body 18.
  • The imaging device body 18 includes an imaging element 60, a main control unit 62, an image memory 64, an image processing unit 66, a distance measurement control unit 68, motor drivers 72 and 73, an imaging element driver 74, an image signal processing circuit 76, and a display control unit 78.
  • The imaging device body 18 also includes a touch panel I/F (interface) 79, a reception I/F 80, and a media I/F 82.
  • The main control unit 62, the image memory 64, the image processing unit 66, the distance measurement control unit 68, the motor drivers 72 and 73, the imaging element driver 74, the image signal processing circuit 76, and the display control unit 78 are connected to a bus line 84.
  • The touch panel I/F 79, the reception I/F 80, and the media I/F 82 are also connected to the bus line 84.
  • The imaging element 60 is a CMOS (complementary metal oxide semiconductor) image sensor and includes a color filter (not shown).
  • The color filter includes a G filter corresponding to green, which contributes most to obtaining a luminance signal, an R filter corresponding to red, and a B filter corresponding to blue.
  • The imaging element 60 includes an imaging pixel group 60A consisting of a plurality of imaging pixels 60A1 arranged in a matrix. Each imaging pixel 60A1 is assigned one of the R, G, and B filters included in the color filter, and the imaging pixel group 60A images the subject by receiving the subject light.
  • The subject light that has passed through the zoom lens 52 forms an image on the imaging surface 60B, which is the light receiving surface of the imaging element 60, and charge corresponding to the amount of received subject light is accumulated in each imaging pixel 60A1.
  • The imaging element 60 outputs the charge accumulated in each imaging pixel 60A1 as an image signal representing the image corresponding to the subject image formed on the imaging surface 60B.
  • The main control unit 62 controls the entire distance measuring device 10A via the bus line 84.
  • The motor driver 72 is connected to the motor 56 via the mounts 42 and 44 and controls the motor 56 in accordance with instructions from the main control unit 62.
  • The motor driver 73 is connected to the motor 57 via the mounts 42 and 44 and controls the motor 57 in accordance with instructions from the main control unit 62.
  • The imaging device 14 has an angle-of-view changing function, which changes the angle of view by moving the zoom lens 52.
  • The angle-of-view changing function is realized by the zoom lens 52, the zoom lens moving mechanism 54, the motor 56, the motor driver 72, and the main control unit 62.
  • In the present embodiment, an optical angle-of-view changing function using the zoom lens 52 is illustrated, but the technology of the present disclosure is not limited to this; an electronic angle-of-view changing function that does not use the zoom lens 52 may be employed instead.
  • The imaging element driver 74 is connected to the imaging element 60 and supplies drive pulses to the imaging element 60 under the control of the main control unit 62.
  • Each imaging pixel 60A1 included in the imaging pixel group 60A is driven according to the drive pulses supplied to the imaging element 60 by the imaging element driver 74.
  • The image signal processing circuit 76 is connected to the imaging element 60 and, under the control of the main control unit 62, reads the image signal for one frame from the imaging element 60 for each imaging pixel 60A1.
  • The image signal processing circuit 76 performs various kinds of processing, such as correlated double sampling, automatic gain adjustment, and A/D conversion, on the read image signal.
  • The image signal processing circuit 76 then outputs the digitized image signal to the image memory 64 frame by frame at a specific frame rate (for example, several tens of frames per second) defined by a clock signal supplied from the main control unit 62.
  • The image memory 64 temporarily holds the image signals input from the image signal processing circuit 76.
  • The imaging device body 18 further includes a display unit 86, a touch panel 88, a reception device 90, and a memory card 92.
  • The display unit 86 is connected to the display control unit 78 and displays various kinds of information under the control of the display control unit 78.
  • The display unit 86 is realized by, for example, an LCD (liquid crystal display).
  • The touch panel 88 is superimposed on the display screen of the display unit 86 and accepts contact by a user's finger or an indicator such as a touch pen.
  • The touch panel 88 is connected to the touch panel I/F 79 and outputs position information indicating the position touched by the indicator to the touch panel I/F 79.
  • The touch panel I/F 79 operates the touch panel 88 in accordance with instructions from the main control unit 62 and outputs the position information input from the touch panel 88 to the main control unit 62.
  • In the present embodiment, the touch panel 88 is illustrated, but the technology is not limited thereto; a mouse (not shown) connected to the distance measuring device 10A may be used instead of the touch panel 88, or the touch panel 88 and a mouse may be used in combination.
  • The reception device 90 includes a measurement imaging button 90A, an imaging button (not shown), an imaging system operation mode switching button 90B, a wide-angle instruction button 90C, and a telephoto instruction button 90D.
  • The reception device 90 also includes a dimension derivation button 90E, an imaging position distance derivation button 90F, a three-dimensional coordinate derivation button 90G, and the like, and accepts various instructions from the user.
  • The reception device 90 is connected to the reception I/F 80, and the reception I/F 80 outputs an instruction content signal indicating the content of the instruction received by the reception device 90 to the main control unit 62.
  • The measurement imaging button 90A is a press-type button that receives an instruction to start measurement and imaging.
  • The imaging button is a press-type button that receives an instruction to start imaging.
  • The imaging system operation mode switching button 90B is a press-type button that receives an instruction to switch between the still image capturing mode and the moving image capturing mode.
  • The wide-angle instruction button 90C is a press-type button that receives an instruction to change the angle of view to the wide-angle side; the amount of change toward the wide-angle side depends, within an allowable range, on how long pressing of the wide-angle instruction button 90C continues.
  • The telephoto instruction button 90D is a press-type button that receives an instruction to change the angle of view to the telephoto side; the amount of change toward the telephoto side depends, within an allowable range, on how long pressing of the telephoto instruction button 90D continues.
  • The dimension derivation button 90E is a press-type button that receives an instruction to start a dimension derivation process described later.
  • The imaging position distance derivation button 90F is a press-type button that receives an instruction to start an imaging position distance derivation process described later.
  • The three-dimensional coordinate derivation button 90G is a press-type button that receives an instruction to start the imaging position distance derivation process described later and a three-dimensional coordinate derivation process described later.
  • Hereinafter, for convenience of explanation, when it is not necessary to distinguish between the measurement imaging button 90A and the imaging button, they are referred to as the “release button”, and when there is no need to distinguish between the wide-angle instruction button 90C and the telephoto instruction button 90D, they are referred to as the “angle-of-view instruction buttons”.
  • The manual focus mode and the autofocus mode are selectively set according to a user instruction via the reception device 90.
  • The release button accepts a two-stage pressing operation consisting of an imaging preparation instruction state and an imaging instruction state.
  • The imaging preparation instruction state refers to, for example, a state where the release button is pressed from the standby position to an intermediate position (half-pressed position), and the imaging instruction state refers to a state where the release button is pressed down to the final pressed position (fully pressed position) beyond the intermediate position.
  • Hereinafter, the state where the release button is pressed from the standby position to the half-pressed position is referred to as the “half-pressed state”, and the state where the release button is pressed from the standby position to the fully pressed position is referred to as the “fully pressed state”.
  • In the autofocus mode, imaging conditions are adjusted while the release button is half-pressed, and the main exposure is then performed when the release button is fully pressed.
  • That is, when the release button is half-pressed, exposure adjustment is performed by the AE (automatic exposure) function and focus adjustment is then performed by the AF (autofocus) function; when the release button is fully pressed, the main exposure is performed.
  • The main exposure refers to the exposure performed to obtain a still image file described later.
  • In this specification, “exposure” means, in addition to the main exposure, the exposure performed to obtain a live view image described later and the exposure performed to obtain a moving image file described later.
  • The main control unit 62 performs the exposure adjustment by the AE function and the focus adjustment by the AF function. Although the present embodiment illustrates the case where both exposure adjustment and focus adjustment are performed, the technology of the present disclosure is not limited thereto, and only exposure adjustment or only focus adjustment may be performed.
  • The image processing unit 66 acquires the image signal for each frame from the image memory 64 at a specific frame rate and performs various kinds of processing, such as gamma correction, luminance/color-difference conversion, and compression, on the acquired image signal.
  • The image processing unit 66 outputs the processed image signal to the display control unit 78 frame by frame at the specific frame rate, and also outputs the processed image signal to the main control unit 62 in response to requests from the main control unit 62.
  • The display control unit 78 outputs the image signal input from the image processing unit 66 to the display unit 86 frame by frame at the specific frame rate under the control of the main control unit 62.
  • The display unit 86 displays images, character information, and the like.
  • The display unit 86 displays the image represented by the image signal input at the specific frame rate from the display control unit 78 as a live view image.
  • The live view image is a sequence of frame images obtained by continuous imaging, and is also referred to as a through image.
  • The display unit 86 also displays still images, each of which is a single-frame image obtained by single-frame imaging, as well as playback images, menu screens, and the like.
  • In the present embodiment, the image processing unit 66 and the display control unit 78 are realized by ASICs (application-specific integrated circuits), but the technology of the present disclosure is not limited to this.
  • For example, each of the image processing unit 66 and the display control unit 78 may be realized by an FPGA (field-programmable gate array).
  • The image processing unit 66 may also be realized by a computer including a CPU (central processing unit), ROM (read-only memory), and RAM (random access memory), and the same applies to the display control unit 78.
  • Each of the image processing unit 66 and the display control unit 78 may also be realized by a combination of hardware and software configurations.
  • When an instruction to capture a still image is received via the release button in the still image capturing mode, the main control unit 62 controls the imaging element driver 74 to cause the imaging element 60 to perform the main exposure for one frame.
  • The main control unit 62 then acquires the image signal obtained by the one-frame exposure from the image processing unit 66, compresses the acquired image signal, and generates a still image file in a specific still image format.
  • The specific still image format is, for example, JPEG (Joint Photographic Experts Group).
  • When an instruction to capture a moving image is received via the release button in the moving image capturing mode, the main control unit 62 acquires, frame by frame at the specific frame rate, the image signal that the image processing unit 66 outputs to the display control unit 78 as the live view image, compresses it, and generates a moving image file in a specific moving image format.
  • The specific moving image format is, for example, MPEG (Moving Picture Experts Group).
  • The media I/F 82 is connected to the memory card 92 and records and reads image files to and from the memory card 92 under the control of the main control unit 62. An image file read from the memory card 92 by the media I/F 82 is decompressed by the main control unit 62 and displayed on the display unit 86 as a playback image.
  • The main control unit 62 associates the distance information input from the distance measurement control unit 68 with the image file and stores them in the memory card 92 via the media I/F 82.
  • The distance information is read from the memory card 92 via the media I/F 82 by the main control unit 62 together with the image file, and the distance indicated by the read distance information is displayed on the display unit 86 together with the playback image of the associated image file.
  • The distance measurement control unit 68 controls the distance measurement unit 12 under the control of the main control unit 62.
  • In the present embodiment, the distance measurement control unit 68 is realized by an ASIC, but the technology of the present disclosure is not limited to this.
  • For example, the distance measurement control unit 68 may be realized by an FPGA, by a computer including a CPU, ROM, and RAM, or by a combination of hardware and software configurations.
  • The hot shoe 20 is connected to the bus line 84; under the control of the main control unit 62, the distance measurement control unit 68 controls the LD driver 34 to control the emission of the laser beam by the LD 30, and acquires the light reception signal from the light reception signal processing circuit 40.
  • The distance measurement control unit 68 derives the distance to the subject based on the timing at which the laser beam was emitted and the timing at which the light reception signal was acquired, and outputs distance information indicating the derived distance to the main control unit 62.
  • The measurement of the distance to the subject by the distance measurement control unit 68 will now be described in more detail.
  • One measurement sequence by the distance measuring device 10A is defined by a voltage adjustment period, an actual measurement period, and a pause period.
  • The voltage adjustment period is a period for adjusting the drive voltages of the LD 30 and the PD 36.
  • The actual measurement period is a period during which the distance to the subject is actually measured: the operation of causing the LD 30 to emit a laser beam and causing the PD 36 to receive the reflected laser light is repeated several hundred times, and the distance to the subject is derived based on the timing at which each laser beam was emitted and the timing at which the corresponding light reception signal was obtained.
  • The pause period is a period for stopping the driving of the LD 30 and the PD 36. In one measurement sequence, therefore, the distance to the subject is measured several hundred times.
  • In the present embodiment, each of the voltage adjustment period, the actual measurement period, and the pause period is set to several hundred milliseconds.
  • The distance measurement control unit 68 is supplied with a count signal that defines the timing at which it instructs the emission of the laser beam and the timing at which it acquires the light reception signal.
  • In the present embodiment, the count signal is generated by the main control unit 62 and supplied to the distance measurement control unit 68; however, it may instead be generated by a dedicated circuit, such as a time counter connected to the bus line 84, and supplied to the distance measurement control unit 68.
  • The distance measurement control unit 68 outputs, in accordance with the count signal, a laser trigger for emitting the laser beam to the LD driver 34.
  • The LD driver 34 drives the LD 30 to emit the laser beam in accordance with the laser trigger.
  • In the present embodiment, the emission time of the laser beam is set to several tens of nanoseconds.
  • The time required for a laser beam emitted by the emission unit 22 toward a subject several kilometers ahead to be received by the PD 36 as reflected laser light is "several kilometers × 2 / speed of light", that is, several microseconds. To measure the distance to a subject several kilometers away, therefore, a minimum of several microseconds is required per measurement, as shown in FIG. 3 as an example.
  • In the present embodiment, the measurement time for one measurement is set to several milliseconds; however, since the round-trip time of the laser beam differs depending on the distance to the subject, the measurement time per measurement may be varied according to the assumed distance.
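As a worked instance of this arithmetic (the 1 km figure is chosen purely for illustration):

```latex
t = \frac{2d}{c}, \qquad
d = 1\ \text{km} \;\Rightarrow\;
t = \frac{2 \times 1000\ \text{m}}{3 \times 10^{8}\ \text{m/s}} \approx 6.7\ \mu\text{s}
```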
  • The distance measurement control unit 68 derives the distance to the subject based on the measurement values obtained from the several hundred measurements in one measurement sequence; for example, it analyzes a histogram of the measurement values obtained from the several hundred measurements to derive the distance to the subject.
  • In the histogram, the horizontal axis is the distance to the subject and the vertical axis is the number of measurements, and the distance corresponding to the maximum number of measurements is derived by the distance measurement control unit 68 as the distance measurement result.
  • The histogram shown in FIG. 5 is merely an example; a histogram may instead be generated based on the round-trip time of the laser beam (the elapsed time from light emission to light reception), or on half of the round-trip time, instead of the distance to the subject.
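A minimal Python sketch of this histogram step, assuming equal-width distance bins and taking the most frequent bin as the result (the bin width and sample values are illustrative, not from the publication):

```python
# Hedged sketch of the histogram step described above, assuming equal-width
# distance bins; the publication specifies the idea but not this exact code.
from collections import Counter

def distance_from_measurements(measured_distances_m, bin_width_m=0.05):
    """Derive one output distance from the several hundred per-sequence
    measurements by taking the center of the most frequent bin (the mode)."""
    bins = Counter(round(d / bin_width_m) for d in measured_distances_m)
    most_common_bin, _count = bins.most_common(1)[0]
    return most_common_bin * bin_width_m

# Example: noisy repeats cluster around 4.30 m, so 4.30 m is returned.
samples = [4.31, 4.29, 4.30, 4.32, 4.30, 5.10, 4.28, 4.30, 3.95, 4.31]
print(distance_from_measurements(samples))  # -> 4.3
```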
  • The main control unit 62 includes a CPU 100, which is an example of the acquisition unit and the derivation unit according to the technology of the present disclosure, a primary storage unit 102, and a secondary storage unit 104.
  • The CPU 100 controls the entire distance measuring device 10A.
  • The primary storage unit 102 is a volatile memory used as a work area and the like when various programs are executed; an example of the primary storage unit 102 is RAM.
  • The secondary storage unit 104 is a non-volatile memory that stores, in advance, control programs for controlling the operation of the distance measuring device 10A, various parameters, and the like; examples of the secondary storage unit 104 include EEPROM (electrically erasable programmable read-only memory) and flash memory.
  • The CPU 100, the primary storage unit 102, and the secondary storage unit 104 are connected to one another via the bus line 84.
  • the secondary storage unit 104 stores a size derivation program 105A, an imaging position distance derivation program 106A, a three-dimensional coordinate derivation program 108A, and a focal length derivation table 109A.
  • the dimension derivation program 105A and the imaging position distance derivation program 106A are examples of programs according to the technique of the present disclosure.
  • the focal length derivation table 109A is an example of correspondence information according to the technology of the present disclosure.
• the focal length derivation table 109A is a table showing the correspondence relationship between the actually measured distance and the focal length of the focus lens 50.
  • the actually measured distance refers to the distance to the subject measured by using the ranging system function, that is, the distance to the subject measured by the distance measurement unit 12 and the distance measurement control unit 68.
  • the focal length of the focus lens 50 is associated with each of a plurality of derivation distances.
  • the derivation distance refers to the distance from the distance measuring device 10A to the subject.
  • the derivation distance is a parameter to be compared with the actually measured distance.
  • the focal length of the focus lens 50 is simply referred to as “focal length”.
• in the focal length derivation table 109A, the derivation distances and focal lengths are associated as follows: 1 meter → 7 millimeters, 2 meters → 8 millimeters, 3 meters → 10 millimeters, 5 meters → 12 millimeters, 10 meters → 14 millimeters, 30 meters → 16 millimeters, and infinity → 18 millimeters.
• the focal length derivation table 109A is a table derived from at least one of, for example, a test using the actual distance measuring device 10A and a computer simulation based on the design specifications of the distance measuring device 10A.
  • the CPU 100 reads the dimension derivation program 105A, the imaging position distance derivation program 106A, and the three-dimensional coordinate derivation program 108A from the secondary storage unit 104.
  • the CPU 100 expands the read dimension derivation program 105A, the imaging position distance derivation program 106A, and the three-dimensional coordinate derivation program 108A in the primary storage unit 102.
  • the CPU 100 executes a dimension deriving program 105A, an imaging position distance deriving program 106A, and a three-dimensional coordinate deriving program 108A developed in the primary storage unit 102.
  • the CPU 100 operates as an acquisition unit 110A and a deriving unit 111A as illustrated in FIG. 8 as an example by executing at least one of the dimension deriving program 105A and the imaging position distance deriving program 106A.
• the acquisition unit 110A acquires the actually measured distance measured by using the ranging system function.
  • the deriving unit 111A derives a focal length corresponding to the actually measured distance acquired by the acquiring unit 110A using the focal length deriving table 109A.
  • the distance measuring device 10A is provided with a dimension deriving function.
• the dimension derivation function is a function realized by the CPU 100 executing the dimension derivation program 105A and operating as the acquisition unit 110A and the derivation unit 111A.
• the dimension derivation function refers to a function that derives the length LM of an area in real space included in the subject, or derives an area based on the length LM, from the addresses u1 and u2 of designated pixels, the distance L to the subject measured by the distance measurement unit 12 and the distance measurement control unit 68, and the like.
  • the distance L to the subject indicates an actually measured distance.
  • the distance L to the subject is simply referred to as “distance L”.
• the length LM of the area in real space included in the subject is simply referred to as “length LM”.
  • the “designated pixel” refers to a pixel in the image sensor 60 corresponding to, for example, two points designated on the captured image by the user.
• the length LM is calculated, for example, by equation (1).
• in equation (1), p is the pitch between pixels included in the image sensor 60, u1 and u2 are the addresses of the pixels designated by the user, and f0 is the focal length.
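• as a minimal sketch of equation (1), whose exact form is not reproduced in this text, the standard pinhole relation suggested by these symbols would be implemented as follows (the relation LM = |u1 − u2| × p × L / f0 is an assumption):

```python
def derive_length_lm(u1, u2, distance_l, pitch_p, focal_f0):
    """Assumed pinhole form of equation (1): the real-space length spanned
    by pixel addresses u1 and u2 at subject distance L. All inputs must use
    consistent units (e.g. p, L, and f0 in millimeters)."""
    return abs(u1 - u2) * pitch_p * distance_l / focal_f0
```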
• equation (1) is used on the assumption that the object whose dimension is to be derived is imaged while directly facing the focus lens 50 in front view. Therefore, in the distance measuring device 10A, when a subject including the object whose dimension is to be derived is imaged in a state where it does not directly face the focus lens 50 in front view, projective transformation processing is performed.
• the projective transformation process refers to, for example, a process of converting a captured image into a front-view image based on a rectangular image included in the captured image, using a known technique such as affine transformation.
• the front-view image refers to an image of the subject in a state of directly facing the focus lens 50 in front view.
• the pixel addresses u1 and u2 in the image sensor 60 are then specified via the front-view image, and the length LM is derived from equation (1).
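• a minimal sketch of such a projective transformation, assuming OpenCV as the underlying library (the corner ordering and output size are illustrative assumptions, not the patent's specification):

```python
import cv2
import numpy as np

def to_front_view(image, quad_corners, out_w=400, out_h=300):
    """Warp a quadrangular region (e.g. a trapezoid on an imaged wall)
    into a front-view rectangle via a perspective (projective) transform."""
    src = np.float32(quad_corners)  # four corners: TL, TR, BR, BL
    dst = np.float32([[0, 0], [out_w, 0], [out_w, out_h], [0, out_h]])
    homography = cv2.getPerspectiveTransform(src, dst)  # 3x3 mapping
    return cv2.warpPerspective(image, homography, (out_w, out_h))
```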
  • the distance measuring device 10A is provided with a three-dimensional coordinate derivation function.
  • the three-dimensional coordinate derivation function operates as the acquisition unit 110A and the derivation unit 111A when the CPU 100 executes the three-dimensional coordinate derivation program 108A. It is a function realized by this.
• the three-dimensional coordinate derivation function is a function for deriving designated pixel three-dimensional coordinates, described later, based on equation (2) from first designated pixel coordinates described later, second designated pixel coordinates described later, an imaging position distance described later, the focal length of the focus lens 50, and the dimension of the imaging pixel 60A1.
• in equation (2), “uL” refers to the X coordinate of the first designated pixel coordinates, “vL” to the Y coordinate of the first designated pixel coordinates, “uR” to the X coordinate of the second designated pixel coordinates, “B” to the imaging position distance (see FIGS. 10 and 11), and “f” to (focal length) / (dimension of the imaging pixel 60A1); (X, Y, Z) indicates the designated pixel three-dimensional coordinates.
  • the first designated pixel coordinates are two-dimensional coordinates that specify a first designated pixel whose position in the real space is designated as a corresponding pixel in a first captured image described later.
• the second designated pixel coordinates are two-dimensional coordinates that specify a second designated pixel designated, in a second captured image described later, as the pixel corresponding to the same position in real space. That is, the first designated pixel and the second designated pixel are pixels designated as corresponding to the same position in real space, and are pixels that can be specified at mutually corresponding positions in each of the first captured image and the second captured image.
  • the first designated pixel coordinates are two-dimensional coordinates on the first captured image
  • the second designated pixel coordinates are two-dimensional coordinates on the second captured image.
  • the designated pixel three-dimensional coordinates refer to three-dimensional coordinates that are coordinates on the real space corresponding to the first designated pixel coordinates and the second designated pixel coordinates.
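• equation (2) itself is not reproduced in this text; a minimal sketch under the usual parallel-stereo model implied by these definitions (disparity d = uL − uR, pixel coordinates measured from the image center, and B the imaging position distance described below) would be:

```python
def designated_pixel_3d(uL, vL, uR, B, focal_length, pixel_dim):
    """Assumed stereo form of equation (2): with f = focal length / pixel
    dimension and disparity d = uL - uR, the designated pixel three-
    dimensional coordinates are (B*uL/d, B*vL/d, B*f/d). Coordinates are
    assumed to be measured relative to the image center, with d nonzero."""
    f = focal_length / pixel_dim  # focal length expressed in pixel units
    d = uL - uR                   # horizontal disparity between the images
    return (B * uL / d, B * vL / d, B * f / d)
```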
  • the first captured image refers to a captured image obtained by capturing an image of the subject from the first imaging position by the imaging device 14.
• the second captured image refers to a captured image obtained by the imaging device 14 imaging, from a second imaging position different from the first imaging position, a subject that includes the subject imaged from the first imaging position.
• hereinafter, when it is not necessary to distinguish the first captured image, the second captured image, and other images (including still images and moving images) obtained by imaging with the imaging device 14, they are simply referred to as “captured images”.
  • the first measurement position refers to the position of the distance measurement unit 12 when the subject is imaged by the image pickup device 14 from the first image pickup position with the distance measurement unit 12 correctly attached to the image pickup device 14.
  • the second measurement position refers to the position of the distance measurement unit 12 when the subject is imaged by the imaging device 14 from the second imaging position in a state where the distance measurement unit 12 is correctly attached to the imaging device 14.
  • the imaging position distance refers to the distance between the first imaging position and the second imaging position.
• in this embodiment, as shown in FIG. 11 as an example, the imaging position distance refers to the distance between the principal point OL of the focus lens 50 of the imaging device 14 at the first imaging position and the principal point OR of the focus lens 50 of the imaging device 14 at the second imaging position; however, the technology of the present disclosure is not limited to this.
  • the distance between the imaging pixel 60A1 positioned at the center of the imaging device 60 of the imaging device 14 at the first imaging position and the imaging pixel 60A1 positioned at the center of the imaging device 60 of the imaging device 14 at the second imaging position is the imaging position distance. It may be said.
• in the example shown in FIG. 11, the pixel PL included in the first captured image is the first designated pixel, the pixel PR included in the second captured image is the second designated pixel, and the pixels PL and PR are pixels corresponding to the point P of the subject.
• the first designated pixel coordinates (uL, vL), which are the two-dimensional coordinates of the pixel PL, and the second designated pixel coordinates (uR, vR), which are the two-dimensional coordinates of the pixel PR, correspond to the designated pixel three-dimensional coordinates (X, Y, Z), which are the three-dimensional coordinates of the point P.
• note that “vR” is not used in equation (2).
• hereinafter, for convenience of explanation, when it is not necessary to distinguish the first designated pixel from the second designated pixel, they are referred to as “designated pixels”; likewise, when it is not necessary to distinguish the first designated pixel coordinates from the second designated pixel coordinates, they are referred to as “designated pixel coordinates”.
• when the distance measuring device 10A derives the designated pixel three-dimensional coordinates based on equation (2) using the three-dimensional coordinate derivation function, it is preferable to derive the imaging position distance with high accuracy, because “B”, the imaging position distance, appears in equation (2).
  • the distance measuring device 10A is provided with an imaging position distance deriving function.
• the imaging position distance derivation function is realized by the CPU 100 executing the imaging position distance derivation program 106A and operating as the derivation unit 111A.
  • the deriving unit 111A derives the imaging position distance based on the derived focal length.
• to derive the imaging position distance, irradiation position real space coordinates are also required.
  • the irradiation position real space coordinates are three-dimensional coordinates that specify the irradiation position of the laser light in the real space, that is, the irradiation position of the laser light on the subject in the real space.
  • the derivation unit 111A derives the irradiation position real space coordinates based on the actually measured distance acquired by the acquisition unit 110A.
  • the irradiation position real space coordinates are derived from the distance L, the half angle of view ⁇ , the emission angle ⁇ , and the reference point distance M shown in FIG.
• (xLaser, yLaser, zLaser) refers to the irradiation position real space coordinates.
• in this embodiment, yLaser is 0, which means that the optical axis L1 is at the same height as the optical axis L2 in the vertical direction; when the two optical axes are vertically offset, yLaser takes a positive value for an offset in one direction and a negative value for an offset in the other.
  • the half angle of view ⁇ indicates half of the angle of view.
  • the emission angle ⁇ refers to an angle at which laser light is emitted from the emission unit 22.
  • the distance between reference points M refers to the distance between the first reference point P1 defined in the imaging device 14 and the second reference point P2 defined in the distance measuring unit 12.
  • An example of the first reference point P1 is the main point of the focus lens 50.
  • An example of the second reference point P2 is a point set in advance as the origin of coordinates that can specify the position of the three-dimensional space in the distance measuring unit 12.
• other examples of the second reference point P2 include one of the left and right ends of the objective lens 38 as viewed from the front, or, when the casing (not shown) of the distance measurement unit 12 is a rectangular parallelepiped, one corner of the casing, that is, one vertex.
• based on the actually measured distance acquired by the acquisition unit 110A, the derivation unit 111A derives irradiation position pixel coordinates that specify, in each of the first captured image and the second captured image, the position of the pixel corresponding to the irradiation position specified by the irradiation position real space coordinates.
  • the irradiation position pixel coordinates are roughly divided into first irradiation position pixel coordinates and second irradiation position pixel coordinates.
  • the first irradiation position pixel coordinates are two-dimensional coordinates that specify the position of a pixel corresponding to the irradiation position specified by the irradiation position real space coordinates in the first captured image.
  • the second irradiation position pixel coordinate is a two-dimensional coordinate that specifies the position of the pixel corresponding to the irradiation position specified by the irradiation position real space coordinates in the second captured image.
• the derivation method of the X coordinate of the first irradiation position pixel coordinates and that of the Y coordinate are the same except for the target coordinate axis: the X coordinate is derived for pixels in the row direction of the image sensor 60, whereas the Y coordinate is derived for pixels in the column direction of the image sensor 60.
  • the row direction means the front view left-right direction of the imaging surface 60B
  • the column direction means the front view vertical direction of the imaging surface 60B.
• the X coordinate of the first irradiation position pixel coordinates is derived, based on the following formulas (4) to (6), from the distance L, the half angle of view α, the emission angle β, and the reference point distance M.
• the “irradiation position row-direction pixel” refers to the pixel, among the pixels in the row direction of the image sensor 60, at the position corresponding to the irradiation position of the laser light in real space.
  • “Half the number of pixels in the row direction” refers to half of the number of pixels in the row direction in the image sensor 60.
• the derivation unit 111A substitutes the reference point distance M and the emission angle β into formula (4), substitutes the half angle of view α and the emission angle β into formula (5), and substitutes the distance L into formulas (4) and (5).
• the derivation unit 111A then derives the X coordinate that specifies the position of the “irradiation position row-direction pixel” by substituting the Δx and X thus obtained, together with “half the number of pixels in the row direction”, into formula (6).
  • the X coordinate that specifies the position of the “irradiation position row direction pixel” is the X coordinate of the first irradiation position pixel coordinate.
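• formulas (4) to (6) are not reproduced in this text; the following sketch only illustrates the geometry they plausibly encode (the spot offset M + L·tan β, the field half-width L·tan α, and a linear mapping onto the pixel row are all assumptions):

```python
import math

def irradiation_x_pixel(L, half_angle_alpha, emission_beta, M, half_row_pixels):
    """Assumed geometry: the laser spot lies at horizontal real-space
    offset M + L*tan(beta) from the optical axis, the imaging half-width
    at distance L is L*tan(alpha), and that offset is mapped linearly
    onto the row of imaging pixels."""
    x_offset = M + L * math.tan(emission_beta)   # spot offset in real space
    half_width = L * math.tan(half_angle_alpha)  # half the field width at L
    return half_row_pixels * (1.0 + x_offset / half_width)
```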
  • the derivation unit 111A derives, as the second irradiation position pixel coordinates, coordinates that specify the pixel positions corresponding to the pixel positions specified by the first irradiation position pixel coordinates among the pixels of the second captured image.
• hereinafter, when it is not necessary to distinguish the first irradiation position pixel coordinates from the second irradiation position pixel coordinates, they are referred to as “irradiation position pixel coordinates”. Two-dimensional coordinates that specify, among the pixels in a captured image, the position of the pixel corresponding to the actual irradiation position of the laser light on the subject, derived by a method similar to that for the first or second irradiation position pixel coordinates, are also referred to as “irradiation position pixel coordinates”.
• in the position-specifiable state, the derivation unit 111A selectively executes the first derivation process or the second derivation process in accordance with an instruction received via the touch panel 88.
• the position-specifiable state refers to a state in which the position of the pixel specified by the irradiation position pixel coordinates is the position of a pixel that can be specified at mutually corresponding positions in each of the first captured image and the second captured image.
• in the position-unspecifiable state, the derivation unit 111A executes the first derivation process.
• the position-unspecifiable state refers to a state in which the position of the pixel specified by the irradiation position pixel coordinates differs from the position of any pixel that can be specified at mutually corresponding positions in each of the first captured image and the second captured image.
  • the first derivation process refers to a process of deriving the imaging position distance based on a plurality of pixel coordinates, an irradiation position real space coordinate, a focal distance, and a dimension of the imaging pixel 60A1 described later.
• the plurality of pixel coordinates refers to a plurality of two-dimensional coordinates that specify three or more pixels that exist in the same planar region as the irradiation position of the laser light in real space and that can be specified at mutually corresponding positions in each of the first captured image and the second captured image.
  • the parameters used for the first derivation process are not limited to the multiple pixel coordinates, the irradiation position real space coordinates, the focal length, and the dimensions of the imaging pixel 60A1.
• a plurality of parameters obtained by further adding one or more fine-adjustment parameters to the plurality of pixel coordinates, the irradiation position real space coordinates, the focal length, and the dimension of the imaging pixel 60A1 may be used in the first derivation process.
  • the second derivation process refers to a process for deriving the imaging position distance based on the irradiation position pixel coordinates, the irradiation position real space coordinates, the focal length, and the dimensions of the imaging pixel 60A1.
  • the parameters used for the second derivation process are not limited to the irradiation position pixel coordinates, the irradiation position real space coordinates, the focal length, and the dimensions of the imaging pixel 60A1.
• likewise, a plurality of parameters obtained by further adding one or more fine-adjustment parameters to the irradiation position pixel coordinates, the irradiation position real space coordinates, the focal length, and the dimension of the imaging pixel 60A1 may be used in the second derivation process.
• when the actual irradiation position of the laser light is a position in real space corresponding to the position of a pixel that can be specified at mutually corresponding positions in each of the first captured image and the second captured image, the second derivation process can derive the imaging position distance with higher accuracy than the first derivation process.
  • the second derivation process is a process for deriving the imaging position distance based on a plurality of parameters smaller than the number of parameters used in the derivation of the imaging position distance by the first derivation process.
  • the “plurality of parameters” referred to here refers to, for example, irradiation position pixel coordinates, irradiation position real space coordinates, focal length, and dimensions of the imaging pixel 60A1.
• when executing the first derivation process, the derivation unit 111A first derives, based on the plurality of pixel coordinates, the focal length, and the dimension of the imaging pixel 60A1, the orientation of the plane defined by the plane equation indicating the plane that includes the three-dimensional coordinates in real space corresponding to the plurality of pixel coordinates. The derivation unit 111A then determines the plane equation based on the derived plane orientation and the irradiation position real space coordinates, and derives the imaging position distance based on the determined plane equation, the plurality of pixel coordinates, the focal length, and the dimension of the imaging pixel 60A1.
• here, deriving “the orientation of the plane” means deriving a, b, and c in equation (7), and determining the “plane equation” means deriving d in equation (7).
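• assuming equation (7) is the standard plane form a·x + b·y + c·z + d = 0, the two steps can be sketched as follows (the three real-space points would come from the plurality of pixel coordinates; the use of NumPy is an assumption):

```python
import numpy as np

def determine_plane(points3, irradiation_xyz):
    """Sketch of the plane-equation step: the orientation (a, b, c) is the
    normal of the plane through three real-space points, and d is fixed so
    that the plane passes through the irradiation position real space
    coordinates."""
    p0, p1, p2 = (np.asarray(p, dtype=float) for p in points3)
    a, b, c = np.cross(p1 - p0, p2 - p0)  # plane orientation (normal vector)
    d = -(a * irradiation_xyz[0] + b * irradiation_xyz[1] + c * irradiation_xyz[2])
    return a, b, c, d
```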
• in the following, the description proceeds on the premise that the imaging range 115 of the imaging device 14 of the distance measuring device 10A includes a region containing the outer wall surface 121 of an office building 120 as the subject, and that the outer wall surface 121 is the main subject and the irradiation target of the laser light.
  • the outer wall surface 121 is formed in a planar shape, and is an example of a planar region according to the technique of the present disclosure.
  • a plurality of rectangular windows 122 are provided on the outer wall surface 121.
• a laterally long rectangular pattern 124 is drawn below each window 122 on the outer wall surface 121; however, the pattern is not limited to this and may be, for example, dirt or cracks on the outer wall surface 121.
• the “planar shape” here includes not only a strictly flat surface but also shapes with slight unevenness due to, for example, windows or vents; any surface recognized as “planar” by, for example, visual observation or an existing image analysis technique may be used.
  • the distance measurement apparatus 10A will be described on the assumption that the distance to the outer wall surface 121 is measured by irradiating the outer wall surface 121 with laser light.
• in step 200, the acquisition unit 110A determines whether or not the measurement imaging button 90A has been turned on. If the measurement imaging button 90A is not turned on, the determination is negative and the process proceeds to step 202; if it is turned on, the determination is affirmative and the process proceeds to step 204.
  • step 202 the acquisition unit 110A determines whether or not a condition for ending the dimension derivation process is satisfied.
• the conditions for ending the dimension derivation process refer to, for example, a condition that the dimension derivation button 90E is turned on again, or a condition that a first predetermined time has elapsed without the determination being affirmed after the processing of step 200 started.
  • the first predetermined time refers to, for example, 1 minute.
• if the condition for ending the dimension derivation process is not satisfied in step 202, the determination is negative and the process proceeds to step 200; if it is satisfied, the determination is affirmative and the dimension derivation process ends.
• in step 204, the acquisition unit 110A causes the distance measurement unit 12 and the distance measurement control unit 68 to measure the actually measured distance and causes the imaging device 14 to perform imaging, and then proceeds to step 206.
• in step 206, the acquisition unit 110A acquires the actually measured distance measured by the distance measurement unit 12 and the distance measurement control unit 68 in step 204.
• in step 206, the acquisition unit 110A also acquires a captured image signal indicating the captured image obtained by the imaging device 14 in step 204.
• note that the captured image indicated by the captured image signal acquired in step 206 is a captured image obtained by imaging in the focused state established by the processing of step 204.
  • the acquisition unit 110A causes the display unit 86 to start displaying the captured image indicated by the acquired captured image signal, and then proceeds to step 209.
  • step 209 the deriving unit 111A derives a focal length corresponding to the actually measured distance using the focal length deriving table 109A, and then proceeds to step 210.
  • the actually measured distance used in the process of step 209 indicates the actually measured distance acquired by the acquisition unit 110A by executing the process of step 206.
  • the focal distance corresponding to the actually measured distance refers to, for example, the focal distance associated with the derivation distance that matches the actually measured distance among the derivation distances stored in the focal distance derivation table 109A.
• when none of the derivation distances in the focal length derivation table 109A matches the actually measured distance, the derivation unit 111A derives the focal length from the derivation distances by interpolation.
• examples of the interpolation method employed in this embodiment include linear interpolation and nonlinear interpolation.
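• a minimal sketch of this step-209 lookup with linear interpolation, using the table values listed above (the far-distance threshold standing in for “infinity” and the choice of linear rather than nonlinear interpolation are assumptions):

```python
import bisect

# Finite entries of the focal length derivation table 109A; the
# infinity -> 18 mm entry is handled separately below.
DISTANCES_M = [1.0, 2.0, 3.0, 5.0, 10.0, 30.0]
FOCALS_MM = [7.0, 8.0, 10.0, 12.0, 14.0, 16.0]
FOCAL_AT_INFINITY_MM = 18.0

def derive_focal_length_mm(measured_m, far_threshold_m=100.0):
    """Return the table focal length for an exact match, otherwise
    linearly interpolate between the neighboring derivation distances."""
    if measured_m >= far_threshold_m:
        return FOCAL_AT_INFINITY_MM          # treat as the infinity entry
    if measured_m <= DISTANCES_M[0]:
        return FOCALS_MM[0]
    if measured_m >= DISTANCES_M[-1]:
        return FOCALS_MM[-1]
    i = bisect.bisect_left(DISTANCES_M, measured_m)
    if DISTANCES_M[i] == measured_m:
        return FOCALS_MM[i]                  # exact derivation distance
    d0, d1 = DISTANCES_M[i - 1], DISTANCES_M[i]
    f0, f1 = FOCALS_MM[i - 1], FOCALS_MM[i]
    return f0 + (f1 - f0) * (measured_m - d0) / (d1 - d0)
```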
• in step 210, the derivation unit 111A first derives the half angle of view α from the focal length based on the following equation (8).
  • dimension of the imaging pixel refers to the dimension of the imaging pixel 60A1.
• f0 refers to the focal length; the focal length used in the processing of step 210 is the one derived in step 209.
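• equation (8) is not reproduced in this text; under the usual pinhole model suggested by these symbols, the half angle of view would be computed as follows (the arctangent form is an assumption):

```python
import math

def half_angle_of_view(pixel_dim, row_pixels, focal_f0):
    """Assumed pinhole form of equation (8): half the field of view
    subtended by half the sensor row, i.e.
    alpha = atan(pixel_dim * (row_pixels / 2) / f0)."""
    return math.atan(pixel_dim * (row_pixels / 2.0) / focal_f0)
```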
  • step 210 the deriving unit 111A derives the irradiation position pixel coordinates from the distance L, the half angle of view ⁇ , the emission angle ⁇ , and the reference point distance M based on the equations (4) to (6), Thereafter, the process proceeds to step 212.
  • the distance L used in the process of step 210 refers to the actually measured distance acquired by the acquisition unit 110A by executing the process of step 206.
  • the half angle of view ⁇ used for deriving the irradiation position pixel coordinates is the half angle of view ⁇ derived from the focal length by the deriving unit 111A based on Expression (8).
• in step 212, the derivation unit 111A causes the display unit 86 to start displaying the actually measured distance and the irradiation position mark 136 superimposed on the captured image, as shown in FIG. 17 as an example, and then proceeds to step 214.
  • the measured distance displayed by being superimposed on the captured image by executing the process of step 212 is the measured distance acquired by the acquisition unit 110A by executing the process of step 206.
  • the numerical value “1333325.0” corresponds to the actually measured distance, and the unit is millimeter.
• the irradiation position mark 136 is a mark indicating the position of the pixel specified by the irradiation position pixel coordinates derived by the derivation unit 111A in step 210.
  • step 214 the derivation unit 111A causes the display unit 86 to start displaying a frame definition guidance message (not shown) superimposed on the captured image, and then proceeds to step 216.
  • the frame definition guidance message refers to a message that guides the user to define a rectangular frame in the display area of the captured image.
  • the rectangular frame is defined in accordance with an instruction from the user via the touch panel 88.
• an example of the frame definition guidance message is a message such as “Tap four points on the screen to define a quadrangular frame that includes the irradiation position mark.”
• in step 216, the derivation unit 111A determines whether or not a quadrangular frame has been correctly defined in the display area of the captured image via the touch panel 88.
  • the correctly defined quadrangular frame refers to a quadrangular frame 117 that includes the irradiation position mark 136 in the display area of the captured image, as shown in FIG. 18 as an example.
  • the frame 117 is defined by four points 119A, 119B, 119C, and 119D in the display area of the captured image.
  • the rectangular area surrounded by the frame 117 is associated with the irradiation position pixel coordinates corresponding to the irradiation position mark 136.
• in step 216, if a quadrangular frame has not been correctly defined in the display area of the captured image via the touch panel 88, the determination is negative and the process proceeds to step 218; if a quadrangular frame has been correctly defined, the determination is affirmative and the process proceeds to step 220.
  • step 218 the derivation unit 111A determines whether or not a condition for ending the dimension derivation process is satisfied.
  • the condition for terminating the dimension derivation process is the same as the condition used in the process of step 202.
• if the condition for ending the dimension derivation process is not satisfied in step 218, the determination is negative and the process proceeds to step 216; if it is satisfied, the determination is affirmative and the process proceeds to step 248.
• in step 220, the derivation unit 111A ends the display of the frame definition guidance message on the display unit 86, and then proceeds to step 222.
• in step 222, the derivation unit 111A determines whether or not a quadrangular region exists within the defined quadrangular frame.
• in the example shown in FIG. 18, the quadrangular region is the trapezoidal region 123, which corresponds to a portion of the outer wall surface 121.
• if no quadrangular region exists within the defined quadrangular frame, the determination in step 222 is negative and the process proceeds to step 230; if a quadrangular region exists, the determination is affirmative and the process proceeds to step 224. In the example shown in FIG. 18, the trapezoidal region 123 exists within the frame 117, so the determination in step 222 is affirmative.
  • step 224 the derivation unit 111A ends the display of the measured distance and the irradiation position mark 136 on the display unit 86, and then proceeds to step 226.
  • step 226 the derivation unit 111A performs the above-described projective transformation process on the captured image based on the quadrangular region existing in the prescribed quadrangular frame.
• in the example shown in FIG. 18, the projective transformation process described above is performed on the captured image by the derivation unit 111A based on the trapezoidal region 123 existing within the frame 117.
  • the derivation unit 111A causes the display unit 86 to start displaying the post-projection conversion image 87 as shown in FIG. 19 as an example, and then proceeds to step 232.
  • the post-projection conversion image 87 is an image obtained by performing a projective conversion process on a captured image.
• the post-projection conversion image 87 includes a rectangular region 123A, which is a quadrangular region corresponding to the trapezoidal region 123.
  • step 230 the derivation unit 111A causes the display unit 86 to finish displaying the measured distance and the irradiation position mark 136, and then proceeds to step 232.
  • step 232 the derivation unit 111A starts display in which a pixel designation guidance message (not shown) is superimposed on the processing target image, and then proceeds to step 234.
  • the processing target image refers to the captured image or the post-projection converted image 87.
• when the determination in step 222 is negative, the captured image indicated by the captured image signal acquired in step 206 is used as the processing target image; when the determination in step 222 is affirmative, the post-projection conversion image 87 is used as the processing target image.
  • the pixel designation guidance message refers to a message for guiding the user to designate two points, that is, two pixels in the display area of the processing target image.
• in step 234, the derivation unit 111A determines whether or not two pixels among the pixels of the processing target image have been designated by the user via the touch panel 88. If two pixels have not been designated, the determination is negative and the process proceeds to step 236; if two pixels have been designated, the determination is affirmative and the process proceeds to step 238.
  • step 236 it is determined whether or not a condition for terminating the dimension derivation process is satisfied.
  • the condition for terminating the dimension derivation process is the same as the condition used in the process of step 202.
• if the condition for ending the dimension derivation process is not satisfied in step 236, the determination is negative and the process proceeds to step 234; if it is satisfied, the determination is affirmative and the process proceeds to step 248.
  • step 238 the derivation unit 111A ends the display of the pixel designation guidance message on the display unit 86, and then proceeds to step 242.
• in step 242, the derivation unit 111A derives, using the dimension derivation function, the length of the area in real space corresponding to the interval between the two pixels designated by the user via the touch panel 88, and then proceeds to step 244.
  • the interval between two pixels designated by the user via the touch panel 88 is an example of the interval between a plurality of pixels according to the technique of the present disclosure.
  • step 242 the length of the area in the real space corresponding to the interval between the two pixels designated by the user via the touch panel 88 is derived by Expression (1).
  • u1 and u2 in Expression (1) are addresses of two pixels designated by the user via the touch panel 88.
  • L in Expression (1) is an actual measurement distance acquired by the acquisition unit 110A by executing the process of step 206.
• f0 in equation (1) is the focal length derived by the derivation unit 111A in step 209.
• in step 244, the derivation unit 111A causes the display unit 86 to start displaying the length of the area and the bidirectional arrow 125 superimposed on the processing target image, as shown in the corresponding drawing as an example, and then proceeds to step 246.
  • the length of the area displayed on the display unit 86 by executing the process of step 244 is the length of the area derived by the deriving unit 111A by executing the process of step 242.
  • the numerical value “63” corresponds to the length of the area, and the unit is millimeter.
  • the bidirectional arrow 125 displayed on the display unit 86 by executing the process of step 244 is an arrow that specifies between two pixels designated by the user via the touch panel 88.
  • step 246 the derivation unit 111A determines whether or not a condition for ending the dimension derivation process is satisfied.
  • the condition for terminating the dimension derivation process is the same as the condition used in the process of step 202.
  • step 246 if the condition for ending the dimension derivation process is not satisfied, the determination is negative and the determination in step 246 is performed again. If it is determined in step 246 that the condition for terminating the dimension derivation process is satisfied, the determination is affirmed and the process proceeds to step 248.
  • step 248 the derivation unit 111A ends the display of the processing target image and the superimposed display information on the display unit 86, and then ends the dimension derivation process.
  • the superimposed display information refers to various types of information that are currently superimposed and displayed on the processing target image, such as the length of the area and the two-way arrow 125.
• next, the imaging position distance derivation process realized by the imaging position distance derivation function will be described; this process is executed by the CPU 100 running the imaging position distance derivation program 106A when the three-dimensional coordinate derivation button 90G is turned on.
• hereinafter, the position of the distance measuring device 10A when the distance measurement unit 12 is located at the first measurement position and the imaging device 14 is located at the first imaging position is referred to as the “first position”, and the position of the distance measuring device 10A when the distance measurement unit 12 is located at the second measurement position and the imaging device 14 is located at the second imaging position is referred to as the “second position”.
  • step 300 the acquisition unit 110A determines whether or not the measurement imaging button 90A is turned on at the first position. In step 300, if the measurement imaging button 90A is not turned on, the determination is negative and the routine proceeds to step 302. If the measurement imaging button 90A is turned on in step 300, the determination is affirmed and the routine proceeds to step 304.
• in step 302, the acquisition unit 110A determines whether or not a condition for ending the imaging position distance derivation process is satisfied.
  • the conditions for ending the imaging position distance derivation process include, for example, a condition that the 3D coordinate derivation button 90G is turned on again, a condition that an instruction to end the imaging position distance derivation process is received by the touch panel 88, and the like. .
• if the condition for ending the imaging position distance derivation process is not satisfied in step 302, the determination is negative and the process proceeds to step 300; if it is satisfied, the determination is affirmative and the imaging position distance derivation process ends.
• in step 304, the acquisition unit 110A causes the distance measurement unit 12 and the distance measurement control unit 68 to measure the actually measured distance and causes the imaging device 14 to perform imaging, and then proceeds to step 306.
  • the actual distance measured by executing the process of step 304 is referred to as a “first actual distance”.
  • step 306 the acquisition unit 110A acquires the first actually measured distance measured by the distance measurement unit 12 and the distance measurement control unit 68 by executing the process of step 304.
• in step 306, the acquisition unit 110A also acquires a first captured image signal indicating the first captured image obtained by the imaging device 14 in step 304. Note that the first captured image indicated by this signal is a first captured image obtained by imaging in the focused state established by the processing of step 304.
  • step 308 the acquisition unit 110A starts to display the first captured image indicated by the first captured image signal acquired in the process of step 306 on the display unit 86 as shown in FIG. 26 as an example. Then, the process proceeds to step 310.
  • step 310 the deriving unit 111A derives a focal length corresponding to the first actually measured distance using the focal length deriving table 109A, and then proceeds to step 312.
  • the first measured distance used in the process of step 310 refers to the first measured distance acquired by the acquisition unit 110A by executing the process of step 306.
• the focal length corresponding to the first actually measured distance refers to, for example, the focal length associated with the derivation distance that matches the first actually measured distance among the derivation distances stored in the focal length derivation table 109A.
• when none of the derivation distances matches the first actually measured distance, the derivation unit 111A derives the focal length from the derivation distances in the focal length derivation table 109A by the above-described interpolation method.
• in step 312, the derivation unit 111A first derives the half angle of view α from the focal length based on equation (8).
  • the focal length used in the processing of step 312 is a focal length derived by executing the processing of step 310.
• next in step 312, the derivation unit 111A derives the irradiation position real space coordinates from the distance L, the half angle of view α, the emission angle β, and the reference point distance M based on equation (3), and then proceeds to step 314.
  • the distance L used in the processing of step 312 indicates the first actually measured distance acquired by the acquisition unit 110A by executing the processing of step 306.
  • the half angle of view ⁇ used for deriving the irradiation position real space coordinates is the half angle of view ⁇ derived from the focal length by the deriving unit 111A based on the mathematical formula (8).
  • step 314 the deriving unit 111A derives the first irradiation position pixel coordinates from the distance L, the half angle of view ⁇ , the emission angle ⁇ , and the reference point distance M based on the equations (4) to (6). Thereafter, the process proceeds to step 316.
  • the distance L used in the process of step 314 indicates the first measured distance acquired by the acquisition unit 110A by executing the process of step 306.
• the half angle of view α used in the derivation of the first irradiation position pixel coordinates is the half angle of view α derived from the focal length by the derivation unit 111A based on equation (8) in step 312.
• in step 316, the derivation unit 111A causes the display unit 86 to start displaying the first actually measured distance and the irradiation position mark 136 superimposed on the first captured image, as shown in FIG. 27 as an example, and then proceeds to step 318.
  • the first measured distance displayed by executing the process of step 316 indicates the first measured distance acquired by the acquiring unit 110A by executing the process of step 306.
  • the numerical value “1333325.0” corresponds to the first actually measured distance, and the unit is millimeter.
  • the irradiation position mark 136 is a mark indicating the position of the pixel specified by the first irradiation position pixel coordinates derived by executing the process of step 314.
• in step 318, the derivation unit 111A determines whether or not the position of the pixel specified by the first irradiation position pixel coordinates derived in step 314 matches an identifiable pixel position.
• the identifiable pixel position refers to the position of a pixel that can be identified at mutually corresponding positions in each of the first captured image and the second captured image.
  • step 318 when the position of the pixel specified by the first irradiation position pixel coordinate derived by executing the process of step 314 matches the identifiable pixel position, the determination is affirmed and the process proceeds to step 320. Transition. In step 318, when the position of the pixel specified by the first irradiation position pixel coordinates derived by executing the process of step 314 does not match the identifiable pixel position, the determination is negative and the process proceeds to step 342. Transition.
  • step 320 the derivation unit 111A causes the display unit 86 to display a match message 137A superimposed on the first captured image for a specific time (for example, 5 seconds), as shown in FIG. 28 as an example. Thereafter, the process proceeds to step 322.
• the match message 137A is a message indicating that the position of the pixel specified by the first irradiation position pixel coordinates derived in step 314 matches the identifiable pixel position. Therefore, when the process of step 320 is executed, the user is notified that these positions match.
• in the example shown in FIG. 28, the message “Because the irradiation position of the laser light matches the characteristic position of the subject, the first derivation process or the second derivation process can be executed.” is displayed as the match message 137A; however, the technology of the present disclosure is not limited to this.
• for example, only the portion “the irradiation position of the laser light matches the characteristic position of the subject” of the match message 137A may be adopted and displayed.
• any message may be used as long as it notifies the user of the match between the position of the pixel specified by the first irradiation position pixel coordinates derived in step 314 and the identifiable pixel position.
• in the example shown in FIG. 28, the match message 137A is displayed visibly; however, an audible indication, such as audio output by an audio playback device (not shown), or a permanent visible indication, such as printed output by a printer, may be used instead of or in combination with the visible display.
  • step 322 the derivation unit 111A starts a display in which the derivation process selection screen 139 is superimposed on the first captured image, as shown in FIG. 29 as an example, and then proceeds to step 324.
  • the derivation process selection screen 139 displays two soft keys, a first derivation process start button 139A and a second derivation process start button 139B.
  • the derivation process selection screen 139 also displays a message prompting to turn on either the first derivation process start button 139A or the second derivation process start button 139B.
  • the case where the first derivation process start button 139A is turned on means a case where the user desires to execute the first derivation process.
• an example of a case where the user desires to execute the first derivation process is a case where the user has doubts about the content of the match message 137A.
• the case where the user has doubts about the content of the match message 137A refers to, for example, a case where the user judges that the actual irradiation position of the laser light and the irradiation position specified by the irradiation position real space coordinates may be misaligned, for example because the distance measurement unit 12 has been replaced or the angle of view has changed.
  • the case where the second derivation process start button 139B is turned on means a case where the user desires to execute the second derivation process.
• an example of a case where the user desires to execute the second derivation process is a case where the user has no doubt about the content of the match message 137A, that is, for example, a case where the user judges that the actual irradiation position of the laser light and the irradiation position specified by the irradiation position real space coordinates are not misaligned.
  • the second derivation process can reduce the load required to derive the imaging position distance compared to the first derivation process.
  • step 324 the derivation unit 111A determines whether or not the first derivation process start button 139A is turned on. If the first derivation process start button 139A is turned on in step 324, the determination is affirmed and the routine proceeds to step 328. If it is determined in step 324 that the first derivation process start button 139A is not turned on, the determination is negative and the routine proceeds to step 332.
• in step 328, the derivation unit 111A causes the display unit 86 to end the display of the derivation process selection screen 139 and to start displaying a pixel-of-interest designation guidance message (not shown) superimposed on the first captured image, and then proceeds to step 330.
• the pixel-of-interest designation guidance message refers to, for example, a message that guides the user to designate a pixel of interest from the first captured image via the touch panel 88.
• an example of the pixel-of-interest designation guidance message is a message such as “Please designate one pixel of interest (point of interest).”
• the pixel-of-interest designation guidance message displayed by the process of step 328 is hidden when, for example, the determination in step 330A described later is affirmative, that is, when a pixel of interest is designated.
• in step 332, the derivation unit 111A determines whether or not the second derivation process start button 139B is turned on. If the second derivation process start button 139B is turned on, the determination is affirmative and the process proceeds to step 334; if not, the determination is negative and the process proceeds to step 338.
• in step 334, the derivation unit 111A ends the display of the derivation process selection screen 139 on the display unit 86, causes the above-described pixel-of-interest designation guidance message (not shown) to be displayed superimposed on the first captured image, and then proceeds to step 336. Note that the pixel-of-interest designation guidance message displayed by the process of step 334 is hidden when, for example, a pixel of interest is designated in the process of step 336A described later.
  • step 338 the deriving unit 111A determines whether or not a condition for ending the imaging position distance deriving process is satisfied.
  • the condition for ending the imaging position distance derivation process is the same as the condition used in the process of step 302.
• if the condition for ending the imaging position distance derivation process is not satisfied in step 338, the determination is negative and the process proceeds to step 324; if it is satisfied, the determination is affirmative and the process proceeds to step 340.
  • step 340 the derivation unit 111A ends the display of the first captured image and the superimposed display information on the display unit 86, and then ends the imaging position distance derivation process.
• the superimposed display information refers to various types of information currently displayed superimposed on the first captured image, such as the first actually measured distance, the irradiation position mark 136, and the derivation process selection screen 139.
  • step 342 the derivation unit 111A causes the display unit 86 to display a mismatch message 137B superimposed on the first captured image for a specific time (for example, 5 seconds), as shown in FIG. 30 as an example. Thereafter, the process proceeds to step 330.
  • the mismatch message 137B is a message indicating that the position of the pixel specified by the first irradiation position pixel coordinates derived by executing the processing of step 314 does not match the identifiable pixel position.
• “the position of the pixel specified by the first irradiation position pixel coordinates does not match the identifiable pixel position” means, in other words, that the position of the pixel specified by the first irradiation position pixel coordinates differs from the identifiable pixel position.
• therefore, by executing the process of step 342, the user is notified that the position of the pixel specified by the first irradiation position pixel coordinates derived in step 314 does not match the identifiable pixel position.
  • a message “The first derivation process is executed because the irradiation position of the laser beam did not match the characteristic position of the subject” is displayed as the mismatch message 137B.
  • the technology of the present disclosure is not limited to this. For example, only the message “The laser beam irradiation position did not match the characteristic position of the subject” in the mismatch message 137B may be adopted and displayed.
• any message may be used as long as it notifies the user of the mismatch between the position of the pixel specified by the first irradiation position pixel coordinates derived in step 314 and the identifiable pixel position.
  • the example shown in FIG. 30 shows a case where the discrepancy message 137B is displayed visually.
  • an audible display such as an audio output by an audio reproduction device (not shown) or a permanent visual display such as an output of a printed matter by a printer. May be performed instead of visible display, or may be used in combination.
  • step 330 the CPU 100 executes the first derivation process shown in FIGS. 22 and 23 as an example, and then ends the imaging position distance derivation process.
• in step 330A, the acquisition unit 110A determines whether or not a pixel of interest has been designated from the first captured image by the user via the touch panel 88.
  • the target pixel corresponds to the first designated pixel described above.
• the touch panel 88 receives pixel designation information designating, among the two-dimensional coordinates assigned to the touch panel 88, two-dimensional coordinates corresponding to a pixel included in the first captured image. Therefore, in step 330A, it is determined that a pixel of interest has been designated when the touch panel 88 has received pixel designation information; the pixel corresponding to the two-dimensional coordinates designated by the pixel designation information becomes the pixel of interest.
• If the pixel of interest has not been designated by the user from the first captured image via the touch panel 88, the determination in step 330A is negative and the process proceeds to step 330B.
• If the pixel of interest has been designated by the user from the first captured image via the touch panel 88, the determination in step 330A is affirmative and the process proceeds to step 330D.
• In step 330B, the acquisition unit 110A determines whether or not a condition for ending the first derivation process is satisfied.
• The condition for ending the first derivation process is the same as the condition used in the process of step 302.
• If the condition for ending the first derivation process is not satisfied, the determination in step 330B is negative and the process proceeds to step 330A.
• If the condition for ending the first derivation process is satisfied, the determination in step 330B is affirmative and the process proceeds to step 330C.
• In step 330C, the acquisition unit 110A performs the same process as in step 340, and then ends the first derivation process.
• In step 330D, the acquisition unit 110A acquires the target pixel coordinates that specify the pixel designated by the user via the touch panel 88 in the first captured image, and then proceeds to step 330E.
• As one example of the pixel of interest, a target pixel 126 is given, as shown in FIG. The target pixel 126 is the pixel at the lower left corner of the image corresponding to the central window on the second floor of the outer wall surface in the first captured image.
  • the outer wall surface second floor central window refers to the window 122 at the center of the second floor of the office building 120 among the windows 122 provided on the outer wall surface 121.
  • the pixel-of-interest coordinates indicate two-dimensional coordinates that specify the pixel-of-interest 126 in the first captured image.
• In step 330E, the acquisition unit 110A acquires the three characteristic pixel coordinates that specify the positions of the characteristic three pixels in the outer wall surface image 128 (the hatched area in the example illustrated in FIG. 32) of the first captured image, and then proceeds to step 330F.
  • the outer wall surface image 128 refers to an image showing the outer wall surface 121 (see FIG. 16) in the first captured image.
  • the characteristic three pixels are pixels that can be specified at positions corresponding to each other in each of the first captured image and the second captured image.
• The characteristic three pixels in the first captured image are pixels present at each of three points that are separated from each other by a predetermined number of pixels or more and are specified, according to a predetermined rule, by image analysis based on the spatial frequency of the image corresponding to the pattern or building material in the outer wall surface image 128. For example, three pixels indicating different vertices of maximum spatial frequency within a circular area defined by a predetermined radius centered on the pixel of interest 126, and satisfying the predetermined condition, are extracted as the characteristic three pixels; one possible form of such an extraction is sketched in code after the example below.
  • the three characteristic pixel coordinates correspond to the above-described plural pixel coordinates.
  • the characteristic three pixels are the first pixel 130, the second pixel 132, and the third pixel 134.
  • the first pixel 130 is a pixel in the upper left corner of the image corresponding to the central window on the second floor of the outer wall surface in the outer wall image 128.
  • the second pixel 132 is a pixel at the upper right corner of the image corresponding to the central window on the second floor of the outer wall surface.
  • the third pixel 134 is a pixel at the lower left corner of the image corresponding to the pattern 124 close to the lower part of the central window on the third floor of the outer wall.
  • the outer wall surface third floor central window refers to the window 122 at the center of the third floor of the office building 120 among the windows 122 provided on the outer wall surface 121.
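• As an illustration only: the following Python sketch shows one way such characteristic pixels could be extracted, using local intensity variance as a stand-in for the spatial-frequency analysis; the radius, minimum separation, window size, and scoring are assumptions, not details taken from this disclosure.

```python
import numpy as np

def characteristic_three_pixels(gray: np.ndarray, u: int, v: int,
                                radius: int = 100,
                                min_separation: int = 20,
                                win: int = 3):
    """Pick three mutually separated, high-texture pixels inside a
    circular region centered on the pixel of interest (u, v).

    Local variance over a small window stands in for the spatial
    frequency criterion of the text; all numeric settings are
    illustrative assumptions.
    """
    h, w = gray.shape
    candidates = []
    for y in range(max(win, v - radius), min(h - win, v + radius)):
        for x in range(max(win, u - radius), min(w - win, u + radius)):
            if (x - u) ** 2 + (y - v) ** 2 > radius ** 2:
                continue  # stay inside the circular area
            patch = gray[y - win:y + win + 1, x - win:x + win + 1]
            candidates.append((float(patch.var()), x, y))
    candidates.sort(reverse=True)  # highest local texture first
    picked = []
    for _, x, y in candidates:
        if all((x - px) ** 2 + (y - py) ** 2 >= min_separation ** 2
               for px, py in picked):
            picked.append((x, y))
            if len(picked) == 3:
                return picked  # the characteristic three pixels
    raise RuntimeError("fewer than three characteristic pixels found")
```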
• In step 330F, the acquisition unit 110A performs the same process as in step 340, and then proceeds to step 330G illustrated in FIG.
• In step 330G, the acquisition unit 110A determines whether or not the measurement imaging button 90A has been turned on at the second position. If the measurement imaging button 90A has not been turned on, the determination in step 330G is negative and the process proceeds to step 330H; if it has been turned on, the determination is affirmative and the process proceeds to step 330I.
• In step 330H, the acquisition unit 110A determines whether or not a condition for ending the first derivation process is satisfied.
• The condition for ending the first derivation process is the same as the condition used in the process of step 302.
• If the condition for ending the first derivation process is not satisfied, the determination in step 330H is negative and the process proceeds to step 330G.
• If the condition for ending the first derivation process is satisfied, the determination in step 330H is affirmative and the first derivation process ends.
• In step 330I, the acquisition unit 110A causes the distance measurement unit 12 and the distance measurement control unit 68 to measure the actually measured distance and causes the imaging device 14 to perform imaging, and then proceeds to step 330J.
• Hereinafter, an actually measured distance measured by executing the process of step 330I or the process of step 336H (see FIG. 25) described later is referred to as a "second actually measured distance".
• In step 330J, the acquisition unit 110A acquires the second actually measured distance measured by the distance measurement unit 12 and the distance measurement control unit 68 in the process of step 330I.
• The acquisition unit 110A also acquires a second captured image signal indicating the second captured image obtained by imaging with the imaging device 14 in the process of step 330I.
• Note that the second captured image indicated by the second captured image signal acquired in the process of step 330J is a second captured image obtained by imaging in the focused state in the process of step 330I.
• In step 330K, the acquisition unit 110A causes the display unit 86 to start displaying the second captured image indicated by the second captured image signal acquired in the process of step 330J, and then proceeds to step 330L.
• In step 330L, the derivation unit 111A derives the focal length corresponding to the second actually measured distance using the focal length derivation table 109A, and then proceeds to step 330M.
• The second actually measured distance used in the process of step 330L is the second actually measured distance acquired by the acquisition unit 110A in the process of step 330J.
• The focal length corresponding to the second actually measured distance refers to, for example, the focal length associated with the derivation distance that matches the second actually measured distance among the derivation distances stored in the focal length derivation table 109A.
• Where no stored derivation distance matches the second actually measured distance, the derivation unit 111A derives the focal length from the derivation distances in the focal length derivation table 109A by the above-described interpolation method.
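• As an illustration only: a minimal Python sketch of the table lookup just described, assuming linear interpolation between bracketing derivation distances (the disclosure states only that an interpolation method is used) and using made-up table values.

```python
# Hypothetical excerpt of the focal length derivation table 109A:
# (derivation distance [mm], focal length [mm]); values are illustrative.
FOCAL_LENGTH_TABLE = [
    (1000.0, 18.2),
    (2000.0, 18.6),
    (5000.0, 19.1),
    (10000.0, 19.4),
]

def derive_focal_length(measured_distance_mm: float) -> float:
    """Return the focal length for an actually measured distance: an
    exact table match is used directly, otherwise the value is linearly
    interpolated between the two bracketing derivation distances."""
    table = sorted(FOCAL_LENGTH_TABLE)
    if measured_distance_mm <= table[0][0]:   # clamp below the table
        return table[0][1]
    if measured_distance_mm >= table[-1][0]:  # clamp above the table
        return table[-1][1]
    for (d0, f0), (d1, f1) in zip(table, table[1:]):
        if d0 <= measured_distance_mm <= d1:
            t = (measured_distance_mm - d0) / (d1 - d0)
            return f0 + t * (f1 - f0)
    raise AssertionError("unreachable: table is sorted and bracketing")
```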
• In step 330M, the acquisition unit 110A specifies the corresponding target pixel, which is the pixel corresponding to the target pixel 126 among the pixels included in the second captured image, acquires the corresponding target pixel coordinates that specify the specified corresponding target pixel, and then proceeds to step 330N.
• The corresponding target pixel coordinates refer to two-dimensional coordinates that specify the corresponding target pixel in the second captured image.
• The corresponding target pixel is specified by executing existing image analysis, such as pattern matching, with the first and second captured images as analysis targets. Note that the corresponding target pixel corresponds to the above-described second designated pixel, and once the target pixel 126 is specified from the first captured image, executing the process of this step 330M uniquely identifies the corresponding target pixel in the second captured image.
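• As an illustration only: one concrete reading of "existing image analysis such as pattern matching" is normalized cross-correlation template matching, sketched below with OpenCV; the patch size and matching method are assumptions, and the caller must keep the patch inside the first captured image.

```python
import cv2
import numpy as np

def find_corresponding_pixel(first_img: np.ndarray, second_img: np.ndarray,
                             u: int, v: int, half: int = 15):
    """Locate, in the second captured image, the pixel corresponding to
    pixel (u, v) of the first captured image by template matching."""
    # Cut a (2*half+1)-pixel square patch around the target pixel.
    patch = first_img[v - half:v + half + 1, u - half:u + half + 1]
    # Correlate the patch against the whole second image.
    scores = cv2.matchTemplate(second_img, patch, cv2.TM_CCOEFF_NORMED)
    _, _, _, (x, y) = cv2.minMaxLoc(scores)  # best-match top-left corner
    return x + half, y + half                # shift back to the center
```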
• In step 330N, the acquisition unit 110A identifies the characteristic three pixels in the outer wall surface image corresponding to the outer wall surface image 128 (see FIG. 32) in the second captured image, acquires the corresponding characteristic pixel coordinates that specify the identified characteristic three pixels, and then proceeds to step 330P.
  • the corresponding characteristic pixel coordinates indicate two-dimensional coordinates that specify the characteristic three pixels specified in the second captured image.
  • the corresponding feature pixel coordinates are also two-dimensional coordinates corresponding to the three feature pixel coordinates acquired by the processing in step 330E in the second captured image, and correspond to the above-described plurality of pixel coordinates.
• The characteristic three pixels of the second captured image are specified, in the same manner as the above-described method of specifying the corresponding target pixel, by executing existing image analysis such as pattern matching with the first and second captured images as analysis targets.
• In step 330P, the derivation unit 111A derives a, b, and c of the plane equation shown in equation (7) from the three characteristic pixel coordinates, the corresponding characteristic pixel coordinates, the focal length, and the dimensions of the imaging pixel 60A1, thereby deriving the orientation of the plane defined by the plane equation.
  • the focal length used in the processing of step 330P is the focal length derived by executing the processing of step 330L.
• Assuming that the three characteristic pixel coordinates are (u_L1, v_L1), (u_L2, v_L2), and (u_L3, v_L3), and that the corresponding characteristic pixel coordinates are (u_R1, v_R1), (u_R2, v_R2), and (u_R3, v_R3), the first to third characteristic pixel three-dimensional coordinates are defined by the following equations (9) to (11).
• The first characteristic pixel three-dimensional coordinates refer to the three-dimensional coordinates corresponding to (u_L1, v_L1) and (u_R1, v_R1); the second characteristic pixel three-dimensional coordinates refer to the three-dimensional coordinates corresponding to (u_L2, v_L2) and (u_R2, v_R2); and the third characteristic pixel three-dimensional coordinates refer to the three-dimensional coordinates corresponding to (u_L3, v_L3) and (u_R3, v_R3). In equations (9) to (11), "v_R1", "v_R2", and "v_R3" are not used.
• The derivation unit 111A derives a, b, and c of equation (7) from the three simultaneous equations obtained by substituting each of the first to third characteristic pixel three-dimensional coordinates shown in equations (9) to (11) into equation (7).
• Deriving a, b, and c of equation (7) means that the orientation of the plane defined by the plane equation shown in equation (7) is derived.
• In step 330Q, the derivation unit 111A determines the plane equation shown in equation (7) based on the irradiation position real space coordinates derived in the process of step 312, and then proceeds to step 330R. That is, in step 330Q, the derivation unit 111A fixes d of equation (7) by substituting a, b, and c derived in the process of step 330P and the irradiation position real space coordinates derived in the process of step 312 into equation (7). Since a, b, and c of equation (7) have already been derived in the process of step 330P, once d of equation (7) is determined in the process of step 330Q, the plane equation shown in equation (7) is determined.
• In step 330R, the derivation unit 111A derives the imaging position distance based on the characteristic pixel three-dimensional coordinates and the plane equation, and then proceeds to step 330S.
  • the feature pixel three-dimensional coordinates used in the processing of step 330R indicate the first feature pixel three-dimensional coordinates.
  • the feature pixel three-dimensional coordinates used in the processing of step 330R are not limited to the first feature pixel three-dimensional coordinates, and may be the second feature pixel three-dimensional coordinates or the third feature pixel three-dimensional coordinates.
  • the plane equation used in step 330R is the plane equation determined in step 330Q.
• In step 330R, the imaging position distance "B" is derived by substituting the characteristic pixel three-dimensional coordinates into the plane equation; a sketch of the whole computation of steps 330P to 330R follows.
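• As an illustration only: since the images of equations (2), (7), and (9) to (11) are not reproduced in this text, the sketch below assumes the standard rectified-stereo relations that those equations appear to follow (the 3D coordinates of a feature pixel scale linearly with the imaging position distance B, and v_R is unused). Under that assumption, steps 330P to 330R become: plane orientation from a cross product, d from the irradiation position real space coordinates, and B by substitution.

```python
import numpy as np

PIXEL_PITCH = 0.006  # dimension of imaging pixel 60A1 [mm]; illustrative

def direction_vector(uL, vL, uR, f):
    """Per-unit-B 3D coordinates of a feature pixel pair, under the
    assumed rectified-stereo model (true point = B * this vector)."""
    disparity = uL - uR
    return np.array([uL / disparity,
                     vL / disparity,
                     f / (PIXEL_PITCH * disparity)])

def imaging_position_distance(pairs, f, irradiation_xyz):
    """Sketch of steps 330P-330R: derive the plane orientation (a, b, c),
    fix d from the irradiation position real space coordinates, then
    solve the plane equation a*x + b*y + c*z + d = 0 for B."""
    g1, g2, g3 = (direction_vector(uL, vL, uR, f) for uL, vL, uR in pairs)
    normal = np.cross(g2 - g1, g3 - g1)          # step 330P: (a, b, c)
    d = -float(np.dot(normal, irradiation_xyz))  # step 330Q: fix d
    return -d / float(np.dot(normal, g1))        # step 330R: solve for B
```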
• In step 330S, the derivation unit 111A causes the display unit 86 to start displaying the imaging position distance derived in the process of step 330R superimposed on the second captured image, as shown in FIG. 33 as an example.
• In step 330S, the derivation unit 111A also stores the imaging position distance derived in the process of step 330R in a predetermined storage area, and then proceeds to step 330T.
• Examples of the predetermined storage area include a storage area of the primary storage unit 102 and a storage area of the secondary storage unit 104.
• In the example shown in FIG. 33, the numerical value "144656.1" corresponds to the imaging position distance derived in the process of step 330R, and the unit is millimeters.
• In step 330T, the derivation unit 111A determines whether or not a condition for ending the first derivation process is satisfied.
• The condition for ending the first derivation process is the same as the condition used in the process of step 302.
• If the condition for ending the first derivation process is not satisfied, the determination in step 330T is negative and the determination of step 330T is performed again. If the condition for ending the first derivation process is satisfied, the determination in step 330T is affirmative and the process proceeds to step 330U.
• In step 330U, the derivation unit 111A ends the display of the second captured image and the superimposed display information on the display unit 86, and then ends the first derivation process.
  • the superimposed display information refers to various types of information that are currently displayed superimposed on the second captured image, such as the second actually measured distance and the imaging position distance.
• In step 336 shown in FIG. 21, the CPU 100 executes the second derivation process shown in FIG. 24 as an example, and then ends the imaging position distance derivation process.
• In step 336A, the acquisition unit 110A determines whether or not a pixel of interest has been designated by the user from the first captured image via the touch panel 88.
  • the target pixel corresponds to the first designated pixel described above.
  • the touch panel 88 receives pixel designation information for designating two-dimensional coordinates corresponding to pixels included in the first captured image among the two-dimensional coordinates assigned to the touch panel 88. Therefore, in step 336A, it is determined that the pixel of interest has been designated when pixel designation information is received by the touch panel 88. That is, the pixel corresponding to the two-dimensional coordinate designated by the pixel designation information is set as the target pixel.
• If the pixel of interest has not been designated by the user from the first captured image via the touch panel 88, the determination in step 336A is negative and the process proceeds to step 336B.
• If the pixel of interest has been designated by the user from the first captured image via the touch panel 88, the determination in step 336A is affirmative and the process proceeds to step 336D.
• In step 336B, the acquisition unit 110A determines whether or not a condition for ending the second derivation process is satisfied.
• The condition for ending the second derivation process is the same as the condition used in the process of step 302.
• If the condition for ending the second derivation process is not satisfied, the determination in step 336B is negative and the process proceeds to step 336A.
• If the condition for ending the second derivation process is satisfied, the determination in step 336B is affirmative and the process proceeds to step 336C.
• In step 336C, the acquisition unit 110A performs the same process as in step 340, and then ends the second derivation process.
• In step 336D, the acquisition unit 110A acquires the target pixel coordinates that specify the pixel designated by the user via the touch panel 88 in the first captured image, and then proceeds to step 336E.
• As an example of the pixel of interest, the above-described target pixel 126 may be cited, and the target pixel coordinates indicate the two-dimensional coordinates that specify the target pixel 126 in the first captured image.
• In step 336E, the acquisition unit 110A performs the same process as in step 340, and then proceeds to step 336F illustrated in FIG.
• In step 336F, the acquisition unit 110A determines whether or not the measurement imaging button 90A has been turned on at the second position. If the measurement imaging button 90A has not been turned on, the determination in step 336F is negative and the process proceeds to step 336G; if it has been turned on, the determination is affirmative and the process proceeds to step 336H.
• In step 336G, the acquisition unit 110A determines whether or not a condition for ending the second derivation process is satisfied.
• The condition for ending the second derivation process is the same as the condition used in the process of step 302.
• If the condition for ending the second derivation process is not satisfied, the determination in step 336G is negative and the process proceeds to step 336F.
• If the condition for ending the second derivation process is satisfied, the determination in step 336G is affirmative and the second derivation process ends.
• In step 336H, the acquisition unit 110A causes the distance measurement unit 12 and the distance measurement control unit 68 to measure the second actually measured distance and causes the imaging device 14 to perform imaging, and then proceeds to step 336I.
• In step 336I, the acquisition unit 110A acquires the second actually measured distance measured by the distance measurement unit 12 and the distance measurement control unit 68 in the process of step 336H.
• The acquisition unit 110A also acquires a second captured image signal indicating the second captured image obtained by imaging with the imaging device 14 in the process of step 336H. Note that the second captured image indicated by the second captured image signal acquired in the process of step 336I is a second captured image obtained by imaging in the focused state in the process of step 336H.
• In step 336J, the acquisition unit 110A causes the display unit 86 to start displaying the second captured image indicated by the second captured image signal acquired in the process of step 336I, and then proceeds to step 336K.
• In step 336K, the derivation unit 111A derives the focal length corresponding to the second actually measured distance using the focal length derivation table 109A, and then proceeds to step 336L.
• The second actually measured distance used in the process of step 336K is the second actually measured distance acquired by the acquisition unit 110A in the process of step 336I.
• In step 336K, a derivation method similar to the focal length derivation method in the process of step 330L described above is employed.
• In step 336L, the acquisition unit 110A specifies the corresponding target pixel, which is the pixel corresponding to the target pixel 126 among the pixels included in the second captured image, acquires the corresponding target pixel coordinates that specify the specified corresponding target pixel, and then proceeds to step 336M.
• In step 336L, an acquisition method similar to the acquisition method for the corresponding target pixel coordinates in the process of step 330M described above is employed.
• In step 336M, the derivation unit 111A derives the second irradiation position pixel coordinates, and then proceeds to step 336N. That is, in step 336M, the derivation unit 111A derives, as the second irradiation position pixel coordinates, the two-dimensional coordinates that specify the pixel of the second captured image whose position corresponds to the position of the pixel specified by the first irradiation position pixel coordinates derived in the process of step 314.
• The pixel corresponding to the position of the pixel specified by the first irradiation position pixel coordinates is specified, as in the above-described method of specifying the corresponding target pixel, by executing existing image analysis such as pattern matching with the first and second captured images as analysis targets.
• In step 336N, the derivation unit 111A derives the imaging position distance based on the irradiation position real space coordinates, the irradiation position pixel coordinates, the focal length, the dimensions of the imaging pixel 60A1, and equation (2), and then proceeds to step 336P.
  • the irradiation position real space coordinates used in the process of step 336N indicate the irradiation position real space coordinates derived in the process of step 312.
  • the irradiation position pixel coordinates used in the process of step 336N indicate the first irradiation position pixel coordinates derived in the process of step 314 and the second irradiation position pixel coordinates derived in the process of step 336M.
  • the focal length used in the processing of step 336N indicates the focal length derived by the processing of step 336K.
• The imaging position distance "B" is derived by substituting the irradiation position real space coordinates, the irradiation position pixel coordinates, the focal length, and the dimensions of the imaging pixel 60A1 into equation (2); a sketch follows.
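• As an illustration only: equation (2) itself is not reproduced here, so the sketch below assumes the same rectified-stereo relation as above, Z = B·f/(p·(u_L − u_R)), solved for B using the depth of the irradiation position real space coordinates; the variable names are hypothetical.

```python
def imaging_position_distance_from_irradiation(irradiation_xyz,
                                               uL, uR, f,
                                               pixel_pitch=0.006):
    """Solve the assumed stereo relation for the imaging position
    distance B, given the depth Z of the irradiation position and the
    first and second irradiation position pixel coordinates."""
    z = irradiation_xyz[2]               # depth of the laser spot [mm]
    disparity = uL - uR                  # horizontal pixel disparity
    return z * pixel_pitch * disparity / f
```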
• In step 336P, the derivation unit 111A causes the display unit 86 to start displaying the imaging position distance derived in the process of step 336N superimposed on the second captured image, as illustrated in FIG. 33 as an example.
• In step 336P, the derivation unit 111A also stores the imaging position distance derived in the process of step 336N in the predetermined storage area, and then proceeds to step 336Q.
• In step 336Q, the derivation unit 111A determines whether or not a condition for ending the second derivation process is satisfied.
• The condition for ending the second derivation process is the same as the condition used in the process of step 302.
• If the condition for ending the second derivation process is not satisfied, the determination in step 336Q is negative and the determination of step 336Q is performed again. If the condition for ending the second derivation process is satisfied, the determination in step 336Q is affirmative and the process proceeds to step 336R.
• In step 336R, the derivation unit 111A ends the display of the second captured image and the superimposed display information on the display unit 86, and then ends the second derivation process.
• Here, the superimposed display information refers to the various types of information currently displayed superimposed on the second captured image, such as the second actually measured distance and the imaging position distance.
• The load required for deriving the imaging position distance is smaller in the second derivation process than in the first derivation process. Moreover, when the irradiation position of the laser beam coincides with an identifiable pixel position, the derivation accuracy of the imaging position distance by the second derivation process becomes higher than the derivation accuracy of the imaging position distance by the first derivation process.
• In step 350, the derivation unit 111A determines whether or not the imaging position distance has already been derived by the process of step 330R included in the first derivation process or the process of step 336N included in the second derivation process. If the imaging position distance has not been derived by either of these processes, the determination in step 350 is negative and the process proceeds to step 358. If the imaging position distance has already been derived by one of these processes, the determination in step 350 is affirmative and the process proceeds to step 352.
• In step 352, the derivation unit 111A determines whether or not a condition for starting derivation of the designated pixel three-dimensional coordinates (hereinafter referred to as the "derivation start condition") is satisfied.
• Examples of the derivation start condition include a condition that an instruction to start derivation of the designated pixel three-dimensional coordinates has been received by the touch panel 88, and a condition that the imaging position distance is displayed on the display unit 86.
• If the derivation start condition is not satisfied, the determination in step 352 is negative and the process proceeds to step 358. If the derivation start condition is satisfied, the determination in step 352 is affirmative and the process proceeds to step 354.
• In step 354, the derivation unit 111A derives the designated pixel three-dimensional coordinates based on the target pixel coordinates, the corresponding target pixel coordinates, the imaging position distance, the focal length, the dimensions of the imaging pixel 60A1, and equation (2), and then proceeds to step 356.
  • the pixel-of-interest coordinates used in the process of step 354 indicate the pixel-of-interest coordinates acquired by the process of step 330D included in the first derivation process or the process of step 336D included in the second derivation process.
  • the corresponding target pixel coordinates used in the process of step 354 are the corresponding target pixel coordinates acquired in the process of step 330M included in the first derivation process or the process of step 336L included in the second derivation process.
  • the imaging position distance used in the process of step 354 indicates the imaging position distance derived by the process of step 330R included in the first derivation process or the process of step 336N included in the second derivation process.
• The focal length used in the process of step 354 is the focal length derived by the process of step 330L included in the first derivation process or the process of step 336K included in the second derivation process.
• In step 354, the designated pixel three-dimensional coordinates are derived by substituting the target pixel coordinates, the corresponding target pixel coordinates, the imaging position distance, the focal length, and the dimensions of the imaging pixel 60A1 into equation (2); a sketch follows.
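• As an illustration only: under the same assumed rectified-stereo form of equation (2), step 354 amounts to the triangulation sketched below; the exact formula in the disclosure may differ.

```python
import numpy as np

def designated_pixel_3d(uL, vL, uR, B, f, pixel_pitch=0.006):
    """Triangulate the designated pixel three-dimensional coordinates
    from the target pixel coordinates (uL, vL), the corresponding target
    pixel coordinate uR, the imaging position distance B, the focal
    length f, and the imaging pixel dimension (assumed model)."""
    disparity = uL - uR
    return np.array([B * uL / disparity,                  # X
                     B * vL / disparity,                  # Y
                     B * f / (pixel_pitch * disparity)])  # Z
```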
• In step 356, the derivation unit 111A causes the display unit 86 to display the designated pixel three-dimensional coordinates derived in the process of step 354 superimposed on the second captured image, as shown in FIG. 35 as an example.
• In step 356, the derivation unit 111A also stores the designated pixel three-dimensional coordinates derived in the process of step 354 in a predetermined storage area, and then proceeds to step 358.
• Examples of the predetermined storage area include a storage area of the primary storage unit 102 and a storage area of the secondary storage unit 104.
• In the example shown in FIG. 35, (20161, 50134, 136892) corresponds to the designated pixel three-dimensional coordinates derived in the process of step 354.
• In the example shown in FIG. 35, the designated pixel three-dimensional coordinates are displayed close to the target pixel 126. Note that the target pixel 126 may be highlighted so as to be distinguishable from the other pixels.
• In step 358, the derivation unit 111A determines whether or not a condition for ending the three-dimensional coordinate derivation process is satisfied.
• An example of the condition for ending the three-dimensional coordinate derivation process is a condition that an instruction to end the three-dimensional coordinate derivation process has been received via the touch panel 88.
  • Another example of the condition for ending the three-dimensional coordinate derivation process is a condition that the second predetermined time has passed without the determination being affirmed in step 350 after the determination in step 350 is denied.
  • the second predetermined time refers to, for example, 30 minutes.
• If the condition for ending the three-dimensional coordinate derivation process is not satisfied, the determination in step 358 is negative and the process proceeds to step 350. If the condition for ending the three-dimensional coordinate derivation process is satisfied, the determination in step 358 is affirmative and the three-dimensional coordinate derivation process ends.
• As described above, in the distance measuring device 10A, the derivation unit 111A derives the focal length corresponding to the actually measured distance acquired by the acquisition unit 110A, using the focal length derivation table 109A indicating the correspondence between the actually measured distance and the focal length.
• Therefore, according to the distance measuring device 10A, the focal length can be derived with high accuracy and without time and effort, compared to the case where the user inputs the length of a reference image included in the captured image.
  • the acquisition unit 110A acquires the first captured image, the second captured image, and the second measured distance (steps 330J and 336I).
• The derivation unit 111A uses the focal length derivation table 109A to derive the focal length corresponding to the second actually measured distance acquired by the acquisition unit 110A (steps 330L and 336K), and then derives the imaging position distance based on the derived focal length (steps 330R and 336N).
• Therefore, according to the distance measuring device 10A, the imaging position distance can be derived with high accuracy and without time and effort, compared to the case where the user inputs the length of a reference image included in the captured image.
  • the actual measurement distance is acquired by the acquisition unit 110A (step 206).
  • the focal length derivation table 109A is used by the deriving unit 111A to derive the focal length corresponding to the actually measured distance acquired by the acquiring unit 110A (step 209).
• The length of the area in real space corresponding to the interval between the two designated pixels is then derived (step 242).
• Therefore, according to the distance measuring device 10A, the length of the area in real space corresponding to the interval between the two designated pixels can be derived with high accuracy and without time and effort, compared to the case where the user inputs the length of a reference image included in the captured image.
  • the first derivation process and the second derivation process are selectively executed.
• The first derivation process derives the imaging position distance with higher accuracy than the second derivation process when the actual irradiation position of the laser beam is a position in real space corresponding to a pixel different from the pixels that can be specified at mutually corresponding positions in the first captured image and the second captured image.
• Conversely, the second derivation process derives the imaging position distance with higher accuracy than the first derivation process when the actual irradiation position of the laser beam is a position in real space corresponding to a pixel that can be specified at mutually corresponding positions in the first captured image and the second captured image.
• The first derivation process and the second derivation process are selectively executed, according to an instruction received by the touch panel 88, when the position of the pixel specified by the irradiation position pixel coordinates derived by the derivation unit 111A is an identifiable pixel position. Therefore, according to the distance measuring device 10A, the imaging position distance can be derived with higher accuracy than when it is derived by only one type of derivation process regardless of the irradiation position of the laser beam.
  • the first derivation process is executed when the pixel position specified by the irradiation position pixel coordinates derived by the derivation unit 111A is a pixel position different from the identifiable pixel position. Therefore, according to the distance measuring device 10A, when the pixel position specified by the irradiation position pixel coordinates is a pixel position different from the identifiable pixel position, the imaging position distance is derived by a derivation process different from the first derivation process. Compared to the case, the imaging position distance can be derived with high accuracy.
• Further, the second derivation process derives the imaging position distance based on a number of parameters smaller than the number of parameters used in deriving the imaging position distance by the first derivation process. Therefore, according to the distance measuring device 10A, the imaging position distance can be derived with a lower load than when it is derived only by the first derivation process regardless of the irradiation position of the laser beam. A sketch of the overall selection logic follows.
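• As an illustration only: the selection between the two derivation processes can be summarized by the sketch below; the exact matching test and the handling of the user instruction are assumptions.

```python
def select_derivation_process(irradiation_pixel, identifiable_pixels,
                              user_choice=None):
    """If the derived irradiation position pixel falls on an
    identifiable pixel position, either process may be chosen via the
    user's instruction (derivation process selection screen 139);
    otherwise the mismatch message 137B is shown and the first
    derivation process is used."""
    if irradiation_pixel in identifiable_pixels:
        # Second derivation: fewer parameters, lower load, and higher
        # accuracy in this situation than the first derivation process.
        return user_choice or "second"
    return "first"  # after displaying the mismatch message 137B
```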
• In the distance measuring device 10A, the first derivation process and the second derivation process can be selected after the user has recognized that the position of the pixel specified by the first irradiation position pixel coordinates derived by the derivation unit 111A is an identifiable pixel position.
• In the distance measuring device 10A, the mismatch message 137B is displayed on the display unit 86 when the pixel position specified by the first irradiation position pixel coordinates derived by the derivation unit 111A is different from any identifiable pixel position. Therefore, according to the distance measuring device 10A, the first derivation process and the second derivation process can be selected while the user is made aware that the pixel position specified by the first irradiation position pixel coordinates is different from any identifiable pixel position.
  • the designated pixel three-dimensional coordinates are derived based on the imaging position distance derived by the imaging position distance deriving process (see FIG. 34). Therefore, according to the distance measuring apparatus 10A, the designated pixel three-dimensional coordinates can be derived with higher accuracy than when the imaging position distance is derived by only one type of derivation process regardless of the irradiation position of the laser beam. .
• In the distance measuring device 10A, the designated pixel three-dimensional coordinates are defined based on the target pixel coordinates, the corresponding target pixel coordinates, the imaging position distance, the focal length, and the dimensions of the imaging pixel 60A1 (see equation (2)). Therefore, according to the distance measuring device 10A, the designated pixel three-dimensional coordinates can be derived with higher accuracy than when they are not defined based on the target pixel coordinates, the corresponding target pixel coordinates, the imaging position distance, the focal length, and the dimensions of the imaging pixel 60A1.
• In the distance measuring device 10A, the derivation unit 111A derives the orientation of the plane defined by the plane equation shown in equation (7), based on the three characteristic pixel coordinates, the corresponding characteristic pixel coordinates, the focal length, and the dimensions of the imaging pixel 60A1 (step 330P).
• The plane equation shown in equation (7) is then determined by the derivation unit 111A based on the orientation of the plane and the irradiation position real space coordinates derived in step 312 (step 330Q).
• The imaging position distance is derived by the derivation unit 111A based on the determined plane equation and the characteristic pixel three-dimensional coordinates (for example, the first characteristic pixel three-dimensional coordinates) (step 330R).
• Therefore, according to the distance measuring device 10A, the imaging position distance can be derived with higher accuracy than when it is derived without using the plane equation.
  • the acquisition unit 110A acquires three feature pixel coordinates (step 330E), and the acquisition unit 110A acquires corresponding feature pixel coordinates (step 330N).
• The derivation unit 111A derives the imaging position distance based on the three characteristic pixel coordinates, the corresponding characteristic pixel coordinates, the irradiation position real space coordinates, the focal length, and the dimensions of the imaging pixel 60A1 (step 330R). Therefore, according to the distance measuring device 10A, the imaging position distance can be derived based on the three characteristic pixel coordinates and the corresponding characteristic pixel coordinates with fewer user operations than when the user designates the characteristic pixels each time the three characteristic pixel coordinates and the corresponding characteristic pixel coordinates are acquired.
• In the distance measuring device 10A, pixel designation information is received by the touch panel 88, the pixel designated by the received pixel designation information is set as the target pixel 126, and the target pixel coordinates are acquired by the acquisition unit 110A (steps 330D and 336D).
• The acquisition unit 110A also specifies the corresponding target pixel, which is the pixel corresponding to the target pixel 126, and acquires the corresponding target pixel coordinates that specify the specified corresponding target pixel (steps 330M and 336L). Therefore, according to the distance measuring device 10A, the designated pixels for both the first captured image and the second captured image can be determined more quickly than when the user specifies the designated pixels in both images.
• The distance measuring device 10A includes the distance measurement unit 12 and the distance measurement control unit 68, and the first actually measured distance and the second actually measured distance measured by the distance measurement unit 12 and the distance measurement control unit 68 are acquired by the acquisition unit 110A. Therefore, according to the distance measuring device 10A, the first actually measured distance and the second actually measured distance can be used for deriving the irradiation position real space coordinates and the irradiation position pixel coordinates.
  • the distance measuring device 10A includes the imaging device 14, and the acquisition unit 110A acquires the first captured image and the second captured image obtained by imaging the subject by the imaging device 14. Therefore, according to the distance measuring device 10A, the first captured image and the second captured image obtained by imaging the subject by the imaging device 14 can be used for deriving the imaging position distance.
  • the derivation result by the derivation unit 111A is displayed by the display unit 86 (see FIGS. 33 and 35). Therefore, according to the distance measuring device 10A, the derivation result obtained by the derivation unit 111A can be easily recognized by the user as compared with the case where the derivation result obtained by the derivation unit 111A is not displayed by the display unit 86.
• In the first embodiment, the case where the focal length is derived using the focal length derivation table 109A has been exemplified, but the technology of the present disclosure is not limited to this; for example, the following equation (12) may be used to derive the focal length.
• In equation (12), "f_0" is the focal length and "D" is the actually measured distance. "f_zoom" is a nominal focal length predetermined according to the position of the zoom lens 52 in the optical axis direction in the lens unit 16, and is a fixed value; the position of the zoom lens 52 in the optical axis direction is defined by the distance from the imaging surface 60B toward the subject side, in millimeters.
• In the first embodiment, three characteristic pixel coordinates have been exemplified, but the technology of the present disclosure is not limited to this; two-dimensional coordinates that specify each of a predetermined number of characteristic pixels, the number being four or more, may be adopted instead.
• In the first embodiment, the case where the target pixel coordinates are acquired from coordinates on the first captured image and the corresponding target pixel coordinates are acquired from coordinates on the second captured image has been exemplified, but the technology of the present disclosure is not limited to this; the target pixel coordinates may be acquired from coordinates on the second captured image, and the corresponding target pixel coordinates may be acquired from coordinates on the first captured image.
• Likewise, the case where the three characteristic pixel coordinates are acquired from coordinates on the first captured image and the corresponding characteristic pixel coordinates are acquired from coordinates on the second captured image has been illustrated, but the technology of the present disclosure is not limited to this; the three characteristic pixel coordinates may be acquired from coordinates on the second captured image, and the corresponding characteristic pixel coordinates may be acquired from coordinates on the first captured image.
• The manner of selecting the characteristic three pixels is also not limited to the example described above.
• For example, two-dimensional coordinates that specify each of a first pixel 130A, a second pixel 132A, and a third pixel 134A may be acquired by the acquisition unit 110A.
• The first pixel 130A, the second pixel 132A, and the third pixel 134A are the three pixels for which the enclosed area within the outer wall surface image 128 is maximal.
• The number of pixels is not limited to three; any predetermined number of pixels, three or more, that maximizes the enclosed area within the outer wall surface image 128 may be used.
• In this case, the three pixels enclosing the maximum area within the outer wall surface image 128 are specified as the characteristic three pixels, the two-dimensional coordinates of the specified three pixels are acquired by the acquisition unit 110A as the three characteristic pixel coordinates, and the corresponding characteristic pixel coordinates corresponding to the three characteristic pixel coordinates are also acquired by the acquisition unit 110A. Therefore, according to the distance measuring device 10A, the imaging position distance can be derived with higher accuracy than when three characteristic pixel coordinates and corresponding characteristic pixel coordinates specifying pixels whose enclosed area is not maximal are acquired as the characteristic three pixels. A sketch of the maximum-area selection follows.
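• As an illustration only: a brute-force Python sketch of the maximum-enclosed-area selection, using the triangle area as the enclosed area for three pixels; candidate generation is left to the caller.

```python
from itertools import combinations

def max_area_three_pixels(candidate_pixels):
    """Return the three candidate pixels whose triangle encloses the
    maximum area (the selection rule described for the first pixel 130A,
    the second pixel 132A, and the third pixel 134A)."""
    def area(p, q, r):
        # Half the absolute 2D cross product of two edge vectors.
        return abs((q[0] - p[0]) * (r[1] - p[1])
                   - (q[1] - p[1]) * (r[0] - p[0])) / 2.0
    return max(combinations(candidate_pixels, 3), key=lambda t: area(*t))
```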
• In the first embodiment, the case where the imaging position distance derivation process is executed when the three-dimensional coordinate derivation button 90G is turned on has been described, but the technology of the present disclosure is not limited to this; the imaging position distance derivation process may be executed when the imaging position distance derivation button 90F is turned on.
• The imaging position distance derivation process described in the first embodiment is an example for the case where the ultimate purpose is to derive three-dimensional coordinates.
• That is, the target pixel coordinates and the corresponding target pixel coordinates required for deriving the three-dimensional coordinates are acquired within the imaging position distance derivation process, but when only the imaging position distance is to be derived, acquisition of the target pixel coordinates and the corresponding target pixel coordinates in the imaging position distance derivation process is unnecessary. Therefore, when the imaging position distance derivation button 90F is turned on, the CPU 100 may derive the imaging position distance without acquiring the target pixel coordinates and the corresponding target pixel coordinates, and may then acquire the target pixel coordinates and the corresponding target pixel coordinates when the three-dimensional coordinate derivation button 90G is turned on.
• In this case, the CPU 100 may acquire the target pixel coordinates and the corresponding target pixel coordinates between the process of step 352 and the process of step 354 of the three-dimensional coordinate derivation process illustrated in FIG. 34, and the target pixel coordinates and the corresponding target pixel coordinates may then be used in the process of step 354.
• Further, when the determination in step 318 is affirmative, the derivation unit 111A may forcibly execute the second derivation process, and when the determination in step 318 is negative, the derivation unit 111A may forcibly execute the first derivation process.
• Next, a second embodiment will be described. As shown in FIG. 6 as an example, the distance measuring device 10B differs from the distance measuring device 10A in that the secondary storage unit 104 stores a dimension derivation program 105B in place of the dimension derivation program 105A, an imaging position distance derivation program 106B in place of the imaging position distance derivation program 106A, and a focal length derivation table 109B in place of the focal length derivation table 109A.
• The CPU 100 operates as an acquisition unit 110B and a derivation unit 111B, as illustrated in FIG. 8 as an example, by executing at least one of the dimension derivation program 105B and the imaging position distance derivation program 106B.
• The acquisition unit 110B corresponds to the acquisition unit 110A described in the first embodiment, and the derivation unit 111B corresponds to the derivation unit 111A described in the first embodiment.
• In the following, the acquisition unit 110B and the derivation unit 111B are described only with respect to their differences from the acquisition unit 110A and the derivation unit 111A of the first embodiment.
  • the focal length deriving table 109B is a table showing the correspondence between the deriving distance, the position of the zoom lens 52 in the optical axis direction, and the focal length.
  • the optical axis direction of the zoom lens 52 indicates, for example, the direction of the optical axis L2.
  • the position of the zoom lens 52 in the optical axis direction is defined by the distance from the imaging surface 60B to the subject side, and the unit is millimeters.
  • the “position of the zoom lens 52 in the optical axis direction” stored in the focal length derivation table 109B is simply referred to as “derivation position”.
• A plurality of derivation distances are defined in the focal length derivation table 109B, and a focal length is associated with each of a plurality of derivation positions for each derivation distance.
• For example, a different focal length is associated with each of the derivation positions of 18 millimeters, 23 millimeters, 35 millimeters, and 55 millimeters for the derivation distance of 1 meter.
• Likewise, for each of the other derivation distances, a different focal length is associated with each derivation position.
  • the focal length derivation table 109B is a table derived from at least one result of, for example, an actual test of the distance measuring device 10B and a computer simulation based on the design specifications of the distance measuring device 10B.
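• As an illustration only: the two-key lookup of the focal length derivation table 109B, sketched with made-up focal lengths and bilinear interpolation (the disclosure states only that the interpolation method described earlier is applied); the derivation positions 18, 23, 35, and 55 millimeters are taken from the example above.

```python
import numpy as np

DISTANCES = np.array([1000.0, 2000.0, 5000.0])  # derivation distances [mm]
POSITIONS = np.array([18.0, 23.0, 35.0, 55.0])  # derivation positions [mm]
FOCAL = np.array([                              # focal lengths [mm];
    [18.3, 23.4, 35.6, 55.9],                   # values are illustrative
    [18.2, 23.3, 35.4, 55.6],
    [18.1, 23.2, 35.3, 55.4],
])

def derive_focal_length_109b(distance_mm, zoom_pos_mm):
    """Bilinearly interpolate the focal length over the derivation
    distance axis and the derivation position axis of table 109B."""
    i = int(np.clip(np.searchsorted(DISTANCES, distance_mm) - 1,
                    0, len(DISTANCES) - 2))
    j = int(np.clip(np.searchsorted(POSITIONS, zoom_pos_mm) - 1,
                    0, len(POSITIONS) - 2))
    td = np.clip((distance_mm - DISTANCES[i])
                 / (DISTANCES[i + 1] - DISTANCES[i]), 0.0, 1.0)
    tp = np.clip((zoom_pos_mm - POSITIONS[j])
                 / (POSITIONS[j + 1] - POSITIONS[j]), 0.0, 1.0)
    top = FOCAL[i, j] * (1 - tp) + FOCAL[i, j + 1] * tp
    bot = FOCAL[i + 1, j] * (1 - tp) + FOCAL[i + 1, j + 1] * tp
    return float(top * (1 - td) + bot * td)
```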
• Next, the dimension derivation process realized by the CPU 100 executing the dimension derivation program 105B will be described with reference to FIG. 15 and FIG. 38. Steps identical to those in the dimension derivation process described in the first embodiment are denoted by the same step numbers, and their description is omitted.
• The dimension derivation process according to the second embodiment (see FIG. 38) differs from the dimension derivation process described in the first embodiment (see FIG. 14) in that it has the process of step 370 in place of the process of step 206, and the process of step 372 in place of the process of step 209.
• In step 370, the acquisition unit 110B acquires the actually measured distance and the position information. Here, the position information refers to information indicating the position of the zoom lens 52 in the optical axis direction (the direction of the optical axis L2) in the lens unit 16.
• In the second embodiment, information indicating the current position of the zoom lens 52 in the optical axis direction in the lens unit 16 is used as an example of the position information, but the technology of the present disclosure is not limited to this; for example, information indicating the position of the zoom lens 52 in the optical axis direction at the imaging timing several frames (for example, two frames) earlier can also be used as the position information.
• The process of step 372 shown in FIG. 38 differs from the process of step 209 shown in FIG. 14 in that the derivation unit 111B derives the focal length using the focal length derivation table 109B instead of the focal length derivation table 109A.
• That is, in step 372, the derivation unit 111B derives the focal length corresponding to the actually measured distance and the position information using the focal length derivation table 109B.
  • the actually measured distance used in the process of step 372 is the actually measured distance acquired by the acquisition unit 110B by executing the process of step 370.
  • the position information used in the process of step 372 is the position information acquired by the acquisition unit 110B by executing the process of step 370.
• The focal length corresponding to the position information refers to the focal length associated with the derivation position that matches the position of the zoom lens 52 in the optical axis direction indicated by the position information, among the plurality of derivation positions included in the focal length derivation table 109B.
• When there is no derivation position that matches the position indicated by the position information, the derivation unit 111B derives the focal length from the derivation positions of the focal length derivation table 109B by the above-described interpolation method.
• Next, the imaging position distance derivation process realized by the CPU 100 executing the imaging position distance derivation program 106B will be described with reference to FIG. 21, FIG. 22, FIG. 24, FIG. 39, and FIG. 40. Steps identical to those included in the imaging position distance derivation process described in the first embodiment (see FIGS. 21 to 25) are denoted by the same step numbers, and their description is omitted.
  • the imaging position distance deriving process according to the second embodiment is different from the imaging position distance deriving process described in the first embodiment in that it includes the process of step 380 instead of the process of step 330J. Further, the imaging position distance deriving process according to the second embodiment is different from the imaging position distance deriving process described in the first embodiment in that it includes the process of step 382 instead of the process of step 330L. Further, the imaging position distance deriving process according to the second embodiment differs from the imaging position distance deriving process described in the first embodiment in that it includes a process of step 390 instead of the process of step 336I. Furthermore, the imaging position distance deriving process according to the second embodiment is different from the imaging position distance deriving process described in the first embodiment in that it includes a process of step 392 instead of the process of step 336K.
• The process of step 380 shown in FIG. 39 differs from the process of step 330J shown in FIG. 23 in that the acquisition unit 110B further acquires the position information.
• The process of step 382 shown in FIG. 39 differs from the process of step 330L shown in FIG. 23 in that the derivation unit 111B derives the focal length using the focal length derivation table 109B instead of the focal length derivation table 109A.
• That is, in step 382, the derivation unit 111B derives the focal length corresponding to the second actually measured distance and the position information using the focal length derivation table 109B.
  • the second actually measured distance used in the process of step 382 is the second actually measured distance acquired by the acquisition unit 110B by executing the process of step 380.
  • the position information used in the process of step 382 is the position information acquired by the acquisition unit 110B by executing the process of step 380.
• In step 382, the focal length is derived by the same derivation method as the focal length derivation method in the process of step 372 shown in FIG. 38.
• The process of step 390 shown in FIG. 40 differs from the process of step 336I shown in FIG. 25 in that the acquisition unit 110B further acquires the position information.
• The process of step 392 shown in FIG. 40 differs from the process of step 336K shown in FIG. 25 in that the derivation unit 111B derives the focal length using the focal length derivation table 109B instead of the focal length derivation table 109A.
• That is, in step 392, the derivation unit 111B derives the focal length corresponding to the second actually measured distance and the position information using the focal length derivation table 109B.
• The second actually measured distance and the position information used in the process of step 392 are the second actually measured distance and the position information acquired by the acquisition unit 110B in the process of step 390.
• The focal length is derived by the same derivation method as the focal length derivation method in the process of step 372.
  • the focal length derivation table 109B indicating the correspondence between the measured distance, the position of the zoom lens 52 in the optical axis direction, and the focal length is defined. Also, the second measured distance and position information are acquired by the acquisition unit 110B (step 380). The deriving unit 111B uses the focal length deriving table 109B to derive the focal length corresponding to the second actually measured distance and the position information acquired by the acquiring unit 110B (step 382).
• Therefore, according to the distance measuring device 10B, even if the position of the zoom lens 52 in the optical axis direction changes, the focal length can be derived with high accuracy and without time and effort, compared to the case where the user inputs the length of a reference image included in the captured image.
• In the second embodiment, the case where the focal length corresponding to the second actually measured distance and the position information is derived by executing the process of step 382 (392) included in the imaging position distance derivation process has been described, but the technology of the present disclosure is not limited to this; for example, the focal length may be derived by executing a first process and a second process described below.
  • the first process refers to a process in which the acquisition unit 110B acquires the first measured distance, the first captured image signal, and the position information.
  • the second process refers to a process in which the deriving unit 111B derives a focal distance corresponding to the first actually measured distance and the position information using the focal distance deriving table 109B.
  • the first measured distance and position information used in the second process are the first measured distance and position information acquired by the acquisition unit 110B by executing the first process.
• In the second embodiment, the case where the focal length is derived using the focal length derivation table 109B has been exemplified, but the technology of the present disclosure is not limited to this; for example, the above-described equation (12) may be used to derive the focal length.
• In this case, in equation (12), the position of the zoom lens 52 in the optical axis direction indicated by the position information is adopted as "f_zoom".
  • the focal distance derivation table 109B indicating the correspondence between the derivation distance, the derivation position, and the focal distance is exemplified, but the technology of the present disclosure is not limited to this.
  • the focal length may be derived using the focal length derivation table 109C shown in FIGS. 41A to 41E.
•   The focal length derivation table 109C shown in FIGS. 41A to 41E is a table showing the correspondence between the derivation distance, the derivation position, the derivation temperature, the derivation focus lens attitude, the derivation zoom lens attitude, and the focal length. That is, in the focal length derivation table 109C, the derivation distance, the derivation position, the derivation temperature, the derivation focus lens attitude, the derivation zoom lens attitude, and the focal length are associated with each other.
  • the temperature for derivation refers to the temperature of the region that affects the imaging by the imaging device 14.
  • the temperature of the region that affects imaging by the imaging device 14 refers to, for example, the temperature of the outside air, the temperature of the space inside the lens unit 16, or the temperature of the imaging element 60.
•   the unit of the derivation temperature is "°C".
  • the derivation focus lens orientation refers to the orientation of the focus lens 50 with respect to the vertical direction.
•   The derivation focus lens posture is one of the parameters in the focal length derivation table 109C because the focus lens moving mechanism 53 can be moved by the weight of the focus lens 50 itself, without depending on the power of the motor 57, so that the focus lens 50 moves along the optical axis L2 toward the imaging surface 60B side or the subject side, which affects the focal length.
•   The posture of the focus lens 50 with respect to the vertical direction can be defined by the angle formed between the vertical direction and the optical axis of the focus lens 50.
  • the derivation zoom lens orientation refers to the orientation of the zoom lens 52 with respect to the vertical direction.
•   The derivation zoom lens posture is one of the parameters in the focal length derivation table 109C because the zoom lens moving mechanism 54 can be moved by the weight of the zoom lens 52 itself, without depending on the power of the motor 56, so that the zoom lens 52 moves along the optical axis L2 toward the imaging surface 60B side or the subject side, which affects the focal length.
•   The posture of the zoom lens 52 with respect to the vertical direction can be defined by the angle formed between the vertical direction and the optical axis of the zoom lens 52.
  • a plurality of derivation positions are associated with each of a plurality of derivation distances.
  • a plurality of derivation temperatures are associated with each of the plurality of derivation positions.
  • a plurality of derivation focus lens postures are associated with each of the plurality of derivation temperatures.
  • a plurality of derivation zoom lens postures are associated with each of the plurality of derivation focus lens postures.
•   A focal length is individually associated with each of all the prescribed derivation zoom lens postures; the sketch after this list illustrates the nesting.
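•   The nesting of the focal length derivation table 109C can be pictured as a hierarchy of mappings, one level per parameter. This is only a hypothetical illustration; the variable name and every value below are invented, since FIGS. 41A to 41E are not reproduced here.

```python
# Hypothetical nesting of focal length derivation table 109C:
# distance -> position -> temperature -> focus posture -> zoom posture -> f.
table_109c = {
    1000.0: {                  # derivation distance [mm]
        10.0: {                # zoom-lens position [mm]
            25.0: {            # derivation temperature [deg C]
                0.0: {         # focus-lens posture [deg from vertical]
                    0.0: 18.20,    # zoom-lens posture [deg] -> focal length [mm]
                    90.0: 18.23,
                },
                90.0: {
                    0.0: 18.21,
                    90.0: 18.24,
                },
            },
        },
    },
}

# One leaf lookup walks all five levels:
f = table_109c[1000.0][10.0][25.0][0.0][90.0]  # -> 18.23
```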
  • the focal length derivation table 109C is a table derived from, for example, a result of at least one of a test by an actual device of the distance measuring device 10B and a computer simulation based on a design specification of the distance measuring device 10B.
•   When deriving the focal length using the focal length derivation table 109C, the CPU 100 further acquires temperature information, focus lens attitude information, and zoom lens attitude information in addition to the actually measured distance and the position information.
  • the temperature information refers to temperature information of a region that affects imaging by the imaging device 14.
•   In the present embodiment, information indicating the current temperature of a region that affects imaging by the imaging device 14 is employed as the temperature information, but the technology of the present disclosure is not limited to this.
•   For example, the temperature information may be a temperature detected by a sensor (not shown), or may be information received by the touch panel 88 as temperature information. Further, when the distance measuring device 10B can communicate with an external device such as a server via the Internet (not shown), the temperature may be acquired from weather information provided by the external device via the Internet.
  • the focus lens attitude information refers to information indicating the attitude of the focus lens 50 with respect to the vertical direction.
•   In the present embodiment, information indicating the current posture of the focus lens 50 with respect to the vertical direction is used as an example of the focus lens attitude information, but the technology of the present disclosure is not limited to this.
•   For example, information indicating the posture of the focus lens 50 with respect to the vertical direction at an imaging timing several frames earlier (for example, two frames earlier) can be used as the focus lens attitude information.
•   The focus lens attitude information may be, for example, a detection result obtained by a sensor (not shown) that detects the posture of the focus lens 50 with respect to the vertical direction, or may be information received by the touch panel 88 as focus lens attitude information.
  • the zoom lens attitude information refers to information indicating the attitude of the zoom lens 52 with respect to the vertical direction.
•   In the present embodiment, information indicating the current posture of the zoom lens 52 with respect to the vertical direction is used as an example of the zoom lens attitude information, but the technology of the present disclosure is not limited to this.
•   For example, information indicating the posture of the zoom lens 52 with respect to the vertical direction at an imaging timing several frames earlier (for example, two frames earlier) can be used as the zoom lens attitude information.
•   The zoom lens attitude information may be, for example, a detection result obtained by a sensor (not shown) that detects the posture of the zoom lens 52 with respect to the vertical direction, or may be information received by the touch panel 88 as zoom lens attitude information.
•   The CPU 100 acquires the measured distance, the position information, the temperature information, the focus lens posture information, and the zoom lens posture information. Then, the CPU 100 uses the focal length derivation table 109C to derive the focal length corresponding to the acquired measured distance, position information, temperature information, focus lens posture information, and zoom lens posture information. When no parameters that match the measured distance, position information, temperature information, focus lens posture information, and zoom lens posture information exist in the focal length derivation table 109C, the focal length may be derived by the interpolation method, as described in the above embodiments; a sketch of such a fallback follows below.
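•   The sketch below illustrates one plausible form of that interpolation fallback, assuming for simplicity that only the measured distance misses the table and the other parameters match exactly. The function name and data are assumptions, not the patent's implementation.

```python
from bisect import bisect_left

def interp_focal_length(pairs, distance):
    """pairs: list of (derivation distance, focal length), sorted by distance."""
    keys = [d for d, _ in pairs]
    i = bisect_left(keys, distance)
    if i < len(keys) and keys[i] == distance:
        return pairs[i][1]            # exact table hit
    if i == 0:
        return pairs[0][1]            # clamp below the table range
    if i == len(keys):
        return pairs[-1][1]           # clamp above the table range
    (d0, f0), (d1, f1) = pairs[i - 1], pairs[i]
    t = (distance - d0) / (d1 - d0)
    return f0 + t * (f1 - f0)         # linear interpolation between neighbours

print(interp_focal_length([(1000.0, 18.2), (2000.0, 18.0)], 1500.0))  # -> 18.1
```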
•   Therefore, according to the distance measuring device 10B, compared with the case where the user inputs the length of a reference image included in the captured image in order to increase the accuracy of the focal length, the focal length can be derived with high accuracy and without time and effort even if the position of the zoom lens 52 in the optical axis direction changes, the temperature of the region that affects imaging by the imaging device 14 changes, the attitude of the focus lens 50 changes, and the attitude of the zoom lens 52 changes.
•   In the above, the focal length derivation table 109C in which the derivation distance, the derivation position, the derivation temperature, the derivation focus lens attitude, the derivation zoom lens attitude, and the focal length are associated with each other has been exemplified.
  • the disclosed technique is not limited to this.
•   For example, the CPU 100 may derive the focal length using a focal length derivation table in which at least one of the derivation position, the derivation temperature, the derivation focus lens attitude, and the derivation zoom lens attitude is associated with the derivation distance and the focal length.
•   For example, when the derivation distance, the derivation position, the derivation temperature, and the focal length are associated with each other, the distance measuring device 10B can derive the focal length with high accuracy and without time and effort, compared with the case where the user inputs the length of a reference image included in the captured image in order to increase the accuracy of the focal length, even when the position of the zoom lens 52 in the optical axis direction changes and the temperature of the region that affects imaging by the imaging device 14 changes.
•   When the focal length derivation table is a table in which the derivation distance, the derivation position, the derivation temperature, the derivation focus lens posture, and the focal length are associated with each other, the distance measuring device 10B can derive the focal length with high accuracy and without time and effort, compared with the case where the user inputs the length of a reference image included in the captured image in order to increase the accuracy of the focal length, even if the position of the zoom lens 52 in the optical axis direction changes, the temperature of the region that affects imaging by the imaging device 14 changes, and the posture of the focus lens 50 changes.
•   When the focal length derivation table is a table in which the derivation distance, the derivation position, the derivation zoom lens posture, and the focal length are associated with each other, the distance measuring device 10B can likewise derive the focal length with high accuracy and without time and effort even if the position of the zoom lens 52 in the optical axis direction changes and the posture of the zoom lens 52 changes.
•   When the focal length derivation table is a table in which the derivation distance, the derivation zoom lens posture, and the focal length are associated with each other, the distance measuring device 10B can derive the focal length with high accuracy and without time and effort, compared with the case where the user inputs the length of a reference image included in the captured image, even if the posture of the zoom lens 52 changes.
•   When the focal length derivation table is a table in which the derivation distance, the derivation focus lens posture, the derivation zoom lens posture, and the focal length are associated with each other, the distance measuring device 10B can derive the focal length with high accuracy and without time and effort even if the posture of the focus lens 50 changes and the posture of the zoom lens 52 changes.
•   The focal length derivation table may also be a table in which the derivation distance, the derivation temperature, the derivation focus lens attitude, the derivation zoom lens attitude, and the focal length are associated with each other.
•   When the focal length derivation table is a table in which the derivation distance, the derivation temperature, the derivation zoom lens attitude, and the focal length are associated with each other, the distance measuring device 10B can derive the focal length with high accuracy and without time and effort even if the temperature of the region that affects imaging by the imaging device 14 changes and the attitude of the zoom lens 52 changes.
•   When the focal length derivation table is a table in which the derivation distance, the derivation position, the derivation focus lens posture, and the focal length are associated with each other, the distance measuring device 10B can derive the focal length with high accuracy and without time and effort even if the position of the zoom lens 52 in the optical axis direction changes and the posture of the focus lens 50 changes.
•   When the focal length derivation table is a table in which the derivation distance, the derivation position, the derivation focus lens attitude, the derivation zoom lens attitude, and the focal length are associated with each other, the focal length can be derived with high accuracy and without time and effort, compared with the case where the user inputs the length of a reference image included in the captured image, even if the position of the zoom lens 52 in the optical axis direction changes, the attitude of the focus lens 50 changes, and the attitude of the zoom lens 52 changes.
•   Similarly, the distance measuring device 10B can derive the focal length with high accuracy and without time and effort, compared with the case where the user inputs the length of a reference image included in the captured image in order to increase the accuracy of the focal length, even if only the posture of the focus lens 50 changes.
•   When the focal length derivation table is a table in which the derivation distance, the derivation temperature, and the focal length are associated with each other, the distance measuring device 10B can derive the focal length with high accuracy and without time and effort, compared with the case where the user inputs the length of a reference image included in the captured image in order to increase the accuracy of the focal length, even if the temperature changes.
•   When the focal length derivation table is a table in which the derivation distance, the derivation temperature, the derivation focus lens attitude, and the focal length are associated with each other, the distance measuring device 10B can derive the focal length with high accuracy and without time and effort even if the temperature changes and the attitude of the focus lens 50 changes.
•   Instead of the focal length derivation table 109C, the CPU 100 may derive the focal length using a focal length calculation formula that prescribes the correspondence between the derivation distance, the derivation position, the derivation temperature, the derivation focus lens attitude, the derivation zoom lens attitude, and the focal length.
•   In this case, the dependent variable of the focal length calculation formula is the focal length.
•   The independent variables of the focal length calculation formula are the measured distance, the position information, the temperature information, the focus lens posture information, and the zoom lens posture information.
•   Note that the independent variables of the focal length calculation formula may be the actually measured distance together with at least one of the position information, the temperature information, the focus lens posture information, and the zoom lens posture information; a sketch of such a formula follows below.
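•   As a hedged sketch, such a focal length calculation formula might take a simple additive form with the focal length as the dependent variable. The linear structure and every coefficient below are assumptions for illustration; the patent only specifies which quantities may serve as independent variables.

```python
def focal_length(distance, position=0.0, temperature=20.0,
                 focus_posture=0.0, zoom_posture=0.0):
    # Dependent variable: focal length [mm]. Independent variables: measured
    # distance plus any of position, temperature, and the lens postures.
    f0 = 18.0                            # made-up base focal length [mm]
    return (f0
            + 1e-5 * distance            # measured-distance term
            + 0.02 * position            # zoom-lens position term
            + 1e-3 * (temperature - 20.0)
            + 1e-4 * focus_posture
            + 1e-4 * zoom_posture)

print(focal_length(1500.0, position=10.0))  # -> 18.215
```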
•   The distance measuring device 10C according to the third embodiment differs from the distance measuring device 10A in that an imaging position distance derivation program 106C is stored in the secondary storage unit 104 in place of the imaging position distance derivation program 106A.
  • the CPU 100 operates as the acquisition unit 110C and the derivation unit 111C by executing the imaging position distance derivation program 106C (see FIG. 8).
  • the acquisition unit 110C corresponds to the acquisition unit 110A described in the first embodiment
  • the derivation unit 111C corresponds to the derivation unit 111A described in the first embodiment.
  • the acquisition unit 110C and the derivation unit 111C will be described with respect to differences from the acquisition unit 110A and the derivation unit 111A described in the first embodiment.
•   The acquisition unit 110C performs control to display the first captured image on the display unit 86 and to display the outer wall surface image 128 so that it is distinguishable from other regions.
  • the touch panel 88 receives region designation information for designating a coordinate acquisition target region in a state where the outer wall surface image 128 is displayed on the display unit 86.
  • the coordinate acquisition target area refers to a part of the closed area in the outer wall surface image 128.
  • the area designation information refers to information that designates a coordinate acquisition target area.
  • the acquiring unit 110C acquires the three characteristic pixel coordinates from the coordinate acquisition target area specified by the area specifying information received by the touch panel 88.
•   Next, the imaging position distance derivation process realized by the CPU 100 executing the imaging position distance derivation program 106C will be described.
  • the first derivation process will be described with reference to FIG.
  • the same steps as those in the flowchart shown in FIG. 22 are denoted by the same step numbers, and the description thereof is omitted.
•   The flowchart according to the third embodiment differs from the flowchart shown in FIG. 22 in that steps 400 to 418 are provided instead of step 330E.
•   In step 400, the acquisition unit 110C identifies the outer wall surface image 128 (see FIG. 32) from the first captured image, and then proceeds to step 402.
•   In step 402, the acquisition unit 110C starts display on the display unit 86 in such a manner that the outer wall surface image 128 identified in the process of step 400 is emphasized so as to be distinguishable from other regions in the display region of the first captured image, and then proceeds to step 404.
•   In step 404, the acquisition unit 110C determines whether or not the area designation information has been received by the touch panel 88 and the coordinate acquisition target area has been designated by the received area designation information.
•   In step 404, when the coordinate acquisition target area is not designated by the area designation information, the determination is negative and the process proceeds to step 406.
•   In step 404, when the coordinate acquisition target area is designated by the area designation information, the determination is affirmative and the process proceeds to step 410.
•   In step 406, the acquisition unit 110C determines whether or not a condition for ending the first derivation process is satisfied. If the condition for ending the first derivation process is not satisfied in step 406, the determination is negative and the process proceeds to step 404. If the condition for ending the first derivation process is satisfied in step 406, the determination is affirmative and the process proceeds to step 408.
•   In step 408, the acquisition unit 110C executes the same process as the process in step 330C illustrated in FIG. 22, and then ends the first derivation process.
•   In step 410, the acquisition unit 110C causes the display unit 86 to end the display of the redesignation message that is displayed by executing the processing of step 414 described later, and then proceeds to step 412.
•   In step 412, the acquisition unit 110C determines whether or not the characteristic three pixels described in the first embodiment are present in the coordinate acquisition target region 158 (see FIG. 43) designated by the region designation information received by the touch panel 88.
•   In the example shown in FIG. 43, the coordinate acquisition target area 158 includes a pattern image 160 representing the pattern 124.
  • the coordinate acquisition target area 158 includes a first pixel 162, a second pixel 164, and a third pixel 166 as characteristic three pixels.
  • the first pixel 162 is a pixel at the upper left corner of the pattern image 160 when viewed from the front
  • the second pixel 164 is a pixel at the lower left corner of the pattern image 160 when viewed from the front
•   the third pixel 166 is the pixel at the lower right corner of the pattern image 160 when viewed from the front.
•   In step 412, when the characteristic three pixels are not present in the coordinate acquisition target area 158 designated by the area designation information received by the touch panel 88, the determination is negative and the process proceeds to step 414.
•   In step 412, when the characteristic three pixels are present in the coordinate acquisition target area 158 designated by the area designation information received by the touch panel 88, the determination is affirmative and the process proceeds to step 416. Note that an affirmative determination in step 412 corresponds to the case where a coordinate acquisition target region 158 including the pattern image 160 is designated by the region designation information received by the touch panel 88, as shown in FIG. 43.
•   In step 414, the acquisition unit 110C causes the display unit 86 to start displaying the redesignation message superimposed on a predetermined area of the first captured image, and then proceeds to step 404.
  • the re-designation message indicates, for example, a message “Please specify a closed area including a characteristic pattern or building material”.
•   Note that instead of the visible display, an audible indication such as audio output by an audio playback device (not shown) or a permanent visible indication such as output of printed matter by a printer may be performed, or these may be used in combination.
•   In step 416, the acquisition unit 110C causes the display unit 86 to end the emphasized display of the outer wall surface image 128, and then proceeds to step 418.
•   In step 418, the acquisition unit 110C acquires the three characteristic pixel coordinates that specify the characteristic three pixels in the coordinate acquisition target area 158 designated by the area designation information received by the touch panel 88, and then proceeds to step 330F.
•   By executing the processing of step 418, the two-dimensional coordinates that specify each of the first pixel 162, the second pixel 164, and the third pixel 166 are acquired by the acquisition unit 110C as the three feature pixel coordinates.
  • the outer wall surface image 128 is displayed on the display unit 86 so as to be distinguishable from other regions in the first captured image.
  • area designation information is received by the touch panel 88, and a coordinate acquisition target area that is a part of the outer wall surface image 128 is designated by the received area designation information.
•   Then, the acquisition unit 110C acquires the three characteristic pixel coordinates that specify the characteristic three pixels (step 418), and also acquires the corresponding feature pixel coordinates that correspond to the three characteristic pixel coordinates (step 330N).
•   Therefore, the three feature pixel coordinates and the corresponding feature pixel coordinates can be acquired with a smaller load than when the three feature pixel coordinates and the corresponding feature pixel coordinates are acquired over the entire outer wall surface image 128.
•   The distance measuring device 10D according to the fourth embodiment differs from the distance measuring device 10A in that an imaging position distance derivation program 106D is stored in the secondary storage unit 104 instead of the imaging position distance derivation program 106A (see FIG. 6).
  • the CPU 100 operates as the acquisition unit 110D and the derivation unit 111D as illustrated in FIG. 8 as an example by executing the imaging position distance derivation program 106D.
  • the acquisition unit 110D corresponds to the acquisition unit 110A described in the first embodiment
  • the derivation unit 111D corresponds to the derivation unit 111A described in the first embodiment.
  • the acquisition unit 110D and the derivation unit 111D will be described with respect to differences from the acquisition unit 110A and the derivation unit 111A described in the first embodiment.
•   the touch panel 88 receives the pixel designation information described in the first embodiment when the first captured image is displayed on the display unit 86.
  • the touch panel 88 also accepts the pixel designation information described in the first embodiment even when the second captured image is displayed on the display unit 86.
•   The acquisition unit 110D acquires the first feature pixel coordinates, which are the two-dimensional coordinates that identify each of the characteristic three pixels designated in the first captured image by the pixel designation information received by the touch panel 88.
  • the first feature pixel coordinates are two-dimensional coordinates corresponding to the three feature pixel coordinates described in the first embodiment.
•   The acquisition unit 110D also acquires the second feature pixel coordinates, which are the two-dimensional coordinates that identify each of the characteristic three pixels designated in the second captured image by the pixel designation information received by the touch panel 88.
  • the second feature pixel coordinates are two-dimensional coordinates corresponding to the corresponding feature pixel coordinates described in the first embodiment.
•   The deriving unit 111D derives the imaging position distance based on the target pixel coordinates, the corresponding target pixel coordinates, the first feature pixel coordinates, the second feature pixel coordinates, the irradiation position real space coordinates, the focal length, and the dimensions of the imaging pixel 60A1.
•   In FIGS. 45 to 47, the same steps as those in the flowcharts shown in FIGS. 22 and 23 are denoted by the same step numbers, and description thereof is omitted.
•   The flowcharts shown in FIGS. 45 to 47 differ from the flowcharts shown in FIGS. 22 and 23 in that steps 450 to 474 are provided instead of step 330E, and in that steps 476 to 502 are provided instead of steps 330N to 330P.
•   In step 450 shown in FIG. 45, the acquisition unit 110D executes the same processing as the processing in step 400 described in the third embodiment, and then proceeds to step 452.
•   In step 452, the acquisition unit 110D executes the same processing as the processing in step 402 described in the third embodiment, and then proceeds to step 454.
•   In step 454, the acquisition unit 110D determines whether or not the area designation information has been received by the touch panel 88 and the first coordinate acquisition target area 178 (see FIG. 43) has been designated by the received area designation information.
•   The first coordinate acquisition target area 178 is an area corresponding to the coordinate acquisition target area 158 described in the third embodiment.
•   In step 454, if the first coordinate acquisition target area 178 is not designated by the area designation information, the determination is negative and the process proceeds to step 456. In step 454, when the first coordinate acquisition target area 178 is designated by the area designation information, the determination is affirmative and the process proceeds to step 458.
•   In step 456, the acquisition unit 110D determines whether or not a condition for ending the first derivation process is satisfied.
•   The condition for ending the first derivation process is the same as the condition used in the process of step 302.
•   In step 456, if the condition for ending the first derivation process is not satisfied, the determination is negative and the process proceeds to step 454. If the condition for ending the first derivation process is satisfied in step 456, the determination is affirmative and the process proceeds to step 330C.
•   In step 458, the acquisition unit 110D performs the same process as the process in step 410 described in the third embodiment, and then proceeds to step 460.
•   In step 460, the acquisition unit 110D causes the display unit 86 to highlight the first coordinate acquisition target region 178 designated by the region designation information received by the touch panel 88 so that it is distinguishable from other regions in the display region of the first captured image, and then proceeds to step 462.
•   In step 462, the acquisition unit 110D determines whether or not three pixels are designated by the pixel designation information received by the touch panel 88. In step 462, if three pixels are not designated by the pixel designation information received by the touch panel 88 (for example, if the number of designated pixels is less than three), the determination is negative and the process proceeds to step 464. If three pixels are designated by the pixel designation information received by the touch panel 88 in step 462, the determination is affirmative and the process proceeds to step 468.
•   In step 464, the acquisition unit 110D determines whether or not a condition for ending the first derivation process is satisfied.
•   The condition for ending the first derivation process is the same as the condition used in the process of step 302.
•   In step 464, if the condition for ending the first derivation process is not satisfied, the determination is negative and the process proceeds to step 462. If the condition for ending the first derivation process is satisfied in step 464, the determination is affirmative and the process proceeds to step 330C.
•   In step 468, the acquisition unit 110D ends the highlighted display of the first coordinate acquisition target area 178 on the display unit 86, and then proceeds to step 470.
•   In step 470, the acquisition unit 110D determines whether or not the three pixels designated by the pixel designation information received by the touch panel 88 are the characteristic three pixels.
  • the first coordinate acquisition target area 178 includes a pattern image 160.
  • the characteristic three pixels indicate a first pixel 162, a second pixel 164, and a third pixel 166 that are pixels at three corners of the pattern image 160, as shown in FIG. 44 as an example.
•   In step 470, if the three pixels designated by the pixel designation information received by the touch panel 88 are not the characteristic three pixels, the determination is negative and the process proceeds to step 472. In step 470, if the three pixels designated by the pixel designation information received by the touch panel 88 are the characteristic three pixels, the determination is affirmative and the process proceeds to step 474.
•   In step 472, the acquisition unit 110D causes the display unit 86 to start displaying the redesignation message superimposed on a predetermined area of the first captured image, and then proceeds to step 454.
•   The redesignation message according to the fourth embodiment refers to, for example, a message "Please designate three characteristic pixels after designating a closed region including a characteristic pattern or building material".
•   In step 474, the acquisition unit 110D acquires the first feature pixel coordinates that specify the characteristic three pixels designated by the pixel designation information received by the touch panel 88, and then proceeds to step 330F.
•   By executing the processing of step 474, the two-dimensional coordinates that specify each of the first pixel 162, the second pixel 164, and the third pixel 166 are acquired by the acquisition unit 110D as the first feature pixel coordinates.
•   In step 476, the acquisition unit 110D identifies a corresponding outer wall surface image, which is an outer wall surface image corresponding to the outer wall surface image 128, from the second captured image, and then proceeds to step 478.
•   In step 478, the acquisition unit 110D causes the display unit 86 to display the corresponding outer wall surface image identified in the process of step 476 in a highlighted manner so as to be distinguishable from other regions in the display region of the second captured image, and then proceeds to step 480.
•   In step 480, the acquisition unit 110D determines whether or not the area designation information has been received by the touch panel 88 and the second coordinate acquisition target area has been designated by the received area designation information.
  • the second coordinate acquisition target area is an area specified by the user via the touch panel 88 as an area corresponding to the first coordinate acquisition target area 178 (see FIG. 44) in the second captured image.
•   In step 480, if the second coordinate acquisition target area is not designated by the area designation information, the determination is negative and the process proceeds to step 482. If the second coordinate acquisition target area is designated by the area designation information in step 480, the determination is affirmative and the process proceeds to step 484.
•   In step 482, the acquisition unit 110D determines whether or not a condition for ending the first derivation process is satisfied.
•   The condition for ending the first derivation process is the same as the condition used in the process of step 302.
•   In step 482, if the condition for ending the first derivation process is not satisfied, the determination is negative and the process proceeds to step 480. In step 482, if the condition for ending the first derivation process is satisfied, the determination is affirmative and the process proceeds to step 492.
•   In step 484, the acquisition unit 110D causes the display unit 86 to end the display of the redesignation message that is displayed by executing the processing of step 498 described later, and then proceeds to step 486.
•   In step 486, the acquisition unit 110D causes the display unit 86 to highlight the second coordinate acquisition target area designated by the area designation information received by the touch panel 88 so that it is distinguishable from other areas in the display area of the second captured image, and then proceeds to step 488.
•   In step 488, the acquisition unit 110D determines whether or not three pixels are designated by the pixel designation information received by the touch panel 88. In step 488, when three pixels are not designated by the pixel designation information received by the touch panel 88 (for example, when the number of designated pixels is less than three), the determination is negative and the process proceeds to step 490. In step 488, when three pixels are designated by the pixel designation information received by the touch panel 88, the determination is affirmative and the process proceeds to step 494.
•   In step 490, the acquisition unit 110D determines whether or not a condition for ending the first derivation process is satisfied.
•   The condition for ending the first derivation process is the same as the condition used in the process of step 302.
•   In step 490, if the condition for ending the first derivation process is not satisfied, the determination is negative and the process proceeds to step 488. In step 490, if the condition for ending the first derivation process is satisfied, the determination is affirmative and the process proceeds to step 492.
•   In step 492, the acquisition unit 110D ends the display of the second captured image on the display unit 86, and then ends the first derivation process.
•   In step 494, the acquisition unit 110D causes the display unit 86 to end the highlighted display of the second coordinate acquisition target region, and then proceeds to step 496.
•   In step 496, the acquisition unit 110D determines whether or not the three pixels designated by the pixel designation information received by the touch panel 88 are the characteristic three pixels.
  • the second coordinate acquisition target area includes a pattern image corresponding to the pattern image 160.
  • the characteristic three pixels are pixels present at the three corners of the pattern image corresponding to the pattern image 160 in the second captured image.
•   The pixels present at the three corners of the pattern image corresponding to the pattern image 160 refer to, for example, the pixel corresponding to the first pixel 162, the pixel corresponding to the second pixel 164, and the pixel corresponding to the third pixel 166 in the second captured image.
•   In step 496, if the three pixels designated by the pixel designation information received by the touch panel 88 are not the characteristic three pixels, the determination is negative and the process proceeds to step 498. In step 496, if the three pixels designated by the pixel designation information received by the touch panel 88 are the characteristic three pixels, the determination is affirmative and the process proceeds to step 500 shown in FIG. 47.
•   In step 498, the acquisition unit 110D causes the display unit 86 to start displaying the above-described redesignation message superimposed on a predetermined area of the second captured image, and then proceeds to step 480.
•   In step 500 shown in FIG. 47, the acquisition unit 110D acquires the second feature pixel coordinates that specify the characteristic three pixels designated by the pixel designation information received by the touch panel 88, and then proceeds to step 502.
•   By executing the processing of step 500, for example, the two-dimensional coordinates that specify each of the pixel corresponding to the first pixel 162, the pixel corresponding to the second pixel 164, and the pixel corresponding to the third pixel 166 in the second captured image are acquired by the acquisition unit 110D as the second feature pixel coordinates.
•   In step 502, the deriving unit 111D derives a, b, and c of the plane equation shown in expression (7) from the first feature pixel coordinates, the second feature pixel coordinates, the focal length, and the dimensions of the imaging pixel 60A1.
  • the first feature pixel coordinates used in the process of step 502 are the first feature pixel coordinates acquired in the process of step 474, and correspond to the three feature pixel coordinates described in the first embodiment.
•   The second feature pixel coordinates used in the process of step 502 are the second feature pixel coordinates acquired in the process of step 500, and correspond to the corresponding feature pixel coordinates described in the first embodiment. A sketch of obtaining plane coefficients of this kind follows below.
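•   As a hedged sketch of where the coefficients a, b, and c can come from: once the characteristic three pixels have been triangulated into three real-space points, the normal of the plane a*x + b*y + c*z + d = 0 through them is their cross product. The triangulation itself (from the feature pixel coordinates, focal length, and pixel size) is not reproduced here, so the function below assumes the three 3D points are already available.

```python
def plane_normal(p1, p2, p3):
    # p1, p2, p3: three non-collinear 3D points, e.g. the triangulated
    # positions of the characteristic three pixels.
    u = [p2[i] - p1[i] for i in range(3)]
    v = [p3[i] - p1[i] for i in range(3)]
    # Cross product u x v yields (a, b, c) of a*x + b*y + c*z + d = 0.
    return (u[1] * v[2] - u[2] * v[1],
            u[2] * v[0] - u[0] * v[2],
            u[0] * v[1] - u[1] * v[0])

print(plane_normal((0, 0, 0), (1, 0, 0), (0, 1, 0)))  # -> (0, 0, 1)
```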
•   As described above, in the distance measuring device 10D, the characteristic three pixels are designated in the first captured image via the touch panel 88, and the first feature pixel coordinates that specify the designated characteristic three pixels are acquired by the acquisition unit 110D (step 474). Further, the characteristic three pixels corresponding to the characteristic three pixels of the first captured image are designated in the second captured image via the touch panel 88 (step 496: Y). Further, the second feature pixel coordinates that specify the characteristic three pixels designated via the touch panel 88 in the second captured image are acquired by the acquisition unit 110D (step 500).
•   Then, the imaging position distance is derived by the deriving unit 111D based on the target pixel coordinates, the corresponding target pixel coordinates, the first feature pixel coordinates, the second feature pixel coordinates, the irradiation position real space coordinates, the focal length, and the dimensions of the imaging pixel 60A1. Therefore, according to the distance measuring device 10D, the imaging position distance can be derived based on the first feature pixel coordinates and the second feature pixel coordinates acquired according to the user's intention.
  • the distance measuring device 10E according to the fifth embodiment is different from the distance measuring device 10A in that an imaging position distance deriving program 106E is stored in the secondary storage unit 104 instead of the imaging position distance deriving program 106A. Further, the distance measuring device 10E is different from the distance measuring device 10A in that a three-dimensional coordinate derivation program 108B is stored in the secondary storage unit 104 instead of the three-dimensional coordinate derivation program 108A.
  • the CPU 100 operates as the acquisition unit 110E and the derivation unit 111E as illustrated in FIG. 8 as an example by executing the imaging position distance derivation program 106E.
  • the acquisition unit 110E corresponds to the acquisition unit 110A described in the first embodiment
  • the derivation unit 111E corresponds to the derivation unit 111A described in the first embodiment.
•   Hereinafter, the acquisition unit 110E and the derivation unit 111E will be described with respect to differences from the acquisition unit 110A and the derivation unit 111A described in the first embodiment.
  • the acquisition unit 110E is different from the acquisition unit 110A in that the second measured distance is acquired as a reference distance.
•   The deriving unit 111E derives a reference imaging position distance with reference to the distance between the first imaging position and the second imaging position, based on the target pixel coordinates, the three characteristic pixel coordinates, the reference irradiation position real space coordinates, the focal length, and the dimensions of the imaging pixel 60A1.
•   The deriving unit 111E then adjusts the imaging position distance with reference to the derived reference imaging position distance, thereby deriving the final imaging position distance, which is finally adopted as the distance between the first imaging position and the second imaging position.
  • the deriving unit 111E derives the designated pixel three-dimensional coordinates based on the derived final imaging position distance.
  • the final designated pixel real space coordinates refer to the three-dimensional coordinates that are finally adopted as the three-dimensional coordinates that are the coordinates of the target pixel 126 in the real space.
•   Next, the imaging position distance deriving process realized by the CPU 100 executing the imaging position distance deriving program 106E will be described.
  • the first derivation process will be described with reference to FIGS. Note that the same steps as those in the flowcharts shown in FIGS. 22 and 23 are denoted by the same step numbers, and description thereof is omitted.
  • step 550 is provided instead of step 330J.
  • step 552 to 568 are provided instead of steps 330Q to 330U.
•   In step 550, the acquisition unit 110E acquires, as the reference distance, the second actually measured distance measured by executing the processing of step 330I.
•   The acquisition unit 110E also acquires the second captured image signal indicating the second captured image obtained by executing the processing of step 330I, and then proceeds to step 330K.
•   In step 552, the deriving unit 111E determines the first plane equation, which is the plane equation shown in expression (7), based on the irradiation position real space coordinates derived in the process of step 312, and then proceeds to step 554.
•   In step 554, the deriving unit 111E derives the imaging position distance based on the feature pixel three-dimensional coordinates and the first plane equation, and then proceeds to step 556.
•   In step 556, the deriving unit 111E derives the reference irradiation position real space coordinates based on expression (3) from the reference distance acquired by the acquisition unit 110E in the process of step 550, the half angle of view α, the emission angle β, and the reference point distance M, and then proceeds to step 558.
•   The reference distance used in the processing of step 556 is a distance corresponding to the distance L described in the first embodiment.
•   In step 558, the deriving unit 111E determines the second plane equation, which is the plane equation shown in expression (7), based on the reference irradiation position real space coordinates derived in the process of step 556, and then proceeds to step 560. That is, in step 558, the deriving unit 111E substitutes a, b, and c derived in the process of step 330P and the reference irradiation position real space coordinates derived in the process of step 556 into expression (7), and determines d of expression (7). Since a, b, and c of expression (7) are derived in the process of step 330P, the second plane equation is determined when d of expression (7) is determined in the process of step 558; a sketch of this substitution follows below.
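•   A minimal sketch of that substitution, assuming only that expression (7) has the usual plane form a*x + b*y + c*z + d = 0:

```python
def plane_offset(a, b, c, point):
    # With a, b, c fixed (as derived in step 330P), substituting a known
    # point on the plane, e.g. the reference irradiation position real-space
    # coordinates (x0, y0, z0), determines d of a*x + b*y + c*z + d = 0.
    x0, y0, z0 = point
    return -(a * x0 + b * y0 + c * z0)

print(plane_offset(0.0, 0.0, 1.0, (5.0, 7.0, 2.0)))  # -> -2.0
```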
•   In step 560, the deriving unit 111E derives the reference imaging position distance based on the feature pixel three-dimensional coordinates and the second plane equation, and then proceeds to step 562.
•   The reference imaging position distance corresponds to, for example, "B" shown in expression (9), and is derived by substituting the first feature pixel three-dimensional coordinates into the second plane equation.
•   In step 562, the deriving unit 111E derives the final imaging position distance by adjusting the imaging position distance derived in step 554 with reference to the reference imaging position distance derived in step 560, and then proceeds to step 564.
•   Here, adjusting the imaging position distance means, for example, obtaining the average value of the imaging position distance and the reference imaging position distance, multiplying that average value by a first adjustment coefficient, or multiplying the imaging position distance by a second adjustment coefficient.
  • the first adjustment coefficient and the second adjustment coefficient are both coefficients that are uniquely determined according to, for example, the reference imaging position distance.
•   The first adjustment coefficient is derived, for example, from a correspondence table in which the reference imaging position distance and the first adjustment coefficient are associated in advance, or from an arithmetic expression in which the reference imaging position distance is the independent variable and the first adjustment coefficient is the dependent variable.
•   The second adjustment coefficient is derived in the same manner.
•   The correspondence table or the arithmetic expression is derived, before shipment of the distance measuring device 10E, from the results of at least one of a test of the distance measuring device 10E using an actual device and a computer simulation based on the design specifications of the distance measuring device 10E.
•   Accordingly, examples of the final imaging position distance include the average value of the imaging position distance and the reference imaging position distance, a value obtained by multiplying that average value by the first adjustment coefficient, and a value obtained by multiplying the imaging position distance by the second adjustment coefficient; a sketch of these variants follows below.
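•   The sketch below enumerates those three variants. The coefficient values would come from the correspondence table or arithmetic expression fixed before shipment; the function name, mode labels, and defaults here are placeholders, not the patent's notation.

```python
def final_imaging_position_distance(B, B_ref, mode="average", k1=1.0, k2=1.0):
    # B: imaging position distance, B_ref: reference imaging position distance.
    if mode == "average":
        return (B + B_ref) / 2.0        # plain average of the two distances
    if mode == "scaled_average":
        return k1 * (B + B_ref) / 2.0   # average times first adjustment coeff.
    if mode == "scaled":
        return k2 * B                   # imaging position distance times
    raise ValueError(mode)              # second adjustment coefficient

print(final_imaging_position_distance(1450.0, 1440.0))  # -> 1445.0
```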
•   In step 564, the derivation unit 111E causes the display unit 86 to start display in which the final imaging position distance derived in the process of step 562 is superimposed on the second captured image, as shown in FIG. 49 as an example.
•   The deriving unit 111E also stores the final imaging position distance derived in the process of step 562 in a predetermined storage area, and then proceeds to step 566.
•   In the example shown in FIG. 49, the numerical value "1444621.7" corresponds to the final imaging position distance derived by the processing in step 562, and the unit is millimeters.
•   In step 566, the derivation unit 111E determines whether or not a condition for ending the first derivation process is satisfied.
•   The condition for ending the first derivation process is the same as the condition used in the process of step 302.
•   In step 566, if the condition for ending the first derivation process is not satisfied, the determination is negative and the determination in step 566 is performed again. If the condition for ending the first derivation process is satisfied in step 566, the determination is affirmative and the process proceeds to step 568.
•   In step 568, the derivation unit 111E ends the display of the second captured image and the superimposed display information on the display unit 86, and then ends the first derivation process.
  • the superimposed display information refers to various pieces of information that are currently displayed superimposed on the second captured image, for example, the final imaging position distance.
•   In step 600, the derivation unit 111E determines whether or not the final imaging position distance has already been derived in the process of step 562 included in the first derivation process.
•   In step 600, when the final imaging position distance has not been derived in the process of step 562 included in the first derivation process, the determination is negative and the process proceeds to step 608.
•   In step 600, when the final imaging position distance has already been derived in the process of step 562 included in the first derivation process, the determination is affirmative and the process proceeds to step 602.
•   In step 602, the derivation unit 111E determines whether or not the derivation start condition is satisfied. If the derivation start condition is not satisfied in step 602, the determination is negative and the process proceeds to step 608. If the derivation start condition is satisfied in step 602, the determination is affirmative and the process proceeds to step 604.
•   In step 604, the deriving unit 111E derives the designated pixel three-dimensional coordinates based on the target pixel coordinates, the corresponding target pixel coordinates, the final imaging position distance, the focal length, the dimensions of the imaging pixel 60A1, and expression (2), and then proceeds to step 606.
•   In step 604, the designated pixel three-dimensional coordinates are derived by substituting the target pixel coordinates, the corresponding target pixel coordinates, the final imaging position distance, the focal length, and the dimensions of the imaging pixel 60A1 into expression (2); a sketch of this kind of triangulation follows below.
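•   Expression (2) itself is not reproduced in this section, so the sketch below stands in with the standard parallel-stereo triangulation relations: with baseline B (the final imaging position distance), focal length f, pixel pitch p, and matching pixel coordinates measured from each image center, depth follows from the disparity. The patent's actual expression (2) may differ in form.

```python
def pixel_3d(u1, v1, u2, B, f, p):
    # u1, v1: target pixel coordinates; u2: the corresponding target pixel's
    # horizontal coordinate; all relative to the image centers [pixels].
    # B: final imaging position distance, f: focal length, p: pixel pitch,
    # all in the same length unit (e.g. mm).
    disparity = (u1 - u2) * p      # disparity converted to physical units
    Z = B * f / disparity          # depth from triangulation
    X = u1 * p * Z / f             # lateral position
    Y = v1 * p * Z / f             # vertical position
    return (X, Y, Z)

print(pixel_3d(100.0, 50.0, 80.0, 1445.0, 18.0, 0.005))  # rough example
```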
•   In step 606, the derivation unit 111E causes the display unit 86 to superimpose and display the designated pixel three-dimensional coordinates derived in step 604 on the second captured image, as shown in FIG. 51 as an example.
•   The deriving unit 111E also stores the designated pixel three-dimensional coordinates derived in the process of step 604 in a predetermined storage area, and then proceeds to step 608.
•   In the example shown in FIG. 51, (20160, 50132, 137810) corresponds to the designated pixel three-dimensional coordinates derived in the process of step 604.
•   In the example shown in FIG. 51, the designated pixel three-dimensional coordinates are displayed close to the target pixel 126.
•   In step 608, the derivation unit 111E determines whether or not a condition for ending the three-dimensional coordinate derivation process is satisfied. If the condition for ending the three-dimensional coordinate derivation process is not satisfied in step 608, the determination is negative and the process proceeds to step 600. If the condition for ending the three-dimensional coordinate derivation process is satisfied in step 608, the determination is affirmative and the three-dimensional coordinate derivation process is ended.
  • the distance from the second position to the subject is measured, and the reference distance that is the measured distance is acquired by the acquisition unit 110E (step 550).
  • the reference irradiation position real space coordinates are derived by the deriving unit 111E based on the reference distance (step 556).
•   The deriving unit 111E derives the reference imaging position distance based on the target pixel coordinates, the corresponding target pixel coordinates, the three feature pixel coordinates, the corresponding feature pixel coordinates, the reference irradiation position real space coordinates, the focal length, and the dimensions of the imaging pixel 60A1 (step 560).
  • the deriving unit 111E refers to the reference imaging position distance and adjusts the imaging position distance, thereby deriving the final imaging position distance (step 562). Therefore, according to the distance measuring device 10E, the distance between the first imaging position and the second imaging position can be derived with higher accuracy than when the reference imaging position distance is not used.
  • the designated pixel three-dimensional coordinates are derived based on the final imaging position distance derived by the imaging position distance deriving process (see FIG. 50). Therefore, according to the distance measuring device 10E, it is possible to derive the designated pixel three-dimensional coordinates with higher accuracy than when the final imaging position distance is not used.
•   The designated pixel three-dimensional coordinates are defined based on the target pixel coordinates, the corresponding target pixel coordinates, the final imaging position distance, the focal length, and the dimensions of the imaging pixel 60A1 (see expression (2)). Therefore, according to the distance measuring device 10E, the designated pixel three-dimensional coordinates can be derived with higher accuracy than when the designated pixel three-dimensional coordinates are not defined based on the final imaging position distance, the target pixel coordinates, the corresponding target pixel coordinates, the focal length, and the dimensions of the imaging pixel 60A1.
  • the distance measured based on the laser beam emitted from the second position is used as the reference distance, but the technology of the present disclosure is not limited to this.
  • the distance measured based on the laser light emitted from the first position may be used as the reference distance.
  • an information processing system 650 includes distance measuring devices 10F1 and 10F2 and a PC 652.
  • the PC 652 can communicate with the distance measuring devices 10F1 and 10F2.
  • the PC 652 is an example of an information processing apparatus according to the technology of the present disclosure.
  • the distance measuring device 10F1 is disposed at the first position, and the distance measuring device 10F2 is disposed at a second position different from the first position.
  • the distance measuring devices 10F1 and 10F2 have the same configuration.
•   hereinafter, the distance measuring devices 10F1 and 10F2 are referred to as the "distance measuring device 10F" when it is not necessary to distinguish between them.
  • the distance measuring device 10F differs from the distance measuring device 10A in that it includes an imaging device 15 instead of the imaging device 14.
  • the imaging device 15 is different from the imaging device 14 in that an imaging device body 19 is provided instead of the imaging device body 18.
  • the imaging device main body 19 is different from the imaging device main body 18 in having a communication I / F 83.
  • the communication I / F 83 is connected to the bus line 84 and operates under the control of the main control unit 62.
  • the communication I / F 83 is connected to a communication network (not shown) such as the Internet, and controls transmission / reception of various information to / from the PC 652 connected to the communication network.
  • the PC 652 includes a main control unit 653.
  • the main control unit 653 includes a CPU 654, a primary storage unit 656, and a secondary storage unit 658.
  • the CPU 654, the primary storage unit 656, and the secondary storage unit 658 are connected to each other via a bus line 660.
  • the PC 652 includes a communication I / F 662.
  • the communication I / F 662 is connected to the bus line 660 and operates under the control of the main control unit 653.
  • the communication I / F 662 is connected to a communication network, and controls transmission / reception of various information to / from the distance measuring device 10F connected to the communication network.
  • the PC 652 includes a reception unit 663 and a display unit 664.
•   The reception unit 663 is connected to the bus line 660 via a reception I/F (not shown), and the reception I/F outputs an instruction content signal indicating the content of the instruction received by the reception unit 663 to the main control unit 653.
  • the reception unit 663 is realized by, for example, a keyboard, a mouse, and a touch panel.
  • the display unit 664 is connected to the bus line 660 via a display control unit (not shown), and displays various information under the control of the display control unit.
  • the display unit 664 is realized by an LCD, for example.
  • the secondary storage unit 658 stores the dimension derivation program 105A (105B) described in the above embodiments. Further, the secondary storage unit 658 stores the imaging position distance derivation program 106A (106B, 106C, 106D, 106E) described in the above embodiments. The secondary storage unit 658 stores the three-dimensional coordinate derivation program 108A (108B) described in the above embodiments. Further, the secondary storage unit 658 stores the focal length derivation table 109A (109B) described in the above embodiments.
•   Hereinafter, when it is not necessary to distinguish between the dimension derivation programs 105A and 105B, they are referred to as the "dimension derivation program" without reference numerals.
•   When it is not necessary to distinguish between the imaging position distance derivation programs 106A, 106B, 106C, 106D, and 106E, they are referred to as the "imaging position distance derivation program" without reference numerals.
•   When it is not necessary to distinguish between the three-dimensional coordinate derivation programs 108A and 108B, they are referred to as the "three-dimensional coordinate derivation program" without reference numerals.
•   When it is not necessary to distinguish between the focal length derivation tables 109A and 109B, they are referred to as the "focal length derivation table" without reference numerals.
  • the CPU 654 acquires the first captured image signal, the target pixel coordinate, the distance, and the like from the distance measuring device 10F1 via the communication I / F 662. In addition, the CPU 654 acquires the second captured image signal and the like from the distance measuring device 10F2 via the communication I / F 662.
  • the CPU 654 reads the dimension derivation program from the secondary storage unit 658, loads it into the primary storage unit 656, and executes it.
  • the CPU 654 reads the imaging position distance derivation program from the secondary storage unit 658, loads it into the primary storage unit 656, and executes it.
  • the CPU 654 reads the three-dimensional coordinate derivation program from the secondary storage unit 658, loads it into the primary storage unit 656, and executes it.
  • the dimension derivation program and the imaging position distance derivation program are collectively referred to as “derivation program”.
  • the CPU 654 operates as the acquisition unit 110A (110B, 110C, 110D, 110E) and the derivation unit 111A (111B, 111C, 111D, 111E) by executing the derivation program.
  • the PC 652 acquires the first captured image signal, the second captured image signal, the pixel-of-interest coordinates, the distance, and the like from the distance measuring device 10F via the communication I / F 662, and executes the derivation program.
  • the same operations and effects as those of the above embodiments can be obtained.
  • in the above embodiments, the case where the distance measuring device 10A is realized by the distance measuring unit 12 and the imaging device 14 is exemplified; in the seventh embodiment, a distance measuring device 10G will be described. Note that in the seventh embodiment, the same components as those in the above embodiments are denoted by the same reference numerals, their description is omitted, and only the parts different from the above embodiments are described.
  • the distance measuring device 10G according to the seventh embodiment is different from the distance measuring device 10A according to the first embodiment in that an imaging device 700 is provided instead of the imaging device 14.
  • the distance measuring device 10G is different from the distance measuring device 10A in that it includes a smart device 702.
  • the imaging device 700 is different from the imaging device 14 in that it has an imaging device body 703 instead of the imaging device body 18.
  • the imaging device body 703 is different from the imaging device body 18 in that the imaging device body 703 includes a wireless communication unit 704 and a wireless communication antenna 706.
  • the wireless communication unit 704 is connected to the bus line 84 and the wireless communication antenna 706.
  • the main control unit 62 outputs transmission target information, which is information to be transmitted to the smart device 702, to the wireless communication unit 704.
  • the wireless communication unit 704 transmits the transmission target information input from the main control unit 62 to the smart device 702 via the wireless communication antenna 706 by radio waves.
  • the radio communication unit 704 acquires a signal corresponding to the received radio wave and outputs the acquired signal to the main control unit 62.
  • the smart device 702 includes a CPU 708, a primary storage unit 710, and a secondary storage unit 712.
  • the CPU 708, the primary storage unit 710, and the secondary storage unit 712 are connected to the bus line 714.
  • the CPU 708 controls the entire distance measuring device 10G including the smart device 702.
  • the primary storage unit 710 is a volatile memory used as a work area or the like when executing various programs.
  • An example of the primary storage unit 710 is a RAM.
  • the secondary storage unit 712 is a non-volatile memory that stores in advance a control program for controlling the overall operation of the distance measuring apparatus 10G including the smart device 702, various parameters, or the like.
  • An example of the secondary storage unit 712 is a flash memory or an EEPROM.
  • the smart device 702 includes a display unit 715, a touch panel 716, a wireless communication unit 718, and a wireless communication antenna 720.
  • the display unit 715 is connected to the bus line 714 via a display control unit (not shown), and displays various information under the control of the display control unit.
  • the display unit 715 is realized by an LCD, for example.
  • the touch panel 716 is overlaid on the display screen of the display unit 715, and accepts contact by an indicator.
  • the touch panel 716 is connected to the bus line 714 via a touch panel I / F (not shown), and outputs position information indicating the position touched by the indicator to the touch panel I / F.
  • the touch panel I / F operates in accordance with an instruction from the CPU 708, and outputs the position information input from the touch panel 716 to the CPU 708.
  • on the display unit 715, soft keys corresponding to the measurement imaging button 90A, an imaging button (not shown), the imaging system operation mode switching button 90B, the wide-angle instruction button 90C, the telephoto instruction button 90D, the dimension derivation button 90E, the imaging position distance derivation button 90F, the three-dimensional coordinate derivation button 90G, and the like are displayed (see FIG. 56).
  • a measurement imaging button 90A1 that functions as the measurement imaging button 90A is displayed as a soft key on the display unit 715, and is pressed by the user via the touch panel 716.
  • an imaging button (not shown) that functions as the imaging button described in the first embodiment is displayed as a soft key on the display unit 715 and pressed by the user via the touch panel 716.
  • an imaging system operation mode switching button 90B1 that functions as the imaging system operation mode switching button 90B is displayed as a soft key on the display unit 715, and is pressed by the user via the touch panel 716.
  • a wide-angle instruction button 90C1 that functions as the wide-angle instruction button 90C is displayed as a soft key on the display unit 715, and is pressed by the user via the touch panel 716.
  • a telephoto instruction button 90D1 that functions as the telephoto instruction button 90D is displayed as a soft key on the display unit 715 and is pressed by the user via the touch panel 716.
  • the display unit 715 displays a dimension derivation button 90E1 that functions as the dimension derivation button 90E as a soft key, and is pressed by the user via the touch panel 716.
  • An imaging position distance derivation button 90F1 that functions as the imaging position distance derivation button 90F is displayed as a soft key and is pressed by the user via the touch panel 716.
  • the display unit 715 displays a 3D coordinate derivation button 90G1 that functions as the 3D coordinate derivation button 90G as a soft key, and is pressed by the user via the touch panel 716.
  • the wireless communication unit 718 is connected to the bus line 714 and the wireless communication antenna 720.
  • the wireless communication unit 718 transmits the signal input from the CPU 708 to the imaging apparatus main body 703 via the wireless communication antenna 720 by radio waves.
  • the radio communication unit 718 acquires a signal corresponding to the received radio wave and outputs the acquired signal to the CPU 708. Accordingly, the imaging apparatus main body 703 is controlled by the smart device 702 by performing wireless communication with the smart device 702.
  • the secondary storage unit 712 stores a derivation program.
  • the CPU 708 reads the derivation program from the secondary storage unit 712, loads it into the primary storage unit 710, and executes it.
  • the secondary storage unit 712 stores a three-dimensional coordinate derivation program.
  • the CPU 708 reads the three-dimensional coordinate derivation program from the secondary storage unit 712, loads it into the primary storage unit 710, and executes it. Further, the secondary storage unit 712 stores a focal length derivation table.
  • the CPU 708 operates as the acquisition unit 110A (110B, 110C, 110D, 110E) and the derivation unit 111A (111B, 111C, 111D, 111E) by executing the derivation program.
  • the smart device 702 executes the derivation program, so that the same operations and effects as in the above embodiments can be obtained.
  • in the above embodiments, the corresponding target pixel is specified by performing image analysis on the second captured image, and the corresponding target pixel coordinates that identify the specified corresponding target pixel are acquired (steps 330M and 336L), but the technology of the present disclosure is not limited to this.
  • the user may designate a pixel corresponding to the target pixel from the second captured image via the touch panel 88 as the corresponding target pixel.
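As an illustration of the kind of image analysis referred to above, the following is a minimal sketch that locates a corresponding pixel by template matching; it assumes OpenCV, a target pixel away from the image border, and function and parameter names that are illustrative rather than taken from the disclosure.

```python
# Minimal corresponding-pixel search by normalized template matching,
# assuming OpenCV; names and the patch size are illustrative only.
import cv2

def find_corresponding_pixel(first_img, second_img, target_xy, patch=21):
    """Locate, in second_img, the pixel corresponding to target_xy in first_img."""
    x, y = target_xy
    h = patch // 2
    # Cut a small patch around the target pixel in the first captured image.
    template = first_img[y - h:y + h + 1, x - h:x + h + 1]
    # Normalized cross-correlation over the whole second captured image.
    scores = cv2.matchTemplate(second_img, template, cv2.TM_CCOEFF_NORMED)
    _, _, _, max_loc = cv2.minMaxLoc(scores)
    # matchTemplate reports the top-left corner of the best match,
    # so shift back to the patch center.
    return (max_loc[0] + h, max_loc[1] + h)
```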
  • in the above embodiments, the derivation unit 111A (111B, 111C, 111D, 111E) derives the irradiation position real space coordinates, the plane orientation, the imaging position distance, the designated pixel three-dimensional coordinates, and the like using arithmetic expressions.
  • the technology of the present disclosure is not limited to this.
  • for example, the derivation unit 111A (111B, 111C, 111D, 111E) may derive the irradiation position real space coordinates, the plane orientation, the imaging position distance, the designated pixel three-dimensional coordinates, and the like using a table that takes the independent variables of the arithmetic expressions as inputs and returns the values of the dependent variables as outputs.
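A minimal sketch of this table-based substitution is shown below: an arithmetic expression is tabulated over a grid of its independent variable once, and later evaluations interpolate in the table instead of recomputing the expression. The sample expression, the grid, and all names are assumptions for illustration, not the disclosed expressions.

```python
# Sketch of replacing an arithmetic expression with a precomputed table:
# the independent variable is the key, the dependent variable the value.
import bisect

def build_table(expression, grid):
    """Tabulate expression(x) over a grid of independent-variable values."""
    return sorted((x, expression(x)) for x in grid)

def lookup(table, x):
    """Linearly interpolate the dependent variable for input x."""
    keys = [k for k, _ in table]
    i = bisect.bisect_left(keys, x)
    if i == 0:
        return table[0][1]          # clamp below the table
    if i == len(table):
        return table[-1][1]         # clamp above the table
    (x0, y0), (x1, y1) = table[i - 1], table[i]
    return y0 + (y1 - y0) * (x - x0) / (x1 - x0)

# Stand-in expression for illustration only.
table = build_table(lambda d: 7.0 + 0.2 * d, range(1, 51))
print(lookup(table, 2.5))  # -> 7.5
```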
  • in the above embodiments, the case where the derivation program and the three-dimensional coordinate derivation program are read from the secondary storage unit 104 (658, 712) is exemplified, but the programs do not necessarily have to be stored in the secondary storage unit 104 (658, 712) from the beginning.
  • for example, the derivation program and the three-dimensional coordinate derivation program may first be stored in an arbitrary portable storage medium 750 such as an SSD (Solid State Drive) or a USB (Universal Serial Bus) memory.
  • in that case, the derivation program of the storage medium 750 is installed in the distance measuring device 10A (10B, 10C, 10D, 10E) (hereinafter referred to as “the distance measuring device 10A etc.”) or the PC 652, and the installed derivation program is executed by the CPU 100 (654, 708). Similarly, the three-dimensional coordinate derivation program of the storage medium 750 is installed in the distance measuring device 10A etc. or the PC 652, and the installed three-dimensional coordinate derivation program is executed by the CPU 100 (654, 708).
  • alternatively, the derivation program and the three-dimensional coordinate derivation program may be stored in a storage unit of another computer or server device connected to the distance measuring device 10A etc. or the PC 652 via a communication network (not shown), and may be downloaded in response to a request from the distance measuring device 10A etc. or the PC 652.
  • in that case, the downloaded derivation program and three-dimensional coordinate derivation program are executed by the CPU 100 (654, 708).
  • in each of the above embodiments, the case where various types of information such as the irradiation position mark 136, the imaging position distance, and the designated pixel three-dimensional coordinates are displayed on the display unit 86 is illustrated, but the technology of the present disclosure is not limited to this.
  • various types of information may be displayed on a display unit of an external device that is used by being connected to the distance measuring device 10A or the like or the PC 652.
  • examples of the external device include a PC and a glasses-type or watch-type wearable terminal device.
  • in each of the above embodiments, the irradiation position mark 136, the imaging position distance, the designated pixel three-dimensional coordinates, and the like are visibly displayed on the display unit 86, but the technology of the present disclosure is not limited to this.
  • for example, an audible indication such as sound output by a sound reproduction device, or a permanent visible indication such as printed matter output by a printer, may be used instead of the visible display, or may be used in combination with it.
  • in each of the above embodiments, the irradiation position mark 136, the imaging position distance, the designated pixel three-dimensional coordinates, and the like are displayed on the display unit 86, but the technology of the present disclosure is not limited to this. For example, at least one of the irradiation position mark 136, the imaging position distance, the designated pixel three-dimensional coordinates, and the like may be displayed on a display unit (not shown) different from the display unit 86 while the rest are displayed on the display unit 86, or each may be displayed individually on a plurality of display units including the display unit 86.
  • in each of the above embodiments, laser light is exemplified as the distance measurement light.
  • the technology of the present disclosure is not limited to this; any directional light, that is, light having directivity, may be used.
  • for example, it may be directional light obtained by a light emitting diode (LED: Light Emitting Diode) or a super luminescent diode (SLD).
  • the directivity of the directional light is preferably comparable to the directivity of laser light, and is preferably directivity that can be used for ranging within a range of, for example, several meters to several kilometers.
  • the dimension derivation process, the imaging position distance derivation process, and the three-dimensional coordinate derivation process described in the above embodiments are merely examples. Therefore, it goes without saying that unnecessary steps may be deleted, new steps may be added, and the processing order may be changed within a range not departing from the spirit.
  • each process included in the dimension derivation process, the imaging position distance derivation process, and the three-dimensional coordinate derivation process may be realized only by a hardware configuration such as an ASIC, or may be realized by a combination of a software configuration using a computer and a hardware configuration.
  • in each of the above embodiments, the distance measuring unit 12 is attached to the side surface of the imaging apparatus main body 18 included in the distance measuring apparatus 10A and the like, but the technology of the present disclosure is not limited to this.
  • the distance measuring unit 12 may be attached to the upper surface or the lower surface of the imaging apparatus main body 18.
  • a distance measuring device 10H may be applied instead of the distance measuring device 10A or the like.
  • the distance measuring device 10H is different from the distance measuring device 10A and the like in that it has a distance measuring unit 12A instead of the distance measuring unit 12 and an imaging apparatus main body 18A instead of the imaging apparatus main body 18.
  • the distance measuring unit 12A is housed in the housing 18A1 of the imaging apparatus main body 18A, and the objective lenses 32 and 38 are exposed from the housing 18A1 on the front side of the distance measuring apparatus 10H (the side on which the focus lens 50 is exposed).
  • the distance measuring unit 12A is preferably arranged so that the optical axes L1 and L2 are set at the same height in the vertical direction. Note that an opening (not shown) through which the distance measuring unit 12A can be inserted into and removed from the housing 18A1 may be formed in the housing 18A1.
  • the focal length deriving table 109A that can directly derive the focal length from the deriving distance is exemplified, but the technology of the present disclosure is not limited to this.
  • a focal length derivation table 109D having a correction value corresponding to the actually measured distance may be employed.
  • in the focal length derivation table 109D, different correction values are associated with each of the plurality of derivation distances. That is, in the focal length derivation table 109D, the derivation distances and the correction values are associated with each other so that the focal length is derived by correcting, with the correction value, the derivation distance corresponding to the actually measured distance.
  • the focal length derivation table 109D is a table derived from, for example, a result of at least one of a test by the actual device of the distance measuring device 10A and a computer simulation based on a design specification of the distance measuring device 10A.
  • when the focal length derivation table 109D is used by the CPU 100, the focal length is derived as follows: a focal length of 7 mm is derived by multiplying a derivation distance of 1 m by the correction value Y1; 8 mm by multiplying 2 m by Y2; 10 mm by multiplying 3 m by Y3; 12 mm by multiplying 5 m by Y4; 14 mm by multiplying 10 m by Y5; 16 mm by multiplying 30 m by Y6; and 18 mm from the correction value Y7 associated with the derivation distance of infinity.
  • when the actually measured distance falls between derivation distances, a correction value is derived from the focal length derivation table 109D by the interpolation method described above, and the focal length may be derived by correcting the actually measured distance with the derived correction value.
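The following sketch illustrates this correction-value scheme. The finite correction values are back-solved from the focal lengths stated above (for example, Y1 = 7.0 because 1 m × Y1 = 7 mm); the infinity row cannot store a finite multiplier, so it is modeled here as storing the 18 mm long-range value directly, and linear interpolation is only one possible reading of the interpolation method mentioned.

```python
# Sketch of focal-length derivation in the style of table 109D:
# focal length = measured distance x interpolated correction value.
import math

# (derivation distance [m], correction value); values back-solved from
# the focal lengths in the text, e.g. 2 m x 4.0 = 8 mm. The infinity row
# stores the 18 mm focal length directly and acts as a cap.
TABLE_109D = [(1.0, 7.0), (2.0, 4.0), (3.0, 10.0 / 3.0), (5.0, 2.4),
              (10.0, 1.4), (30.0, 16.0 / 30.0), (math.inf, 18.0)]

def focal_length_mm(measured_m: float) -> float:
    """Derive the focal length by correcting the measured distance."""
    if measured_m <= TABLE_109D[0][0]:          # below the table: clamp
        return TABLE_109D[0][0] * TABLE_109D[0][1]
    for (d0, y0), (d1, y1) in zip(TABLE_109D, TABLE_109D[1:]):
        if d0 <= measured_m <= d1:
            if math.isinf(d1):                  # beyond the last finite row
                return y1                       # the 18 mm long-range value
            t = (measured_m - d0) / (d1 - d0)
            y = y0 + t * (y1 - y0)              # interpolated correction value
            return measured_m * y               # correct the measured distance
    return TABLE_109D[-1][1]

print(focal_length_mm(2.0))   # -> 8.0
```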
  • in this example, a multiplication coefficient for the derivation distance is defined as the correction value, but the correction value is not limited to a multiplication coefficient; any numerical value may be used as long as it is used, together with the operator necessary for the derivation, to derive the focal length from the derivation distance.
  • in this way, the CPU 100 corrects the actually measured distance with the correction value to derive the focal length corresponding to the actually measured distance, so that the distance measuring device 10A and the like can derive the focal length with high accuracy and without time and effort.
  • the technique of the present disclosure can also be applied when a moving image such as a live view image is captured at the second position.
  • the CPU 100 or the like may acquire the measured distance or the like while the moving image is captured, and derive the focal distance corresponding to the acquired measured distance or the like using the focal distance derivation table 109A or the like.
  • thereby, immediate derivation of the imaging position distance and the like is realized with high accuracy.
  • deriving the focal length while a moving image is being captured in this manner is particularly effective when the focal length changes as focus control is continuously performed by so-called continuous AF (Auto-Focus).
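A sketch of such per-frame derivation during live-view capture follows; the camera, rangefinder, and derivation interfaces are hypothetical stand-ins for the device's internal routines (for example, a table-109A/109D-style lookup), not an API defined by the disclosure.

```python
# Sketch of per-frame focal-length derivation during live-view capture.
# camera, rangefinder, and derive_focal_length_mm are hypothetical.
def live_view_loop(camera, rangefinder, derive_focal_length_mm):
    while camera.is_live_view_active():
        frame = camera.get_frame()
        # Continuous AF may have moved the focus lens since the last
        # frame, so re-measure and re-derive for every frame.
        distance_m = rangefinder.measure_distance_m()
        focal_mm = derive_focal_length_mm(distance_m)
        # Downstream derivations (imaging position distance, dimensions)
        # then use a focal length that matches this frame's lens state.
        yield frame, distance_m, focal_mm
```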
  • in each of the above embodiments, distance measurement is performed without providing a focus determination area, that is, an area whose in-focus state is to be determined, but the technology of the present disclosure is not limited to this; distance measurement may instead be performed by irradiating the focus determination area with laser light.

Abstract

This information processing device is provided with: an acquisition unit which acquires a measured distance, that is, a distance measured by a measurement unit included in a distance measurement device, the distance measurement device being provided with an imaging unit that has a focus lens and images a subject, and with the measurement unit, which measures the distance to the subject by irradiating the subject with directional light, i.e., light having directivity, and receiving reflected light of the directional light; and a derivation unit which uses correspondence relation information indicating the correspondence relation between the measured distance and the focal length of the focus lens to derive the focal length corresponding to the measured distance acquired by the acquisition unit.
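Read as a program structure, the abstract describes two cooperating units; the following minimal sketch mirrors that structure, with all class and method names invented for illustration rather than taken from the disclosure.

```python
# Minimal sketch of the two-unit structure in the abstract: an acquisition
# unit obtains the measured distance from the measurement unit, and a
# derivation unit maps it to a focal length via correspondence information.
class AcquisitionUnit:
    def __init__(self, measurement_unit):
        self.measurement_unit = measurement_unit

    def acquire_measured_distance(self):
        # Distance measured by emitting directional light (e.g. a laser)
        # toward the subject and receiving its reflection.
        return self.measurement_unit.measure()

class DerivationUnit:
    def __init__(self, correspondence_info):
        # correspondence_info: callable mapping measured distance -> focal length,
        # e.g. a table lookup with interpolation.
        self.correspondence_info = correspondence_info

    def derive_focal_length(self, measured_distance):
        return self.correspondence_info(measured_distance)
```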

Description

Information processing apparatus, information processing method, and program
The technology of the present disclosure relates to an information processing apparatus, an information processing method, and a program.
Japanese Patent Application Laid-Open No. 2004-157386 discloses a technique for measuring the distance to a subject based on an image signal indicating the subject. International Publication No. 2008-155961 discloses a technique for measuring the distance to a subject based on a pair of captured images obtained by imaging with a pair of imaging units mounted a predetermined interval apart from each other. Further, Japanese Patent Application Laid-Open No. 2004-264827 discloses a technique for deriving the focal length of an imaging unit based on image data.
Incidentally, in order to increase the accuracy of the focal length, a technique is known in which, when the length of a reference image included in a captured image obtained by imaging is input to the distance measuring device by the user, the distance measuring device corrects the focal length of the imaging unit based on the input length of the reference image.
However, the operation of inputting the length of the reference image included in the captured image to the distance measuring device in order to increase the accuracy of the focal length is troublesome for the user.
One embodiment of the present invention provides an information processing apparatus, an information processing method, and a program that can derive the focal length with high accuracy and without time and effort, compared with a case where the user is made to input the length of the reference image included in the captured image in order to increase the accuracy of the focal length.
An information processing apparatus according to a first aspect of the present invention includes: an acquisition unit that acquires an actually measured distance, which is a distance measured by a measurement unit included in a distance measuring device including an imaging unit that has a focus lens and images a subject, and the measurement unit, which measures the distance to the subject by emitting directional light, which is light having directivity, toward the subject and receiving reflected light of the directional light; and a derivation unit that derives the focal length corresponding to the actually measured distance acquired by the acquisition unit, using correspondence relationship information indicating a correspondence relationship between the actually measured distance and the focal length of the focus lens.
Therefore, the information processing apparatus according to the first aspect of the present invention can derive the focal length with high accuracy and without time and effort, compared with a case where the user is made to input the length of the reference image included in the captured image in order to increase the accuracy of the focal length.
In the information processing apparatus according to a second aspect of the present invention, in the information processing apparatus according to the first aspect, the correspondence relationship information is information including a correction value corresponding to the actually measured distance, and the derivation unit derives the focal length corresponding to the actually measured distance by correcting the actually measured distance with the correction value.
Therefore, the information processing apparatus according to the second aspect of the present invention can derive the focal length with high accuracy and without time and effort, compared with a case where the user is made to input the length of the reference image included in the captured image in order to increase the accuracy of the focal length.
In the information processing apparatus according to a third aspect of the present invention, in the information processing apparatus according to the first aspect, the imaging unit further includes a zoom lens; the correspondence relationship information is information indicating a correspondence relationship among the actually measured distance, the position of the zoom lens in the optical axis direction in the imaging unit, and the focal length; the acquisition unit further acquires position information indicating the position of the zoom lens in the optical axis direction in the imaging unit; and the derivation unit derives, using the correspondence relationship information, the focal length corresponding to the actually measured distance and the position information acquired by the acquisition unit.
Therefore, the information processing apparatus according to the third aspect of the present invention can derive the focal length with high accuracy and without time and effort even when the position of the zoom lens in the optical axis direction changes, compared with a case where the user is made to input the length of the reference image included in the captured image in order to increase the accuracy of the focal length.
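One plausible realization of this three-way correspondence is a two-dimensional table keyed by measured distance and zoom position, interpolated between grid points. The sketch below assumes NumPy and SciPy, and every grid value is invented for illustration; the disclosure does not specify concrete table contents.

```python
# Sketch of the third aspect's correspondence information: a 2-D table
# keyed by (measured distance, zoom position) with bilinear interpolation.
import numpy as np
from scipy.interpolate import RegularGridInterpolator

distances_m = np.array([1.0, 2.0, 3.0, 5.0, 10.0])
zoom_positions_mm = np.array([0.0, 5.0, 10.0])   # optical-axis lens position
focal_mm = np.array([[7.0, 12.0, 18.0],          # one row per distance
                     [8.0, 13.0, 19.0],
                     [10.0, 15.0, 21.0],
                     [12.0, 17.0, 23.0],
                     [14.0, 19.0, 25.0]])

table = RegularGridInterpolator((distances_m, zoom_positions_mm), focal_mm)
print(table([[2.5, 7.5]]))   # focal length at 2.5 m, zoom position 7.5 mm
```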
In the information processing apparatus according to a fourth aspect of the present invention, in the information processing apparatus according to the third aspect, the correspondence relationship information is information indicating a correspondence relationship among the actually measured distance, the position of the zoom lens in the optical axis direction in the imaging unit, the temperature of a region that affects imaging by the imaging unit, and the focal length; the acquisition unit further acquires temperature information indicating the temperature of the region; and the derivation unit derives, using the correspondence relationship information, the focal length corresponding to the actually measured distance, the position information, and the temperature information acquired by the acquisition unit.
Therefore, the information processing apparatus according to the fourth aspect of the present invention can derive the focal length with high accuracy and without time and effort even when the position of the zoom lens in the optical axis direction changes and the temperature of the region that affects imaging changes, compared with a case where the user is made to input the length of the reference image included in the captured image in order to increase the accuracy of the focal length.
In the information processing apparatus according to a fifth aspect of the present invention, in the information processing apparatus according to the fourth aspect, the correspondence relationship information is information indicating a correspondence relationship among the actually measured distance, the position of the zoom lens in the optical axis direction in the imaging unit, the temperature of a region that affects imaging by the imaging unit, the attitude of the focus lens with respect to the vertical direction, and the focal length; the acquisition unit further acquires focus lens attitude information indicating the attitude of the focus lens with respect to the vertical direction; and the derivation unit derives, using the correspondence relationship information, the focal length corresponding to the actually measured distance, the position information, the temperature information, and the focus lens attitude information acquired by the acquisition unit.
Therefore, the information processing apparatus according to the fifth aspect of the present invention can derive the focal length with high accuracy and without time and effort even when the position of the zoom lens in the optical axis direction changes, the temperature of the region that affects imaging by the imaging unit changes, and the attitude of the focus lens changes, compared with a case where the user is made to input the length of the reference image included in the captured image in order to increase the accuracy of the focal length.
In the information processing apparatus according to a sixth aspect of the present invention, in the information processing apparatus according to the fifth aspect, the correspondence relationship information is information indicating a correspondence relationship among the actually measured distance, the position of the zoom lens in the optical axis direction in the imaging unit, the temperature of a region that affects imaging by the imaging unit, the attitude of the focus lens with respect to the vertical direction, the attitude of the zoom lens with respect to the vertical direction, and the focal length; the acquisition unit further acquires zoom lens attitude information indicating the attitude of the zoom lens with respect to the vertical direction; and the derivation unit derives, using the correspondence relationship information, the focal length corresponding to the actually measured distance, the position information, the temperature information, the focus lens attitude information, and the zoom lens attitude information acquired by the acquisition unit.
Therefore, the information processing apparatus according to the sixth aspect of the present invention can derive the focal length with high accuracy and without time and effort even when the position of the zoom lens in the optical axis direction changes, the temperature of the region that affects imaging by the imaging unit changes, the attitude of the focus lens changes, and the attitude of the zoom lens changes, compared with a case where the user is made to input the length of the reference image included in the captured image in order to increase the accuracy of the focal length.
In the information processing apparatus according to a seventh aspect of the present invention, in the information processing apparatus according to the third aspect, the correspondence relationship information is information indicating a correspondence relationship among the actually measured distance, the position of the zoom lens in the optical axis direction in the imaging unit, the attitude of the zoom lens with respect to the vertical direction, and the focal length; the acquisition unit further acquires zoom lens attitude information indicating the attitude of the zoom lens with respect to the vertical direction; and the derivation unit derives, using the correspondence relationship information, the focal length corresponding to the actually measured distance, the position information, and the zoom lens attitude information acquired by the acquisition unit.
Therefore, the information processing apparatus according to the seventh aspect of the present invention can derive the focal length with high accuracy and without time and effort even when the position of the zoom lens in the optical axis direction changes and the attitude of the zoom lens changes, compared with a case where the user is made to input the length of the reference image included in the captured image in order to increase the accuracy of the focal length.
In the information processing apparatus according to an eighth aspect of the present invention, in the information processing apparatus according to the first aspect, the imaging unit further includes a zoom lens; the correspondence relationship information is information indicating a correspondence relationship between the attitude of the zoom lens with respect to the vertical direction and the focal length; the acquisition unit further acquires zoom lens attitude information indicating the attitude of the zoom lens with respect to the vertical direction; and the derivation unit derives, using the correspondence relationship information, the focal length corresponding to the actually measured distance and the zoom lens attitude information acquired by the acquisition unit.
Therefore, the information processing apparatus according to the eighth aspect of the present invention can derive the focal length with high accuracy and without time and effort even when the attitude of the zoom lens changes, compared with a case where the user is made to input the length of the reference image included in the captured image in order to increase the accuracy of the focal length.
In the information processing apparatus according to a ninth aspect of the present invention, in the information processing apparatus according to the eighth aspect, the correspondence relationship information is information indicating a correspondence relationship among the actually measured distance, the attitude of the zoom lens with respect to the vertical direction, the attitude of the focus lens with respect to the vertical direction, and the focal length; the acquisition unit further acquires focus lens attitude information indicating the attitude of the focus lens with respect to the vertical direction; and the derivation unit derives, using the correspondence relationship information, the focal length corresponding to the actually measured distance, the zoom lens attitude information, and the focus lens attitude information acquired by the acquisition unit.
Therefore, the information processing apparatus according to the ninth aspect of the present invention can derive the focal length with high accuracy and without time and effort even when the attitude of the zoom lens changes and the attitude of the focus lens changes, compared with a case where the user is made to input the length of the reference image included in the captured image in order to increase the accuracy of the focal length.
In the information processing apparatus according to a tenth aspect of the present invention, in the information processing apparatus according to the ninth aspect, the correspondence relationship information is information indicating a correspondence relationship among the actually measured distance, the temperature of a region that affects imaging by the imaging unit, the attitude of the zoom lens with respect to the vertical direction, the attitude of the focus lens with respect to the vertical direction, and the focal length; the acquisition unit further acquires temperature information indicating the temperature of the region; and the derivation unit derives, using the correspondence relationship information, the focal length corresponding to the actually measured distance, the temperature information, the zoom lens attitude information, and the focus lens attitude information acquired by the acquisition unit.
Therefore, the information processing apparatus according to the tenth aspect of the present invention can derive the focal length with high accuracy and without time and effort even when the attitude of the zoom lens changes, the attitude of the focus lens changes, and the temperature of the region that affects imaging by the imaging unit changes, compared with a case where the user is made to input the length of the reference image included in the captured image in order to increase the accuracy of the focal length.
In the information processing apparatus according to an eleventh aspect of the present invention, in the information processing apparatus according to the eighth aspect, the correspondence relationship information is information indicating a correspondence relationship among the actually measured distance, the temperature of a region that affects imaging by the imaging unit, the attitude of the zoom lens with respect to the vertical direction, and the focal length; the acquisition unit further acquires temperature information indicating the temperature of the region; and the derivation unit derives, using the correspondence relationship information, the focal length corresponding to the actually measured distance, the temperature information, and the zoom lens attitude information acquired by the acquisition unit.
Therefore, the information processing apparatus according to the eleventh aspect of the present invention can derive the focal length with high accuracy and without time and effort even when the attitude of the zoom lens changes and the temperature of the region that affects imaging by the imaging unit changes, compared with a case where the user is made to input the length of the reference image included in the captured image in order to increase the accuracy of the focal length.
In the information processing apparatus according to a twelfth aspect of the present invention, in the information processing apparatus according to the third aspect, the correspondence relationship information is information indicating a correspondence relationship among the actually measured distance, the attitude of the focus lens with respect to the vertical direction, and the focal length; the acquisition unit further acquires focus lens attitude information indicating the attitude of the focus lens with respect to the vertical direction; and the derivation unit derives, using the correspondence relationship information, the focal length corresponding to the actually measured distance, the position information, and the focus lens attitude information acquired by the acquisition unit.
Therefore, the information processing apparatus according to the twelfth aspect of the present invention can derive the focal length with high accuracy and without time and effort even when the position of the zoom lens in the optical axis direction changes and the attitude of the focus lens changes, compared with a case where the user is made to input the length of the reference image included in the captured image in order to increase the accuracy of the focal length.
In the information processing apparatus according to a thirteenth aspect of the present invention, in the information processing apparatus according to the twelfth aspect, the correspondence relationship information is information indicating a correspondence relationship among the actually measured distance, the position of the zoom lens in the optical axis direction in the imaging unit, the attitude of the focus lens with respect to the vertical direction, the attitude of the zoom lens with respect to the vertical direction, and the focal length; the acquisition unit further acquires zoom lens attitude information indicating the attitude of the zoom lens with respect to the vertical direction; and the derivation unit derives, using the correspondence relationship information, the focal length corresponding to the actually measured distance, the position information, the focus lens attitude information, and the zoom lens attitude information acquired by the acquisition unit.
Therefore, the information processing apparatus according to the thirteenth aspect of the present invention can derive the focal length with high accuracy and without time and effort even when the position of the zoom lens in the optical axis direction changes, the attitude of the focus lens changes, and the attitude of the zoom lens changes, compared with a case where the user is made to input the length of the reference image included in the captured image in order to increase the accuracy of the focal length.
In the information processing apparatus according to a fourteenth aspect of the present invention, in the information processing apparatus according to the first aspect, the correspondence relationship information is information indicating a correspondence relationship among the actually measured distance, the attitude of the focus lens with respect to the vertical direction, and the focal length; the acquisition unit further acquires focus lens attitude information indicating the attitude of the focus lens with respect to the vertical direction; and the derivation unit derives, using the correspondence relationship information, the focal length corresponding to the actually measured distance and the focus lens attitude information acquired by the acquisition unit.
Therefore, the information processing apparatus according to the fourteenth aspect of the present invention can derive the focal length with high accuracy and without time and effort even when the attitude of the focus lens changes, compared with a case where the user is made to input the length of the reference image included in the captured image in order to increase the accuracy of the focal length.
In the information processing apparatus according to a fifteenth aspect of the present invention, in the information processing apparatus according to the first aspect, the correspondence relationship information is information indicating a correspondence relationship among the actually measured distance, the temperature of a region that affects imaging by the imaging unit, and the focal length; the acquisition unit further acquires temperature information indicating the temperature of the region; and the derivation unit derives, using the correspondence relationship information, the focal length corresponding to the actually measured distance and the temperature information acquired by the acquisition unit.
Therefore, the information processing apparatus according to the fifteenth aspect of the present invention can derive the focal length with high accuracy and without time and effort even when the temperature of the region that affects imaging by the imaging unit changes, compared with a case where the user is made to input the length of the reference image included in the captured image in order to increase the accuracy of the focal length.
In the information processing apparatus according to a sixteenth aspect of the present invention, in the information processing apparatus according to the fifteenth aspect, the correspondence relationship information is information indicating a correspondence relationship among the actually measured distance, the temperature of a region that affects imaging by the imaging unit, the attitude of the focus lens with respect to the vertical direction, and the focal length; the acquisition unit further acquires focus lens attitude information indicating the attitude of the focus lens with respect to the vertical direction; and the derivation unit derives, using the correspondence relationship information, the focal length corresponding to the actually measured distance, the temperature information, and the focus lens attitude information acquired by the acquisition unit.
Therefore, the information processing apparatus according to the sixteenth aspect of the present invention can derive the focal length with high accuracy and without time and effort even when the attitude of the focus lens changes and the temperature of the region that affects imaging by the imaging unit changes, compared with a case where the user is made to input the length of the reference image included in the captured image in order to increase the accuracy of the focal length.
In the information processing apparatus according to a seventeenth aspect of the present invention, in the information processing apparatus according to any one of the first to sixteenth aspects, the acquisition unit acquires a first captured image obtained by the imaging unit imaging the subject from a first imaging position and a second captured image obtained by imaging the subject from a second imaging position different from the first imaging position, and acquires, as the actually measured distance, the distance to the subject measured by the measurement unit emitting the directional light toward the subject from a position corresponding to the first imaging position and receiving the reflected light of the directional light; and the derivation unit derives an imaging position distance, which is the distance between the first imaging position and the second imaging position, based on the focal length derived using the correspondence relationship information.
Therefore, the information processing apparatus according to the seventeenth aspect of the present invention can derive the imaging position distance with high accuracy and without time and effort, compared with a case where the user is made to input the length of the reference image included in the captured image in order to increase the accuracy of the focal length.
In the information processing apparatus according to an eighteenth aspect of the present invention, in the information processing apparatus according to any one of the first to seventeenth aspects, the derivation unit derives the dimension of a real-space region corresponding to an interval between a plurality of pixels designated in an image obtained by the imaging unit imaging the imaging range irradiated with the directional light used in the measurement of the actually measured distance, based on the focal length derived using the correspondence relationship information, the actually measured distance acquired by the acquisition unit, and the interval between the plurality of pixels.
Therefore, the information processing apparatus according to the eighteenth aspect of the present invention can derive the dimension of the real-space region with high accuracy and without time and effort, compared with a case where the user is made to input the length of the reference image included in the captured image in order to increase the accuracy of the focal length.
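The dimension derivation in this aspect is consistent with the standard pinhole-camera relation L = u·p·D / f (u: pixel interval in pixels, p: pixel pitch, D: actually measured distance, f: derived focal length). The sketch below assumes that standard relation rather than reproducing the disclosed arithmetic expression, and its numeric values are illustrative.

```python
# Sketch of real-space dimension derivation under the standard
# pinhole-camera relation L = u * p * D / f; this is the common
# textbook relation, assumed here, not quoted from the patent.
def real_space_length_m(pixel_interval_px: float, pixel_pitch_m: float,
                        measured_distance_m: float, focal_length_m: float) -> float:
    return pixel_interval_px * pixel_pitch_m * measured_distance_m / focal_length_m

# 500 px apart on a 1.5 um-pitch sensor, subject 10 m away,
# derived focal length 14 mm -> about 0.54 m in real space.
print(real_space_length_m(500, 1.5e-6, 10.0, 14e-3))
```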
An information processing method according to a nineteenth aspect of the present invention includes: acquiring an actually measured distance, which is a distance measured by a measurement unit included in a distance measuring device including an imaging unit that has a focus lens and images a subject, and the measurement unit, which measures the distance to the subject by emitting directional light, which is light having directivity, toward the subject and receiving reflected light of the directional light; and deriving the focal length corresponding to the acquired actually measured distance, using correspondence relationship information indicating a correspondence relationship between the actually measured distance and the focal length of the focus lens.
Therefore, the information processing method according to the nineteenth aspect of the present invention can derive the focal length with high accuracy and without time and effort, compared with a case where the user is made to input the length of the reference image included in the captured image in order to increase the accuracy of the focal length.
A program according to a twentieth aspect of the present invention causes a computer to execute processing including: acquiring an actually measured distance, which is a distance measured by a measurement unit included in a distance measuring device including an imaging unit that has a focus lens and images a subject, and the measurement unit, which measures the distance to the subject by emitting directional light, which is light having directivity, toward the subject and receiving reflected light of the directional light; and deriving the focal length corresponding to the acquired actually measured distance, using correspondence relationship information indicating a correspondence relationship between the actually measured distance and the focal length in the imaging unit.
Therefore, the program according to the twentieth aspect of the present invention can derive the focal length with high accuracy and without time and effort, compared with a case where the user is made to input the length of the reference image included in the captured image in order to increase the accuracy of the focal length.
According to one embodiment of the present invention, there is obtained an effect that the focal length can be derived with high accuracy and without time and effort, compared with a case where the user is made to input the length of the reference image included in the captured image in order to increase the accuracy of the focal length.
Brief Description of the Drawings

A front view showing an example of the appearance of the distance measuring device according to the first to sixth embodiments.
A block diagram showing an example of the hardware configuration of the distance measuring device according to the first to fifth embodiments.
A time chart showing an example of a measurement sequence by the distance measuring device according to the first to seventh embodiments.
A time chart showing an example of the laser trigger, light emission signal, light reception signal, and count signal required for one measurement by the distance measuring device according to the first to seventh embodiments.
A graph showing an example of a histogram of measurement values obtained in a measurement sequence by the distance measuring device according to the first to seventh embodiments (a histogram with the distance to the subject (measurement value) on the horizontal axis and the number of measurements on the vertical axis).
A block diagram showing an example of the hardware configuration of the main control unit included in the distance measuring device according to the first to fifth embodiments.
A configuration diagram showing an example of the configuration of the focal length derivation table according to the first embodiment.
A block diagram showing an example of the main functions of the CPU according to the first to seventh embodiments.
An explanatory diagram for explaining a method of measuring the dimension (length) of a designated area.
A schematic plan view showing an example of the positional relationship between the distance measuring device according to the first to fifth embodiments and a subject.
A conceptual diagram showing an example of the positional relationship among a part of the subject, the first captured image, the second captured image, the principal point of the imaging lens at the first imaging position, and the principal point of the imaging lens at the second imaging position.
A diagram for explaining the method of deriving the irradiation position real space coordinates according to the first to seventh embodiments.
A diagram for explaining the method of deriving the irradiation position pixel coordinates according to the first to seventh embodiments.
A flowchart showing an example of the flow of the dimension derivation process according to the first embodiment.
A continuation of the flowchart shown in FIG. 14.
A conceptual diagram showing an example of a subject included in the imaging range of the imaging device according to the first to seventh embodiments.
A screen diagram showing an example of a screen in which an irradiation position mark and an actually measured distance are superimposed and displayed on a captured image obtained by imaging with the imaging device according to the first embodiment.
A screen diagram showing an example of a screen in which a quadrangular frame is defined in the display region of the captured image.
A screen diagram showing an example of a screen in which a post-projective-transformation image, obtained by executing projective transformation on the captured image, is displayed.
A screen diagram showing an example of a screen in which the length of an area and a bidirectional arrow are superimposed and displayed on a processing target image.
A flowchart showing an example of the flow of the imaging position distance derivation process according to the first to seventh embodiments.
A flowchart showing an example of the flow of the first derivation process included in the imaging position distance derivation process according to the first, second, and fifth embodiments.
A flowchart showing an example of the flow of the first derivation process included in the imaging position distance derivation process according to the first embodiment, continuing from the flowchart shown in FIG. 22.
A flowchart showing an example of the flow of the second derivation process included in the imaging position distance derivation process according to the first, second, and fifth embodiments.
A flowchart showing an example of the flow of the second derivation process included in the imaging position distance derivation process according to the first embodiment, continuing from the flowchart shown in FIG. 24.
A screen diagram showing an example of a screen in which a first captured image obtained by imaging with the imaging device according to the first embodiment is displayed.
A screen diagram showing an example of a screen in which an irradiation position mark and a first actually measured distance are superimposed and displayed on the first captured image obtained by imaging with the imaging device according to the first embodiment.
A screen diagram showing an example of a screen in which a match message is superimposed and displayed on the first captured image obtained by imaging with the imaging device according to the first embodiment.
A screen diagram showing an example of a screen in which a derivation process selection screen is superimposed and displayed on the first captured image obtained by imaging with the imaging device according to the first embodiment.
A screen diagram showing an example of a screen in which a mismatch message is superimposed and displayed on the first captured image obtained by imaging with the imaging device according to the first embodiment.
A screen diagram showing an example of a screen in which the first captured image obtained by imaging with the imaging device according to the first embodiment, with a pixel of interest designated, is displayed.
A screen diagram showing an example of a screen in which the first captured image obtained by imaging with the imaging device according to the first embodiment, with the pixel of interest and first to third pixels specified, is displayed.
A screen diagram showing an example of a screen in which the imaging position distance is superimposed and displayed on a second captured image obtained by imaging with the imaging device according to the first embodiment.
A flowchart showing an example of the flow of the three-dimensional coordinate derivation process according to the first embodiment.
A screen diagram showing an example of a screen in which the imaging position distance and designated-pixel three-dimensional coordinates are superimposed and displayed on the second captured image obtained by imaging with the imaging device according to the first embodiment.
A screen diagram showing an example of a screen in which the first captured image obtained by imaging with the imaging device according to the first embodiment, with the first to third pixels specified, is displayed.
A configuration diagram showing an example of the configuration of the focal length derivation table according to the second embodiment.
A flowchart showing an example of the flow of the dimension derivation process according to the second embodiment.
A flowchart showing an example of the flow of the first derivation process included in the imaging position distance derivation process according to the second embodiment, continuing from the flowchart shown in FIG. 22.
A flowchart showing an example of the flow of the second derivation process included in the imaging position distance derivation process according to the second embodiment, continuing from the flowchart shown in FIG. 24.
A configuration diagram showing a modified example of the configuration of the focal length derivation table.
A configuration diagram showing a modified example of the configuration of the focal length derivation table.
A configuration diagram showing a modified example of the configuration of the focal length derivation table.
A configuration diagram showing a modified example of the configuration of the focal length derivation table.
A configuration diagram showing a modified example of the configuration of the focal length derivation table.
A flowchart showing an example of the flow of the first derivation process included in the imaging position distance derivation process according to the third embodiment.
A screen diagram showing an example of a screen in which the first captured image according to the third and fourth embodiments or the second captured image according to the fourth embodiment is displayed.
A screen diagram showing an example of a screen in which the first captured image according to the third and fourth embodiments or the second captured image according to the fourth embodiment is displayed.
A flowchart showing an example of the flow of the first derivation process included in the imaging position distance derivation process according to the fourth embodiment.
A continuation of the flowchart shown in FIG. 45.
A continuation of the flowchart shown in FIG. 46.
A flowchart showing an example of the flow of the first derivation process included in the imaging position distance derivation process according to the fifth embodiment, continuing from the flowchart shown in FIG. 22.
A screen diagram showing an example of a screen in which a final imaging position distance is superimposed and displayed on the second captured image obtained by imaging with the imaging device according to the fifth embodiment.
A flowchart showing an example of the flow of the three-dimensional coordinate derivation process according to the fifth embodiment.
A screen diagram showing an example of a screen in which the final imaging position distance and designated-pixel three-dimensional coordinates are superimposed and displayed on the second captured image obtained by imaging with the imaging device according to the fifth embodiment.
A schematic plan view showing an example of the positional relationship among two distance measuring devices, a PC, and a subject included in the information processing system according to the sixth embodiment.
A block diagram showing an example of the hardware configuration of the distance measuring device according to the sixth embodiment.
A block diagram showing an example of the hardware configuration of the PC according to the sixth embodiment.
A block diagram showing an example of the hardware configuration of the distance measuring device according to the seventh embodiment.
A screen diagram showing an example of a screen including various buttons displayed as soft keys on the display unit of the smart device included in the distance measuring device according to the seventh embodiment.
A conceptual diagram showing an example of a mode in which the dimension derivation program, the imaging position distance derivation program, and the three-dimensional coordinate derivation program according to the first to fifth embodiments are installed in the distance measuring device or the PC.
A front view showing a modified example of the appearance of the distance measuring device according to the first to sixth embodiments.
A configuration diagram showing a modified example of the configuration of the focal length derivation table according to the first embodiment.
 Hereinafter, an example of embodiments according to the technology of the present disclosure will be described with reference to the accompanying drawings. In the present embodiment, for convenience of explanation, the distance from the distance measuring device 10A to a subject to be measured is also referred to simply as the "distance" or the "distance to the subject." The angle of view with respect to the subject is also referred to simply as the "angle of view."
 [First Embodiment]
 As an example, as shown in FIG. 1, a distance measuring device 10A, which is an example of the information processing device according to the technology of the present disclosure, includes a distance measuring unit 12 and an imaging device 14. In the present embodiment, the distance measuring unit 12 and a distance measurement control unit 68 (see FIG. 2) described later are an example of the measurement unit according to the technology of the present disclosure, and the imaging device 14 is an example of the imaging unit according to the technology of the present disclosure.
 The imaging device 14 includes a lens unit 16 and an imaging device body 18, and the lens unit 16 is detachably attached to the imaging device body 18.
 A hot shoe 20 is provided on the left side surface of the imaging device body 18 as viewed from the front, and the distance measuring unit 12 is detachably attached to the hot shoe 20.
 The distance measuring device 10A has a ranging system function that performs distance measurement by causing the distance measuring unit 12 to emit a laser beam for ranging, and an imaging system function that obtains a captured image by causing the imaging device 14 to image a subject. Hereinafter, a captured image is also referred to simply as an "image." For convenience of explanation, the following description assumes that, in the vertical direction, the optical axis L1 (see FIG. 2) of the laser beam emitted from the distance measuring unit 12 is at the same height as the optical axis L2 (see FIG. 2) of the lens unit 16.
 By using the ranging system function, the distance measuring device 10A performs one measurement sequence (see FIG. 3) in response to one instruction, and one distance is finally output as a result of the single measurement sequence.
 The distance measuring device 10A has, as operation modes of the imaging system function, a still image capturing mode and a moving image capturing mode. The still image capturing mode is an operation mode for capturing still images, and the moving image capturing mode is an operation mode for capturing moving images. The still image capturing mode and the moving image capturing mode are selectively set in accordance with a user instruction.
 As an example, as shown in FIG. 2, the distance measuring unit 12 includes an emission unit 22, a light receiving unit 24, and a connector 26.
 The connector 26 can be connected to the hot shoe 20, and with the connector 26 connected to the hot shoe 20, the distance measuring unit 12 operates under the control of the imaging device body 18.
 The emission unit 22 has an LD (laser diode) 30, a condenser lens (not shown), an objective lens 32, and an LD driver 34.
 The condenser lens and the objective lens 32 are provided along the optical axis L1 of the laser beam emitted by the LD 30, and are arranged in the order of the condenser lens and then the objective lens 32 along the optical axis L1 from the LD 30 side.
 The LD 30 emits a laser beam for ranging, which is an example of the directional light according to the technology of the present disclosure. The laser beam emitted by the LD 30 is a colored laser beam; within a range of, for example, several meters from the emission unit 22, the irradiation position of the laser beam is visually recognizable in real space and is also visually recognizable in a captured image obtained by imaging with the imaging device 14.
 The condenser lens condenses the laser beam emitted by the LD 30 and passes the condensed laser beam. The objective lens 32 faces the subject and emits the laser beam that has passed through the condenser lens toward the subject.
 The LD driver 34 is connected to the connector 26 and the LD 30, and drives the LD 30 in accordance with instructions from the imaging device body 18 to emit the laser beam.
 The light receiving unit 24 has a PD (photodiode) 36, an objective lens 38, and a light reception signal processing circuit 40. The objective lens 38 is disposed on the light receiving surface side of the PD 36, and the reflected laser beam, that is, the laser beam emitted by the emission unit 22 and reflected upon striking the subject, is incident on the objective lens 38. The objective lens 38 passes the reflected laser beam and guides it to the light receiving surface of the PD 36. The PD 36 receives the reflected laser beam that has passed through the objective lens 38 and outputs an analog signal corresponding to the amount of received light as a light reception signal.
 The light reception signal processing circuit 40 is connected to the connector 26 and the PD 36; it amplifies the light reception signal input from the PD 36 with an amplifier (not shown) and performs A/D (analog/digital) conversion on the amplified light reception signal. The light reception signal processing circuit 40 then outputs the light reception signal digitized by the A/D conversion to the imaging device body 18.
 The imaging device 14 includes mounts 42 and 44. The mount 42 is provided on the imaging device body 18, and the mount 44 is provided on the lens unit 16. The lens unit 16 is exchangeably attached to the imaging device body 18 by coupling the mount 44 to the mount 42.
 The lens unit 16 includes a focus lens 50, a zoom lens 52, a focus lens moving mechanism 53, a zoom lens moving mechanism 54, and a motor 56.
 Subject light, which is light reflected from the subject, is incident on the focus lens 50. The focus lens 50 passes the subject light and guides it to the zoom lens 52.
 The focus lens 50 is attached to the focus lens moving mechanism 53 so as to be slidable along the optical axis L2. A motor 57 is connected to the focus lens moving mechanism 53, and the focus lens moving mechanism 53 receives the power of the motor 57 and slides the focus lens 50 along the direction of the optical axis L2.
 The zoom lens 52 is attached to the zoom lens moving mechanism 54 so as to be slidable along the optical axis L2. A motor 56 is connected to the zoom lens moving mechanism 54, and the zoom lens moving mechanism 54 receives the power of the motor 56 and slides the zoom lens 52 along the direction of the optical axis L2.
 The motors 56 and 57 are connected to the imaging device body 18 via the mounts 42 and 44, and their driving is controlled in accordance with commands from the imaging device body 18. In the present embodiment, stepping motors are used as an example of the motors 56 and 57. Accordingly, the motors 56 and 57 operate in synchronization with pulse power in response to commands from the imaging device body 18.
 The imaging device body 18 includes an imaging element 60, a main control unit 62, an image memory 64, an image processing unit 66, a distance measurement control unit 68, motor drivers 72 and 73, an imaging element driver 74, an image signal processing circuit 76, and a display control unit 78. The imaging device body 18 also includes a touch panel I/F (interface) 79, a reception I/F 80, and a media I/F 82.
 The main control unit 62, the image memory 64, the image processing unit 66, the distance measurement control unit 68, the motor drivers 72 and 73, the imaging element driver 74, the image signal processing circuit 76, and the display control unit 78 are connected to a bus line 84. The touch panel I/F 79, the reception I/F 80, and the media I/F 82 are also connected to the bus line 84.
 The imaging element 60 is a CMOS (complementary metal oxide semiconductor) image sensor and includes color filters (not shown). The color filters include a G filter corresponding to G (green), which contributes most to obtaining the luminance signal, an R filter corresponding to R (red), and a B filter corresponding to B (blue). The imaging element 60 has an imaging pixel group 60A including a plurality of imaging pixels 60A1 arranged in a matrix. Each imaging pixel 60A1 is assigned one of the R, G, and B filters included in the color filters, and the imaging pixel group 60A images the subject by receiving the subject light.
 That is, the subject light that has passed through the zoom lens 52 forms an image on the imaging surface 60B, which is the light receiving surface of the imaging element 60, and charge corresponding to the amount of received subject light is accumulated in each imaging pixel 60A1. The imaging element 60 outputs the charge accumulated in each imaging pixel 60A1 as an image signal representing the subject image formed on the imaging surface 60B by the subject light.
 The main control unit 62 controls the entire distance measuring device 10A via the bus line 84.
 The motor driver 72 is connected to the motor 56 via the mounts 42 and 44 and controls the motor 56 in accordance with instructions from the main control unit 62. The motor driver 73 is connected to the motor 57 via the mounts 42 and 44 and controls the motor 57 in accordance with instructions from the main control unit 62.
 The imaging device 14 has an angle-of-view changing function. The angle-of-view changing function is a function of changing the angle of view by moving the zoom lens 52; in the present embodiment, it is realized by the zoom lens 52, the zoom lens moving mechanism 54, the motor 56, the motor driver 72, and the main control unit 62. Although an optical angle-of-view changing function using the zoom lens 52 is illustrated in the present embodiment, the technology of the present disclosure is not limited to this, and an electronic angle-of-view changing function that does not use the zoom lens 52 may be employed.
 The imaging element driver 74 is connected to the imaging element 60 and supplies drive pulses to the imaging element 60 under the control of the main control unit 62. Each imaging pixel 60A1 included in the imaging pixel group 60A is driven in accordance with the drive pulses supplied to the imaging element 60 by the imaging element driver 74.
 The image signal processing circuit 76 is connected to the imaging element 60 and, under the control of the main control unit 62, reads out one frame's worth of the image signal from the imaging element 60 for each imaging pixel 60A1. The image signal processing circuit 76 performs various kinds of processing, such as correlated double sampling, automatic gain adjustment, and A/D conversion, on the read image signal. The image signal processing circuit 76 then outputs the image signal, digitized through these processes, to the image memory 64 frame by frame at a specific frame rate (for example, several tens of frames per second) defined by a clock signal supplied from the main control unit 62. The image memory 64 temporarily holds the image signal input from the image signal processing circuit 76.
 The imaging device body 18 includes a display unit 86, a touch panel 88, a reception device 90, and a memory card 92.
 The display unit 86 is connected to the display control unit 78 and displays various kinds of information under the control of the display control unit 78. The display unit 86 is realized by, for example, an LCD (liquid crystal display).
 The touch panel 88 is superimposed on the display screen of the display unit 86 and accepts contact by an indicator such as a user's finger or a touch pen. The touch panel 88 is connected to the touch panel I/F 79 and outputs position information indicating the position touched by the indicator to the touch panel I/F 79. The touch panel I/F 79 operates the touch panel 88 in accordance with instructions from the main control unit 62 and outputs the position information input from the touch panel 88 to the main control unit 62. Although the touch panel 88 is illustrated in the present embodiment, this is not limiting; instead of the touch panel 88, a mouse (not shown) connected to and used with the distance measuring device 10A may be employed, or the touch panel 88 and a mouse may be used in combination.
 The reception device 90 has a measurement imaging button 90A, an imaging button (not shown), an imaging system operation mode switching button 90B, a wide-angle instruction button 90C, and a telephoto instruction button 90D. The reception device 90 also has a dimension derivation button 90E, an imaging position distance derivation button 90F, a three-dimensional coordinate derivation button 90G, and the like, and accepts various instructions from the user. The reception device 90 is connected to the reception I/F 80, and the reception I/F 80 outputs an instruction content signal indicating the content of an instruction accepted by the reception device 90 to the main control unit 62.
 The measurement imaging button 90A is a push button that accepts an instruction to start measurement and imaging. The imaging button is a push button that accepts an instruction to start imaging. The imaging system operation mode switching button 90B is a push button that accepts an instruction to switch between the still image capturing mode and the moving image capturing mode.
 The wide-angle instruction button 90C is a push button that accepts an instruction to widen the angle of view; the amount of change of the angle of view toward the wide-angle side is determined, within an allowable range, by the length of time for which pressing of the wide-angle instruction button 90C is continued.
 The telephoto instruction button 90D is a push button that accepts an instruction to narrow the angle of view toward telephoto; the amount of change of the angle of view toward the telephoto side is determined, within an allowable range, by the length of time for which pressing of the telephoto instruction button 90D is continued.
 The dimension derivation button 90E is a push button that accepts an instruction to start the dimension derivation process described later. The imaging position distance derivation button 90F is a push button that accepts an instruction to start the imaging position distance derivation process described later. The three-dimensional coordinate derivation button 90G is a push button that accepts an instruction to start the imaging position distance derivation process described later and the three-dimensional coordinate derivation process described later.
 Hereinafter, for convenience of explanation, when there is no need to distinguish between the measurement imaging button 90A and the imaging button, they are referred to as the "release button." Also, when there is no need to distinguish between the wide-angle instruction button 90C and the telephoto instruction button 90D, they are referred to as the "angle-of-view instruction buttons."
 In the distance measuring device 10A according to the present embodiment, a manual focus mode and an autofocus mode are selectively set in accordance with a user instruction given via the reception device 90. The release button accepts a two-stage pressing operation consisting of an imaging preparation instruction state and an imaging instruction state. The imaging preparation instruction state refers to, for example, a state in which the release button is pressed from the standby position to an intermediate position (half-pressed position), and the imaging instruction state refers to a state in which the release button is pressed to the final pressed position beyond the intermediate position (fully pressed position). Hereinafter, for convenience of explanation, "the state in which the release button is pressed from the standby position to the half-pressed position" is referred to as the "half-pressed state," and "the state in which the release button is pressed from the standby position to the fully pressed position" is referred to as the "fully pressed state."
 In the autofocus mode, the imaging conditions are adjusted by putting the release button into the half-pressed state, and the main exposure is then performed when the button is subsequently put into the fully pressed state. That is, prior to the main exposure, putting the release button into the half-pressed state activates the AE (automatic exposure) function to perform exposure adjustment, after which the AF (autofocus) function is activated to perform focus adjustment; when the release button is put into the fully pressed state, the main exposure is performed.
 Here, the main exposure refers to the exposure performed to obtain a still image file described later. In the present embodiment, exposure also means, in addition to the main exposure, the exposure performed to obtain a live view image described later and the exposure performed to obtain a moving image file described later. Hereinafter, for convenience of explanation, when there is no need to distinguish among these exposures, they are simply referred to as "exposure."
 In the present embodiment, the main control unit 62 performs the exposure adjustment by the AE function and the focus adjustment by the AF function. Although a case in which both exposure adjustment and focus adjustment are performed is illustrated in the present embodiment, the technology of the present disclosure is not limited to this, and only exposure adjustment or only focus adjustment may be performed.
 The image processing unit 66 acquires the image signal from the image memory 64 frame by frame at a specific frame rate and performs various kinds of processing, such as gamma correction, luminance/color-difference conversion, and compression, on the acquired image signal.
 The image processing unit 66 outputs the image signal obtained through these various processes to the display control unit 78 frame by frame at the specific frame rate. The image processing unit 66 also outputs the processed image signal to the main control unit 62 in response to a request from the main control unit 62.
 The display control unit 78, under the control of the main control unit 62, outputs the image signal input from the image processing unit 66 to the display unit 86 frame by frame at the specific frame rate.
 The display unit 86 displays images, character information, and the like. The display unit 86 displays the image indicated by the image signal input from the display control unit 78 at the specific frame rate as a live view image. The live view image is a series of consecutive frame images obtained by continuous imaging and is also called a through image. The display unit 86 also displays a still image, which is a single-frame image obtained by single-frame imaging. In addition to the live view image, the display unit 86 also displays playback images, menu screens, and the like.
 In the present embodiment, the image processing unit 66 and the display control unit 78 are realized by ASICs (application specific integrated circuits), but the technology of the present disclosure is not limited to this. For example, each of the image processing unit 66 and the display control unit 78 may be realized by an FPGA (field-programmable gate array). The image processing unit 66 may also be realized by a computer including a CPU (central processing unit), a ROM (read only memory), and a RAM (random access memory). Likewise, the display control unit 78 may be realized by a computer including a CPU, a ROM, and a RAM. Furthermore, each of the image processing unit 66 and the display control unit 78 may be realized by a combination of a hardware configuration and a software configuration.
 When an instruction to capture a still image is accepted via the release button in the still image capturing mode, the main control unit 62 controls the imaging element driver 74 to cause the imaging element 60 to perform exposure for one frame. The main control unit 62 acquires the image signal obtained through the one-frame exposure from the image processing unit 66, applies compression processing to the acquired image signal, and generates a still image file in a specific still image format. Here, the specific still image format is, for example, JPEG (Joint Photographic Experts Group).
 When an instruction to capture a moving image is accepted via the release button in the moving image capturing mode, the main control unit 62 acquires, frame by frame at the specific frame rate, the image signal output by the image processing unit 66 to the display control unit 78 for use as the live view image. The main control unit 62 then applies compression processing to the image signal acquired from the image processing unit 66 and generates a moving image file in a specific moving image format. Here, the specific moving image format is, for example, MPEG (Moving Picture Experts Group). Hereinafter, for convenience of explanation, when there is no need to distinguish between a still image file and a moving image file, they are referred to as image files.
 The media I/F 82 is connected to the memory card 92 and, under the control of the main control unit 62, records image files to and reads image files from the memory card 92. An image file read from the memory card 92 by the media I/F 82 is decompressed by the main control unit 62 and displayed on the display unit 86 as a playback image.
 The main control unit 62 associates the distance information input from the distance measurement control unit 68 with an image file and stores it in the memory card 92 via the media I/F 82. The distance information is then read out from the memory card 92, together with the image file, by the main control unit 62 via the media I/F 82, and the distance indicated by the read distance information is displayed on the display unit 86 together with the playback image of the associated image file.
 The distance measurement control unit 68 controls the distance measuring unit 12 under the control of the main control unit 62. In the present embodiment, the distance measurement control unit 68 is realized by an ASIC, but the technology of the present disclosure is not limited to this. For example, the distance measurement control unit 68 may be realized by an FPGA, or by a computer including a CPU, a ROM, and a RAM. Furthermore, the distance measurement control unit 68 may be realized by a combination of a hardware configuration and a software configuration.
 The hot shoe 20 is connected to the bus line 84, and the distance measurement control unit 68, under the control of the main control unit 62, controls the LD driver 34 to control the emission of the laser beam by the LD 30, and acquires the light reception signal from the light reception signal processing circuit 40. The distance measurement control unit 68 derives the distance to the subject based on the timing at which the laser beam was emitted and the timing at which the light reception signal was acquired, and outputs distance information indicating the derived distance to the main control unit 62.
 Here, the measurement of the distance to the subject by the distance measurement control unit 68 will be described in more detail.
 As an example, as shown in FIG. 3, one measurement sequence by the distance measuring device 10A is defined by a voltage adjustment period, an actual measurement period, and a pause period.
 The voltage adjustment period is a period for adjusting the drive voltages of the LD 30 and the PD 36. The actual measurement period is a period in which the distance to the subject is actually measured. In the actual measurement period, the operation of causing the LD 30 to emit the laser beam and causing the PD 36 to receive the reflected laser beam is repeated several hundred times, and the distance to the subject is derived based on the timing at which the laser beam was emitted and the timing at which the light reception signal was acquired. The pause period is a period for pausing the driving of the LD 30 and the PD 36. Accordingly, in one measurement sequence, the distance to the subject is measured several hundred times.
 In the present embodiment, each of the voltage adjustment period, the actual measurement period, and the pause period is set to several hundred milliseconds.
 As an example, as shown in FIG. 4, the distance measurement control unit 68 is supplied with a count signal that defines the timing at which the distance measurement control unit 68 gives an instruction to emit the laser beam and the timing at which it acquires the light reception signal. In the present embodiment, the count signal is generated by the main control unit 62 and supplied to the distance measurement control unit 68; however, this is not limiting, and the count signal may instead be generated by a dedicated circuit, such as a time counter connected to the bus line 84, and supplied to the distance measurement control unit 68.
 The distance measurement control unit 68 outputs, in accordance with the count signal, a laser trigger for emitting the laser beam to the LD driver 34. The LD driver 34 drives the LD 30 to emit the laser beam in response to the laser trigger.
 In the example shown in FIG. 4, the emission time of the laser beam is several tens of nanoseconds. In this case, the time required for the laser beam emitted by the emission unit 22 toward a subject several kilometers away to be received by the PD 36 as the reflected laser beam is "several kilometers × 2 / speed of light" ≈ several microseconds. Therefore, in order to measure the distance to a subject several kilometers away, a time of several microseconds is required as the minimum necessary time, as shown in FIG. 3 as an example.
 In the present embodiment, taking the round-trip time of the laser beam and the like into consideration, one measurement time is set to several milliseconds, as shown in FIG. 3 as an example. However, since the round-trip time of the laser beam differs depending on the distance to the subject, the measurement time per measurement may be varied in accordance with the assumed distance.
 When deriving the distance to the subject based on the measurement values obtained from the several hundred measurements in one measurement sequence, the distance measurement control unit 68, for example, analyzes a histogram of the measurement values obtained from the several hundred measurements to derive the distance to the subject.
 As an example, as shown in FIG. 5, in the histogram of the measurement values obtained from the several hundred measurements in one measurement sequence, the horizontal axis is the distance to the subject and the vertical axis is the number of measurements; the distance corresponding to the maximum number of measurements is derived by the distance measurement control unit 68 as the distance measurement result. The histogram shown in FIG. 5 is merely an example; instead of the distance to the subject, the histogram may be generated based on the round-trip time of the laser beam (the elapsed time from emission to reception), half of the round-trip time of the laser beam, or the like.
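 Putting the two pieces together: each shot's round-trip time is converted to a distance as distance = (elapsed time × speed of light) / 2, and the mode of the histogram of several hundred such values is taken as the result. The following Python sketch illustrates this derivation; the 1-centimeter bin width and the function names are illustrative assumptions, not part of the disclosed device.

```python
from collections import Counter

C = 299_792_458.0  # speed of light [m/s]

def shot_distance(round_trip_time_s: float) -> float:
    """Convert one laser shot's round-trip time into a distance [m]."""
    return round_trip_time_s * C / 2.0

def derive_distance(round_trip_times_s: list[float], bin_width_m: float = 0.01) -> float:
    """Histogram several hundred per-shot distances and return the bin
    with the largest count (the mode), as in FIG. 5."""
    bins = Counter(round(shot_distance(t) / bin_width_m) for t in round_trip_times_s)
    most_common_bin, _ = bins.most_common(1)[0]
    return most_common_bin * bin_width_m

# Example: a few shots around a 13.34 us round trip (~2 km) with jitter.
times = [13.34e-6, 13.35e-6, 13.34e-6, 13.36e-6, 13.34e-6]
print(f"distance = {derive_distance(times):.2f} m")
```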
 As an example, as shown in FIG. 6, the main control unit 62 includes a CPU 100, which is an example of the acquisition unit and the derivation unit according to the technology of the present disclosure, a primary storage unit 102, and a secondary storage unit 104. The CPU 100 controls the entire distance measuring device 10A. The primary storage unit 102 is a volatile memory used as a work area or the like when various programs are executed; an example of the primary storage unit 102 is a RAM. The secondary storage unit 104 is a non-volatile memory that stores in advance control programs for controlling the operation of the distance measuring device 10A, various parameters, and the like; examples of the secondary storage unit 104 include an EEPROM (electrically erasable programmable read only memory) and a flash memory. The CPU 100, the primary storage unit 102, and the secondary storage unit 104 are connected to one another via the bus line 84.
 In the distance measuring device 10A, as shown in FIG. 6 as an example, the secondary storage unit 104 stores a dimension derivation program 105A, an imaging position distance derivation program 106A, a three-dimensional coordinate derivation program 108A, and a focal length derivation table 109A. The dimension derivation program 105A and the imaging position distance derivation program 106A are examples of the program according to the technology of the present disclosure. The focal length derivation table 109A is an example of the correspondence information according to the technology of the present disclosure.
 As an example, as shown in FIG. 7, the focal length derivation table 109A is a table showing the correspondence between the actually measured distance and the focal length of the focus lens 50. The actually measured distance refers to the distance to the subject measured by using the ranging system function, that is, the distance to the subject measured by the distance measuring unit 12 and the distance measurement control unit 68.
 In the focal length derivation table 109A, a focal length of the focus lens 50 is associated with each of a plurality of derivation distances. A derivation distance refers to a distance from the distance measuring device 10A to the subject, and is the parameter against which the actually measured distance is compared. Hereinafter, for convenience of explanation, the focal length of the focus lens 50 is simply referred to as the "focal length."
 As an example, as shown in FIG. 7, the focal length derivation table 109A associates the derivation distances with focal lengths as follows: 1 meter → 7 millimeters, 2 meters → 8 millimeters, 3 meters → 10 millimeters, 5 meters → 12 millimeters, 10 meters → 14 millimeters, 30 meters → 16 millimeters, and infinity → 18 millimeters.
 The focal length derivation table 109A is a table derived from the results of at least one of, for example, testing with an actual distance measuring device 10A and computer simulation based on the design specifications of the distance measuring device 10A.
 The CPU 100 reads out the dimension derivation program 105A, the imaging position distance derivation program 106A, and the three-dimensional coordinate derivation program 108A from the secondary storage unit 104 and expands them in the primary storage unit 102. The CPU 100 then executes the dimension derivation program 105A, the imaging position distance derivation program 106A, and the three-dimensional coordinate derivation program 108A expanded in the primary storage unit 102.
 By executing at least one of the dimension derivation program 105A and the imaging position distance derivation program 106A, the CPU 100 operates as an acquisition unit 110A and a derivation unit 111A, as shown in FIG. 8 as an example.
 The acquisition unit 110A acquires the actually measured distance measured by using the ranging system function. The derivation unit 111A uses the focal length derivation table 109A to derive the focal length corresponding to the actually measured distance acquired by the acquisition unit 110A.
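 The text above does not specify how the derivation unit 111A selects an entry when the measured distance falls between two derivation distances. The Python sketch below simply picks the smallest derivation distance at or above the measured value as one plausible reading; the function name and this selection rule are assumptions for illustration, not the disclosed behavior.

```python
import bisect
import math

# Derivation distance [m] -> focal length [mm], per FIG. 7.
FOCAL_LENGTH_TABLE = [
    (1.0, 7.0), (2.0, 8.0), (3.0, 10.0), (5.0, 12.0),
    (10.0, 14.0), (30.0, 16.0), (math.inf, 18.0),
]

def derive_focal_length(measured_distance_m: float) -> float:
    """Return the focal length associated with the smallest derivation
    distance that is >= the actually measured distance."""
    distances = [d for d, _ in FOCAL_LENGTH_TABLE]
    i = bisect.bisect_left(distances, measured_distance_m)
    return FOCAL_LENGTH_TABLE[i][1]

print(derive_focal_length(2.4))   # between 2 m and 3 m -> 10.0 mm
print(derive_focal_length(100))   # beyond 30 m -> 18.0 mm (infinity entry)
```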
 測距装置10Aには、寸法導出機能が備えられており、寸法導出機能は、CPU100が寸法導出プログラム105Aを実行することにより取得部110A及び導出部111Aとして動作することで実現される機能である。 The distance measuring device 10A is provided with a dimension deriving function. The dimension deriving function is a function realized by the CPU 100 executing the dimension deriving program 105A and operating as the acquiring unit 110A and the deriving unit 111A. .
 寸法導出機能とは、一例として図9に示すように、指定された画素のアドレスu1,u2、並びに、測距ユニット12及び測距制御部68により計測された被写体までの距離L等に基づいて、被写体に含まれる実空間上の区域の長さLを導出したり、長さLに基づく面積を導出したりする機能を指す。 As shown in FIG. 9 as an example, the dimension derivation function is based on the addresses u1 and u2 of designated pixels, the distance L to the subject measured by the distance measurement unit 12 and the distance measurement control unit 68, and the like. refers or to derive the length L M of the area in the real space contained in the subject, a function or to derive an area based on the length L M.
 ここで、被写体までの距離Lとは、実測距離を指す。なお、以下では、説明の便宜上、被写体までの距離Lを単に「距離L」と称する。また、以下では、説明の便宜上、被写体に含まれる実空間上の区域の長さLを単に「長さL」と称する。また、「指定された画素」とは、例えば、ユーザによって撮像画像上で指定された2点に対応する撮像素子60における画素を指す。 Here, the distance L to the subject indicates an actually measured distance. Hereinafter, for convenience of explanation, the distance L to the subject is simply referred to as “distance L”. In the following, for convenience of explanation, the length L M of the area in the real space included in the subject is simply referred to as “length L M ”. The “designated pixel” refers to a pixel in the image sensor 60 corresponding to, for example, two points designated on the captured image by the user.
The length L_M is calculated, for example, by the following equation (1), where p is the pitch between the pixels of the image sensor 60, u1 and u2 are the addresses of the pixels designated by the user, and f_0 is the focal length.
L_M = |u1 − u2| × p × L / f_0   …(1)
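Equation (1), as reconstructed above, scales the pixel separation by the pixel pitch and by the ratio of the subject distance to the focal length. A minimal Python sketch, with illustrative identifiers and units:

def area_length_mm(u1, u2, pitch_mm, distance_mm, focal_mm):
    # Equation (1): real-space length L_M of the area between two
    # designated pixels u1 and u2 (addresses along one sensor axis).
    return abs(u1 - u2) * pitch_mm * distance_mm / focal_mm

# Example with made-up values: pixels 400 apart, 0.005 mm pitch,
# subject 10 m (10000 mm) away, focal length 14 mm:
# area_length_mm(100, 500, 0.005, 10000.0, 14.0) -> about 1428.6 mm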
Equation (1) is used on the premise that the object whose dimension is to be derived is imaged while squarely facing the focus lens 50 in front view. Accordingly, in the distance measuring device 10A, when a subject including the object whose dimension is to be derived is imaged while not squarely facing the focus lens 50, projective transformation processing is performed. The projective transformation processing is, for example, processing that converts the captured image into a facing-view image on the basis of a quadrangular image included in the captured image, using a known technique such as an affine transformation. The facing-view image is an image in a state of squarely facing the focus lens 50 in front view. The pixel addresses u1 and u2 on the image sensor 60 are then designated via the facing-view image, and the length L_M is derived from equation (1).
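A projective (perspective) transformation of this kind can be sketched with OpenCV's perspective-warp functions. The following is a generic illustration of mapping a quadrangular region of the captured image to a rectangle to obtain a facing-view image; it is not the patent's own implementation, and the function and variable names are ours.

import cv2
import numpy as np

def to_facing_image(image, quad_pts, out_w, out_h):
    # quad_pts: the four corners of the quadrangular image region,
    # ordered top-left, top-right, bottom-right, bottom-left.
    src = np.asarray(quad_pts, dtype=np.float32)
    dst = np.float32([[0, 0], [out_w - 1, 0],
                      [out_w - 1, out_h - 1], [0, out_h - 1]])
    homography = cv2.getPerspectiveTransform(src, dst)  # 3x3 matrix
    return cv2.warpPerspective(image, homography, (out_w, out_h))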
The distance measuring device 10A also has a three-dimensional coordinate derivation function, which is realized by the CPU 100 executing the three-dimensional coordinate derivation program 108A and thereby operating as the acquisition unit 110A and the derivation unit 111A.
Here, the three-dimensional coordinate derivation function derives designated-pixel three-dimensional coordinates, described later, on the basis of equation (2) from first designated pixel coordinates described later, second designated pixel coordinates described later, an imaging position distance described later, the focal length of the focus lens 50, and the dimension of the imaging pixel 60A1.
(X, Y, Z) = ( B·u_L / (u_L − u_R),  B·v_L / (u_L − u_R),  B·f / (u_L − u_R) )   …(2)
In equation (2), u_L is the X coordinate of the first designated pixel coordinates, v_L is the Y coordinate of the first designated pixel coordinates, u_R is the X coordinate of the second designated pixel coordinates, B is the imaging position distance (see FIGS. 10 and 11), f is (focal length)/(dimension of the imaging pixel 60A1), and (X, Y, Z) are the designated-pixel three-dimensional coordinates.
The first designated pixel coordinates are two-dimensional coordinates that identify a first designated pixel, a pixel designated in a first captured image (described later) as corresponding to a position in real space. The second designated pixel coordinates are two-dimensional coordinates that identify a second designated pixel, a pixel designated in a second captured image (described later) as corresponding to that position in real space. That is, the first designated pixel and the second designated pixel are pixels designated as corresponding to the same real-space position, and are identifiable at mutually corresponding positions in the first captured image and the second captured image, respectively. The first designated pixel coordinates are two-dimensional coordinates on the first captured image, and the second designated pixel coordinates are two-dimensional coordinates on the second captured image. The designated-pixel three-dimensional coordinates are the three-dimensional coordinates in real space corresponding to the first designated pixel coordinates and the second designated pixel coordinates.
Here, as shown in FIGS. 10 and 11 by way of example, the first captured image is a captured image obtained by imaging the subject with the imaging device 14 from a first imaging position, and the second captured image is a captured image obtained by imaging a subject, including the subject imaged from the first imaging position, with the imaging device 14 from a second imaging position different from the first imaging position. In the present embodiment, for convenience of explanation, captured images obtained by the imaging device 14, including still images and moving images and not limited to the first and second captured images, are simply referred to as "captured images" when they need not be distinguished.
In the example shown in FIG. 10, a first measurement position corresponding to the first imaging position and a second measurement position corresponding to the second imaging position are shown as positions of the ranging unit 12. The first measurement position is the position of the ranging unit 12 when the subject is imaged by the imaging device 14 from the first imaging position with the ranging unit 12 correctly attached to the imaging device 14. The second measurement position is the position of the ranging unit 12 when the subject is imaged by the imaging device 14 from the second imaging position with the ranging unit 12 correctly attached to the imaging device 14.
The imaging position distance is the distance between the first imaging position and the second imaging position. One example of the imaging position distance is, as shown in FIG. 11, the distance between the principal point O_L of the focus lens 50 of the imaging device 14 at the first imaging position and the principal point O_R of the focus lens 50 of the imaging device 14 at the second imaging position, but the technology of the present disclosure is not limited to this. For example, the distance between the imaging pixel 60A1 located at the center of the image sensor 60 of the imaging device 14 at the first imaging position and the imaging pixel 60A1 located at the center of the image sensor 60 of the imaging device 14 at the second imaging position may be used as the imaging position distance.
In the example shown in FIG. 11, the pixel P_L included in the first captured image is the first designated pixel, the pixel P_R included in the second captured image is the second designated pixel, and the pixels P_L and P_R correspond to the point P of the subject. Accordingly, the first designated pixel coordinates (u_L, v_L), which are the two-dimensional coordinates of the pixel P_L, and the second designated pixel coordinates (u_R, v_R), which are the two-dimensional coordinates of the pixel P_R, correspond to the designated-pixel three-dimensional coordinates (X, Y, Z), which are the three-dimensional coordinates of the point P. Note that v_R is not used in equation (2).
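Under the parallel-stereo reconstruction of equation (2) given above, the designated-pixel three-dimensional coordinates can be computed as in the following Python sketch (identifiers are illustrative; v_R is indeed unused):

def designated_pixel_3d(u_l, v_l, u_r, b, focal_mm, pixel_dim_mm):
    # Equation (2): u_l, v_l are the first designated pixel coordinates,
    # u_r is the X coordinate of the second designated pixel, b is the
    # imaging position distance, and f is the focal length expressed in
    # pixels, i.e. (focal length) / (imaging pixel dimension).
    f = focal_mm / pixel_dim_mm
    disparity = u_l - u_r
    x = b * u_l / disparity
    y = b * v_l / disparity
    z = b * f / disparity
    return x, y, z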
In the following, for convenience of explanation, the first designated pixel and the second designated pixel are referred to as "designated pixels" when they need not be distinguished, and the first designated pixel coordinates and the second designated pixel coordinates are likewise referred to as "designated pixel coordinates".
When the distance measuring device 10A derives the designated-pixel three-dimensional coordinates from equation (2) using the three-dimensional coordinate derivation function, it is preferable to derive the imaging position distance with high accuracy, because equation (2) includes the imaging position distance B.
The distance measuring device 10A has an imaging position distance derivation function, which is realized by the CPU 100 executing the imaging position distance derivation program 106A and thereby operating as the derivation unit 111A.
The derivation unit 111A derives the imaging position distance on the basis of the derived focal length. In order for the derivation unit 111A to derive the imaging position distance, irradiation position real-space coordinates are also required in addition to the focal length. The irradiation position real-space coordinates are three-dimensional coordinates that identify the irradiation position of the laser light in real space, that is, the position on the subject in real space irradiated with the laser light.
Therefore, the derivation unit 111A derives the irradiation position real-space coordinates on the basis of the actually measured distance acquired by the acquisition unit 110A.
The irradiation position real-space coordinates are derived on the basis of the following equation (3) from the distance L, the half angle of view α, the emission angle β, and the reference point distance M shown in FIG. 12 by way of example. In equation (3), (x_Laser, y_Laser, z_Laser) denotes the irradiation position real-space coordinates.
Figure JPOXMLDOC01-appb-M000003
In equation (3), y_Laser = 0, which means that the optical axis L1 is at the same height as the optical axis L2 in the vertical direction. When the position of the laser light irradiated on the subject is vertically higher than the position of the optical axis L2 on the subject, y_Laser takes a positive value; when it is vertically lower, y_Laser takes a negative value. In the following, for convenience of explanation, it is assumed that y_Laser = 0.
Here, as shown in FIG. 12 by way of example, the half angle of view α is half of the angle of view, and the emission angle β is the angle at which the laser light is emitted from the emission unit 22. The reference point distance M is the distance between a first reference point P1 defined on the imaging device 14 and a second reference point P2 defined on the ranging unit 12. One example of the first reference point P1 is the principal point of the focus lens 50. One example of the second reference point P2 is a point preset as the origin of coordinates with which a position in the three-dimensional space of the ranging unit 12 can be identified; specifically, one of the left and right ends of the objective lens 38 in front view, or, when the housing (not shown) of the ranging unit 12 is a rectangular parallelepiped, one corner of the housing, that is, one vertex.
On the basis of the distance acquired by the acquisition unit 110A, the derivation unit 111A derives irradiation position pixel coordinates that identify, in each of the first captured image and the second captured image, the position of the pixel corresponding to the irradiation position identified by the irradiation position real-space coordinates.
The irradiation position pixel coordinates are broadly classified into first irradiation position pixel coordinates and second irradiation position pixel coordinates. The first irradiation position pixel coordinates are two-dimensional coordinates that identify, in the first captured image, the position of the pixel corresponding to the irradiation position identified by the irradiation position real-space coordinates. The second irradiation position pixel coordinates are two-dimensional coordinates that identify, in the second captured image, the position of the pixel corresponding to the irradiation position identified by the irradiation position real-space coordinates.
The method of deriving the X coordinate of the first irradiation position pixel coordinates and the method of deriving the Y coordinate thereof are identical in principle and differ only in the coordinate axis concerned: the X coordinate derivation targets the pixels of the image sensor 60 in the row direction, whereas the Y coordinate derivation targets the pixels in the column direction. In the following, for convenience of explanation, the method of deriving the X coordinate of the first irradiation position pixel coordinates is therefore illustrated, and the description of the Y coordinate derivation is omitted. The row direction means the left-right direction of the imaging surface 60B in front view, and the column direction means the up-down direction of the imaging surface 60B in front view.
The X coordinate of the first irradiation position pixel coordinates is derived on the basis of the following equations (4) to (6) from the distance L, the half angle of view α, the emission angle β, and the reference point distance M shown in FIG. 13 by way of example. In equation (6), the "row-direction pixel at the irradiation position" is the pixel, among the pixels of the image sensor 60 in the row direction, at the position corresponding to the irradiation position of the laser light in real space, and "half the number of pixels in the row direction" is half the number of pixels of the image sensor 60 in the row direction.
Figure JPOXMLDOC01-appb-M000004
Figure JPOXMLDOC01-appb-M000005
Figure JPOXMLDOC01-appb-M000006
The derivation unit 111A substitutes the reference point distance M and the emission angle β into equation (4), substitutes the half angle of view α and the emission angle β into equation (5), and substitutes the distance L into equations (4) and (5). By substituting the Δx and X thus obtained, together with the above-mentioned "half the number of pixels in the row direction", into equation (6), the derivation unit 111A derives the X coordinate that identifies the position of the "row-direction pixel at the irradiation position". This X coordinate is the X coordinate of the first irradiation position pixel coordinates.
The derivation unit 111A derives, as the second irradiation position pixel coordinates, the coordinates that identify, among the pixels of the second captured image, the position of the pixel corresponding to the pixel position identified by the first irradiation position pixel coordinates.
In the following, for convenience of explanation, the first irradiation position pixel coordinates and the second irradiation position pixel coordinates are referred to as "irradiation position pixel coordinates" when they need not be distinguished. Two-dimensional coordinates derived by the same derivation method as the first or second irradiation position pixel coordinates, identifying the position of the pixel in a captured image corresponding to the actual irradiation position of the laser light on the subject, are also referred to as "irradiation position pixel coordinates".
In the position-specifiable state, the derivation unit 111A selectively executes a first derivation process and a second derivation process in accordance with an instruction received via the touch panel 88. Here, the position-specifiable state is a state in which the pixel position identified by the irradiation position pixel coordinates is a pixel position identifiable at mutually corresponding positions in the first captured image and the second captured image.
In the position-unspecifiable state, the derivation unit 111A executes the first derivation process. Here, the position-unspecifiable state is a state in which the pixel position identified by the irradiation position pixel coordinates differs from the pixel positions identifiable at mutually corresponding positions in the first captured image and the second captured image.
Here, the first derivation process is a process of deriving the imaging position distance on the basis of plural pixel coordinates described below, the irradiation position real-space coordinates, the focal length, and the dimension of the imaging pixel 60A1. The plural pixel coordinates are a plurality of two-dimensional coordinates identifying three or more pixels that exist, in each of the first captured image and the second captured image, in the same planar region as the irradiation position of the laser light in real space and that are identifiable at mutually corresponding positions. The parameters used in the first derivation process are not limited to the plural pixel coordinates, the irradiation position real-space coordinates, the focal length, and the dimension of the imaging pixel 60A1; for example, one or more fine-adjustment parameters may additionally be used in the first derivation process.
The second derivation process is a process of deriving the imaging position distance on the basis of the irradiation position pixel coordinates, the irradiation position real-space coordinates, the focal length, and the dimension of the imaging pixel 60A1. The parameters used in the second derivation process are likewise not limited to these; for example, one or more fine-adjustment parameters may additionally be used in the second derivation process.
When the actual irradiation position of the laser light is a position in real space corresponding to a pixel position identifiable at mutually corresponding positions in the first captured image and the second captured image, the second derivation process can derive the imaging position distance with higher accuracy than the first derivation process. The second derivation process derives the imaging position distance on the basis of fewer parameters than are used in deriving the imaging position distance by the first derivation process; the "plurality of parameters" here are, for example, the irradiation position pixel coordinates, the irradiation position real-space coordinates, the focal length, and the dimension of the imaging pixel 60A1.
When executing the first derivation process, the derivation unit 111A derives, on the basis of the plural pixel coordinates, the focal length, and the dimension of the imaging pixel 60A1, the orientation of the plane defined by a plane equation expressing the plane that includes the three-dimensional coordinates in real space corresponding to the plural pixel coordinates. The derivation unit 111A then finalizes the plane equation on the basis of the derived plane orientation and the irradiation position real-space coordinates, and derives the imaging position distance on the basis of the finalized plane equation, the plural pixel coordinates, the focal length, and the dimension of the imaging pixel 60A1.
The plane equation used for deriving the imaging position distance is defined by the following equation (7). Deriving the "orientation of the plane" therefore means deriving a, b, and c in equation (7), and finalizing the "plane equation" means deriving d in equation (7), thereby fixing a, b, c, and d of the plane equation.
ax + by + cz + d = 0   …(7)
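One conventional way to realize the plane step of the first derivation process is to take the normal of the plane through three real-space points corresponding to the plural pixel coordinates as (a, b, c) and then fix d from the irradiation position real-space coordinates. The sketch below illustrates that computation only; it is an assumption about one possible realization, not the patent's exact procedure.

import numpy as np

def plane_orientation(p1, p2, p3):
    # (a, b, c): unit normal of the plane through three non-collinear
    # real-space points corresponding to the plural pixel coordinates.
    p1, p2, p3 = (np.asarray(p, dtype=float) for p in (p1, p2, p3))
    normal = np.cross(p2 - p1, p3 - p1)
    return normal / np.linalg.norm(normal)

def finalize_plane(abc, irradiation_xyz):
    # Fix d of ax + by + cz + d = 0 so that the plane passes through
    # the irradiation position real-space coordinates.
    return -float(np.dot(abc, np.asarray(irradiation_xyz, dtype=float)))

# a, b, c = plane_orientation((0, 0, 5), (1, 0, 5), (0, 1, 5))
# d = finalize_plane((a, b, c), (0.2, 0.0, 5.0))  # plane z = 5 -> d = -5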
Next, the operation of the portions of the distance measuring device 10A relating to the technology of the present disclosure will be described.
First, the dimension derivation process, realized by the CPU 100 executing the dimension derivation program 105A to activate the dimension derivation function when the dimension derivation button 90E is turned on, will be described with reference to FIGS. 14 and 15.
In the following, for convenience of explanation, it is assumed, as shown in FIG. 16 by way of example, that the imaging range 115 of the imaging device 14 of the distance measuring device 10A includes, as the subject, a region including the outer wall surface 121 of an office building 120, and that the outer wall surface 121 is the main subject and the irradiation target of the laser light.
The outer wall surface 121 is formed in a planar shape and is an example of a planar region according to the technology of the present disclosure. As shown in FIG. 16 by way of example, a plurality of quadrangular windows 122 are provided on the outer wall surface 121, and a horizontally long rectangular pattern 124 is drawn below each window 122; however, this is not limiting, and the pattern may instead be dirt, cracks, or the like on the outer wall surface 121.
In the present embodiment, "planar" includes not only a perfect plane but also a planar shape within a range that allows slight unevenness due to windows, vents, or the like; any plane or planar shape recognized as "planar" visually or by an existing image analysis technique suffices.
In the following, for convenience of explanation, it is assumed that the distance measuring device 10A measures the distance to the outer wall surface 121 by irradiating the outer wall surface 121 with the laser light.
In the dimension derivation process shown in FIG. 14, first, in step 200, the acquisition unit 110A determines whether the measurement imaging button 90A has been turned on. If the measurement imaging button 90A has not been turned on in step 200, the determination is negative and the process proceeds to step 202. If the measurement imaging button 90A has been turned on in step 200, the determination is affirmative and the process proceeds to step 204.
In step 202, the acquisition unit 110A determines whether a condition for ending the dimension derivation process is satisfied. The conditions for ending the dimension derivation process are, for example, that the dimension derivation button 90E has been turned on again, or that a first predetermined time has elapsed without an affirmative determination since execution of the processing of step 200 started. The first predetermined time is, for example, 1 minute.
If the condition for ending the dimension derivation process is not satisfied in step 202, the determination is negative and the process proceeds to step 200. If the condition is satisfied in step 202, the determination is affirmative and the dimension derivation process ends.
In step 204, the acquisition unit 110A causes the ranging unit 12 and the ranging control unit 68 to measure the actual distance and causes the imaging device 14 to perform imaging, and the process then proceeds to step 206.
In step 206, the acquisition unit 110A acquires the actually measured distance measured by the ranging unit 12 and the ranging control unit 68 through the processing of step 204, and also acquires a captured image signal indicating the captured image obtained by the imaging device 14 through the processing of step 204. The captured image indicated by the captured image signal acquired in this step 206 is a captured image obtained in the focused state by the processing of step 204.
In the next step 208, the acquisition unit 110A causes the display unit 86 to start displaying the captured image indicated by the acquired captured image signal, and the process then proceeds to step 209.
In step 209, the derivation unit 111A derives the focal length corresponding to the actually measured distance using the focal length derivation table 109A, and the process then proceeds to step 210.
The actually measured distance used in the processing of step 209 is the actually measured distance acquired by the acquisition unit 110A through the processing of step 206. The focal length corresponding to the actually measured distance is, for example, the focal length associated with the derivation distance that matches the actually measured distance among the derivation distances stored in the focal length derivation table 109A.
If no derivation distance matching the actually measured distance exists in the focal length derivation table 109A, the derivation unit 111A derives the focal length from the derivation distances of the focal length derivation table 109A by interpolation. Examples of the interpolation method employed in the present embodiment include linear interpolation and nonlinear interpolation.
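A linear-interpolation sketch over the example table values follows. Interpolating against the reciprocal of the distance, so that the infinity row becomes the value at 1/distance = 0, is our own assumption for handling the infinity entry; the patent only states that linear or nonlinear interpolation is used.

import numpy as np

DIST_M = [1.0, 2.0, 3.0, 5.0, 10.0, 30.0]      # finite derivation distances
FOCAL_MM = [7.0, 8.0, 10.0, 12.0, 14.0, 16.0]  # matching focal lengths

def focal_length_mm(measured_m):
    # Interpolate against 1/distance (increasing from 0 for infinity
    # to 1 for 1 m) so the 18 mm infinity row is the natural limit.
    inv_dist = [0.0] + [1.0 / d for d in reversed(DIST_M)]
    focals = [18.0] + list(reversed(FOCAL_MM))
    return float(np.interp(1.0 / measured_m, inv_dist, focals))

# focal_length_mm(2.0) -> 8.0 (table hit); focal_length_mm(4.0) -> between
# 10 mm and 12 mm; very large measured distances approach 18.0 mm.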
In step 210, the derivation unit 111A first derives the half angle of view α from the focal length on the basis of the following equation (8). In equation (8), "dimension of the imaging pixel" refers to the dimension of the imaging pixel 60A1, and f_0 refers to the focal length. The focal length used in the processing of this step 210 is the focal length derived through the processing of step 209.
Figure JPOXMLDOC01-appb-M000008
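Equation (8) itself is reproduced only as an image in the source. Assuming the usual pinhole-camera relation between the sensor half-width and the focal length, which is consistent with the quantities named here, a sketch might look like the following; this reconstruction is an assumption, not the patent's verbatim formula.

import math

def half_angle_of_view(pixel_dim_mm, pixels_in_row, f0_mm):
    # Assumed form of equation (8): the sensor half-width (pixel
    # dimension times half the number of pixels in the row direction)
    # seen at the focal length f0 subtends the half angle of view.
    half_width_mm = pixel_dim_mm * (pixels_in_row / 2.0)
    return math.atan(half_width_mm / f0_mm)  # radians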
Next, in step 210, the derivation unit 111A derives the irradiation position pixel coordinates from the distance L, the half angle of view α, the emission angle β, and the reference point distance M on the basis of equations (4) to (6), and the process then proceeds to step 212. The distance L used in the processing of this step 210 is the actually measured distance acquired by the acquisition unit 110A through the processing of step 206, and the half angle of view α used in deriving the irradiation position pixel coordinates is the half angle of view α derived by the derivation unit 111A from the focal length on the basis of equation (8).
In step 212, the derivation unit 111A causes the display unit 86 to start displaying the actually measured distance and an irradiation position mark 136 superimposed on the captured image, as shown in FIG. 17 by way of example, and the process then proceeds to step 214.
The actually measured distance displayed superimposed on the captured image through the processing of step 212 is the actually measured distance acquired by the acquisition unit 110A through the processing of step 206. In the example shown in FIG. 17, the numerical value "133325.0" corresponds to the actually measured distance, in millimeters. In the example shown in FIG. 17, the irradiation position mark 136 indicates the position of the pixel identified by the irradiation position pixel coordinates derived by the derivation unit 111A through the processing of step 210.
In step 214, the derivation unit 111A causes the display unit 86 to start displaying a frame definition guidance message (not shown) superimposed on the captured image, and the process then proceeds to step 216.
The frame definition guidance message is a message guiding the user to define a quadrangular frame within the display region of the captured image. The quadrangular frame is defined in accordance with the user's instructions via the touch panel 88. One example of the frame definition guidance message is: "Tap four points on the screen to define a quadrangular frame containing the irradiation position mark."
In step 216, the derivation unit 111A determines whether a quadrangular frame has been correctly defined within the display region of the captured image via the touch panel 88. Here, a correctly defined quadrangular frame is a quadrangular frame 117 containing the irradiation position mark 136 within the display region of the captured image, as shown in FIG. 18 by way of example. In the example shown in FIG. 18, the frame 117 is defined by four points 119A, 119B, 119C, and 119D in the display region of the captured image. The quadrangular region enclosed by the frame 117 is thereby associated with the irradiation position pixel coordinates corresponding to the irradiation position mark 136.
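Determining whether the defined frame contains the irradiation position mark 136 amounts to a point-in-polygon test. The following ray-casting sketch illustrates one standard way to perform such a check; it is not the patent's implementation, and the identifiers and example values are ours.

def frame_contains(frame_pts, mark_xy):
    # Ray casting: count how many frame edges a horizontal ray from the
    # mark crosses; an odd count means the mark is inside the frame.
    x, y = mark_xy
    inside = False
    n = len(frame_pts)
    for i in range(n):
        x1, y1 = frame_pts[i]
        x2, y2 = frame_pts[(i + 1) % n]
        if (y1 > y) != (y2 > y):  # edge straddles the ray
            x_cross = x1 + (y - y1) * (x2 - x1) / (y2 - y1)
            if x < x_cross:
                inside = not inside
    return inside

# frame = [(119.0, 80.0), (430.0, 75.0), (440.0, 300.0), (110.0, 310.0)]
# frame_contains(frame, (260.0, 190.0))  -> True for this made-up frame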
If a quadrangular frame has not been correctly defined within the display region of the captured image via the touch panel 88 in step 216, the determination is negative and the process proceeds to step 218. If a quadrangular frame has been correctly defined, the determination is affirmative and the process proceeds to step 220.
In step 218, the derivation unit 111A determines whether a condition for ending the dimension derivation process is satisfied. In the processing of step 218, the condition for ending the dimension derivation process is the same as the condition used in the processing of step 202.
If the condition for ending the dimension derivation process is not satisfied in step 218, the determination is negative and the process proceeds to step 216. If the condition is satisfied in step 218, the determination is affirmative and the process proceeds to step 248.
In step 220, the derivation unit 111A causes the display unit 86 to end the display of the frame definition guidance message, and the process then proceeds to step 222.
In step 222, the derivation unit 111A determines whether a quadrangular region exists within the defined quadrangular frame. The quadrangular region is, as shown in FIG. 18 by way of example, a trapezoidal region 123. Note that when the portion of the outer wall surface 121 (see FIG. 16) corresponding to the trapezoidal region 123 is imaged while squarely facing the focus lens 50 in front view, it appears in the captured image as a rectangular region.
If no quadrangular region exists within the defined quadrangular frame in step 222, the determination is negative and the process proceeds to step 230. If a quadrangular region exists, the determination is affirmative and the process proceeds to step 224. In the example shown in FIG. 18, the trapezoidal region 123 exists within the frame 117, so in this case the determination in step 222 is affirmative.
In step 224, the derivation unit 111A causes the display unit 86 to end the display of the actually measured distance and the irradiation position mark 136, and the process then proceeds to step 226.
In step 226, the derivation unit 111A executes the above-described projective transformation processing on the captured image on the basis of the quadrangular region existing within the defined quadrangular frame. In the example shown in FIG. 18, in this step 226 the derivation unit 111A executes the above-described projective transformation processing on the captured image on the basis of the trapezoidal region 123 existing within the frame 117.
In the next step 228, the derivation unit 111A causes the display unit 86 to start displaying a post-projective-transformation image 87, as shown in FIG. 19 by way of example, and the process then proceeds to step 232. The post-projective-transformation image 87 is an image obtained by executing the projective transformation processing on the captured image. As shown in FIG. 19 by way of example, the post-projective-transformation image 87 includes a rectangular region 123A, which is a quadrangular region corresponding to the trapezoidal region 123.
In step 230, the derivation unit 111A causes the display unit 86 to end the display of the actually measured distance and the irradiation position mark 136, and the process then proceeds to step 232.
In step 232, the derivation unit 111A starts displaying a pixel designation guidance message (not shown) superimposed on the processing target image, and the process then proceeds to step 234.
Here, the processing target image is the captured image or the post-projective-transformation image 87. When the determination in step 222 is negative, the captured image indicated by the captured image signal acquired through the processing of step 206 is used as the processing target image; when the determination in step 222 is affirmative, the post-projective-transformation image 87 is used as the processing target image.
The pixel designation guidance message is a message guiding the user to designate two points, that is, two pixels, within the display region of the processing target image. One example of the pixel designation guidance message is: "Tap two points on the screen to designate the start point and the end point of the area whose length is to be measured."
In step 234, the derivation unit 111A determines whether two pixels among the pixels of the processing target image have been designated by the user via the touch panel 88. If two pixels have not been designated in step 234, the determination is negative and the process proceeds to step 236. If two pixels have been designated, the determination is affirmative and the process proceeds to step 238.
In step 236, it is determined whether a condition for ending the dimension derivation process is satisfied. In the processing of step 236, the condition for ending the dimension derivation process is the same as the condition used in the processing of step 202.
If the condition for ending the dimension derivation process is not satisfied in step 236, the determination is negative and the process proceeds to step 234. If the condition is satisfied in step 236, the determination is affirmative and the process proceeds to step 248.
In step 238, the derivation unit 111A causes the display unit 86 to end the display of the pixel designation guidance message, and the process then proceeds to step 242.
In step 242, the derivation unit 111A derives, using the dimension derivation function, the length of the area in real space corresponding to the interval between the two pixels designated by the user via the touch panel 88, and the process then proceeds to step 244. In the processing of step 242, the interval between the two pixels designated by the user via the touch panel 88 is an example of the interval between plural pixels according to the technology of the present disclosure.
In step 242, the length of the area in real space corresponding to the interval between the two pixels designated by the user via the touch panel 88 is derived by equation (1). In this case, u1 and u2 in equation (1) are the addresses of the two pixels designated by the user via the touch panel 88, L in equation (1) is the actually measured distance acquired by the acquisition unit 110A through the processing of step 206, and f_0 in equation (1) is the focal length derived by the derivation unit 111A through the processing of step 209.
In step 244, the derivation unit 111A causes the display unit 86 to start displaying the length of the area and a double-headed arrow 125 superimposed on the processing target image, as shown in FIG. 20 by way of example, and the process then proceeds to step 246.
The length of the area displayed on the display unit 86 through the processing of step 244 is the length of the area derived by the derivation unit 111A through the processing of step 242. In the example shown in FIG. 20, the numerical value "63" corresponds to the length of the area, in millimeters. The double-headed arrow 125 displayed on the display unit 86 through the processing of step 244 is an arrow identifying the span between the two pixels designated by the user via the touch panel 88.
In step 246, the derivation unit 111A determines whether a condition for ending the dimension derivation process is satisfied. In the processing of step 246, the condition for ending the dimension derivation process is the same as the condition used in the processing of step 202.
If the condition for ending the dimension derivation process is not satisfied in step 246, the determination is negative and the determination of step 246 is performed again. If the condition is satisfied in step 246, the determination is affirmative and the process proceeds to step 248.
In step 248, the derivation unit 111A causes the display unit 86 to end the display of the processing target image and the superimposed display information, and the dimension derivation process then ends. In the processing of step 248, the superimposed display information refers to the various pieces of information currently displayed superimposed on the processing target image, for example, the length of the area and the double-headed arrow 125.
Next, the imaging position distance derivation process, realized by the CPU 100 executing the imaging position distance derivation program 106A to activate the imaging position distance derivation function when the three-dimensional coordinate derivation button 90G is turned on, will be described with reference to FIG. 21.
In the following, for convenience of explanation, the position of the distance measuring device 10A when the ranging unit 12 is located at the first measurement position and the imaging device 14 is located at the first imaging position is referred to as the "first position", and the position of the distance measuring device 10A when the ranging unit 12 is located at the second measurement position and the imaging device 14 is located at the second imaging position is referred to as the "second position".
In the imaging position distance derivation process shown in FIG. 21, first, in step 300, the acquisition unit 110A determines whether the measurement imaging button 90A has been turned on at the first position. If the measurement imaging button 90A has not been turned on in step 300, the determination is negative and the process proceeds to step 302. If the measurement imaging button 90A has been turned on in step 300, the determination is affirmative and the process proceeds to step 304.
In step 302, the acquisition unit 110A determines whether a condition for ending the imaging position distance derivation process is satisfied. The conditions for ending the imaging position distance derivation process are, for example, that the three-dimensional coordinate derivation button 90G has been turned on again, or that an instruction to end the imaging position distance derivation process has been received via the touch panel 88.
If the condition for ending the imaging position distance derivation process is not satisfied in step 302, the determination is negative and the process proceeds to step 300. If the condition is satisfied in step 302, the determination is affirmative and the imaging position distance derivation process ends.
In step 304, the acquisition unit 110A causes the ranging unit 12 and the ranging control unit 68 to measure the actual distance and causes the imaging device 14 to perform imaging, and the process then proceeds to step 306. In the following, for convenience of explanation, the actually measured distance measured through the processing of step 304 is referred to as the "first actually measured distance".
In step 306, the acquisition unit 110A acquires the first actually measured distance measured by the ranging unit 12 and the ranging control unit 68 through the processing of step 304, and also acquires a first captured image signal indicating the first captured image obtained by the imaging device 14 through the processing of step 304. The first captured image indicated by the first captured image signal acquired in this step 306 is a first captured image obtained in the focused state by the processing of step 304.
In the next step 308, the acquisition unit 110A causes the display unit 86 to start displaying the first captured image indicated by the first captured image signal acquired in the processing of step 306, as shown in FIG. 26 by way of example, and the process then proceeds to step 310.
In step 310, the derivation unit 111A derives the focal length corresponding to the first actually measured distance using the focal length derivation table 109A, and the process then proceeds to step 312.
The first actually measured distance used in the processing of step 310 is the first actually measured distance acquired by the acquisition unit 110A through the processing of step 306. The focal length corresponding to the first actually measured distance is, for example, the focal length associated with the derivation distance that matches the first actually measured distance among the derivation distances stored in the focal length derivation table 109A.
If no derivation distance matching the first actually measured distance exists in the focal length derivation table 109A, the derivation unit 111A derives the focal length from the derivation distances of the focal length derivation table 109A by the above-described interpolation method.
In step 312, the derivation unit 111A first derives the half angle of view α from the focal length on the basis of equation (8). The focal length used in the processing of step 312 is the focal length derived through the processing of step 310.
Next, in step 312, the derivation unit 111A derives the irradiation position real-space coordinates from the distance L, the half angle of view α, the emission angle β, and the reference point distance M on the basis of equation (3), and the process then proceeds to step 314. The distance L used in the processing of step 312 is the first actually measured distance acquired by the acquisition unit 110A through the processing of step 306, and the half angle of view α used in deriving the irradiation position real-space coordinates is the half angle of view α derived by the derivation unit 111A from the focal length on the basis of equation (8).
In step 314, the deriving unit 111A derives the first irradiation position pixel coordinates from the distance L, the half angle of view α, the emission angle β, and the reference point distance M based on Formulas (4) to (6), and then the process proceeds to step 316. The distance L used in step 314 is the first actually measured distance acquired by the acquisition unit 110A in step 306. The half angle of view α used to derive the first irradiation position pixel coordinates in step 314 is the half angle of view α derived by the deriving unit 111A from the focal length based on Formula (8) in step 312.
In step 316, the deriving unit 111A causes the display unit 86 to start displaying the first actually measured distance and the irradiation position mark 136 superimposed on the first captured image, as shown in FIG. 27 as an example, and then the process proceeds to step 318.
The first actually measured distance displayed through the execution of step 316 is the first actually measured distance acquired by the acquisition unit 110A in step 306. In the example shown in FIG. 27, the numerical value "133325.0" corresponds to the first actually measured distance, and the unit is millimeters. Also in the example shown in FIG. 27, the irradiation position mark 136 is a mark indicating the position of the pixel specified by the first irradiation position pixel coordinates derived in step 314.
In step 318, the deriving unit 111A determines whether or not the position of the pixel specified by the first irradiation position pixel coordinates derived in step 314 matches an identifiable pixel position. Here, an identifiable pixel position refers to the position of a pixel that can be identified at mutually corresponding positions in each of the first captured image and the second captured image.
If, in step 318, the position of the pixel specified by the first irradiation position pixel coordinates derived in step 314 matches an identifiable pixel position, the determination is affirmative and the process proceeds to step 320. If, in step 318, the position of the pixel specified by the first irradiation position pixel coordinates derived in step 314 does not match an identifiable pixel position, the determination is negative and the process proceeds to step 342.
In step 320, the deriving unit 111A causes the display unit 86 to display the match message 137A superimposed on the first captured image for a specific time (for example, 5 seconds), as shown in FIG. 28 as an example, and then the process proceeds to step 322. The match message 137A is a message indicating that the position of the pixel specified by the first irradiation position pixel coordinates derived in step 314 matches an identifiable pixel position. Accordingly, by executing step 320, the user is notified that the position of the pixel specified by the first irradiation position pixel coordinates derived in step 314 matches an identifiable pixel position.
In the example shown in FIG. 28, the match message 137A reads, "The laser beam irradiation position matches a characteristic position of the subject, so the first derivation process or the second derivation process can be executed." However, the technology of the present disclosure is not limited to this. For example, only the portion of the match message 137A reading "The laser beam irradiation position matches a characteristic position of the subject" may be adopted and displayed.
In this way, any message may be used as long as it notifies the user of the match between the position of the pixel specified by the first irradiation position pixel coordinates derived in step 314 and an identifiable pixel position. Further, although the example shown in FIG. 28 presents the match message 137A as a visible display, an audible indication such as audio output by an audio playback device (not shown) or a permanent visible indication such as printed output by a printer may be used instead of, or in combination with, the visible display.
In step 322, the deriving unit 111A starts a display in which the derivation process selection screen 139 is superimposed on the first captured image, as shown in FIG. 29 as an example, and then the process proceeds to step 324. The derivation process selection screen 139 displays two soft keys: a first derivation process start button 139A and a second derivation process start button 139B. The derivation process selection screen 139 also displays a message prompting the user to turn on either the first derivation process start button 139A or the second derivation process start button 139B.
Here, the first derivation process start button 139A being turned on means that the user wishes to execute the first derivation process. One example of a case where the user wishes to execute the first derivation process is a case where the user doubts the content of the match message 137A. The user doubting the content of the match message 137A refers, for example, to a case where the user judges that the actual irradiation position of the laser beam may deviate from the irradiation position specified by the irradiation position real space coordinates because of, for instance, replacement of the distance measurement unit 12 or a change in the angle of view.
The second derivation process start button 139B being turned on means that the user wishes to execute the second derivation process. One example of a case where the user wishes to execute the second derivation process is a case where the user has no doubt about the content of the match message 137A. The user having no doubt about the content of the match message 137A refers, for example, to a case where the user judges that the actual irradiation position of the laser beam does not deviate from the irradiation position specified by the irradiation position real space coordinates. Note that the number of parameters used to derive the imaging position distance in the second derivation process is smaller than the number of parameters used in the first derivation process. Therefore, the second derivation process can reduce the load required to derive the imaging position distance compared with the first derivation process.
In step 324, the deriving unit 111A determines whether or not the first derivation process start button 139A has been turned on. If the first derivation process start button 139A has been turned on in step 324, the determination is affirmative and the process proceeds to step 328. If the first derivation process start button 139A has not been turned on in step 324, the determination is negative and the process proceeds to step 332.
In step 328, the deriving unit 111A causes the display unit 86 to end the display of the derivation process selection screen 139 and to start displaying a pixel-of-interest designation guidance message (not shown) superimposed on the first captured image, and then the process proceeds to step 330.
The pixel-of-interest designation guidance message refers, for example, to a message guiding the user to designate a pixel of interest in the first captured image via the touch panel 88. One example of the pixel-of-interest designation guidance message is the message "Please designate one pixel to attend to (point of interest)." The pixel-of-interest designation guidance message displayed in step 328 is hidden, for example, when the determination in step 330A (described later) is affirmative, that is, when a pixel of interest has been designated.
In step 332, the deriving unit 111A determines whether or not the second derivation process start button 139B has been turned on. If the second derivation process start button 139B has been turned on in step 332, the determination is affirmative and the process proceeds to step 334. If the second derivation process start button 139B has not been turned on in step 332, the determination is negative and the process proceeds to step 338.
In step 334, the deriving unit 111A causes the display unit 86 to end the display of the derivation process selection screen 139 and to start displaying the above-described pixel-of-interest designation guidance message (not shown) superimposed on the first captured image, and then the process proceeds to step 336. The pixel-of-interest designation guidance message displayed in step 334 is hidden, for example, when a pixel of interest is designated in step 336A (described later).
In step 338, the deriving unit 111A determines whether or not a condition for ending the imaging position distance derivation process is satisfied. In step 338, the condition for ending the imaging position distance derivation process is the same as the condition used in step 302.
If the condition for ending the imaging position distance derivation process is not satisfied in step 338, the determination is negative and the process proceeds to step 324. If the condition for ending the imaging position distance derivation process is satisfied in step 338, the determination is affirmative and the process proceeds to step 340.
In step 340, the deriving unit 111A causes the display unit 86 to end the display of the first captured image and the superimposed display information, and then ends the imaging position distance derivation process. In step 340, the superimposed display information refers to the various pieces of information currently displayed superimposed on the first captured image, for example, the first actually measured distance, the irradiation position mark 136, and the derivation process selection screen 139.
In step 342, the deriving unit 111A causes the display unit 86 to display the mismatch message 137B superimposed on the first captured image for a specific time (for example, 5 seconds), as shown in FIG. 30 as an example, and then the process proceeds to step 330. The mismatch message 137B is a message indicating that the position of the pixel specified by the first irradiation position pixel coordinates derived in step 314 does not match an identifiable pixel position. Here, "the position of the pixel specified by the first irradiation position pixel coordinates does not match an identifiable pixel position" means, in other words, that the position of the pixel specified by the first irradiation position pixel coordinates is a pixel position different from any identifiable pixel position.
In this way, by executing step 342, the user is notified that the position of the pixel specified by the first irradiation position pixel coordinates derived in step 314 does not match an identifiable pixel position.
In the example shown in FIG. 30, the mismatch message 137B reads, "The laser beam irradiation position did not match a characteristic position of the subject, so the first derivation process will be executed." However, the technology of the present disclosure is not limited to this. For example, only the portion of the mismatch message 137B reading "The laser beam irradiation position did not match a characteristic position of the subject." may be adopted and displayed.
In this way, any message may be used as long as it notifies the user of the mismatch between the position of the pixel specified by the first irradiation position pixel coordinates derived in step 314 and the identifiable pixel positions. Further, although the example shown in FIG. 30 presents the mismatch message 137B as a visible display, an audible indication such as audio output by an audio playback device (not shown) or a permanent visible indication such as printed output by a printer may be used instead of, or in combination with, the visible display.
In step 330, the CPU 100 executes the first derivation process shown in FIGS. 22 and 23 as an example, and then ends the imaging position distance derivation process.
In the first derivation process shown in FIG. 22, first, in step 330A, the acquisition unit 110A determines whether or not a pixel of interest has been designated in the first captured image by the user via the touch panel 88. Here, the pixel of interest corresponds to the first designated pixel described above. The touch panel 88 accepts pixel designation information designating, among the two-dimensional coordinates assigned to the touch panel 88, the two-dimensional coordinates corresponding to a pixel included in the first captured image. Accordingly, in step 330A, it is determined that a pixel of interest has been designated when pixel designation information has been accepted by the touch panel 88. That is, the pixel corresponding to the two-dimensional coordinates designated by the pixel designation information is taken as the pixel of interest.
If, in step 330A, a pixel of interest has not been designated in the first captured image by the user via the touch panel 88, the determination is negative and the process proceeds to step 330B. If, in step 330A, a pixel of interest has been designated in the first captured image by the user via the touch panel 88, the determination is affirmative and the process proceeds to step 330D.
In step 330B, the acquisition unit 110A determines whether or not a condition for ending the first derivation process is satisfied. In step 330B, the condition for ending the first derivation process is the same as the condition used in step 302.
If the condition for ending the first derivation process is not satisfied in step 330B, the determination is negative and the process proceeds to step 330A. If the condition for ending the first derivation process is satisfied in step 330B, the determination is affirmative and the process proceeds to step 330C.
In step 330C, the acquisition unit 110A executes the same processing as in step 340, and then ends the first derivation process.
In step 330D, the acquisition unit 110A acquires the pixel-of-interest coordinates specifying the pixel designated by the user via the touch panel 88 in the first captured image, and then the process proceeds to step 330E. Here, one example of a pixel designated by the user via the touch panel 88 in the first captured image is the pixel of interest 126 shown in FIG. 31. As shown in FIG. 31 as an example, the pixel of interest 126 is the pixel at the lower-left corner, in front view, of the image corresponding to the central window on the second floor of the outer wall surface in the first captured image. In the example shown in FIG. 16, the central window on the second floor of the outer wall surface refers to the window 122 at the center of the second floor of the office building 120 among the windows 122 provided on the outer wall surface 121. The pixel-of-interest coordinates refer to the two-dimensional coordinates specifying the pixel of interest 126 in the first captured image.
In step 330E, the acquisition unit 110A acquires the three characteristic pixel coordinates specifying the positions of three characteristic pixels in the outer wall surface image 128 of the first captured image (the hatched region in the example shown in FIG. 32), and then the process proceeds to step 330F.
Here, the outer wall surface image 128 refers to the image showing the outer wall surface 121 (see FIG. 16) in the first captured image. The three characteristic pixels are pixels that can be identified at mutually corresponding positions in each of the first captured image and the second captured image. The three characteristic pixels in the first captured image are separated from one another by at least a predetermined number of pixels, and are the pixels present at each of three points identified according to a predetermined rule by image analysis based on, for example, the spatial frequency of the image corresponding to a pattern, building material, or the like in the outer wall surface image 128. For example, three pixels that indicate distinct vertices having the maximum spatial frequency within a circular region defined by a predetermined radius centered on the pixel of interest 126 and that satisfy a predetermined condition are extracted as the three characteristic pixels. The three characteristic pixel coordinates correspond to the plural pixel coordinates described above.
In the example shown in FIG. 32, the three characteristic pixels are the first pixel 130, the second pixel 132, and the third pixel 134. The first pixel 130 is the pixel at the upper-left corner, in front view, of the image corresponding to the central window on the second floor of the outer wall surface in the outer wall surface image 128. The second pixel 132 is the pixel at the upper-right corner, in front view, of the image corresponding to the central window on the second floor of the outer wall surface. The third pixel 134 is the pixel at the lower-left corner, in front view, of the image corresponding to the pattern 124 adjacent to the lower part of the central window on the third floor of the outer wall surface. In the example shown in FIG. 16, the central window on the third floor of the outer wall surface refers to the window 122 at the center of the third floor of the office building 120 among the windows 122 provided on the outer wall surface 121.
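As a rough illustration of how such characteristic pixels might be extracted, the following sketch uses a general-purpose corner detector as a stand-in for the spatial-frequency-based analysis described above; the search radius, separation, and quality parameters are hypothetical and are not taken from the specification.

# Illustrative sketch of step 330E, assuming OpenCV's corner detector as a
# stand-in for the spatial-frequency-based selection rule described above.
import cv2
import numpy as np

def three_characteristic_pixels(gray_image, poi, radius=200, min_separation=30):
    # Restrict the search to a circular region centered on the pixel of
    # interest (poi), as in the example described above.
    mask = np.zeros(gray_image.shape, dtype=np.uint8)
    cv2.circle(mask, poi, radius, 255, thickness=-1)
    corners = cv2.goodFeaturesToTrack(
        gray_image, maxCorners=3, qualityLevel=0.05,
        minDistance=min_separation, mask=mask)
    if corners is None or len(corners) < 3:
        return None  # fewer than three usable characteristic pixels
    return [tuple(map(int, c.ravel())) for c in corners]

# Usage: gray = cv2.imread("first_captured_image.png", cv2.IMREAD_GRAYSCALE)
#        pixels = three_characteristic_pixels(gray, poi=(512, 384))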
In step 330F, the acquisition unit 110A executes the same processing as in step 340, and then the process proceeds to step 330G shown in FIG. 23.
In step 330G, the acquisition unit 110A determines whether or not the measurement imaging button 90A has been turned on at the second position. If the measurement imaging button 90A has not been turned on in step 330G, the determination is negative and the process proceeds to step 330H. If the measurement imaging button 90A has been turned on in step 330G, the determination is affirmative and the process proceeds to step 330I.
In step 330H, the acquisition unit 110A determines whether or not a condition for ending the first derivation process is satisfied. In step 330H, the condition for ending the first derivation process is the same as the condition used in step 302.
If the condition for ending the first derivation process is not satisfied in step 330H, the determination is negative and the process proceeds to step 330G. If the condition for ending the first derivation process is satisfied in step 330H, the determination is affirmative and the first derivation process ends.
In step 330I, the acquisition unit 110A causes the distance measurement unit 12 and the distance measurement control unit 68 to measure the actually measured distance and causes the imaging device 14 to perform imaging, and then the process proceeds to step 330J. In the following, for convenience of explanation, the actually measured distance obtained by executing step 330I or step 336H (see FIG. 25) described later is referred to as the "second actually measured distance."
In step 330J, the acquisition unit 110A acquires the second actually measured distance measured by the distance measurement unit 12 and the distance measurement control unit 68 through the execution of step 330I. Also in step 330J, the acquisition unit 110A acquires a second captured image signal indicating the second captured image obtained by imaging with the imaging device 14 through the execution of step 330I. Note that the second captured image indicated by the second captured image signal acquired in step 330J is a second captured image obtained by imaging in the focused state through the execution of step 330I.
In the next step 330K, the acquisition unit 110A causes the display unit 86 to start displaying the second captured image indicated by the second captured image signal acquired in step 330J, and then the process proceeds to step 330L.
In step 330L, the deriving unit 111A derives the focal length corresponding to the second actually measured distance using the focal length derivation table 109A, and then the process proceeds to step 330M.
The second actually measured distance used in step 330L is the second actually measured distance acquired by the acquisition unit 110A in step 330J. The focal length corresponding to the second actually measured distance is, for example, the focal length associated with the derivation distance that matches the second actually measured distance among the derivation distances stored in the focal length derivation table 109A.
If no derivation distance matching the second actually measured distance exists in the focal length derivation table 109A, the deriving unit 111A derives the focal length from the derivation distances in the focal length derivation table 109A by the above-described interpolation method.
In step 330M, the acquisition unit 110A identifies, among the pixels included in the second captured image, the corresponding pixel of interest, which is the pixel corresponding to the pixel of interest 126 described above, acquires the corresponding pixel-of-interest coordinates specifying the identified corresponding pixel of interest, and then the process proceeds to step 330N. Here, the corresponding pixel-of-interest coordinates refer to the two-dimensional coordinates specifying the corresponding pixel of interest in the second captured image. The corresponding pixel of interest is identified by executing existing image analysis, such as pattern matching, on the first and second captured images as analysis targets. The corresponding pixel of interest corresponds to the second designated pixel described above; once the pixel of interest 126 is identified in the first captured image, it is uniquely identified in the second captured image by executing step 330M.
In step 330N, the acquisition unit 110A identifies three characteristic pixels in the outer wall surface image of the second captured image corresponding to the outer wall surface image 128 (see FIG. 32), acquires the corresponding characteristic pixel coordinates specifying the identified three characteristic pixels, and then the process proceeds to step 330P. The corresponding characteristic pixel coordinates refer to the two-dimensional coordinates specifying the three characteristic pixels identified in the second captured image. The corresponding characteristic pixel coordinates are also the two-dimensional coordinates in the second captured image that correspond to the three characteristic pixel coordinates acquired in step 330E, and correspond to the plural pixel coordinates described above. The three characteristic pixels in the second captured image are identified by executing existing image analysis, such as pattern matching, on the first and second captured images as analysis targets, in the same manner as the method of identifying the corresponding pixel of interest described above.
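As an illustration of the kind of existing image analysis referred to above, the following sketch finds a corresponding pixel by template matching; the patch size is a hypothetical parameter, and the designated pixel is assumed not to lie near the image border.

# Illustrative sketch of the pattern matching used in steps 330M and 330N,
# matching a small patch around a pixel of the first captured image against
# the second captured image.
import cv2

def corresponding_pixel(first_img, second_img, pixel, half_patch=15):
    x, y = pixel
    # Cut a template patch around the designated pixel in the first image.
    patch = first_img[y - half_patch:y + half_patch + 1,
                      x - half_patch:x + half_patch + 1]
    # Search the second image for the best-matching location.
    result = cv2.matchTemplate(second_img, patch, cv2.TM_CCOEFF_NORMED)
    _, _, _, max_loc = cv2.minMaxLoc(result)
    # Convert the patch's top-left match location back to a pixel center.
    return (max_loc[0] + half_patch, max_loc[1] + half_patch)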
In step 330P, the deriving unit 111A derives a, b, and c of the plane equation shown in Formula (7) from the three characteristic pixel coordinates, the corresponding characteristic pixel coordinates, the focal length, and the dimensions of the imaging pixel 60A1, thereby deriving the orientation of the plane defined by the plane equation. The focal length used in step 330P is the focal length derived in step 330L.
Here, let the three characteristic pixel coordinates be (uL1, vL1), (uL2, vL2), and (uL3, vL3), and let the corresponding characteristic pixel coordinates be (uR1, vR1), (uR2, vR2), and (uR3, vR3). Then the first to third characteristic pixel three-dimensional coordinates are defined by Formulas (9) to (11) below. The first characteristic pixel three-dimensional coordinates are the three-dimensional coordinates corresponding to (uL1, vL1) and (uR1, vR1). The second characteristic pixel three-dimensional coordinates are the three-dimensional coordinates corresponding to (uL2, vL2) and (uR2, vR2). The third characteristic pixel three-dimensional coordinates are the three-dimensional coordinates corresponding to (uL3, vL3) and (uR3, vR3). Note that vR1, vR2, and vR3 are not used in Formulas (9) to (11).
[Formula (9)]
[Formula (10)]
[Formula (11)]
In step 330P, the deriving unit 111A derives a, b, and c of Formula (7) by substituting each of the first to third characteristic pixel three-dimensional coordinates shown in Formulas (9) to (11) into Formula (7) and optimizing a, b, and c over the three resulting equivalent equations. Deriving a, b, and c of Formula (7) in this way means that the orientation of the plane defined by the plane equation shown in Formula (7) is derived.
In the next step 330Q, the deriving unit 111A finalizes the plane equation shown in Formula (7) based on the irradiation position real space coordinates derived in step 312, and then the process proceeds to step 330R. That is, in step 330Q, the deriving unit 111A determines d of Formula (7) by substituting a, b, and c derived in step 330P and the irradiation position real space coordinates derived in step 312 into Formula (7). Since a, b, and c of Formula (7) have been derived in step 330P, once d of Formula (7) is determined in step 330Q, the plane equation shown in Formula (7) is finalized.
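The following minimal sketch illustrates steps 330P and 330Q under the assumption that the plane equation of Formula (7) has the common form ax + by + cz + d = 0; here the orientation (a, b, c) is obtained as the normal of the plane through the three characteristic pixel three-dimensional points, which stands in for the optimization described above.

# Minimal sketch of steps 330P and 330Q, assuming the plane equation is
# a*x + b*y + c*z + d = 0 and taking (a, b, c) as the normal of the plane
# through the three characteristic-pixel 3D points; d is then fixed from
# the irradiation position real space coordinates, as described above.
import numpy as np

def fit_plane(p1, p2, p3, irradiation_point):
    p1, p2, p3 = map(np.asarray, (p1, p2, p3))
    # Normal of the plane spanned by the three 3D points gives (a, b, c).
    normal = np.cross(p2 - p1, p3 - p1)
    a, b, c = normal / np.linalg.norm(normal)
    # Step 330Q: substitute the irradiation position to determine d.
    d = -(a * irradiation_point[0] + b * irradiation_point[1]
          + c * irradiation_point[2])
    return a, b, c, d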
In step 330R, the deriving unit 111A derives the imaging position distance based on characteristic pixel three-dimensional coordinates and the plane equation, and then the process proceeds to step 330S.
Here, the characteristic pixel three-dimensional coordinates used in step 330R are the first characteristic pixel three-dimensional coordinates. The characteristic pixel three-dimensional coordinates used in step 330R are not limited to the first characteristic pixel three-dimensional coordinates, and may instead be the second or third characteristic pixel three-dimensional coordinates. The plane equation used in step 330R is the plane equation finalized in step 330Q.
Accordingly, in step 330R, the imaging position distance "B" is derived by substituting the characteristic pixel three-dimensional coordinates into the plane equation.
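Since Formulas (9) to (11) are not reproduced here, the following sketch assumes, as is standard in stereo geometry, that the characteristic pixel three-dimensional coordinates are linear in the imaging position distance B; substituting coordinates of the form (B*x0, B*y0, B*z0) into the finalized plane equation then yields B directly. This is an assumption made for illustration only.

# Hedged sketch of step 330R, under the assumption that the characteristic
# pixel 3D coordinates scale linearly with B. Substituting (B*x0, B*y0,
# B*z0) into a*x + b*y + c*z + d = 0 and solving for B gives:
def imaging_position_distance(a, b, c, d, x0, y0, z0):
    # (x0, y0, z0): characteristic pixel 3D coordinates per unit of B.
    return -d / (a * x0 + b * y0 + c * z0)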
In step 330S, the deriving unit 111A causes the display unit 86 to start displaying the imaging position distance derived in step 330R superimposed on the second captured image, as shown in FIG. 33 as an example. Also in step 330S, the deriving unit 111A stores the imaging position distance derived in step 330R in a predetermined storage area, and then the process proceeds to step 330T. Examples of the predetermined storage area include a storage area of the primary storage unit 102 and a storage area of the secondary storage unit 104.
In the example shown in FIG. 33, the numerical value "144656.1" corresponds to the imaging position distance derived in step 330R, and the unit is millimeters.
In step 330T, the deriving unit 111A determines whether or not a condition for ending the first derivation process is satisfied. In step 330T, the condition for ending the first derivation process is the same as the condition used in step 302.
If the condition for ending the first derivation process is not satisfied in step 330T, the determination is negative and the determination in step 330T is performed again. If the condition for ending the first derivation process is satisfied in step 330T, the determination is affirmative and the process proceeds to step 330U.
In step 330U, the deriving unit 111A causes the display unit 86 to end the display of the second captured image and the superimposed display information, and then ends the first derivation process. In step 330U, the superimposed display information refers to the various pieces of information currently displayed superimposed on the second captured image, for example, the second actually measured distance and the imaging position distance.
Meanwhile, in step 336 shown in FIG. 21, the CPU 100 executes the second derivation process shown in FIG. 24 as an example, and then ends the imaging position distance derivation process.
In the second derivation process shown in FIG. 24, first, in step 336A, the acquisition unit 110A determines whether or not a pixel of interest has been designated in the first captured image by the user via the touch panel 88. Here, the pixel of interest corresponds to the first designated pixel described above. The touch panel 88 accepts pixel designation information designating, among the two-dimensional coordinates assigned to the touch panel 88, the two-dimensional coordinates corresponding to a pixel included in the first captured image. Accordingly, in step 336A, it is determined that a pixel of interest has been designated when pixel designation information has been accepted by the touch panel 88. That is, the pixel corresponding to the two-dimensional coordinates designated by the pixel designation information is taken as the pixel of interest.
If, in step 336A, a pixel of interest has not been designated in the first captured image by the user via the touch panel 88, the determination is negative and the process proceeds to step 336B. If, in step 336A, a pixel of interest has been designated in the first captured image by the user via the touch panel 88, the determination is affirmative and the process proceeds to step 336D.
In step 336B, the acquisition unit 110A determines whether or not a condition for ending the second derivation process is satisfied. In step 336B, the condition for ending the second derivation process is the same as the condition used in step 302.
If the condition for ending the second derivation process is not satisfied in step 336B, the determination is negative and the process proceeds to step 336A. If the condition for ending the second derivation process is satisfied in step 336B, the determination is affirmative and the process proceeds to step 336C.
In step 336C, the acquisition unit 110A executes the same processing as in step 340, and then ends the second derivation process.
In step 336D, the acquisition unit 110A acquires the pixel-of-interest coordinates specifying the pixel designated by the user via the touch panel 88 in the first captured image, and then the process proceeds to step 336E. Here, one example of a pixel designated by the user via the touch panel 88 in the first captured image is the pixel of interest 126 shown in FIG. 31, and the pixel-of-interest coordinates refer to the two-dimensional coordinates specifying the pixel of interest 126 in the first captured image.
In step 336E, the acquisition unit 110A executes the same processing as in step 340, and then the process proceeds to step 336F shown in FIG. 25.
In step 336F, the acquisition unit 110A determines whether or not the measurement imaging button 90A has been turned on at the second position. If the measurement imaging button 90A has not been turned on in step 336F, the determination is negative and the process proceeds to step 336G. If the measurement imaging button 90A has been turned on in step 336F, the determination is affirmative and the process proceeds to step 336H.
In step 336G, the acquisition unit 110A determines whether or not a condition for ending the second derivation process is satisfied. In step 336G, the condition for ending the second derivation process is the same as the condition used in step 302.
If the condition for ending the second derivation process is not satisfied in step 336G, the determination is negative and the process proceeds to step 336F. If the condition for ending the second derivation process is satisfied in step 336G, the determination is affirmative and the second derivation process ends.
In step 336H, the acquisition unit 110A causes the distance measurement unit 12 and the distance measurement control unit 68 to measure the second actually measured distance and causes the imaging device 14 to perform imaging, and then the process proceeds to step 336I.
In step 336I, the acquisition unit 110A acquires the second actually measured distance measured by the distance measurement unit 12 and the distance measurement control unit 68 through the execution of step 336H. Also in step 336I, the acquisition unit 110A acquires a second captured image signal indicating the second captured image obtained by imaging with the imaging device 14 through the execution of step 336H. Note that the second captured image indicated by the second captured image signal acquired in step 336I is a second captured image obtained by imaging in the focused state through the execution of step 336H.
In the next step 336J, the acquisition unit 110A causes the display unit 86 to start displaying the second captured image indicated by the second captured image signal acquired in step 336I, and then the process proceeds to step 336K.
In step 336K, the deriving unit 111A derives the focal length corresponding to the second actually measured distance using the focal length derivation table 109A, and then the process proceeds to step 336L.
The second actually measured distance used in step 336K is the second actually measured distance acquired by the acquisition unit 110A in step 336I. In step 336K, the same derivation method as the focal length derivation method in step 330L described above is employed.
In step 336L, the acquisition unit 110A identifies, among the pixels included in the second captured image, the corresponding pixel of interest, which is the pixel corresponding to the pixel of interest 126 described above, acquires the corresponding pixel-of-interest coordinates specifying the identified corresponding pixel of interest, and then the process proceeds to step 336M. In step 336L, the same acquisition method as the method of acquiring the corresponding pixel-of-interest coordinates in step 330M described above is employed.
In step 336M, the deriving unit 111A derives the second irradiation position pixel coordinates, and then the process proceeds to step 336N. That is, in step 336M, the deriving unit 111A derives, as the second irradiation position pixel coordinates, the coordinates specifying the pixel of the second captured image whose position corresponds to the position of the pixel specified by the first irradiation position pixel coordinates derived in step 314.
Among the pixels of the second captured image, the pixel corresponding to the position of the pixel specified by the first irradiation position pixel coordinates is identified by executing existing image analysis, such as pattern matching, on the first and second captured images as analysis targets, in the same manner as the method of identifying the corresponding pixel of interest described above.
In step 336N, the deriving unit 111A derives the imaging position distance based on the irradiation position real space coordinates, the irradiation position pixel coordinates, the focal length, the dimensions of the imaging pixel 60A1, and Formula (2), and then the process proceeds to step 336P.
Here, the irradiation position real space coordinates used in step 336N are the irradiation position real space coordinates derived in step 312. The irradiation position pixel coordinates used in step 336N are the first irradiation position pixel coordinates derived in step 314 and the second irradiation position pixel coordinates derived in step 336M. Further, the focal length used in step 336N is the focal length derived in step 336K.
Accordingly, in step 336N, the imaging position distance "B" is derived by substituting the irradiation position real space coordinates, the irradiation position pixel coordinates, the focal length, and the dimensions of the imaging pixel 60A1 into Formula (2).
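Formula (2) is defined elsewhere in the specification and is not reproduced here; the following sketch assumes it takes the common stereo disparity form, in which the imaging position distance B follows from the depth of the irradiation point, the pixel pitch, the focal length, and the horizontal disparity between the first and second irradiation position pixel coordinates. This assumed form is for illustration only.

# Hedged sketch of step 336N, assuming the common stereo disparity relation
#     B = Z * delta * (u_L - u_R) / f
# where Z is the depth of the laser irradiation point (from the irradiation
# position real space coordinates), delta is the imaging pixel pitch, f is
# the focal length, and u_L, u_R are the horizontal irradiation position
# pixel coordinates in the first and second captured images.
def imaging_position_distance_from_disparity(z, delta, f, u_left, u_right):
    # z, delta, f in the same unit (e.g., millimeters); u_* in pixels.
    return z * delta * (u_left - u_right) / f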
In step 336P, the deriving unit 111A causes the display unit 86 to start displaying the imaging position distance derived in step 336N superimposed on the second captured image, as shown in FIG. 33 as an example. Also in step 336P, the deriving unit 111A stores the imaging position distance derived in step 336N in a predetermined storage area, and then the process proceeds to step 336Q.
In step 336Q, the deriving unit 111A determines whether or not a condition for ending the second derivation process is satisfied. In step 336Q, the condition for ending the second derivation process is the same as the condition used in step 302.
If the condition for ending the second derivation process is not satisfied in step 336Q, the determination is negative and the determination in step 336Q is performed again. If the condition for ending the second derivation process is satisfied in step 336Q, the determination is affirmative and the process proceeds to step 336R.
In step 336R, the deriving unit 111A causes the display unit 86 to end the display of the second captured image and the superimposed display information, and then ends the second derivation process. In step 336R, the superimposed display information refers to the various pieces of information currently displayed superimposed on the second captured image, for example, the second actually measured distance and the imaging position distance.
 このように、第2導出処理では、第1導出処理のように平面方程式を用いる必要がない。そのため、第2導出処理は、第1導出処理に比べ、撮像位置距離の導出にかかる負荷が小さい。また、レーザ光の実際の照射位置が特定可能画素位置に対応する実空間上での位置と一致する場合、第2導出処理による撮像位置距離の導出精度は、第1導出処理による撮像位置距離の導出精度よりも高くなる。 Thus, in the second derivation process, it is not necessary to use a plane equation as in the first derivation process. For this reason, the load for deriving the imaging position distance is smaller in the second derivation process than in the first derivation process. In addition, when the actual irradiation position of the laser light coincides with the position in the real space corresponding to the identifiable pixel position, the derivation accuracy of the imaging position distance by the second derivation process is equal to the imaging position distance by the first derivation process. It becomes higher than the derivation accuracy.
 次に、3次元座標導出ボタン90Gがオンされた場合にCPU100が3次元座標導出プログラム108Aを実行することにより3次元座標導出機能を働かせることで実現される3次元座標導出処理について図34を参照して説明する Next, with reference to FIG. 34, the three-dimensional coordinate derivation process realized by the CPU 100 executing the three-dimensional coordinate derivation program 108A when the three-dimensional coordinate derivation button 90G is turned on to activate the three-dimensional coordinate derivation function will be described with reference to FIG. To explain
In the three-dimensional coordinate derivation process shown in FIG. 34, first, in step 350, the derivation unit 111A determines whether the imaging position distance has already been derived by the process of step 330R included in the first derivation process or by the process of step 336N included in the second derivation process. If, in step 350, the imaging position distance has been derived by neither of these processes, the determination is negative and the process proceeds to step 358. If, in step 350, the imaging position distance has already been derived by either of these processes, the determination is affirmative and the process proceeds to step 352.
In step 352, the derivation unit 111A determines whether a condition for starting the derivation of the designated pixel three-dimensional coordinates (hereinafter referred to as the "derivation start condition") is satisfied. Examples of the derivation start condition include a condition that an instruction to start the derivation of the designated pixel three-dimensional coordinates has been accepted by the touch panel 88, and a condition that the imaging position distance has been displayed on the display unit 86.
If the derivation start condition is not satisfied in step 352, the determination is negative and the process proceeds to step 358. If the derivation start condition is satisfied in step 352, the determination is affirmative and the process proceeds to step 354.
In step 354, the derivation unit 111A derives the designated pixel three-dimensional coordinates based on the pixel-of-interest coordinates, the corresponding pixel-of-interest coordinates, the imaging position distance, the focal length, the dimensions of the imaging pixel 60A1, and formula (2), and then proceeds to step 356.
Here, the pixel-of-interest coordinates used in step 354 are those acquired in step 330D of the first derivation process or in step 336D of the second derivation process. The corresponding pixel-of-interest coordinates used in step 354 are those acquired in step 330M of the first derivation process or in step 336L of the second derivation process. The imaging position distance used in step 354 is the one derived in step 330R of the first derivation process or in step 336N of the second derivation process. Further, the focal length used in step 354 is the one derived in step 330L of the first derivation process or in step 336K of the second derivation process.
Accordingly, in step 354, the designated pixel three-dimensional coordinates are derived by substituting the pixel-of-interest coordinates, the corresponding pixel-of-interest coordinates, the imaging position distance, the focal length, and the dimensions of the imaging pixel 60A1 into formula (2).
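Although formula (2) itself is not reproduced in this passage, the substitution performed in step 354 corresponds to parallel-stereo triangulation from the quantities just listed. The following Python sketch illustrates one possible form under that assumption; the function name, the parallel-camera model, and the coordinate conventions are illustrative assumptions, not the formula of the specification itself.

# Hedged sketch of step 354: designated pixel 3D coordinates from the
# pixel-of-interest coordinates, the corresponding pixel-of-interest
# coordinates, the imaging position distance (stereo baseline), the focal
# length, and the imaging pixel dimension. Assumes a parallel-stereo model;
# formula (2) in the specification may differ.

def derive_designated_pixel_3d(u1, v1, u2, baseline_mm,
                               focal_length_mm, pixel_pitch_mm):
    """u1, v1: pixel-of-interest coordinates in the first captured image;
    u2: corresponding x coordinate in the second captured image."""
    disparity_px = u1 - u2
    if disparity_px == 0:
        raise ValueError("zero disparity: point at infinity")
    # Depth from similar triangles: Z = B * f / (disparity * pixel pitch)
    z = baseline_mm * focal_length_mm / (disparity_px * pixel_pitch_mm)
    # Back-project to X and Y with the pinhole model
    x = u1 * pixel_pitch_mm * z / focal_length_mm
    y = v1 * pixel_pitch_mm * z / focal_length_mm
    return (x, y, z)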
In step 356, the derivation unit 111A causes the display unit 86 to display the designated pixel three-dimensional coordinates derived in step 354 superimposed on the second captured image, as shown in FIG. 35 as an example. Also in step 356, the derivation unit 111A stores the designated pixel three-dimensional coordinates derived in step 354 in a predetermined storage area, and then proceeds to step 358. Examples of the predetermined storage area include a storage area of the primary storage unit 102 and a storage area of the secondary storage unit 104.
In the example shown in FIG. 35, (20161, 50134, 136892) corresponds to the designated pixel three-dimensional coordinates derived in step 354, and the designated pixel three-dimensional coordinates are displayed close to the pixel of interest 126. The pixel of interest 126 may be highlighted so as to be distinguishable from the other pixels.
In step 358, the derivation unit 111A determines whether a condition for ending the three-dimensional coordinate derivation process is satisfied. One example of this condition is that an instruction to end the three-dimensional coordinate derivation process has been accepted by the touch panel 88. Another example is that a second predetermined time has elapsed after the determination in step 350 was negated, without the determination in step 350 being affirmed. The second predetermined time is, for example, 30 minutes.
If the condition for ending the three-dimensional coordinate derivation process is not satisfied in step 358, the determination is negative and the process returns to step 350. If the condition is satisfied in step 358, the determination is affirmative and the three-dimensional coordinate derivation process ends.
As described above, in the distance measuring device 10A, the actually measured distance measured by the distance measuring unit 12 and the distance measurement control unit 68 is acquired by the acquisition unit 110A. The derivation unit 111A then uses the focal length derivation table 109A, which indicates the correspondence between the actually measured distance and the focal length, to derive the focal length corresponding to the actually measured distance acquired by the acquisition unit 110A.
Therefore, according to the distance measuring device 10A, the focal length can be derived with high accuracy and without burdening the user, compared with the case where the user is made to input the length of a reference image included in the captured image in order to increase the accuracy of the focal length.
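As a concrete illustration of the lookup in the focal length derivation table 109A, the following Python sketch performs a one-dimensional table lookup with linear interpolation between derivation distances. The (distance, focal length) pairs are placeholders; the actual table contents are, per the specification, obtained from tests on the real device and/or computer simulation.

# Hedged sketch of the table-109A lookup: actually measured distance in,
# focal length out, with linear interpolation between derivation distances.
# All values below are placeholders, not the actual table contents.

DERIVATION_TABLE_109A = [  # (derivation distance [m], focal length [mm])
    (1.0, 18.2), (2.0, 18.4), (3.0, 18.5),
    (5.0, 18.6), (10.0, 18.7), (30.0, 18.8),
]

def derive_focal_length(measured_distance_m):
    table = DERIVATION_TABLE_109A
    # Exact match with a derivation distance
    for d, f in table:
        if d == measured_distance_m:
            return f
    # Otherwise interpolate linearly between the two bracketing entries
    for (d0, f0), (d1, f1) in zip(table, table[1:]):
        if d0 < measured_distance_m < d1:
            t = (measured_distance_m - d0) / (d1 - d0)
            return f0 + t * (f1 - f0)
    # Outside the table range: clamp to the nearest endpoint
    return table[0][1] if measured_distance_m < table[0][0] else table[-1][1]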
Further, in the distance measuring device 10A, the acquisition unit 110A acquires the first captured image, the second captured image, and the second actually measured distance (steps 330J and 336I). The derivation unit 111A uses the focal length derivation table 109A to derive the focal length corresponding to the second actually measured distance acquired by the acquisition unit 110A (steps 330L and 336K). The derivation unit 111A then derives the imaging position distance based on the derived focal length (steps 330R and 336N).
Therefore, according to the distance measuring device 10A, the imaging position distance can be derived with high accuracy and without burdening the user, compared with the case where the user is made to input the length of a reference image included in the captured image in order to increase the accuracy of the focal length.
Further, in the distance measuring device 10A, the actually measured distance is acquired by the acquisition unit 110A (step 206). The derivation unit 111A uses the focal length derivation table 109A to derive the focal length corresponding to the actually measured distance acquired by the acquisition unit 110A (step 209). Then, based on the derived focal length, the interval between the two designated pixels, and the actually measured distance acquired by the acquisition unit 110A, the derivation unit 111A derives the length of the area in real space corresponding to the interval between the two designated pixels (step 242).
Therefore, according to the distance measuring device 10A, the length of the area in real space corresponding to the interval between the two designated pixels can be derived with high accuracy and without burdening the user, compared with the case where the user is made to input the length of a reference image included in the captured image in order to increase the accuracy of the focal length.
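The derivation in step 242 can be illustrated with the pinhole projection relation, as in the following Python sketch. The specific formula the specification uses for this dimension derivation is not reproduced in this passage, so the relation L = d * n * p / f below is an assumed standard form.

# Hedged sketch of step 242: length in real space of the area corresponding
# to the interval between two designated pixels. Assumes the pinhole relation
# L = d * n * p / f; the exact formula in the specification may differ.

def derive_real_space_length(pixel_interval_px, measured_distance_mm,
                             focal_length_mm, pixel_pitch_mm):
    """pixel_interval_px: interval between the two designated pixels;
    pixel_pitch_mm: dimension of the imaging pixel 60A1."""
    return (measured_distance_mm * pixel_interval_px * pixel_pitch_mm
            / focal_length_mm)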
Further, in the distance measuring device 10A, the first derivation process and the second derivation process are selectively executed. The first derivation process derives the imaging position distance with higher accuracy than the second derivation process when the actual irradiation position of the laser light is a position in real space corresponding to a pixel position different from the pixel positions that can be specified at mutually corresponding positions in the first captured image and the second captured image. The second derivation process derives the imaging position distance with higher accuracy than the first derivation process when the actual irradiation position of the laser light is a position in real space corresponding to a pixel position that can be specified at mutually corresponding positions in the first captured image and the second captured image. In the distance measuring device 10A, when the position of the pixel specified by the irradiation position pixel coordinates derived by the derivation unit 111A is the specifiable pixel position, the first derivation process and the second derivation process are executed selectively in accordance with an instruction accepted by the touch panel 88. Therefore, according to the distance measuring device 10A, the imaging position distance can be derived with higher accuracy than when the imaging position distance is derived by only one type of derivation process regardless of the irradiation position of the laser light.
Further, in the distance measuring device 10A, the first derivation process is executed when the position of the pixel specified by the irradiation position pixel coordinates derived by the derivation unit 111A is a pixel position different from the specifiable pixel position. Therefore, according to the distance measuring device 10A, the imaging position distance can be derived with higher accuracy than when, in that case, it is derived by a derivation process other than the first derivation process.
Further, in the distance measuring device 10A, the second derivation process derives the imaging position distance based on a plurality of parameters fewer in number than the parameters used in the derivation of the imaging position distance by the first derivation process. Therefore, according to the distance measuring device 10A, the imaging position distance can be derived with a lower load than when the imaging position distance is derived by the first derivation process alone regardless of the irradiation position of the laser light.
Further, in the distance measuring device 10A, when the position of the pixel specified by the first irradiation position pixel coordinates derived by the derivation unit 111A is the specifiable pixel position, the coincidence message 137A is displayed on the display unit 86. Therefore, according to the distance measuring device 10A, the user can be made to recognize that the position of the pixel specified by the first irradiation position pixel coordinates derived by the derivation unit 111A is the specifiable pixel position before selecting the first derivation process or the second derivation process.
Further, in the distance measuring device 10A, when the position of the pixel specified by the first irradiation position pixel coordinates derived by the derivation unit 111A is a pixel position different from the specifiable pixel position, the mismatch message 137B is displayed on the display unit 86. Therefore, according to the distance measuring device 10A, the user can be made to recognize that the position of the pixel specified by the first irradiation position pixel coordinates derived by the derivation unit 111A is a pixel position different from the specifiable pixel position before selecting between the first derivation process and the second derivation process.
Further, in the distance measuring device 10A, the designated pixel three-dimensional coordinates are derived based on the imaging position distance derived by the imaging position distance derivation process (see FIG. 34). Therefore, according to the distance measuring device 10A, the designated pixel three-dimensional coordinates can be derived with higher accuracy than when the imaging position distance is derived by only one type of derivation process regardless of the irradiation position of the laser light.
Further, in the distance measuring device 10A, the designated pixel three-dimensional coordinates are defined based on the pixel-of-interest coordinates, the corresponding pixel-of-interest coordinates, the imaging position distance, the focal length, and the dimensions of the imaging pixel 60A1 (see formula (2)). Therefore, according to the distance measuring device 10A, the designated pixel three-dimensional coordinates can be derived with higher accuracy than when they are not defined based on these quantities.
Further, in the distance measuring device 10A, the derivation unit 111A derives the orientation of the plane defined by the plane equation shown in formula (7), based on the three feature pixel coordinates, the corresponding feature pixel coordinates, the focal length, and the dimensions of the imaging pixel 60A1 (step 330P). The derivation unit 111A then determines the plane equation shown in formula (7) based on the orientation of the plane and the irradiation position real-space coordinates derived in step 312 (step 330Q). The derivation unit 111A then derives the imaging position distance based on the determined plane equation and the feature pixel three-dimensional coordinates (for example, the first feature pixel three-dimensional coordinates) (step 330R). Therefore, according to the distance measuring device 10A, the imaging position distance can be derived with higher accuracy than when, in the case where the position of the pixel specified by the irradiation position pixel coordinates derived by the derivation unit 111A is a pixel position different from the specifiable pixel position, the imaging position distance is derived without using the plane equation.
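Assuming that formula (7) is a plane equation of the conventional form ax + by + cz + d = 0, steps 330P and 330Q can be illustrated by the following Python sketch, which fixes the orientation of the plane from three triangulated feature points and then determines the offset from the irradiation position real-space coordinates. The function names and the use of a cross product here are illustrative assumptions.

# Hedged sketch of steps 330P-330Q: derive the plane orientation (a, b, c)
# from three feature pixel 3D coordinates, then fix the offset d from the
# irradiation position real-space coordinates of step 312. Assumes formula (7)
# has the form ax + by + cz + d = 0.

def cross(u, v):
    return (u[1] * v[2] - u[2] * v[1],
            u[2] * v[0] - u[0] * v[2],
            u[0] * v[1] - u[1] * v[0])

def determine_plane(p1, p2, p3, irradiation_point):
    """p1..p3: feature pixel 3D coordinates; irradiation_point: irradiation
    position real-space coordinates."""
    e1 = tuple(b - a for a, b in zip(p1, p2))   # edge p1 -> p2
    e2 = tuple(b - a for a, b in zip(p1, p3))   # edge p1 -> p3
    a, b, c = cross(e1, e2)                     # plane orientation (step 330P)
    x, y, z = irradiation_point
    d = -(a * x + b * y + c * z)                # fix the offset (step 330Q)
    return a, b, c, d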
Further, in the distance measuring device 10A, the acquisition unit 110A acquires the three feature pixel coordinates (step 330E) and the corresponding feature pixel coordinates (step 330N). The derivation unit 111A then derives the imaging position distance based on the three feature pixel coordinates, the corresponding feature pixel coordinates, the irradiation position real-space coordinates, the focal length, and the dimensions of the imaging pixel 60A1 (step 330R). Therefore, according to the distance measuring device 10A, the imaging position distance can be derived from the three feature pixel coordinates and the corresponding feature pixel coordinates with fewer operations than when the user is made to designate the three characteristic pixels in acquiring those coordinates.
Further, in the distance measuring device 10A, pixel designation information is accepted by the touch panel 88, the pixel designated by the accepted pixel designation information is set as the pixel of interest 126, and the pixel-of-interest coordinates are acquired by the acquisition unit 110A (steps 330D and 336D). The acquisition unit 110A also specifies the corresponding pixel of interest, which is the pixel corresponding to the pixel of interest 126, and acquires the corresponding pixel-of-interest coordinates that specify the corresponding pixel of interest (steps 330M and 336L). Therefore, according to the distance measuring device 10A, the designated pixels for both the first captured image and the second captured image can be determined more quickly than when the user designates them in both images.
Further, the distance measuring device 10A includes the distance measuring unit 12 and the distance measurement control unit 68, and the first actually measured distance and the second actually measured distance measured by the distance measuring unit 12 and the distance measurement control unit 68 are acquired by the acquisition unit 110A. Therefore, according to the distance measuring device 10A, the first actually measured distance and the second actually measured distance can be used for deriving the irradiation position real-space coordinates and the irradiation position pixel coordinates.
Further, the distance measuring device 10A includes the imaging device 14, and the first captured image and the second captured image obtained by imaging the subject with the imaging device 14 are acquired by the acquisition unit 110A. Therefore, according to the distance measuring device 10A, the first captured image and the second captured image obtained by imaging the subject with the imaging device 14 can be used for deriving the imaging position distance.
Furthermore, in the distance measuring device 10A, the derivation result of the derivation unit 111A is displayed by the display unit 86 (see FIGS. 33 and 35). Therefore, according to the distance measuring device 10A, the user can recognize the derivation result of the derivation unit 111A more easily than when the derivation result is not displayed by the display unit 86.
In the first embodiment, the case where the focal length is derived using the focal length derivation table 109A has been described; however, the technology of the present disclosure is not limited to this, and the focal length may be derived using, for example, the following formula (12). In formula (12), "f" is the focal length. "f_zoom" is a nominal focal length predetermined in accordance with the position of the zoom lens 52 in the optical axis direction within the lens unit 16, and is treated as a fixed value. The position of the zoom lens 52 in the optical axis direction is defined as the distance from the imaging surface 60B toward the subject side, in millimeters. "d" is the actually measured distance.
[Formula (12): equation image (JPOXMLDOC01-appb-M000012), not reproduced in this text]
In the first embodiment, the three feature pixel coordinates have been exemplified, but the technology of the present disclosure is not limited to this. For example, instead of the three feature pixel coordinates, two-dimensional coordinates specifying each of a predetermined number, four or more, of characteristic pixels may be adopted.
In the first embodiment, the case where the pixel-of-interest coordinates are acquired from coordinates on the first captured image and the corresponding pixel-of-interest coordinates are acquired from coordinates on the second captured image has been exemplified, but the technology of the present disclosure is not limited to this. For example, the pixel-of-interest coordinates may be acquired from coordinates on the second captured image, and the corresponding pixel-of-interest coordinates from coordinates on the first captured image.
Likewise, in the first embodiment, the case where the three feature pixel coordinates are acquired from coordinates on the first captured image and the corresponding feature pixel coordinates are acquired from coordinates on the second captured image has been exemplified, but the technology of the present disclosure is not limited to this. For example, the three feature pixel coordinates may be acquired from coordinates on the second captured image, and the corresponding feature pixel coordinates from coordinates on the first captured image.
In the first embodiment, the case where two-dimensional coordinates specifying each of the first pixel 130, the second pixel 132, and the third pixel 134 are acquired by the acquisition unit 110A as the three feature pixel coordinates has been exemplified, but the technology of the present disclosure is not limited to this. For example, as shown in FIG. 36, two-dimensional coordinates specifying each of a first pixel 130A, a second pixel 132A, and a third pixel 134A may be acquired by the acquisition unit 110A. The first pixel 130A, the second pixel 132A, and the third pixel 134A are the three pixels for which the area enclosed in the outer wall surface image 128 is maximized. The number of pixels is not limited to three; any predetermined number of pixels, three or more, that maximizes the area enclosed in the outer wall surface image 128 may be used.
Thus, in the example shown in FIG. 36, the three pixels that maximize the area enclosed in the outer wall surface image 128 are specified as the characteristic three pixels, and the two-dimensional coordinates of those three pixels are acquired by the acquisition unit 110A as the three feature pixel coordinates. The corresponding feature pixel coordinates corresponding to the three feature pixel coordinates are also acquired by the acquisition unit 110A. Therefore, according to the distance measuring device 10A, the imaging position distance can be derived with higher accuracy than when three feature pixel coordinates and corresponding feature pixel coordinates specifying, as the characteristic three pixels, a plurality of pixels whose enclosed area is not maximal are acquired.
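One way to select such a maximal-area triple from candidate feature pixels is an exhaustive search over combinations, as in the following Python sketch. The brute-force search and the candidate list are illustrative assumptions; the specification does not prescribe a particular selection algorithm here.

# Hedged sketch: choosing the three characteristic pixels that enclose the
# maximum area, as in the FIG. 36 example. Brute force over candidate feature
# pixels is an illustrative choice, not a prescription of the specification.
from itertools import combinations

def triangle_area(p, q, r):
    # Half the absolute value of the 2D cross product of two edges
    return abs((q[0] - p[0]) * (r[1] - p[1])
               - (r[0] - p[0]) * (q[1] - p[1])) / 2.0

def max_area_triple(candidate_pixels):
    """candidate_pixels: (x, y) coordinates of candidate feature pixels in
    the outer wall surface image 128."""
    return max(combinations(candidate_pixels, 3),
               key=lambda t: triangle_area(*t))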
In the first embodiment, the case where the imaging position distance derivation process is realized when the three-dimensional coordinate derivation button 90G is turned on has been described, but the technology of the present disclosure is not limited to this. For example, the imaging position distance derivation process may be executed when the imaging position distance derivation button 90F is turned on. The imaging position distance derivation process described in the first embodiment is an example for the case where the derivation of three-dimensional coordinates is the ultimate goal.
For this reason, the pixel-of-interest coordinates and the corresponding pixel-of-interest coordinates required for deriving the three-dimensional coordinates are acquired in the imaging position distance derivation process; when only the derivation of the imaging position distance is intended, however, acquiring them in that process is unnecessary. Accordingly, the CPU 100 may derive the imaging position distance without acquiring the pixel-of-interest coordinates and the corresponding pixel-of-interest coordinates when the imaging position distance derivation button 90F is turned on, and may then acquire the pixel-of-interest coordinates and the corresponding pixel-of-interest coordinates when the three-dimensional coordinate derivation button 90G is turned on. In this case, the CPU 100 may, for example, acquire the pixel-of-interest coordinates and the corresponding pixel-of-interest coordinates between the process of step 352 and the process of step 354 of the three-dimensional coordinate derivation process shown in FIG. 34, and use the acquired coordinates in the process of step 354.
In the first embodiment, the case where the first derivation process and the second derivation process are selectively executed has been described, but the technology of the present disclosure is not limited to this. For example, the derivation unit 111A may forcibly execute the second derivation process when the determination in step 318 is affirmative, and may forcibly execute the first derivation process when the determination in step 318 is negative.
[Second Embodiment]
In the first embodiment, the case where the focal length is derived using the focal length derivation table 109A has been exemplified; in the second embodiment, a case where the focal length is derived using a focal length derivation table 109B (see FIGS. 6 and 37) will be described. In the second embodiment, the same components as those described in the first embodiment are given the same reference numerals, and their description is omitted.
As shown in FIG. 6 as an example, the distance measuring device 10B according to the second embodiment differs from the distance measuring device 10A in that the secondary storage unit 104 stores a dimension derivation program 105B instead of the dimension derivation program 105A, an imaging position distance derivation program 106B instead of the imaging position distance derivation program 106A, and the focal length derivation table 109B instead of the focal length derivation table 109A.
The CPU 100 operates as an acquisition unit 110B and a derivation unit 111B, as shown in FIG. 8 as an example, by executing at least one of the dimension derivation program 105B and the imaging position distance derivation program 106B. The acquisition unit 110B corresponds to the acquisition unit 110A described in the first embodiment, and the derivation unit 111B corresponds to the derivation unit 111A described in the first embodiment. In the second embodiment, for convenience of explanation, only the differences of the acquisition unit 110B and the derivation unit 111B from the acquisition unit 110A and the derivation unit 111A described in the first embodiment will be described.
As an example, as shown in FIG. 37, the focal length derivation table 109B is a table showing the correspondence between the derivation distance, the position of the zoom lens 52 in the optical axis direction, and the focal length. Here, the optical axis direction of the zoom lens 52 refers to, for example, the direction of the optical axis L2. The position of the zoom lens 52 in the optical axis direction is defined as the distance from the imaging surface 60B toward the subject side, in millimeters. In the following, for convenience of explanation, the "position of the zoom lens 52 in the optical axis direction" stored in the focal length derivation table 109B is simply referred to as the "derivation position".
As an example, as shown in FIG. 37, the focal length derivation table 109B defines a plurality of derivation distances, and for each derivation distance a focal length is associated with each of a plurality of derivation positions. In the example shown in FIG. 37, a different focal length is associated with each of the derivation positions of 18 millimeters, 23 millimeters, 35 millimeters, and 55 millimeters for the derivation distance of 1 meter. Similarly, a different focal length is associated with each derivation position for each of the derivation distances of 2 meters, 3 meters, 5 meters, 10 meters, 30 meters, and infinity.
The focal length derivation table 109B is a table derived from the result of at least one of, for example, a test using the actual distance measuring device 10B and a computer simulation based on the design specifications of the distance measuring device 10B.
Next, as the operation of the parts of the distance measuring device 10B according to the technology of the present disclosure, the dimension derivation process realized by the CPU 100 executing the dimension derivation program 105B to operate the dimension derivation function will be described with reference to FIGS. 15 and 38. The same steps as those included in the dimension derivation process described in the first embodiment (see FIGS. 14 and 15) are given the same step numbers, and their description is omitted.
The dimension derivation process according to the second embodiment (see FIG. 38) differs from the dimension derivation process described in the first embodiment (see FIG. 14) in that it has the process of step 370 instead of the process of step 206, and the process of step 372 instead of the process of step 209.
The process of step 370 shown in FIG. 38 differs from the process of step 206 shown in FIG. 14 in that the acquisition unit 110B additionally acquires position information. Here, the position information refers to information indicating the position of the zoom lens 52 within the lens unit 16 in the optical axis direction (the direction of the optical axis L2).
In the present embodiment, information indicating the current position of the zoom lens 52 in the optical axis direction within the lens unit 16 is adopted as an example of the position information, but the technology of the present disclosure is not limited to this. For example, information indicating the position of the zoom lens 52 in the optical axis direction within the lens unit 16 at the imaging timing several frames (for example, two frames) earlier can also be used as the position information.
The process of step 372 shown in FIG. 38 differs from the process of step 209 shown in FIG. 14 in that the derivation unit 111B derives the focal length using the focal length derivation table 109B instead of the focal length derivation table 109A.
That is, in step 372, the derivation unit 111B derives the focal length corresponding to the actually measured distance and the position information using the focal length derivation table 109B. The actually measured distance and the position information used in the process of step 372 are those acquired by the acquisition unit 110B through the execution of the process of step 370.
Here, the focal length corresponding to the position information refers to the focal length associated with the derivation position that matches the position of the zoom lens 52 in the optical axis direction indicated by the position information, among the plurality of derivation positions included in the focal length derivation table 109B.
When the focal length derivation table 109B contains no derivation position that matches the position of the zoom lens 52 in the optical axis direction indicated by the position information, the derivation unit 111B derives the focal length from the derivation positions of the focal length derivation table 109B by the interpolation method described above.
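As an illustration of this two-key lookup, the following Python sketch interpolates bilinearly over the derivation distance and the derivation position. The grid values are placeholders, and bilinear interpolation is one plausible reading of the interpolation method referred to above, not a prescription of the specification.

# Hedged sketch of step 372: focal length from table 109B, keyed by
# (derivation distance, derivation position), with bilinear interpolation.
# All grid values below are placeholders, not the actual table contents.

DISTANCES_M = [1.0, 2.0, 3.0, 5.0, 10.0, 30.0]   # derivation distances
POSITIONS_MM = [18.0, 23.0, 35.0, 55.0]          # derivation positions
FOCAL_MM = [[18.2, 23.3, 35.5, 55.9],            # one row per distance
            [18.1, 23.2, 35.4, 55.7],
            [18.1, 23.1, 35.3, 55.6],
            [18.0, 23.1, 35.2, 55.5],
            [18.0, 23.0, 35.1, 55.4],
            [18.0, 23.0, 35.0, 55.3]]

def _bracket(axis, v):
    # Return the lower index and the interpolation weight for value v
    for i in range(len(axis) - 1):
        if axis[i] <= v <= axis[i + 1]:
            return i, (v - axis[i]) / (axis[i + 1] - axis[i])
    raise ValueError("value outside table range")

def derive_focal_length_109b(measured_distance_m, zoom_position_mm):
    i, s = _bracket(DISTANCES_M, measured_distance_m)
    j, t = _bracket(POSITIONS_MM, zoom_position_mm)
    f00, f01 = FOCAL_MM[i][j], FOCAL_MM[i][j + 1]
    f10, f11 = FOCAL_MM[i + 1][j], FOCAL_MM[i + 1][j + 1]
    return (f00 * (1 - s) * (1 - t) + f01 * (1 - s) * t
            + f10 * s * (1 - t) + f11 * s * t)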
Next, the imaging position distance derivation process realized by the CPU 100 executing the imaging position distance derivation program 106B to operate the imaging position distance derivation function will be described with reference to FIGS. 21, 22, 24, 39, and 40. The same steps as those included in the imaging position distance derivation process described in the first embodiment (see FIGS. 21 to 25) are given the same step numbers, and their description is omitted.
The imaging position distance derivation process according to the second embodiment differs from the imaging position distance derivation process described in the first embodiment in that it has the process of step 380 instead of the process of step 330J, the process of step 382 instead of the process of step 330L, the process of step 390 instead of the process of step 336I, and the process of step 392 instead of the process of step 336K.
The process of step 380 shown in FIG. 39 differs from the process of step 330J shown in FIG. 23 in that the acquisition unit 110B additionally acquires position information.
The process of step 382 shown in FIG. 39 differs from the process of step 330L shown in FIG. 23 in that the derivation unit 111B derives the focal length using the focal length derivation table 109B instead of the focal length derivation table 109A.
That is, in step 382, the derivation unit 111B derives the focal length corresponding to the second actually measured distance and the position information using the focal length derivation table 109B. The second actually measured distance and the position information used in the process of step 382 are those acquired by the acquisition unit 110B through the execution of the process of step 380. In step 382, the focal length is derived by the same derivation method as in the process of step 372 shown in FIG. 38.
The process of step 390 shown in FIG. 40 differs from the process of step 336I shown in FIG. 25 in that the acquisition unit 110B additionally acquires position information.
The process of step 392 shown in FIG. 40 differs from the process of step 336K shown in FIG. 25 in that the derivation unit 111B derives the focal length using the focal length derivation table 109B instead of the focal length derivation table 109A.
That is, in step 392, the derivation unit 111B derives the focal length corresponding to the second actually measured distance and the position information using the focal length derivation table 109B.
The second actually measured distance and the position information used in the process of step 392 are those acquired by the acquisition unit 110B through the execution of the process of step 390. In step 392, the focal length is derived by the same derivation method as in the process of step 372.
As described above, in the distance measuring device 10B, the focal length derivation table 109B indicating the correspondence between the actually measured distance, the position of the zoom lens 52 in the optical axis direction, and the focal length is defined. The second actually measured distance and the position information are acquired by the acquisition unit 110B (step 380). The derivation unit 111B then uses the focal length derivation table 109B to derive the focal length corresponding to the second actually measured distance and the position information acquired by the acquisition unit 110B (step 382).
Therefore, according to the distance measuring device 10B, even if the position of the zoom lens 52 in the optical axis direction changes, the focal length can be derived with high accuracy and without burdening the user, compared with the case where the user is made to input the length of a reference image included in the captured image in order to increase the accuracy of the focal length.
In the second embodiment, the case where the focal length corresponding to the second actually measured distance and the position information is derived by executing the process of step 382 (392) included in the imaging position distance derivation process has been described, but the technology of the present disclosure is not limited to this.
For example, in the imaging position distance derivation process shown in FIG. 21, a first process (not shown) may be applied instead of the process of step 306, and a second process (not shown) may be applied instead of the process of step 310. Here, the first process refers to a process in which the acquisition unit 110B acquires the first actually measured distance, the first captured image signal, and the position information. The second process refers to a process in which the derivation unit 111B derives the focal length corresponding to the first actually measured distance and the position information using the focal length derivation table 109B. The first actually measured distance and the position information used in the second process are those acquired by the acquisition unit 110B through the execution of the first process.
In the second embodiment, the case where the focal length is derived using the focal length derivation table 109B has been described, but the technology of the present disclosure is not limited to this; for example, the focal length may be derived using the above-described formula (12). In this case, in formula (12), the position of the zoom lens 52 in the optical axis direction indicated by the position information is adopted as "f_zoom".
In the second embodiment, the focal length derivation table 109B indicating the correspondence between the derivation distance, the derivation position, and the focal length has been exemplified, but the technology of the present disclosure is not limited to this. For example, the focal length may be derived using a focal length derivation table 109C shown in FIGS. 41A to 41E.
The focal length derivation table 109C shown in FIGS. 41A to 41E is a table showing the correspondence between the derivation distance, the derivation position, the derivation temperature, the derivation focus lens attitude, the derivation zoom lens attitude, and the focal length. That is, in the focal length derivation table 109C, the derivation distance, the derivation position, the derivation temperature, the derivation focus lens attitude, the derivation zoom lens attitude, and the focal length are associated with one another.
Here, the derivation temperature refers to the temperature of a region that affects imaging by the imaging device 14, such as the temperature of the outside air, the temperature of the space inside the lens unit 16, or the temperature of the imaging element 60. The unit of the derivation temperature is degrees Celsius.
The derivation focus lens attitude refers to the attitude of the focus lens 50 with respect to the vertical direction. The derivation focus lens attitude is one of the parameters in the focal length derivation table 109C because the focus lens moving mechanism 53 can move under the weight of the focus lens 50 itself, independently of the power of the motor 57, so that the focus lens 50 moves along the optical axis L2 toward the imaging surface 60B side or the subject side and thereby affects the focal length.
In the present embodiment, the attitude of the focus lens 50 with respect to the vertical direction can be defined by the angle formed between the vertical direction and the optical axis of the focus lens 50.
The derivation zoom lens attitude refers to the attitude of the zoom lens 52 with respect to the vertical direction. The derivation zoom lens attitude is one of the parameters in the focal length derivation table 109C because the zoom lens moving mechanism 54 can move under the weight of the zoom lens 52 itself, independently of the power of the motor 56, so that the zoom lens 52 moves along the optical axis L2 toward the imaging surface 60B side or the subject side and thereby affects the focal length.
In the present embodiment, the attitude of the zoom lens 52 with respect to the vertical direction can be defined by the angle formed between the vertical direction and the optical axis of the zoom lens 52.
In the focal length derivation table 109C, a plurality of derivation positions are associated with each of the plurality of derivation distances, a plurality of derivation temperatures are associated with each of the plurality of derivation positions, a plurality of derivation focus lens attitudes are associated with each of the plurality of derivation temperatures, and a plurality of derivation zoom lens attitudes are associated with each of the plurality of derivation focus lens attitudes. A focal length is then individually associated with each of all the defined derivation zoom lens attitudes.
The focal length derivation table 109C is a table derived from the result of at least one of, for example, a test using the actual distance measuring device 10B and a computer simulation based on the design specifications of the distance measuring device 10B.
When deriving the focal length using the focal length derivation table 109C, the CPU 100 additionally acquires temperature information, focus lens attitude information, and zoom lens attitude information, in addition to the actually measured distance and the position information.
The temperature information refers to temperature information of a region that affects imaging by the imaging device 14. In the present embodiment, information indicating the current temperature of such a region is adopted as an example of the temperature information, but the technology of the present disclosure is not limited to this. For example, information indicating the temperature of the region affecting imaging by the imaging device 14 at the imaging timing several frames (for example, two frames) earlier can also be used as the temperature information.
The temperature information may be a temperature detected by a sensor (not shown), or information accepted by the touch panel 88 as temperature information. Further, when the distance measuring device 10B can communicate with an external device such as a server via the Internet (not shown), the temperature information may be a temperature obtained from weather information provided from the external device via the Internet.
The focus lens attitude information refers to information indicating the attitude of the focus lens 50 with respect to the vertical direction. In the present embodiment, information indicating the current attitude of the focus lens 50 with respect to the vertical direction is adopted as an example of the focus lens attitude information, but the technology of the present disclosure is not limited to this. For example, information indicating the attitude of the focus lens 50 with respect to the vertical direction at the imaging timing several frames (for example, two frames) earlier can also be used as the focus lens attitude information.
The focus lens attitude information may be, for example, a detection result in which the attitude of the focus lens 50 with respect to the vertical direction has been detected by a sensor (not shown), or information accepted by the touch panel 88 as focus lens attitude information.
The zoom lens attitude information refers to information indicating the attitude of the zoom lens 52 with respect to the vertical direction. In the present embodiment, information indicating the current attitude of the zoom lens 52 with respect to the vertical direction is adopted as an example of the zoom lens attitude information, but the technology of the present disclosure is not limited to this. For example, information indicating the attitude of the zoom lens 52 with respect to the vertical direction at the imaging timing several frames (for example, two frames) earlier can also be used as the zoom lens attitude information.
The zoom lens attitude information may be, for example, a detection result in which the attitude of the zoom lens 52 with respect to the vertical direction has been detected by a sensor (not shown), or information accepted by the touch panel 88 as zoom lens attitude information.
The CPU 100 acquires the actually measured distance, the position information, the temperature information, the focus lens attitude information, and the zoom lens attitude information. Then, using the focal length derivation table 109C, the CPU 100 derives the focal length corresponding to the acquired actually measured distance, position information, temperature information, focus lens attitude information, and zoom lens attitude information. When no parameters matching the actually measured distance, the position information, the temperature information, the focus lens attitude information, and the zoom lens attitude information exist in the focal length derivation table 109C, the focal length may be derived by the interpolation method as described in each of the above embodiments.
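A minimal sketch of such a multi-key lookup is shown below in Python, representing table 109C as a nested mapping keyed in the order described above. The nearest-neighbor fallback used here in place of full multi-dimensional interpolation, and all table values, are illustrative assumptions.

# Hedged sketch of the table-109C lookup: keys are (derivation distance,
# derivation position, derivation temperature, derivation focus lens
# attitude, derivation zoom lens attitude). Nearest-neighbor selection per
# key is a simplification of the interpolation described in the text; all
# values below are placeholders.

def nearest(axis, v):
    return min(axis, key=lambda a: abs(a - v))

TABLE_109C = {
    1.0: {18.0: {20.0: {0.0: {0.0: 18.2, 90.0: 18.3},
                        90.0: {0.0: 18.3, 90.0: 18.4}}}},
    # ... further derivation distances, positions, temperatures, attitudes
}

def derive_focal_length_109c(distance_m, position_mm, temperature_c,
                             focus_attitude_deg, zoom_attitude_deg):
    node = TABLE_109C
    for key in (distance_m, position_mm, temperature_c,
                focus_attitude_deg, zoom_attitude_deg):
        node = node[nearest(list(node), key)]
    return node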
 このように、測距装置10Bによって焦点距離導出テーブル109Cが用いられることで、測距装置10Bは、焦点距離の高精度化を図るために、撮像画像に含まれる基準画像の長さをユーザに入力させる場合に比べ、ズームレンズ52の光軸方向の位置が変化し、かつ、撮像装置14による撮像に影響を及ぼす領域の温度が変化し、かつ、フォーカスレンズ50の姿勢が変化し、かつ、ズームレンズ52の姿勢が変化したとしても、手間をかけずに焦点距離を高精度に導出することができる。 As described above, by using the focal length deriving table 109C by the distance measuring device 10B, the distance measuring device 10B allows the user to set the length of the reference image included in the captured image to the user in order to increase the accuracy of the focal length. Compared to the case of inputting, the position of the zoom lens 52 in the optical axis direction changes, the temperature of the region that affects the imaging by the imaging device 14 changes, the attitude of the focus lens 50 changes, and Even if the attitude of the zoom lens 52 changes, the focal length can be derived with high accuracy without taking time and effort.
In the focal length derivation table 109C, the derivation distance, the derivation position, the derivation temperature, the derivation focus lens attitude, the derivation zoom lens attitude, and the focal length are associated with one another, but the technology of the present disclosure is not limited to this. For example, the CPU 100 may derive the focal length using a focal length derivation table in which at least one of the derivation position, the derivation temperature, the derivation focus lens attitude, and the derivation zoom lens attitude is associated with the derivation distance and the focal length.
For example, when the focal length derivation table associates the derivation distance, the derivation position, the derivation temperature, and the focal length, the distance measuring device 10B can derive the focal length with high accuracy, without the time and effort of having the user input the length of the reference image included in the captured image in order to increase the accuracy of the focal length, even if the position of the zoom lens 52 in the optical axis direction changes and the temperature of the region that affects imaging by the imaging device 14 changes.
Also, for example, when the focal length derivation table associates the derivation distance, the derivation position, the derivation temperature, the derivation focus lens attitude, and the focal length, the distance measuring device 10B can likewise derive the focal length with high accuracy, without such user input, even if the position of the zoom lens 52 in the optical axis direction, the temperature of that region, and the attitude of the focus lens 50 change.
Also, for example, when the focal length derivation table associates the derivation distance, the derivation position, the derivation zoom lens attitude, and the focal length, the same holds even if the position of the zoom lens 52 in the optical axis direction and the attitude of the zoom lens 52 change.
Also, for example, when the focal length derivation table associates the derivation distance, the derivation zoom lens attitude, and the focal length, the same holds even if the attitude of the zoom lens 52 changes.
Also, for example, when the focal length derivation table associates the derivation distance, the derivation focus lens attitude, the derivation zoom lens attitude, and the focal length, the same holds even if the attitude of the focus lens 50 and the attitude of the zoom lens 52 change.
Also, for example, when the focal length derivation table associates the derivation distance, the derivation temperature, the derivation focus lens attitude, the derivation zoom lens attitude, and the focal length, the same holds even if the temperature of the region that affects imaging by the imaging device 14, the attitude of the focus lens 50, and the attitude of the zoom lens 52 change.
Also, for example, when the focal length derivation table associates the derivation distance, the derivation temperature, the derivation zoom lens attitude, and the focal length, the same holds even if the temperature of that region and the attitude of the zoom lens 52 change.
Also, for example, when the focal length derivation table associates the derivation distance, the derivation position, the derivation focus lens attitude, and the focal length, the same holds even if the position of the zoom lens 52 in the optical axis direction and the attitude of the focus lens 50 change.
Also, for example, when the focal length derivation table associates the derivation distance, the derivation position, the derivation focus lens attitude, the derivation zoom lens attitude, and the focal length, the same holds even if the position of the zoom lens 52 in the optical axis direction, the attitude of the focus lens 50, and the attitude of the zoom lens 52 change.
Also, for example, when the focal length derivation table associates the derivation distance, the derivation focus lens attitude, and the focal length, the same holds even if the attitude of the focus lens 50 changes.
Also, for example, when the focal length derivation table associates the derivation distance, the derivation temperature, and the focal length, the same holds even if the temperature of the region that affects imaging by the imaging device 14 changes.
Also, for example, when the focal length derivation table associates the derivation distance, the derivation temperature, the derivation focus lens attitude, and the focal length, the same holds even if the temperature of that region and the attitude of the focus lens 50 change.
Instead of the focal length derivation table 109C, the CPU 100 may derive the focal length using a focal length calculation formula that defines the correspondence among the derivation distance, the derivation position, the derivation temperature, the derivation focus lens attitude, the derivation zoom lens attitude, and the focal length. In this case, for example, the dependent variable of the focal length calculation formula is the focal length, and the independent variables are the actually measured distance, the position information, the temperature information, the focus lens attitude information, and the zoom lens attitude information. The independent variables of the focal length calculation formula may also be the actually measured distance together with at least one of the position information, the temperature information, the focus lens attitude information, and the zoom lens attitude information.
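The patent fixes only the roles of the variables in the focal length calculation formula, not its form. The following is a minimal sketch assuming one possible linear-in-parameters form; the coefficient names and the shape of each term are hypothetical stand-ins for calibration values that would be fixed before shipment.

```python
def focal_length_from_formula(measured_distance, position, temperature,
                              focus_attitude, zoom_attitude, coeff):
    # coeff: hypothetical calibration coefficients (fitted before shipment).
    # Dependent variable: focal length. Independent variables: the measured
    # distance plus the lens position, temperature, and attitude parameters.
    f = coeff["base"]
    f += coeff["per_position"] * position
    f += coeff["per_temperature"] * (temperature - coeff["reference_temperature"])
    f += coeff["per_focus_attitude"] * focus_attitude
    f += coeff["per_zoom_attitude"] * zoom_attitude
    f += coeff["per_inverse_distance"] / measured_distance  # distance term
    return f
```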
[Third Embodiment]
In the first embodiment, the case where the three feature pixel coordinates are acquired from the entire outer wall surface image 128 has been described. In the third embodiment, the three feature pixel coordinates are acquired from a part of the outer wall surface image 128. In the third embodiment, the same components as those described in the first embodiment are denoted by the same reference numerals, and their description is omitted.
As shown in FIG. 6 as an example, the distance measuring device 10C according to the third embodiment differs from the distance measuring device 10A in that the secondary storage unit 104 stores an imaging position distance derivation program 106C instead of the imaging position distance derivation program 106A.
The CPU 100 operates as an acquisition unit 110C and a derivation unit 111C by executing the imaging position distance derivation program 106C (see FIG. 8).
The acquisition unit 110C corresponds to the acquisition unit 110A described in the first embodiment, and the derivation unit 111C corresponds to the derivation unit 111A described in the first embodiment. In the third embodiment, for convenience of description, only the portions of the acquisition unit 110C and the derivation unit 111C that differ from the acquisition unit 110A and the derivation unit 111A described in the first embodiment are described.
The acquisition unit 110C performs control to display the first captured image on the display unit 86 and to display the outer wall surface image 128 so as to be distinguishable from the other regions within the display region. The touch panel 88 receives region designation information for designating a coordinate acquisition target region while the outer wall surface image 128 is displayed on the display unit 86. Here, the coordinate acquisition target region refers to a partial closed region of the outer wall surface image 128, and the region designation information refers to information that designates the coordinate acquisition target region.
The acquisition unit 110C acquires the three feature pixel coordinates from the coordinate acquisition target region designated by the region designation information received by the touch panel 88.
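How the characteristic three pixels are located inside the designated region is not prescribed in this section. As one hedged illustration only, a corner detector restricted to the coordinate acquisition target region could serve; the sketch below uses OpenCV, and the region format, thresholds, and the use of goodFeaturesToTrack are assumptions, not the patent's method.

```python
import cv2

def three_feature_pixel_coords(gray_image, region):
    # region: (x0, y0, x1, y1) bounds of the user-designated closed region
    # within the first captured image (an 8-bit grayscale numpy array).
    x0, y0, x1, y1 = region
    roi = gray_image[y0:y1, x0:x1]
    corners = cv2.goodFeaturesToTrack(roi, maxCorners=3,
                                      qualityLevel=0.05, minDistance=10)
    if corners is None or len(corners) < 3:
        return None  # would trigger the re-designation message below
    # Convert ROI-local coordinates back to full-image 2D coordinates.
    return [(int(x) + x0, int(y) + y0) for [[x, y]] in corners]
```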
Next, as an operation of the portion of the distance measuring device 10C according to the technology of the present disclosure, the first derivation process of the imaging position distance derivation process, realized by the CPU 100 executing the imaging position distance derivation program 106C to operate the imaging position distance derivation function, is described with reference to FIG. 42. In the third embodiment, the same steps as those in the flowchart shown in FIG. 22 are denoted by the same step numbers, and their description is omitted.
The flowchart shown in FIG. 42 differs from the flowchart shown in FIG. 22 in that it has steps 400 to 418 instead of step 330E.
In step 400 shown in FIG. 42, the acquisition unit 110C identifies the outer wall surface image 128 (see FIG. 32) from the first captured image, and then proceeds to step 402.
In step 402, the acquisition unit 110C causes the display unit 86 to start displaying the outer wall surface image 128 identified in the process of step 400, emphasized so as to be distinguishable from the other regions in the display region of the first captured image, and then proceeds to step 404.
In step 404, the acquisition unit 110C determines whether the region designation information has been received by the touch panel 88 and the coordinate acquisition target region has been designated by the received region designation information.
In step 404, if the coordinate acquisition target region has not been designated by the region designation information, the determination is negative and the process proceeds to step 406. If the coordinate acquisition target region has been designated by the region designation information, the determination is affirmative and the process proceeds to step 410.
In step 406, the acquisition unit 110C determines whether a condition for ending the first derivation process is satisfied. If the condition is not satisfied, the determination is negative and the process returns to step 404. If the condition is satisfied, the determination is affirmative and the process proceeds to step 408.
In step 408, the acquisition unit 110C executes the same process as the process of step 330C shown in FIG. 22, and then ends the first derivation process.
In step 410, the acquisition unit 110C causes the display unit 86 to end the display of the re-designation message that is displayed on the display unit 86 when the process of step 414, described later, is executed, and then proceeds to step 412.
In step 412, the acquisition unit 110C determines whether the characteristic three pixels described in the first embodiment exist in the coordinate acquisition target region 158 (see FIG. 43) designated by the region designation information received by the touch panel 88.
As an example, as shown in FIG. 43, when the coordinate acquisition target region 158 is designated by the region designation information received by the touch panel 88, the coordinate acquisition target region 158 includes a pattern image 160 showing the pattern 124 (see FIG. 16).
In the example shown in FIG. 44, the coordinate acquisition target region 158 includes a first pixel 162, a second pixel 164, and a third pixel 166 as the characteristic three pixels. In this example, the first pixel 162 is the pixel at the upper left corner of the pattern image 160 in front view, the second pixel 164 is the pixel at the lower left corner of the pattern image 160 in front view, and the third pixel 166 is the pixel at the lower right corner of the pattern image 160 in front view.
In step 412, if the characteristic three pixels do not exist in the coordinate acquisition target region 158 designated by the region designation information received by the touch panel 88, the determination is negative and the process proceeds to step 414. If the characteristic three pixels exist in the coordinate acquisition target region 158, the determination is affirmative and the process proceeds to step 416. The case where an affirmative determination is made in step 412 refers to, for example, the case where the coordinate acquisition target region 158 including the pattern image 160 is designated by the region designation information received by the touch panel 88, as shown in FIG. 43.
In step 414, the acquisition unit 110C causes the display unit 86 to start displaying a re-designation message superimposed on a predetermined region of the first captured image, and then proceeds to step 404. The re-designation message refers to, for example, a message such as "Please designate a closed region including a characteristic pattern, building material, or the like." Although the case where the re-designation message is visibly displayed is illustrated here, the technology of the present disclosure is not limited to this; an audible indication, such as audio output by an audio playback device (not shown), or a permanently visible indication, such as output of printed matter by a printer, may be performed instead of or together with the visible display.
In step 416, the acquisition unit 110C causes the display unit 86 to end the emphasized display of the outer wall surface image 128, and then proceeds to step 418.
In step 418, the acquisition unit 110C acquires the three feature pixel coordinates that identify the characteristic three pixels in the coordinate acquisition target region 158 designated by the region designation information received by the touch panel 88, and then proceeds to step 330F. In the example shown in FIG. 44, by executing the process of step 418, the two-dimensional coordinates identifying each of the first pixel 162, the second pixel 164, and the third pixel 166 are acquired by the acquisition unit 110C as the three feature pixel coordinates.
As described above, in the distance measuring device 10C, the outer wall surface image 128 is displayed on the display unit 86 so as to be distinguishable from the other regions in the first captured image. The region designation information is received by the touch panel 88, and the coordinate acquisition target region, which is a part of the outer wall surface image 128, is designated by the received region designation information. When the coordinate acquisition target region includes the characteristic three pixels, the acquisition unit 110C acquires the three feature pixel coordinates that identify the characteristic three pixels (step 418), and also acquires the corresponding feature pixel coordinates corresponding to the three feature pixel coordinates (step 330N). Therefore, with the distance measuring device 10C, the three feature pixel coordinates and the corresponding feature pixel coordinates can be acquired with a smaller load than when they are acquired from the entire outer wall surface image 128.
[Fourth Embodiment]
In each of the above embodiments, the case where the characteristic three pixels are searched for and identified within a specific image by image analysis has been described. In the fourth embodiment, the characteristic three pixels are designated by an operation on the touch panel 88. In the fourth embodiment, the same components as those described in the above embodiments are denoted by the same reference numerals, and their description is omitted.
The distance measuring device 10D according to the fourth embodiment differs from the distance measuring device 10A in that the secondary storage unit 104 stores an imaging position distance derivation program 106D instead of the imaging position distance derivation program 106A (see FIG. 6).
The CPU 100 operates as an acquisition unit 110D and a derivation unit 111D, as shown in FIG. 8 as an example, by executing the imaging position distance derivation program 106D.
The acquisition unit 110D corresponds to the acquisition unit 110A described in the first embodiment, and the derivation unit 111D corresponds to the derivation unit 111A described in the first embodiment. In the fourth embodiment, for convenience of description, only the portions of the acquisition unit 110D and the derivation unit 111D that differ from the acquisition unit 110A and the derivation unit 111A described in the first embodiment are described.
The touch panel 88 receives the pixel designation information described in the first embodiment when each of the first captured image and the second captured image is displayed on the display unit 86. That is, the touch panel 88 receives the pixel designation information both when the first captured image is displayed and when the second captured image is displayed on the display unit 86.
When the first captured image is displayed on the display unit 86, the acquisition unit 110D acquires first feature pixel coordinates, which are two-dimensional coordinates identifying each of the characteristic three pixels designated by the pixel designation information received by the touch panel 88. The first feature pixel coordinates correspond to the three feature pixel coordinates described in the first embodiment.
When the second captured image is displayed on the display unit 86, the acquisition unit 110D acquires second feature pixel coordinates, which are two-dimensional coordinates identifying each of the characteristic three pixels designated by the pixel designation information received by the touch panel 88. The second feature pixel coordinates correspond to the corresponding feature pixel coordinates described in the first embodiment.
The derivation unit 111D derives the imaging position distance based on the target pixel coordinates, the corresponding target pixel coordinates, the first feature pixel coordinates, the second feature pixel coordinates, the irradiation position real-space coordinates, the focal length, and the dimensions of the imaging pixel 60A1.
Next, as an operation of the portion of the distance measuring device 10D according to the technology of the present disclosure, the first derivation process of the imaging position distance derivation process, realized by the CPU 100 executing the imaging position distance derivation program 106D to operate the imaging position distance derivation function, is described with reference to FIGS. 45 to 47. The same steps as those in the flowcharts shown in FIGS. 22 and 23 are denoted by the same step numbers, and their description is omitted.
The flowcharts shown in FIGS. 45 to 47 differ from the flowcharts shown in FIGS. 22 and 23 in that they have steps 450 to 474 instead of step 330E, and steps 476 to 502 instead of steps 330N to 330P.
In step 450 shown in FIG. 45, the acquisition unit 110D executes the same process as the process of step 400 described in the third embodiment, and then proceeds to step 452.
In step 452, the acquisition unit 110D executes the same process as the process of step 402 described in the third embodiment, and then proceeds to step 454.
In step 454, the acquisition unit 110D determines whether the region designation information has been received by the touch panel 88 and a first coordinate acquisition target region 178 (see FIG. 43) has been designated by the received region designation information. The first coordinate acquisition target region is a region corresponding to the coordinate acquisition target region 158 described in the third embodiment.
In step 454, if the first coordinate acquisition target region 178 has not been designated by the region designation information, the determination is negative and the process proceeds to step 456. If the first coordinate acquisition target region 178 has been designated by the region designation information, the determination is affirmative and the process proceeds to step 458.
In step 456, the acquisition unit 110D determines whether a condition for ending the first derivation process is satisfied. In the process of step 456, the condition for ending the first derivation process is the same as the condition used in the process of step 302.
In step 456, if the condition for ending the first derivation process is not satisfied, the determination is negative and the process returns to step 454. If the condition is satisfied, the determination is affirmative and the process proceeds to step 330C.
In step 458, the acquisition unit 110D executes the same process as the process of step 410 described in the third embodiment, and then proceeds to step 460.
In step 460, the acquisition unit 110D causes the display unit 86 to display the first coordinate acquisition target region 178 designated by the region designation information received by the touch panel 88, emphasized so as to be distinguishable from the other regions in the display region of the first captured image.
In the next step 462, the acquisition unit 110D determines whether three pixels have been designated by the pixel designation information received by the touch panel 88. In step 462, if three pixels have not been designated by the pixel designation information (for example, if the number of designated pixels is less than three), the determination is negative and the process proceeds to step 464. If three pixels have been designated, the determination is affirmative and the process proceeds to step 468.
In step 464, the acquisition unit 110D determines whether a condition for ending the first derivation process is satisfied. In the process of step 464, the condition for ending the first derivation process is the same as the condition used in the process of step 302.
In step 464, if the condition for ending the first derivation process is not satisfied, the determination is negative and the process returns to step 462. If the condition is satisfied, the determination is affirmative and the process proceeds to step 330C.
In step 468, the acquisition unit 110D causes the display unit 86 to end the emphasized display of the first coordinate acquisition target region 178, and then proceeds to step 470.
In step 470, the acquisition unit 110D determines whether the three pixels designated by the pixel designation information received by the touch panel 88 are the characteristic three pixels. As an example, as shown in FIG. 43, when the first coordinate acquisition target region 178 is designated by the region designation information received by the touch panel 88, the first coordinate acquisition target region 178 includes the pattern image 160. In this case, the characteristic three pixels refer to the first pixel 162, the second pixel 164, and the third pixel 166, which are the pixels at the three corners of the pattern image 160, as shown in FIG. 44 as an example.
In step 470, if the three pixels designated by the pixel designation information received by the touch panel 88 are not the characteristic three pixels, the determination is negative and the process proceeds to step 472. If the three pixels designated by the pixel designation information are the characteristic three pixels, the determination is affirmative and the process proceeds to step 474.
In step 472, the acquisition unit 110D causes the display unit 86 to start displaying a re-designation message superimposed on a predetermined region of the first captured image, and then proceeds to step 454. The re-designation message according to the fourth embodiment refers to, for example, a message such as "Please designate a closed region including a characteristic pattern, building material, or the like, and then designate the characteristic three pixels."
In step 474, the acquisition unit 110D acquires the first feature pixel coordinates that identify the characteristic three pixels designated by the pixel designation information received by the touch panel 88, and then proceeds to step 330F. In the example shown in FIG. 44, by executing the process of step 474, the two-dimensional coordinates identifying each of the first pixel 162, the second pixel 164, and the third pixel 166 are acquired by the acquisition unit 110D as the first feature pixel coordinates.
In step 476 shown in FIG. 46, the acquisition unit 110D identifies, from the second captured image, a corresponding outer wall surface image, which is the outer wall surface image corresponding to the outer wall surface image 128, and then proceeds to step 478.
In step 478, the acquisition unit 110D causes the display unit 86 to display the corresponding outer wall surface image identified in the process of step 476, emphasized so as to be distinguishable from the other regions in the display region of the second captured image, and then proceeds to step 480.
In step 480, the acquisition unit 110D determines whether the region designation information has been received by the touch panel 88 and a second coordinate acquisition target region has been designated by the received region designation information. The second coordinate acquisition target region is a region designated by the user via the touch panel 88 as a region of the second captured image corresponding to the first coordinate acquisition target region 178 (see FIG. 44).
In step 480, if the second coordinate acquisition target region has not been designated by the region designation information, the determination is negative and the process proceeds to step 482. If the second coordinate acquisition target region has been designated, the determination is affirmative and the process proceeds to step 484.
In step 482, the acquisition unit 110D determines whether a condition for ending the first derivation process is satisfied. In the process of step 482, the condition for ending the first derivation process is the same as the condition used in the process of step 302.
In step 482, if the condition for ending the first derivation process is not satisfied, the determination is negative and the process returns to step 480. If the condition is satisfied, the determination is affirmative and the process proceeds to step 492.
In step 484, the acquisition unit 110D causes the display unit 86 to end the display of the re-designation message that is displayed on the display unit 86 when the process of step 498, described later, is executed, and then proceeds to step 486.
In step 486, the acquisition unit 110D causes the display unit 86 to display the second coordinate acquisition target region designated by the region designation information received by the touch panel 88, emphasized so as to be distinguishable from the other regions in the display region of the second captured image, and then proceeds to step 488.
In step 488, the acquisition unit 110D determines whether three pixels have been designated by the pixel designation information received by the touch panel 88. In step 488, if three pixels have not been designated (for example, if the number of designated pixels is less than three), the determination is negative and the process proceeds to step 490. If three pixels have been designated, the determination is affirmative and the process proceeds to step 494.
In step 490, the acquisition unit 110D determines whether a condition for ending the first derivation process is satisfied. In the process of step 490, the condition for ending the first derivation process is the same as the condition used in the process of step 302.
In step 490, if the condition for ending the first derivation process is not satisfied, the determination is negative and the process returns to step 488. If the condition is satisfied, the determination is affirmative and the process proceeds to step 492.
In step 492, the acquisition unit 110D causes the display unit 86 to end the display of the second captured image, and then ends the first derivation process.
In step 494, the acquisition unit 110D causes the display unit 86 to end the emphasized display of the second coordinate acquisition target region, and then proceeds to step 496.
In step 496, the acquisition unit 110D determines whether the three pixels designated by the pixel designation information received by the touch panel 88 are the characteristic three pixels.
When the second coordinate acquisition target region is designated by the region designation information received by the touch panel 88, the second coordinate acquisition target region includes a pattern image corresponding to the pattern image 160. In this case, the characteristic three pixels are the pixels present at the three corners of the pattern image corresponding to the pattern image 160 in the second captured image, that is, for example, the pixel corresponding to the first pixel 162, the pixel corresponding to the second pixel 164, and the pixel corresponding to the third pixel 166 in the second captured image.
In step 496, if the three pixels designated by the pixel designation information received by the touch panel 88 are not the characteristic three pixels, the determination is negative and the process proceeds to step 498. If the three pixels designated by the pixel designation information are the characteristic three pixels, the determination is affirmative and the process proceeds to step 500 shown in FIG. 47.
In step 498, the acquisition unit 110D causes the display unit 86 to start displaying the above-described re-designation message superimposed on a predetermined region of the second captured image, and then proceeds to step 480.
In step 500 shown in FIG. 47, the acquisition unit 110D acquires the second feature pixel coordinates that identify the characteristic three pixels designated by the pixel designation information received by the touch panel 88, and then proceeds to step 502. In step 500, for example, the two-dimensional coordinates identifying each of the pixel corresponding to the first pixel 162, the pixel corresponding to the second pixel 164, and the pixel corresponding to the third pixel 166 in the second captured image are acquired by the acquisition unit 110D as the second feature pixel coordinates.
In step 502, the derivation unit 111D derives a, b, and c of the plane equation shown in Expression (7) from the first feature pixel coordinates, the second feature pixel coordinates, the focal length, and the dimensions of the imaging pixel 60A1, thereby deriving the orientation of the plane defined by the plane equation. The first feature pixel coordinates used in the process of step 502 are the first feature pixel coordinates acquired in the process of step 474, and correspond to the three feature pixel coordinates described in the first embodiment. The second feature pixel coordinates used in the process of step 502 are the second feature pixel coordinates acquired in the process of step 500, and correspond to the corresponding feature pixel coordinates described in the first embodiment.
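The exact computation behind Expression (7) is not reproduced in this section. As a hedged geometric sketch only: once the three characteristic pixels have been expressed as three-dimensional points on the outer wall surface (for example, by triangulating each first/second feature pixel pair using the focal length and the imaging pixel pitch), the plane orientation (a, b, c) is the unit normal of the plane through those points.

```python
import numpy as np

def plane_orientation(p1, p2, p3):
    # p1, p2, p3: three 3D points assumed to lie on the outer wall plane.
    p1, p2, p3 = map(np.asarray, (p1, p2, p3))
    normal = np.cross(p2 - p1, p3 - p1)        # perpendicular to both edges
    a, b, c = normal / np.linalg.norm(normal)  # normalize for stability
    return a, b, c
```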
As described above, in the distance measuring device 10D, the characteristic three pixels are designated via the touch panel 88 in the first captured image, and the first feature pixel coordinates identifying the designated characteristic three pixels are acquired by the acquisition unit 110D (step 474). The characteristic three pixels corresponding to the characteristic three pixels of the first captured image are designated via the touch panel 88 in the second captured image (step 496: Y), and the second feature pixel coordinates identifying the characteristic three pixels designated in the second captured image are acquired by the acquisition unit 110D (step 500). The derivation unit 111D then derives the imaging position distance based on the target pixel coordinates, the corresponding target pixel coordinates, the first feature pixel coordinates, the second feature pixel coordinates, the irradiation position real-space coordinates, the focal length, and the dimensions of the imaging pixel 60A1. Therefore, with the distance measuring device 10D, the imaging position distance can be derived based on the first feature pixel coordinates and the second feature pixel coordinates acquired according to the user's intention.
[Fifth Embodiment]
In the first embodiment, the case where the imaging position distance is derived based on one plane equation in the first derivation process has been illustrated. In the fifth embodiment, a final imaging position distance is derived based on two plane equations in the first derivation process. In the fifth embodiment, the same components as those described in the first embodiment are denoted by the same reference numerals, and their description is omitted.
The distance measuring device 10E according to the fifth embodiment differs from the distance measuring device 10A in that the secondary storage unit 104 stores an imaging position distance derivation program 106E instead of the imaging position distance derivation program 106A, and in that the secondary storage unit 104 stores a three-dimensional coordinate derivation program 108B instead of the three-dimensional coordinate derivation program 108A.
The CPU 100 operates as an acquisition unit 110E and a derivation unit 111E, as shown in FIG. 8 as an example, by executing the imaging position distance derivation program 106E.
The acquisition unit 110E corresponds to the acquisition unit 110A described in the first embodiment, and the derivation unit 111E corresponds to the derivation unit 111A described in the first embodiment. In the fifth embodiment, for convenience of description, only the portions of the acquisition unit 110E and the derivation unit 111E that differ from the acquisition unit 110A and the derivation unit 111A described in the first embodiment are described.
The acquisition unit 110E differs from the acquisition unit 110A in that it acquires the second actually measured distance as a reference distance.
The derivation unit 111E derives a reference imaging position distance, which is a distance between the first imaging position and the second imaging position, based on the target pixel coordinates, the three feature pixel coordinates, the reference irradiation position real-space coordinates, the focal length, and the dimensions of the imaging pixel 60A1. The derivation unit 111E then adjusts the imaging position distance with reference to the derived reference imaging position distance, thereby deriving a final imaging position distance, which is the distance finally adopted as the distance between the first imaging position and the second imaging position.
The derivation unit 111E also derives final designated pixel three-dimensional coordinates based on the derived final imaging position distance. The final designated pixel three-dimensional coordinates refer to the three-dimensional coordinates finally adopted as the coordinates of the target pixel 126 in real space.
Next, as an operation of the portion of the distance measuring device 10E according to the technology of the present disclosure, the first derivation process of the imaging position distance derivation process, realized by the CPU 100 executing the imaging position distance derivation program 106E to operate the imaging position distance derivation function, is described with reference to FIGS. 22 and 48. The same steps as those in the flowcharts shown in FIGS. 22 and 23 are denoted by the same step numbers, and their description is omitted.
The flowchart shown in FIG. 48 differs from the flowcharts shown in FIGS. 22 and 23 in that it has step 550 instead of step 330J, and steps 552 to 568 instead of steps 330Q to 330U.
In step 550 shown in FIG. 48, the acquisition unit 110E acquires, as the reference distance, the second actually measured distance measured by executing the process of step 330I. Also in step 550, the acquisition unit 110E acquires a second captured image signal indicating the second captured image obtained by imaging in the process of step 330I, and then proceeds to step 330K.
In step 552, the derivation unit 111E determines a first plane equation, which is the plane equation shown in Expression (7), based on the irradiation position real-space coordinates derived in the process of step 312 shown in FIG. 21, and then proceeds to step 554.
In step 554, the derivation unit 111E derives the imaging position distance based on the feature pixel three-dimensional coordinates and the first plane equation, and then proceeds to step 556.
In step 556, the derivation unit 111E derives reference irradiation position real-space coordinates, based on Expression (3), from the reference distance acquired by the acquisition unit 110E in the process of step 550, the half angle of view α, the emission angle β, and the inter-reference-point distance M, and then proceeds to step 558. The reference distance used in the process of step 556 is a distance corresponding to the distance L described in the first embodiment.
In step 558, the derivation unit 111E determines a second plane equation, which is the plane equation shown in Expression (7), based on the reference irradiation position real-space coordinates derived in the process of step 556, and then proceeds to step 560. That is, in step 558, the derivation unit 111E substitutes a, b, and c derived in the process of step 330P and the reference irradiation position real-space coordinates derived in the process of step 556 into Expression (7), thereby determining d of Expression (7). Since a, b, and c of Expression (7) have been derived in the process of step 330P, the second plane equation is determined once d of Expression (7) is determined in the process of step 558.
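Assuming Expression (7) has the standard plane form a·x + b·y + c·z + d = 0 (the substitution described above implies that a known point on the plane fixes d), step 558 amounts to the following small sketch.

```python
def complete_plane_equation(a, b, c, point_on_plane):
    # With (a, b, c) already derived, substituting a point known to lie on
    # the plane (here, the reference irradiation position real-space
    # coordinates) into a*x + b*y + c*z + d = 0 determines d.
    x, y, z = point_on_plane
    d = -(a * x + b * y + c * z)
    return a, b, c, d
```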
In step 560, the derivation unit 111E derives the reference imaging position distance based on the feature pixel three-dimensional coordinates and the second plane equation, and then the process proceeds to step 562. Note that the reference imaging position distance corresponds to, for example, "B" shown in Equation (9), and is derived by substituting the first feature pixel three-dimensional coordinates into the second plane equation.

In step 562, the derivation unit 111E derives the final imaging position distance by adjusting the imaging position distance derived in the processing of step 554 with reference to the reference imaging position distance derived in the processing of step 560, and then the process proceeds to step 564. Here, adjusting the imaging position distance means, for example, obtaining the average of the imaging position distance and the reference imaging position distance, multiplying the average of the imaging position distance and the reference imaging position distance by a first adjustment coefficient, or multiplying the imaging position distance by a second adjustment coefficient.

Note that the first adjustment coefficient and the second adjustment coefficient are both coefficients uniquely determined according to, for example, the reference imaging position distance. The first adjustment coefficient is derived, for example, from a correspondence table in which the reference imaging position distance and the first adjustment coefficient are associated in advance, or from an arithmetic expression in which the reference imaging position distance is the independent variable and the first adjustment coefficient is the dependent variable. The second adjustment coefficient is derived in the same way. The correspondence table or the arithmetic expression is derived, at a stage before shipment of the distance measuring device 10E, from a derivation table or arithmetic expression obtained from the results of tests on an actual distance measuring device 10E, computer simulations based on the design specifications of the distance measuring device 10E, or the like.

Accordingly, examples of the final imaging position distance include the average of the imaging position distance and the reference imaging position distance, a value obtained by multiplying that average by the first adjustment coefficient, and a value obtained by multiplying the imaging position distance by the second adjustment coefficient.
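As a concrete illustration of the adjustment in step 562, the following minimal sketch implements the three variants described above. It is a sketch under stated assumptions, not the implementation of the device: the function, the coefficient table, and all numeric values are hypothetical, and the coefficient lookup is shown as a nearest-entry match for brevity.

```python
def adjust_imaging_position_distance(distance, reference_distance,
                                     mode="average",
                                     coefficient_table=None):
    """Derive the final imaging position distance (step 562).

    mode="average": mean of the two distances.
    mode="coeff1" : mean multiplied by a first adjustment coefficient.
    mode="coeff2" : imaging position distance multiplied by a second
                    adjustment coefficient.
    Coefficients are assumed to be looked up by the reference imaging
    position distance, as in the correspondence table described above.
    """
    mean = (distance + reference_distance) / 2.0
    if mode == "average":
        return mean
    # Hypothetical lookup: nearest table entry keyed by reference distance.
    key = min(coefficient_table, key=lambda k: abs(k - reference_distance))
    if mode == "coeff1":
        return mean * coefficient_table[key]
    if mode == "coeff2":
        return distance * coefficient_table[key]
    raise ValueError(mode)

# Example with a made-up coefficient table (reference distance -> coefficient).
table = {100000.0: 1.02, 150000.0: 0.99}
final = adjust_imaging_position_distance(144000.0, 145200.0,
                                         mode="coeff1",
                                         coefficient_table=table)
```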
In step 564, the derivation unit 111E causes the display unit 86 to start displaying the final imaging position distance derived in the processing of step 562 superimposed on the second captured image, as shown in FIG. 49 as an example. Also in step 564, the derivation unit 111E stores the final imaging position distance derived in the processing of step 562 in a predetermined storage area, and then the process proceeds to step 566.

In the example shown in FIG. 49, the numerical value "144621.7" corresponds to the final imaging position distance derived in the processing of step 562, and the unit is millimeters.

In step 566, the derivation unit 111E determines whether or not a condition for ending the first derivation process is satisfied. In the processing of step 566, the condition for ending the first derivation process is the same as the condition used in the processing of step 302.

If the condition for ending the first derivation process is not satisfied in step 566, the determination is negative and the determination of step 566 is performed again. If the condition for ending the first derivation process is satisfied in step 566, the determination is affirmative and the process proceeds to step 568.

In step 568, the derivation unit 111E causes the display unit 86 to end the display of the second captured image and the superimposed display information, and then the first derivation process ends. Note that in the processing of step 568, the superimposed display information refers to the various pieces of information currently displayed superimposed on the second captured image, for example, the final imaging position distance.

Next, the three-dimensional coordinate derivation process, which is realized by the CPU 100 executing the three-dimensional coordinate derivation program 108B to operate the three-dimensional coordinate derivation function when the three-dimensional coordinate derivation button 90G is turned on, will be described with reference to FIG. 50. Here, for convenience of explanation, the description assumes that the first derivation process according to the fifth embodiment has been executed by the CPU 100.

In the three-dimensional coordinate derivation process shown in FIG. 50, first, in step 600, the derivation unit 111E determines whether or not the final imaging position distance has already been derived in the processing of step 562 included in the first derivation process. If the final imaging position distance has not been derived in the processing of step 562 included in the first derivation process, the determination in step 600 is negative and the process proceeds to step 608. If the final imaging position distance has already been derived in the processing of step 562 included in the first derivation process, the determination in step 600 is affirmative and the process proceeds to step 602.

In step 602, the derivation unit 111E determines whether or not a derivation start condition is satisfied. If the derivation start condition is not satisfied in step 602, the determination is negative and the process proceeds to step 608. If the derivation start condition is satisfied in step 602, the determination is affirmative and the process proceeds to step 604.

In step 604, the derivation unit 111E derives the designated pixel three-dimensional coordinates based on the target pixel coordinates, the corresponding target pixel coordinates, the final imaging position distance, the focal length, the dimensions of the imaging pixel 60A1, and Equation (2), and then the process proceeds to step 606.
Note that in step 604, the designated pixel three-dimensional coordinates are derived by substituting the target pixel coordinates, the corresponding target pixel coordinates, the final imaging position distance, the focal length, and the dimensions of the imaging pixel 60A1 into Equation (2).
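Equation (2) itself is defined earlier in the specification and is not reproduced in this part of the text. Purely as an illustration of the kind of computation step 604 performs, the following sketch uses the standard parallax relation of a two-view setup, with the final imaging position distance as the baseline; the formula, the function, and the parameter names are assumptions, not the patent's Equation (2).

```python
def designated_pixel_3d(u1, v1, u2, baseline, focal_length, pixel_pitch):
    """Sketch of a parallax-based 3-D coordinate derivation.

    u1, v1      : target pixel coordinates in the first captured image
                  (pixels, relative to the image center)
    u2          : corresponding target pixel's horizontal coordinate in
                  the second captured image
    baseline    : final imaging position distance (output length unit)
    focal_length, pixel_pitch : focal length and imaging pixel dimension,
                  both in the same length unit
    """
    disparity = (u1 - u2) * pixel_pitch      # parallax in length units; must be nonzero
    Z = baseline * focal_length / disparity  # depth
    X = u1 * pixel_pitch * Z / focal_length  # back-projection of the pixel
    Y = v1 * pixel_pitch * Z / focal_length
    return X, Y, Z
```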
In step 606, the derivation unit 111E causes the display unit 86 to display the designated pixel three-dimensional coordinates derived in the processing of step 604 superimposed on the second captured image, as shown in FIG. 51 as an example. Also in step 606, the derivation unit 111E stores the designated pixel three-dimensional coordinates derived in the processing of step 604 in a predetermined storage area, and then the process proceeds to step 608.

In the example shown in FIG. 51, (20160, 50132, 137810) corresponds to the designated pixel three-dimensional coordinates derived in the processing of step 604. In the example shown in FIG. 51, the designated pixel three-dimensional coordinates are displayed close to the target pixel 126.

In step 608, the derivation unit 111E determines whether or not a condition for ending the three-dimensional coordinate derivation process is satisfied. If the condition for ending the three-dimensional coordinate derivation process is not satisfied in step 608, the determination is negative and the process returns to step 600. If the condition for ending the three-dimensional coordinate derivation process is satisfied in step 608, the determination is affirmative and the three-dimensional coordinate derivation process ends.

As described above, in the distance measuring device 10E, the distance from the second position to the subject is measured, and the reference distance, which is the measured distance, is acquired by the acquisition unit 110E (step 550). The derivation unit 111E derives the reference irradiation position real-space coordinates based on the reference distance (step 556). The derivation unit 111E also derives the reference imaging position distance based on the target pixel coordinates, the corresponding target pixel coordinates, the three feature pixel coordinates, the corresponding feature pixel coordinates, the reference irradiation position real-space coordinates, the focal length, and the dimensions of the imaging pixel 60A1 (step 560). The derivation unit 111E then derives the final imaging position distance by adjusting the imaging position distance with reference to the reference imaging position distance (step 562). Therefore, according to the distance measuring device 10E, the distance between the first imaging position and the second imaging position can be derived with higher accuracy than when the reference imaging position distance is not used.

In the distance measuring device 10E, the designated pixel three-dimensional coordinates are derived based on the final imaging position distance derived by the imaging position distance derivation process (see FIG. 50). Therefore, according to the distance measuring device 10E, the designated pixel three-dimensional coordinates can be derived with higher accuracy than when the final imaging position distance is not used.

Furthermore, in the distance measuring device 10E, the designated pixel three-dimensional coordinates are defined based on the target pixel coordinates, the corresponding target pixel coordinates, the final imaging position distance, the focal length, and the dimensions of the imaging pixel 60A1 (see Equation (2)). Therefore, according to the distance measuring device 10E, the designated pixel three-dimensional coordinates can be derived with higher accuracy than when they are not defined based on the final imaging position distance, the target pixel coordinates, the corresponding target pixel coordinates, the focal length, and the dimensions of the imaging pixel 60A1.

In the fifth embodiment, the distance measured based on the laser light emitted from the second position is used as the reference distance, but the technology of the present disclosure is not limited to this. For example, the distance measured based on the laser light emitted from the first position may be used as the reference distance.
[Sixth Embodiment]

In each of the above embodiments, the case where the imaging position distance and the like are derived by one distance measuring device has been described. In the sixth embodiment, a case where the imaging position distance and the like are derived by two distance measuring devices and a personal computer (hereinafter referred to as a PC) will be described. Note that "PC" is an abbreviation for "Personal Computer". In the sixth embodiment, the same components as those described in the above embodiments are denoted by the same reference numerals, and their description is omitted.
As an example, as shown in FIG. 52, an information processing system 650 according to the sixth embodiment includes distance measuring devices 10F1 and 10F2 and a PC 652. In the sixth embodiment, the PC 652 can communicate with the distance measuring devices 10F1 and 10F2. In the sixth embodiment, the PC 652 is an example of an information processing apparatus according to the technology of the present disclosure.

As an example, as shown in FIG. 52, the distance measuring device 10F1 is disposed at a first position, and the distance measuring device 10F2 is disposed at a second position different from the first position.

As an example, as shown in FIG. 53, the distance measuring devices 10F1 and 10F2 have the same configuration. Hereinafter, when it is not necessary to distinguish between the distance measuring devices 10F1 and 10F2, they are referred to as the "distance measuring device 10F".

The distance measuring device 10F differs from the distance measuring device 10A in that it has an imaging device 15 instead of the imaging device 14. The imaging device 15 differs from the imaging device 14 in that it has an imaging device body 19 instead of the imaging device body 18.

The imaging device body 19 differs from the imaging device body 18 in that it has a communication I/F 83. The communication I/F 83 is connected to the bus line 84 and operates under the control of the main control unit 62.

The communication I/F 83 is connected to a communication network (not shown) such as the Internet, and handles transmission and reception of various kinds of information to and from the PC 652 connected to the communication network.

As an example, as shown in FIG. 54, the PC 652 includes a main control unit 653. The main control unit 653 includes a CPU 654, a primary storage unit 656, and a secondary storage unit 658. The CPU 654, the primary storage unit 656, and the secondary storage unit 658 are connected to one another via a bus line 660.

The PC 652 also includes a communication I/F 662. The communication I/F 662 is connected to the bus line 660 and operates under the control of the main control unit 653. The communication I/F 662 is connected to the communication network, and handles transmission and reception of various kinds of information to and from the distance measuring device 10F connected to the communication network.

The PC 652 also includes a reception unit 663 and a display unit 664. The reception unit 663 is connected to the bus line 660 via a reception I/F (not shown), and the reception I/F outputs an instruction content signal indicating the content of an instruction received by the reception unit 663 to the main control unit 653. Note that the reception unit 663 is realized by, for example, a keyboard, a mouse, and a touch panel.

The display unit 664 is connected to the bus line 660 via a display control unit (not shown), and displays various kinds of information under the control of the display control unit. Note that the display unit 664 is realized by, for example, an LCD.
The secondary storage unit 658 stores the dimension derivation program 105A (105B) described in the above embodiments. The secondary storage unit 658 also stores the imaging position distance derivation program 106A (106B, 106C, 106D, 106E) described in the above embodiments. The secondary storage unit 658 also stores the three-dimensional coordinate derivation program 108A (108B) described in the above embodiments. Furthermore, the secondary storage unit 658 stores the focal length derivation table 109A (109B) described in the above embodiments.

In the following, for convenience of explanation, when it is not necessary to distinguish between the dimension derivation programs 105A and 105B, they are referred to as the "dimension derivation program" without reference numerals. Likewise, when it is not necessary to distinguish between the imaging position distance derivation programs 106A, 106B, 106C, 106D, and 106E, they are referred to as the "imaging position distance derivation program"; when it is not necessary to distinguish between the three-dimensional coordinate derivation programs 108A and 108B, they are referred to as the "three-dimensional coordinate derivation program"; and when it is not necessary to distinguish between the focal length derivation tables 109A and 109B, they are referred to as the "focal length derivation table".

The CPU 654 acquires the first captured image signal, the target pixel coordinates, the distance, and the like from the distance measuring device 10F1 via the communication I/F 662. The CPU 654 also acquires the second captured image signal and the like from the distance measuring device 10F2 via the communication I/F 662.

The CPU 654 reads the dimension derivation program from the secondary storage unit 658, loads it into the primary storage unit 656, and executes it. Likewise, the CPU 654 reads the imaging position distance derivation program from the secondary storage unit 658, loads it into the primary storage unit 656, and executes it, and reads the three-dimensional coordinate derivation program from the secondary storage unit 658, loads it into the primary storage unit 656, and executes it. In the following, for convenience of explanation, the dimension derivation program and the imaging position distance derivation program are collectively referred to as the "derivation program".

The CPU 654 operates as the acquisition unit 110A (110B, 110C, 110D, 110E) and the derivation unit 111A (111B, 111C, 111D, 111E) by executing the derivation program.

Therefore, in the information processing system 650, the PC 652 acquires the first captured image signal, the second captured image signal, the target pixel coordinates, the distance, and the like from the distance measuring device 10F via the communication I/F 662 and then executes the derivation program, whereby the same operations and effects as those of the above embodiments are obtained.
[Seventh Embodiment]

In the first embodiment, the case where the distance measuring device 10A is realized by the distance measuring unit 12 and the imaging device 14 has been exemplified. In the seventh embodiment, a distance measuring device 10G realized by further including a smart device 702 will be described. Note that in the seventh embodiment, the same components as those in the above embodiments are denoted by the same reference numerals, their description is omitted, and only the parts different from the above embodiments are described.
As an example, as shown in FIG. 55, the distance measuring device 10G according to the seventh embodiment differs from the distance measuring device 10A according to the first embodiment in that it has an imaging device 700 instead of the imaging device 14. The distance measuring device 10G also differs from the distance measuring device 10A in that it has a smart device 702.

The imaging device 700 differs from the imaging device 14 in that it has an imaging device body 703 instead of the imaging device body 18.

The imaging device body 703 differs from the imaging device body 18 in that it has a wireless communication unit 704 and a wireless communication antenna 706.

The wireless communication unit 704 is connected to the bus line 84 and the wireless communication antenna 706. The main control unit 62 outputs transmission target information, which is information to be transmitted to the smart device 702, to the wireless communication unit 704.

The wireless communication unit 704 transmits the transmission target information input from the main control unit 62 to the smart device 702 by radio waves via the wireless communication antenna 706. When a radio wave from the smart device 702 is received by the wireless communication antenna 706, the wireless communication unit 704 acquires a signal corresponding to the received radio wave and outputs the acquired signal to the main control unit 62.

The smart device 702 includes a CPU 708, a primary storage unit 710, and a secondary storage unit 712. The CPU 708, the primary storage unit 710, and the secondary storage unit 712 are connected to a bus line 714.

The CPU 708 controls the entire distance measuring device 10G including the smart device 702. The primary storage unit 710 is a volatile memory used as a work area or the like when various programs are executed. An example of the primary storage unit 710 is a RAM. The secondary storage unit 712 is a nonvolatile memory that stores in advance a control program for controlling the overall operation of the distance measuring device 10G including the smart device 702, various parameters, and the like. Examples of the secondary storage unit 712 include a flash memory and an EEPROM.

The smart device 702 includes a display unit 715, a touch panel 716, a wireless communication unit 718, and a wireless communication antenna 720.

The display unit 715 is connected to the bus line 714 via a display control unit (not shown), and displays various kinds of information under the control of the display control unit. Note that the display unit 715 is realized by, for example, an LCD.

The touch panel 716 is overlaid on the display screen of the display unit 715 and receives contact by an indicator. The touch panel 716 is connected to the bus line 714 via a touch panel I/F (not shown), and outputs position information indicating the position touched by the indicator to the touch panel I/F. The touch panel I/F operates in accordance with instructions from the CPU 708 and outputs the position information input from the touch panel 716 to the CPU 708.
On the display unit 715, soft keys corresponding to the measurement imaging button 90A, an imaging button (not shown), the imaging system operation mode switching button 90B, the wide-angle instruction button 90C, the telephoto instruction button 90D, the imaging position distance derivation button 90F, the three-dimensional coordinate derivation button 90G, and the like are displayed (see FIG. 56).

For example, as shown in FIG. 56, a measurement imaging button 90A1 that functions as the measurement imaging button 90A is displayed as a soft key on the display unit 715 and is pressed by the user via the touch panel 716. Also, for example, an imaging button (not shown) that functions as the imaging button described in the first embodiment is displayed as a soft key on the display unit 715 and is pressed by the user via the touch panel 716. Also, for example, an imaging system operation mode switching button 90B1 that functions as the imaging system operation mode switching button 90B is displayed as a soft key on the display unit 715 and is pressed by the user via the touch panel 716.

Also, for example, a wide-angle instruction button 90C1 that functions as the wide-angle instruction button 90C is displayed as a soft key on the display unit 715 and is pressed by the user via the touch panel 716. Furthermore, for example, a telephoto instruction button 90D1 that functions as the telephoto instruction button 90D is displayed as a soft key on the display unit 715 and is pressed by the user via the touch panel 716.

Also, for example, a dimension derivation button 90E1 that functions as the dimension derivation button 90E is displayed as a soft key on the display unit 715 and is pressed by the user via the touch panel 716. An imaging position distance derivation button 90F1 that functions as the imaging position distance derivation button 90F is displayed as a soft key and is pressed by the user via the touch panel 716. Also, for example, a three-dimensional coordinate derivation button 90G1 that functions as the three-dimensional coordinate derivation button 90G is displayed as a soft key on the display unit 715 and is pressed by the user via the touch panel 716.

The wireless communication unit 718 is connected to the bus line 714 and the wireless communication antenna 720. The wireless communication unit 718 transmits a signal input from the CPU 708 to the imaging device body 703 by radio waves via the wireless communication antenna 720. When a radio wave from the imaging device body 703 is received by the wireless communication antenna 720, the wireless communication unit 718 acquires a signal corresponding to the received radio wave and outputs the acquired signal to the CPU 708. Accordingly, the imaging device body 703 is controlled by the smart device 702 through wireless communication performed with the smart device 702.

The secondary storage unit 712 stores the derivation program. The CPU 708 reads the derivation program from the secondary storage unit 712, loads it into the primary storage unit 710, and executes it. The secondary storage unit 712 also stores the three-dimensional coordinate derivation program. The CPU 708 reads the three-dimensional coordinate derivation program from the secondary storage unit 712, loads it into the primary storage unit 710, and executes it. Furthermore, the secondary storage unit 712 stores the focal length derivation table.

The CPU 708 operates as the acquisition unit 110A (110B, 110C, 110D, 110E) and the derivation unit 111A (111B, 111C, 111D, 111E) by executing the derivation program.

Therefore, in the distance measuring device 10G, the smart device 702 executes the derivation program, whereby the same operations and effects as those of the above embodiments are obtained.
In each of the above embodiments, the corresponding target pixel is specified by performing image analysis on the second captured image as the analysis target, and the corresponding target pixel coordinates specifying the specified corresponding target pixel are acquired (see steps 330M and 336L), but the technology of the present disclosure is not limited to this. For example, the user may designate, via the touch panel 88, a pixel of the second captured image corresponding to the target pixel as the corresponding target pixel.

In each of the above embodiments, the case where the derivation unit 111A (111B, 111C, 111D, 111E) derives the irradiation position real-space coordinates, the plane orientation, the imaging position distance, the designated pixel three-dimensional coordinates, and the like using arithmetic expressions has been exemplified, but the technology of the present disclosure is not limited to this. For example, the derivation unit 111A (111B, 111C, 111D, 111E) may derive the irradiation position real-space coordinates, the plane orientation, the imaging position distance, the designated pixel three-dimensional coordinates, and the like using a table whose inputs are the independent variables of the arithmetic expression and whose output is the dependent variable of the arithmetic expression.

In each of the above embodiments, the case where the derivation program and the three-dimensional coordinate derivation program are read from the secondary storage unit 104 (658, 712) has been exemplified, but they do not necessarily have to be stored in the secondary storage unit 104 (658, 712) from the beginning. For example, as shown in FIG. 57, the derivation program and the three-dimensional coordinate derivation program may first be stored in an arbitrary portable storage medium 750 such as an SSD (Solid State Drive) or a USB (Universal Serial Bus) memory. In this case, the derivation program of the storage medium 750 is installed in the distance measuring device 10A (10B, 10C, 10D, 10E) (hereinafter referred to as the "distance measuring device 10A etc.") or the PC 652, and the installed derivation program is executed by the CPU 100 (654, 708). Likewise, the three-dimensional coordinate derivation program of the storage medium 750 is installed in the distance measuring device 10A etc. or the PC 652, and the installed three-dimensional coordinate derivation program is executed by the CPU 100 (654, 708).

Alternatively, the derivation program and the three-dimensional coordinate derivation program may be stored in a storage unit of another computer, a server device, or the like connected to the distance measuring device 10A etc. or the PC 652 via a communication network (not shown), and the derivation program and the three-dimensional coordinate derivation program may be downloaded in response to a request from the distance measuring device 10A etc. or the PC 652. In this case, the downloaded derivation program is executed by the CPU 100 (654, 708).
In each of the above embodiments, the case where various kinds of information such as the irradiation position mark 136, the imaging position distance, and the designated pixel three-dimensional coordinates are displayed on the display unit 86 has been exemplified, but the technology of the present disclosure is not limited to this. For example, the various kinds of information may be displayed on a display unit of an external device used by being connected to the distance measuring device 10A etc. or the PC 652. Examples of the external device include a PC and a glasses-type or wristwatch-type wearable terminal device.

In each of the above embodiments, the case where the irradiation position mark 136, the imaging position distance, the designated pixel three-dimensional coordinates, and the like are visibly displayed on the display unit 86 has been exemplified, but the technology of the present disclosure is not limited to this. For example, an audible indication such as audio output by an audio playback device, or a permanent visible indication such as output of printed matter by a printer, may be performed instead of the visible display, or may be used in combination with it.

In each of the above embodiments, the case where the irradiation position mark 136, the imaging position distance, the designated pixel three-dimensional coordinates, and the like are displayed on the display unit 86 has been exemplified, but the technology of the present disclosure is not limited to this. For example, at least one of the irradiation position mark 136, the imaging position distance, the designated pixel three-dimensional coordinates, and the like may be displayed on a display unit (not shown) different from the display unit 86, and the rest may be displayed on the display unit 86. Each of the irradiation position mark 136, the imaging position distance, the designated pixel three-dimensional coordinates, and the like may also be individually displayed on a plurality of display units including the display unit 86.

In each of the above embodiments, laser light is exemplified as the light for distance measurement, but the technology of the present disclosure is not limited to this; any directional light, that is, light having directivity, may be used. For example, it may be directional light obtained from a light emitting diode (LED) or a super luminescent diode (SLD). The directivity of the directional light is preferably about the same as the directivity of laser light, and is preferably, for example, a directivity usable for distance measurement within a range of several meters to several kilometers.

The dimension derivation process, the imaging position distance derivation process, and the three-dimensional coordinate derivation process described in the above embodiments are merely examples. Therefore, it goes without saying that unnecessary steps may be deleted, new steps may be added, and the processing order may be changed without departing from the gist. Each process included in the dimension derivation process, the imaging position distance derivation process, and the three-dimensional coordinate derivation process may be realized only by a hardware configuration such as an ASIC, or by a combination of a software configuration using a computer and a hardware configuration.

In each of the above embodiments, for convenience of explanation, the case where the distance measuring unit 12 is attached to a side surface of the imaging device body 18 included in the distance measuring device 10A etc. has been described, but the technology of the present disclosure is not limited to this. For example, the distance measuring unit 12 may be attached to the top surface or the bottom surface of the imaging device body 18. Also, for example, as shown in FIG. 58, a distance measuring device 10H may be applied instead of the distance measuring device 10A etc. As an example, as shown in FIG. 58, the distance measuring device 10H differs from the distance measuring device 10A etc. in that it has a distance measuring unit 12A instead of the distance measuring unit 12 and an imaging device body 18A instead of the imaging device body 18.

In the example shown in FIG. 58, the distance measuring unit 12A is housed in a housing 18A1 of the imaging device body 18A, and the objective lenses 32 and 38 are exposed from the housing 18A1 on the front side of the distance measuring device 10H (the side on which the focus lens 50 is exposed). The distance measuring unit 12A is preferably arranged so that the optical axes L1 and L2 are set at the same height in the vertical direction. Note that an opening (not shown) through which the distance measuring unit 12A can be inserted into and removed from the housing 18A1 may be formed in the housing 18A1.
In the first embodiment, the focal length derivation table 109A, from which the focal length can be derived directly from the derivation distance, has been exemplified, but the technology of the present disclosure is not limited to this. For example, as shown in FIG. 59, a focal length derivation table 109D having correction values corresponding to the actually measured distance may be employed.

In the focal length derivation table 109D, a different correction value is associated with each of the plurality of derivation distances. That is, in the focal length derivation table 109D, the derivation distances and the correction values are associated with each other so that the focal length is derived by correcting, with the correction value, the derivation distance corresponding to the actually measured distance.

Note that the focal length derivation table 109D is a table derived from, for example, the results of at least one of tests on an actual distance measuring device 10A and computer simulations based on the design specifications of the distance measuring device 10A.
As an example, as shown in FIG. 59, when the CPU 100 uses the focal length derivation table 109D, the focal length of 7 millimeters is derived by multiplying the derivation distance of 1 meter by the correction value Y1. Similarly, the focal length of 8 millimeters is derived by multiplying the derivation distance of 2 meters by the correction value Y2, the focal length of 10 millimeters is derived by multiplying the derivation distance of 3 meters by the correction value Y3, and the focal length of 12 millimeters is derived by multiplying the derivation distance of 5 meters by the correction value Y4.

Likewise, when the CPU 100 uses the focal length derivation table 109D, the focal length of 14 millimeters is derived by multiplying the derivation distance of 10 meters by the correction value Y5, the focal length of 16 millimeters is derived by multiplying the derivation distance of 30 meters by the correction value Y6, and the focal length of 18 millimeters is derived by applying the correction value Y7 to the derivation distance of infinity.

Note that if a derivation distance matching the actually measured distance does not exist in the focal length derivation table 109D, a correction value is derived from the derivation distances of the focal length derivation table 109D by the interpolation method described above, and the focal length is derived by correcting the actually measured distance with the derived correction value.
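To make the combined use of the table of FIG. 59 and the interpolation concrete, the following minimal sketch derives a focal length by interpolating the correction value between the tabulated derivation distances and then correcting the measured distance with it. The numeric values of Y1 to Y7 are not given in the text, so the placeholders below are chosen only so that each tabulated distance times its correction value reproduces the tabulated focal length (distances in millimeters); every number and name here is an assumption.

```python
# Derivation distance (mm) -> placeholder correction value Y such that
# distance * Y equals the focal length listed in FIG. 59.
CORRECTION_TABLE = [
    (1_000.0,  7.0 / 1_000.0),    # Y1 -> 7 mm
    (2_000.0,  8.0 / 2_000.0),    # Y2 -> 8 mm
    (3_000.0, 10.0 / 3_000.0),    # Y3 -> 10 mm
    (5_000.0, 12.0 / 5_000.0),    # Y4 -> 12 mm
    (10_000.0, 14.0 / 10_000.0),  # Y5 -> 14 mm
    (30_000.0, 16.0 / 30_000.0),  # Y6 -> 16 mm
]
FOCAL_AT_INFINITY = 18.0  # Y7 entry: beyond the table, use 18 mm directly.

def focal_length_from_measured_distance(measured_mm: float) -> float:
    """Correct the measured distance with an interpolated correction value."""
    if measured_mm <= CORRECTION_TABLE[0][0]:
        return measured_mm * CORRECTION_TABLE[0][1]
    if measured_mm >= CORRECTION_TABLE[-1][0]:
        return FOCAL_AT_INFINITY
    for (d0, y0), (d1, y1) in zip(CORRECTION_TABLE, CORRECTION_TABLE[1:]):
        if d0 <= measured_mm <= d1:
            t = (measured_mm - d0) / (d1 - d0)
            y = y0 + t * (y1 - y0)  # linearly interpolated correction value
            return measured_mm * y
    raise ValueError("unreachable")
```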
In the example shown in FIG. 59, a coefficient for multiplication with the derivation distance is defined as the correction value, but the correction value is not limited to a multiplication coefficient and may be any numerical value used together with an operator necessary for deriving the focal length from the derivation distance.

In this way, the CPU 100 derives the focal length corresponding to the actually measured distance by correcting the actually measured distance with the correction value. Therefore, to achieve a highly accurate focal length, the distance measuring device 10A etc. can derive the focal length with high accuracy and with less effort than when the user is made to input the length of a reference image included in the captured image.

In each of the above embodiments, the case where a still image is captured at the second position has been described, but the technology of the present disclosure is not limited to this. For example, the technology of the present disclosure is also applicable when a moving image such as a live view image is captured at the second position. In this case, the CPU 100 etc. may acquire the actually measured distance and the like while the moving image is being captured, and derive the focal length corresponding to the acquired actually measured distance and the like using the focal length derivation table 109A etc. In this way, for example, immediate derivation of the imaging position distance and the like is realized with high accuracy. Deriving the focal length while a moving image is being captured in this manner is particularly effective when the focal length changes as focusing control is continuously performed by so-called continuous AF (Auto-Focus).

In each of the above embodiments, the case where distance measurement is performed without providing a focusing determination area, which is an area subject to determination of the in-focus state, has been exemplified, but the technology of the present disclosure is not limited to this; distance measurement may be performed by irradiating the focusing determination area with laser light.
All documents, patent applications, and technical standards described in this specification are incorporated herein by reference to the same extent as if each individual document, patent application, or technical standard were specifically and individually indicated to be incorporated by reference.
Claims (20)

1.  An information processing apparatus comprising:
    an acquisition unit that acquires an actually measured distance, which is the distance measured by a measurement unit included in a distance measuring device, the distance measuring device including an imaging unit that has a focus lens and captures an image of a subject, and the measurement unit, which measures the distance to the subject by emitting directional light, which is light having directivity, to the subject and receiving reflected light of the directional light; and
    a derivation unit that derives, using correspondence information indicating a correspondence between the actually measured distance and a focal length of the focus lens, the focal length corresponding to the actually measured distance acquired by the acquisition unit.
2.  The information processing apparatus according to claim 1, wherein
    the correspondence information is information including a correction value corresponding to the actually measured distance, and
    the derivation unit derives the focal length corresponding to the actually measured distance by correcting the actually measured distance with the correction value.
3.  The information processing apparatus according to claim 1, wherein
    the imaging unit further has a zoom lens,
    the correspondence information is information indicating a correspondence among the actually measured distance, a position of the zoom lens in an optical axis direction in the imaging unit, and the focal length,
    the acquisition unit further acquires position information indicating the position of the zoom lens in the optical axis direction in the imaging unit, and
    the derivation unit derives, using the correspondence information, the focal length corresponding to the actually measured distance and the position information acquired by the acquisition unit.
4.  The information processing apparatus according to claim 3, wherein
    the correspondence information is information indicating a correspondence among the actually measured distance, the position of the zoom lens in the optical axis direction in the imaging unit, a temperature of a region that affects imaging by the imaging unit, and the focal length,
    the acquisition unit further acquires temperature information indicating the temperature of the region, and
    the derivation unit derives, using the correspondence information, the focal length corresponding to the actually measured distance, the position information, and the temperature information acquired by the acquisition unit.
5.  The information processing apparatus according to claim 4, wherein
    the correspondence information is information indicating a correspondence among the actually measured distance, the position of the zoom lens in the optical axis direction in the imaging unit, the temperature of the region that affects imaging by the imaging unit, an attitude of the focus lens with respect to a vertical direction, and the focal length,
    the acquisition unit further acquires focus lens attitude information indicating the attitude of the focus lens with respect to the vertical direction, and
    the derivation unit derives, using the correspondence information, the focal length corresponding to the actually measured distance, the position information, the temperature information, and the focus lens attitude information acquired by the acquisition unit.
  6.  The correspondence information is information indicating a correspondence relationship between the measured distance, the position of the zoom lens in the optical axis direction in the imaging unit, the temperature of the region that affects imaging by the imaging unit, the posture of the focus lens with respect to the vertical direction, the posture of the zoom lens with respect to the vertical direction, and the focal length,
     The acquisition unit further acquires zoom lens posture information indicating the posture of the zoom lens with respect to the vertical direction,
     The information processing apparatus according to claim 5, wherein the derivation unit derives, using the correspondence information, the focal length corresponding to the measured distance, the position information, the temperature information, the focus lens posture information, and the zoom lens posture information acquired by the acquisition unit.
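    Claims 4 to 6 progressively widen the key of the correspondence information with the temperature of the region affecting imaging and the postures of the focus and zoom lenses. A hedged sketch of such a multi-key table, again with invented names and a deliberately simple nearest-neighbour lookup in place of whatever interpolation a real device would use, might read:

        from dataclasses import dataclass

        @dataclass(frozen=True)
        class Conditions:
            measured_distance_m: float
            zoom_position_mm: float
            temperature_c: float
            focus_posture_deg: float  # focus lens posture relative to vertical
            zoom_posture_deg: float   # zoom lens posture relative to vertical

        # Hypothetical calibration records: (conditions, focal length in mm).
        CALIBRATION = [
            (Conditions(2.0, 5.0, 20.0, 0.0, 0.0), 24.2),
            (Conditions(2.0, 5.0, 40.0, 0.0, 0.0), 24.3),
            (Conditions(2.0, 5.0, 20.0, 90.0, 90.0), 24.1),
            (Conditions(5.0, 10.0, 20.0, 0.0, 0.0), 35.5),
        ]

        def derive_focal_length(query: Conditions) -> float:
            """Return the focal length of the calibration record whose
            conditions are closest to the query."""
            def distance(c: Conditions) -> float:
                return (abs(c.measured_distance_m - query.measured_distance_m)
                        + abs(c.zoom_position_mm - query.zoom_position_mm)
                        + abs(c.temperature_c - query.temperature_c)
                        + abs(c.focus_posture_deg - query.focus_posture_deg)
                        + abs(c.zoom_posture_deg - query.zoom_posture_deg))
            _, focal_mm = min(CALIBRATION, key=lambda record: distance(record[0]))
            return focal_mm

        # Example: warm sensor, camera tilted 45 degrees off vertical.
        print(derive_focal_length(Conditions(2.0, 5.0, 35.0, 45.0, 45.0)))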
  7.  The correspondence information is information indicating a correspondence relationship between the measured distance, the position of the zoom lens in the optical axis direction in the imaging unit, the posture of the zoom lens with respect to the vertical direction, and the focal length,
     The acquisition unit further acquires zoom lens posture information indicating the posture of the zoom lens with respect to the vertical direction,
     The information processing apparatus according to claim 3, wherein the derivation unit derives, using the correspondence information, the focal length corresponding to the measured distance, the position information, and the zoom lens posture information acquired by the acquisition unit.
  8.  The imaging unit further includes a zoom lens,
     The correspondence information is information indicating a correspondence relationship between the posture of the zoom lens with respect to the vertical direction and the focal length,
     The acquisition unit further acquires zoom lens posture information indicating the posture of the zoom lens with respect to the vertical direction,
     The information processing apparatus according to claim 1, wherein the derivation unit derives, using the correspondence information, the focal length corresponding to the measured distance and the zoom lens posture information acquired by the acquisition unit.
  9.  The correspondence information is information indicating a correspondence relationship between the measured distance, the posture of the zoom lens with respect to the vertical direction, the posture of the focus lens with respect to the vertical direction, and the focal length,
     The acquisition unit further acquires focus lens posture information indicating the posture of the focus lens with respect to the vertical direction,
     The information processing apparatus according to claim 8, wherein the derivation unit derives, using the correspondence information, the focal length corresponding to the measured distance, the zoom lens posture information, and the focus lens posture information acquired by the acquisition unit.
  10.  The correspondence information is information indicating a correspondence relationship between the measured distance, the temperature of a region that affects imaging by the imaging unit, the posture of the zoom lens with respect to the vertical direction, the posture of the focus lens with respect to the vertical direction, and the focal length,
     The acquisition unit further acquires temperature information indicating the temperature of the region,
     The information processing apparatus according to claim 9, wherein the derivation unit derives, using the correspondence information, the focal length corresponding to the measured distance, the temperature information, the zoom lens posture information, and the focus lens posture information acquired by the acquisition unit.
  11.  The correspondence information is information indicating a correspondence relationship between the measured distance, the temperature of a region that affects imaging by the imaging unit, the posture of the zoom lens with respect to the vertical direction, and the focal length,
     The acquisition unit further acquires temperature information indicating the temperature of the region,
     The information processing apparatus according to claim 8, wherein the derivation unit derives, using the correspondence information, the focal length corresponding to the measured distance, the temperature information, and the zoom lens posture information acquired by the acquisition unit.
  12.  The correspondence information is information indicating a correspondence relationship between the measured distance, the posture of the focus lens with respect to the vertical direction, and the focal length,
     The acquisition unit further acquires focus lens posture information indicating the posture of the focus lens with respect to the vertical direction,
     The information processing apparatus according to claim 3, wherein the derivation unit derives, using the correspondence information, the focal length corresponding to the measured distance, the position information, and the focus lens posture information acquired by the acquisition unit.
  13.  The correspondence information is information indicating a correspondence relationship between the measured distance, the position of the zoom lens in the optical axis direction in the imaging unit, the posture of the focus lens with respect to the vertical direction, the posture of the zoom lens with respect to the vertical direction, and the focal length,
     The acquisition unit further acquires zoom lens posture information indicating the posture of the zoom lens with respect to the vertical direction,
     The information processing apparatus according to claim 12, wherein the derivation unit derives, using the correspondence information, the focal length corresponding to the measured distance, the position information, the focus lens posture information, and the zoom lens posture information acquired by the acquisition unit.
  14.  The correspondence information is information indicating a correspondence relationship between the measured distance, the posture of the focus lens with respect to the vertical direction, and the focal length,
     The acquisition unit further acquires focus lens posture information indicating the posture of the focus lens with respect to the vertical direction,
     The information processing apparatus according to claim 1, wherein the derivation unit derives, using the correspondence information, the focal length corresponding to the measured distance and the focus lens posture information acquired by the acquisition unit.
  15.  The correspondence information is information indicating a correspondence relationship between the measured distance, the temperature of a region that affects imaging by the imaging unit, and the focal length,
     The acquisition unit further acquires temperature information indicating the temperature of the region,
     The information processing apparatus according to claim 1, wherein the derivation unit derives, using the correspondence information, the focal length corresponding to the measured distance and the temperature information acquired by the acquisition unit.
  16.  The correspondence information is information indicating a correspondence relationship between the measured distance, the temperature of the region that affects imaging by the imaging unit, the posture of the focus lens with respect to the vertical direction, and the focal length,
     The acquisition unit further acquires focus lens posture information indicating the posture of the focus lens with respect to the vertical direction,
     The information processing apparatus according to claim 15, wherein the derivation unit derives, using the correspondence information, the focal length corresponding to the measured distance, the temperature information, and the focus lens posture information acquired by the acquisition unit.
  17.  The acquisition unit acquires a first captured image obtained by imaging the subject from a first imaging position with the imaging unit and a second captured image obtained by imaging the subject from a second imaging position different from the first imaging position, and acquires, as the measured distance, the distance to the subject measured by the measurement unit emitting the directional light to the subject from a position corresponding to the first imaging position and receiving the reflected light of the directional light,
     The information processing apparatus according to any one of claims 1 to 16, wherein the derivation unit derives an imaging position distance, which is the distance between the first imaging position and the second imaging position, based on the focal length derived using the correspondence information.
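    Claim 17 does not specify how the imaging position distance follows from the focal length, but under an assumed pinhole camera model the standard stereo relation Z = f·B/d (so B = Z·d/f) links the measured distance Z, the focal length f, the disparity d of the subject between the two captured images, and the baseline B between the two imaging positions. The pixel pitch and the function below are illustrative assumptions, not the application's own procedure:

        def derive_imaging_position_distance(measured_distance_m: float,
                                             focal_length_mm: float,
                                             disparity_px: float,
                                             pixel_pitch_um: float = 4.8) -> float:
            """Estimate the baseline B between the first and second imaging
            positions from B = Z * d / f, with d and f in millimetres."""
            disparity_mm = disparity_px * pixel_pitch_um / 1000.0
            return measured_distance_m * disparity_mm / focal_length_mm

        # Example: 5 m subject distance, 35 mm focal length, 42 px disparity
        # gives a baseline of about 0.029 m.
        print(derive_imaging_position_distance(5.0, 35.0, 42.0))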
  18.  The information processing apparatus according to any one of claims 1 to 17, wherein the derivation unit derives, based on the focal length derived using the correspondence information, the measured distance acquired by the acquisition unit, and an interval between a plurality of pixels designated in an image obtained by imaging, with the imaging unit, an imaging range irradiated with the directional light used by the measurement unit to measure the measured distance, a dimension of a real-space region corresponding to the interval.
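    The dimension derivation of claim 18 is, under the same assumed pinhole model, a similar-triangles calculation: a designated interval of n pixels spans roughly L = Z·n·p/f in real space, where p is the sensor pixel pitch. A minimal sketch, with p assumed:

        def derive_real_dimension(measured_distance_m: float,
                                  focal_length_mm: float,
                                  interval_px: float,
                                  pixel_pitch_um: float = 4.8) -> float:
            """Convert a pixel interval designated in the captured image into
            the corresponding real-space dimension: L = Z * (n * p) / f."""
            interval_on_sensor_mm = interval_px * pixel_pitch_um / 1000.0
            return measured_distance_m * interval_on_sensor_mm / focal_length_mm

        # Example: two pixels designated 500 px apart on a subject 5 m away,
        # with a 35 mm focal length, span about 0.34 m in real space.
        print(derive_real_dimension(5.0, 35.0, 500.0))

    This is also where the accuracy of the derived focal length matters most: any error in f propagates proportionally into the derived dimension L.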
  19.  An information processing method comprising:
     acquiring a measured distance, the measured distance being the distance measured by a measurement unit included in a distance measuring device, the distance measuring device including an imaging unit that has a focus lens and captures an image of a subject, and the measurement unit, which measures the distance to the subject by emitting directional light, which is light having directivity, toward the subject and receiving reflected light of the directional light; and
     deriving, using correspondence information indicating a correspondence relationship between the measured distance and the focal length of the focus lens, the focal length corresponding to the acquired measured distance.
  20.  A program for causing a computer to execute processing comprising:
     acquiring a measured distance, the measured distance being the distance measured by a measurement unit included in a distance measuring device, the distance measuring device including an imaging unit that has a focus lens and captures an image of a subject, and the measurement unit, which measures the distance to the subject by emitting directional light, which is light having directivity, toward the subject and receiving reflected light of the directional light; and
     deriving, using correspondence information indicating a correspondence relationship between the measured distance and the focal length in the imaging unit, the focal length corresponding to the acquired measured distance.
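    As a purely hypothetical illustration of the method of claims 19 and 20, the flow below strings the two steps together; the rangefinder, camera, and correspondence interfaces are invented placeholders, not APIs from the application:

        def measure_and_derive(rangefinder, camera, correspondence):
            """Hypothetical end-to-end flow of the claimed method.

            rangefinder.measure() is assumed to emit the directional light
            toward the subject and return the measured distance in metres;
            correspondence.derive() is assumed to look up the focal length
            for that distance and the camera's current zoom position."""
            measured_distance_m = rangefinder.measure()
            focal_length_mm = correspondence.derive(
                measured_distance_m,
                zoom_position_mm=camera.zoom_position_mm,
            )
            return measured_distance_m, focal_length_mm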
PCT/JP2016/083994 2016-02-29 2016-11-16 Information processing device, information processing method, and program WO2017149852A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
JP2016-038145 2016-02-29
JP2016038145A JP2019070529A (en) 2016-02-29 2016-02-29 Information processor, information processing method and program

Publications (1)

Publication Number Publication Date
WO2017149852A1 true WO2017149852A1 (en) 2017-09-08

Family

ID=59742800

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/JP2016/083994 WO2017149852A1 (en) 2016-02-29 2016-11-16 Information processing device, information processing method, and program

Country Status (2)

Country Link
JP (1) JP2019070529A (en)
WO (1) WO2017149852A1 (en)

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JPH113415A (en) * 1997-06-12 1999-01-06 Mitsubishi Electric Corp Image fetching device
JP2000266985A (en) * 1999-03-15 2000-09-29 Mitsubishi Heavy Ind Ltd Automatic focal adjustment device for monitor camera
JP2004205222A (en) * 2002-12-20 2004-07-22 Matsushita Electric Works Ltd Distance measuring apparatus
JP2012063751A (en) * 2010-07-27 2012-03-29 Panasonic Corp Image pickup apparatus
WO2015008587A1 (en) * 2013-07-16 2015-01-22 富士フイルム株式会社 Imaging device and three-dimensional-measurement device

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113050113A (en) * 2021-03-10 2021-06-29 广州南方卫星导航仪器有限公司 Laser point positioning method and device
CN113050113B (en) * 2021-03-10 2023-08-01 广州南方卫星导航仪器有限公司 Laser spot positioning method and device

Also Published As

Publication number Publication date
JP2019070529A (en) 2019-05-09

Similar Documents

Publication Publication Date Title
JP6464281B2 (en) Information processing apparatus, information processing method, and program
US11828847B2 (en) Distance measurement device, deriving method for distance measurement, and deriving program for distance measurement
CN102724398A (en) Image data providing method, combination method thereof, and presentation method thereof
US20190339059A1 (en) Information processing device, information processing method, and program
US10641896B2 (en) Distance measurement device, distance measurement method, and distance measurement program
JP5526733B2 (en) Image composition device, image reproduction device, and imaging device
JP6404482B2 (en) Ranging device, ranging control method, and ranging control program
WO2017149852A1 (en) Information processing device, information processing method, and program
JP6534456B2 (en) INFORMATION PROCESSING APPARATUS, INFORMATION PROCESSING METHOD, AND PROGRAM
CN108700403B (en) Information processing apparatus, information processing method, and recording medium
CN108718532B (en) Information processing apparatus, information processing method, and recording medium
WO2017134882A1 (en) Information processing device, information processing method, and program
WO2017056544A1 (en) Distance measuring device, distance measuring method, and distance measuring program

Legal Events

Date Code Title Description
NENP Non-entry into the national phase

Ref country code: DE

121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 16892688

Country of ref document: EP

Kind code of ref document: A1

122 Ep: pct application non-entry in european phase

Ref document number: 16892688

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: JP