WO2017149852A1 - Information processing device, information processing method, and program

Information processing device, information processing method, and program

Info

Publication number
WO2017149852A1
Authority
WO
WIPO (PCT)
Prior art keywords
distance
information
unit
imaging
derivation
Prior art date
Application number
PCT/JP2016/083994
Other languages
English (en)
Japanese (ja)
Inventor
Tomonori Masuda
Original Assignee
FUJIFILM Corporation
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by FUJIFILM Corporation
Publication of WO2017149852A1

Links

Images

Classifications

    • G: PHYSICS
    • G01: MEASURING; TESTING
    • G01S: RADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
    • G01S 17/00: Systems using the reflection or reradiation of electromagnetic waves other than radio waves, e.g. lidar systems
    • G01S 17/02: Systems using the reflection of electromagnetic waves other than radio waves
    • G01S 17/06: Systems determining position data of a target
    • G01S 17/08: Systems determining position data of a target, for measuring distance only
    • G: PHYSICS
    • G02: OPTICS
    • G02B: OPTICAL ELEMENTS, SYSTEMS OR APPARATUS
    • G02B 7/00: Mountings, adjusting means, or light-tight connections, for optical elements
    • G02B 7/28: Systems for automatic generation of focusing signals
    • G02B 7/40: Systems for automatic generation of focusing signals using time delay of the reflected waves, e.g. of ultrasonic waves

Definitions

  • the technology of the present disclosure relates to an information processing apparatus, an information processing method, and a program.
  • Japanese Patent Application Laid-Open No. 2004-157386 discloses a technique for measuring a distance to a subject based on an image signal indicating the subject.
  • International Publication No. 2008-155961 discloses a technique for measuring a distance to a subject based on a pair of captured images obtained by imaging with a pair of imaging units mounted a predetermined interval apart from each other.
  • Japanese Patent Application Laid-Open No. 2004-264827 discloses a technique for deriving a focal length at an imaging unit based on image data.
  • A technique is also known in which a distance measuring device receives an input reference image and corrects the focal length of the imaging unit based on the length of the reference image.
  • According to the technology of the present disclosure, the focal length can be set with high accuracy, with less time and effort than in the case where the user inputs the length of a reference image included in the captured image in order to increase the accuracy of the focal length.
  • An information processing apparatus according to a first aspect of the present invention includes: an acquisition unit that acquires an actually measured distance, which is a distance measured by a measurement unit included in a distance measuring device comprising an imaging unit that has a focus lens and images a subject, and a measurement unit that measures the distance to the subject by emitting directional light toward the subject and receiving reflected light of the directional light; and a deriving unit that derives the focal length corresponding to the actually measured distance acquired by the acquisition unit, using correspondence information indicating a correspondence between the actually measured distance and the focal length of the focus lens.
  • Therefore, the information processing apparatus according to the first aspect can derive the focal length with high accuracy and with less time and effort than in the case where the user inputs the length of a reference image included in the captured image in order to increase the accuracy of the focal length.
  • In an information processing apparatus according to a second aspect of the present invention, the correspondence information includes a correction value corresponding to the actually measured distance, and the focal length corresponding to the actually measured distance is derived by correcting the actually measured distance with the correction value.
  • Therefore, this information processing apparatus can also derive the focal length with high accuracy and with less time and effort than in the case where the user inputs the length of a reference image included in the captured image in order to increase the accuracy of the focal length.
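  • As a concrete illustration of this correction-value approach (a minimal sketch, not code from the patent; the bin values, names, and the nearest-bin policy are all assumptions), the correspondence information can be held as a small table of correction values keyed by measured distance, and the focal length derived from the corrected distance:

```python
# Hypothetical sketch: correspondence information holding a correction
# value per measured-distance bin. All numeric values are invented.
CORRECTION_M = {1.0: 0.02, 2.0: 0.03, 5.0: 0.05, 10.0: 0.08}  # metres

def correction_for(distance_m: float) -> float:
    # Use the correction value of the nearest tabulated distance.
    nearest = min(CORRECTION_M, key=lambda d: abs(d - distance_m))
    return CORRECTION_M[nearest]

def derive_focal_length(distance_m: float, to_focal_mm) -> float:
    # Correct the actually measured distance, then map the corrected
    # distance to a focal length; to_focal_mm stands in for a base
    # mapping such as the table lookup sketched later in this document.
    corrected = distance_m + correction_for(distance_m)
    return to_focal_mm(corrected)
```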
  • An information processing apparatus according to a third aspect is the information processing apparatus according to the first aspect of the present invention, wherein the imaging unit further includes a zoom lens; the correspondence information indicates a correspondence among the actually measured distance, the position of the zoom lens in the optical axis direction in the imaging unit, and the focal length in the imaging unit; the acquisition unit further acquires position information indicating the position of the zoom lens in the optical axis direction in the imaging unit; and the deriving unit derives, using the correspondence information, the focal length corresponding to the actually measured distance and the position information acquired by the acquisition unit.
  • Therefore, this information processing apparatus can derive the focal length with high accuracy, without time and effort, even if the position of the zoom lens in the optical axis direction changes, compared to the case where the user inputs the length of a reference image included in the captured image in order to increase the accuracy of the focal length.
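  • To illustrate such a multi-key correspondence (again a hedged sketch; the table values and the nearest-neighbour policy are invented, and a real table would come from tests or simulation of the actual lens unit), the correspondence information can be modelled as a mapping from (measured distance, zoom lens position) pairs to focal lengths:

```python
# Hypothetical correspondence information keyed by measured distance
# (metres) and zoom lens position (motor steps). Values are invented.
TABLE = {
    (1.0, 0): 7.0, (1.0, 50): 14.0,
    (10.0, 0): 14.0, (10.0, 50): 28.0,
}

def lookup_focal_length_mm(distance_m: float, zoom_step: int) -> float:
    # Nearest-neighbour lookup over both keys; an implementation could
    # instead interpolate between the bracketing entries.
    key = min(TABLE, key=lambda k: (abs(k[0] - distance_m),
                                    abs(k[1] - zoom_step)))
    return TABLE[key]
```

Additional keys such as temperature or lens attitude, as in the later aspects, extend the tuple in the same way.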
  • In an information processing apparatus according to a fourth aspect, the correspondence information indicates a correspondence among the actually measured distance, the position of the zoom lens in the optical axis direction in the imaging unit, the temperature of a region that affects imaging by the imaging unit, and the focal length; the acquisition unit further acquires temperature information indicating the temperature of the region; and the deriving unit derives, using the correspondence information, the focal length corresponding to the actually measured distance, the position information, and the temperature information acquired by the acquisition unit.
  • Therefore, this information processing apparatus can derive the focal length with high accuracy, without time and effort, even if the position of the zoom lens in the optical axis direction changes and the temperature of the region that affects imaging changes, compared to the case where the user inputs the length of a reference image included in the captured image in order to increase the accuracy of the focal length.
  • An information processing apparatus according to a fifth aspect is the information processing apparatus according to the fourth aspect of the present invention, wherein the correspondence information indicates a correspondence among the actually measured distance, the position of the zoom lens in the optical axis direction in the imaging unit, the temperature of the region that affects imaging by the imaging unit, the attitude of the focus lens with respect to the vertical direction, and the focal length; the acquisition unit further acquires focus lens attitude information indicating the attitude of the focus lens with respect to the vertical direction; and the deriving unit derives, using the correspondence information, the focal length corresponding to the actually measured distance, the position information, the temperature information, and the focus lens attitude information acquired by the acquisition unit.
  • Therefore, this information processing apparatus can derive the focal length with high accuracy, without time and effort, even if the position of the zoom lens in the optical axis direction changes, the temperature of the region that affects imaging by the imaging unit changes, and the attitude of the focus lens changes, compared to the case where the user inputs the length of a reference image included in the captured image in order to increase the accuracy of the focal length.
  • An information processing apparatus according to a sixth aspect is the information processing apparatus according to the fifth aspect of the present invention, wherein the correspondence information indicates a correspondence among the actually measured distance, the position of the zoom lens in the optical axis direction in the imaging unit, the temperature of the region that affects imaging by the imaging unit, the attitude of the focus lens with respect to the vertical direction, the attitude of the zoom lens with respect to the vertical direction, and the focal length; the acquisition unit further acquires zoom lens attitude information indicating the attitude of the zoom lens with respect to the vertical direction; and the deriving unit derives, using the correspondence information, the focal length corresponding to the actually measured distance, the position information, the temperature information, the focus lens attitude information, and the zoom lens attitude information acquired by the acquisition unit.
  • Therefore, this information processing apparatus can derive the focal length with high accuracy, without time and effort, even if the position of the zoom lens in the optical axis direction changes, the temperature of the region that affects imaging by the imaging unit changes, the attitude of the focus lens changes, and the attitude of the zoom lens changes, compared to the case where the user inputs the length of a reference image included in the captured image in order to increase the accuracy of the focal length.
  • An information processing apparatus according to a seventh aspect is the information processing apparatus according to the third aspect of the present invention, wherein the correspondence information indicates a correspondence among the actually measured distance, the position of the zoom lens in the optical axis direction in the imaging unit, the attitude of the zoom lens with respect to the vertical direction, and the focal length; the acquisition unit further acquires zoom lens attitude information indicating the attitude of the zoom lens with respect to the vertical direction; and the deriving unit derives, using the correspondence information, the focal length corresponding to the actually measured distance, the position information, and the zoom lens attitude information acquired by the acquisition unit.
  • Therefore, this information processing apparatus can derive the focal length with high accuracy, without time and effort, even if the position of the zoom lens in the optical axis direction changes and the attitude of the zoom lens changes, compared to the case where the user inputs the length of a reference image included in the captured image in order to increase the accuracy of the focal length.
  • An information processing apparatus according to an eighth aspect is the information processing apparatus according to the first aspect of the present invention, wherein the imaging unit further includes a zoom lens; the correspondence information indicates a correspondence among the actually measured distance, the attitude of the zoom lens with respect to the vertical direction, and the focal length; the acquisition unit further acquires zoom lens attitude information indicating the attitude of the zoom lens with respect to the vertical direction; and the deriving unit derives, using the correspondence information, the focal length corresponding to the actually measured distance and the zoom lens attitude information acquired by the acquisition unit.
  • Therefore, this information processing apparatus can derive the focal length with high accuracy, without time and effort, even if the attitude of the zoom lens changes, compared to the case where the user inputs the length of a reference image included in the captured image.
  • An information processing apparatus according to a ninth aspect is the information processing apparatus according to the eighth aspect of the present invention, wherein the correspondence information indicates a correspondence among the actually measured distance, the attitude of the zoom lens with respect to the vertical direction, the attitude of the focus lens with respect to the vertical direction, and the focal length; the acquisition unit further acquires focus lens attitude information indicating the attitude of the focus lens with respect to the vertical direction; and the deriving unit derives, using the correspondence information, the focal length corresponding to the actually measured distance, the zoom lens attitude information, and the focus lens attitude information acquired by the acquisition unit.
  • Therefore, this information processing apparatus can derive the focal length with high accuracy, without time and effort, even if the attitude of the zoom lens changes and the attitude of the focus lens changes, compared to the case where the user inputs the length of a reference image included in the captured image.
  • An information processing apparatus according to a tenth aspect is the information processing apparatus according to the ninth aspect of the present invention, wherein the correspondence information indicates a correspondence among the actually measured distance, the temperature of a region that affects imaging by the imaging unit, the attitude of the zoom lens with respect to the vertical direction, the attitude of the focus lens with respect to the vertical direction, and the focal length; the acquisition unit further acquires temperature information indicating the temperature of the region; and the deriving unit derives, using the correspondence information, the focal length corresponding to the actually measured distance, the temperature information, the zoom lens attitude information, and the focus lens attitude information acquired by the acquisition unit.
  • Therefore, this information processing apparatus can derive the focal length with high accuracy, without time and effort, even if the attitude of the zoom lens changes, the attitude of the focus lens changes, and the temperature of the region that affects imaging by the imaging unit changes, compared to the case where the user inputs the length of a reference image included in the captured image.
  • An information processing apparatus according to an eleventh aspect is the information processing apparatus according to the eighth aspect of the present invention, wherein the correspondence information indicates a correspondence among the actually measured distance, the temperature of a region that affects imaging by the imaging unit, the attitude of the zoom lens with respect to the vertical direction, and the focal length; the acquisition unit further acquires temperature information indicating the temperature of the region; and the deriving unit derives, using the correspondence information, the focal length corresponding to the actually measured distance, the temperature information, and the zoom lens attitude information acquired by the acquisition unit.
  • Therefore, this information processing apparatus can derive the focal length with high accuracy, without time and effort, even if the attitude of the zoom lens changes and the temperature of the region that affects imaging by the imaging unit changes, compared to the case where the user inputs the length of a reference image included in the captured image.
  • An information processing apparatus according to a twelfth aspect is the information processing apparatus according to the third aspect of the present invention, wherein the correspondence information indicates a correspondence among the actually measured distance, the position of the zoom lens in the optical axis direction in the imaging unit, the attitude of the focus lens with respect to the vertical direction, and the focal length; the acquisition unit further acquires focus lens attitude information indicating the attitude of the focus lens with respect to the vertical direction; and the deriving unit derives, using the correspondence information, the focal length corresponding to the actually measured distance, the position information, and the focus lens attitude information acquired by the acquisition unit.
  • Therefore, this information processing apparatus can derive the focal length with high accuracy, without time and effort, even if the position of the zoom lens in the optical axis direction changes and the attitude of the focus lens changes, compared to the case where the user inputs the length of a reference image included in the captured image in order to increase the accuracy of the focal length.
  • An information processing apparatus according to a thirteenth aspect is the information processing apparatus according to the twelfth aspect of the present invention, wherein the correspondence information indicates a correspondence among the actually measured distance, the position of the zoom lens in the optical axis direction in the imaging unit, the attitude of the focus lens with respect to the vertical direction, the attitude of the zoom lens with respect to the vertical direction, and the focal length; the acquisition unit further acquires zoom lens attitude information indicating the attitude of the zoom lens with respect to the vertical direction; and the deriving unit derives, using the correspondence information, the focal length corresponding to the actually measured distance, the position information, the focus lens attitude information, and the zoom lens attitude information acquired by the acquisition unit.
  • Therefore, this information processing apparatus can derive the focal length with high accuracy, without time and effort, even if the position of the zoom lens in the optical axis direction changes, the attitude of the focus lens changes, and the attitude of the zoom lens changes, compared to the case where the user inputs the length of a reference image included in the captured image in order to increase the accuracy of the focal length.
  • An information processing apparatus according to a fourteenth aspect is the information processing apparatus according to the first aspect of the present invention, wherein the correspondence information indicates a correspondence among the actually measured distance, the attitude of the focus lens with respect to the vertical direction, and the focal length; the acquisition unit further acquires focus lens attitude information indicating the attitude of the focus lens with respect to the vertical direction; and the deriving unit derives, using the correspondence information, the focal length corresponding to the actually measured distance and the focus lens attitude information acquired by the acquisition unit.
  • Therefore, this information processing apparatus can derive the focal length with high accuracy, without time and effort, even if the attitude of the focus lens changes, compared to the case where the user inputs the length of a reference image included in the captured image.
  • In an information processing apparatus according to a fifteenth aspect, the correspondence information indicates a correspondence among the actually measured distance, the temperature of a region that affects imaging by the imaging unit, and the focal length; the acquisition unit further acquires temperature information indicating the temperature of the region; and the deriving unit derives, using the correspondence information, the focal length corresponding to the actually measured distance and the temperature information acquired by the acquisition unit.
  • Therefore, this information processing apparatus can derive the focal length with high accuracy, without time and effort, even if the temperature of the region that affects imaging by the imaging unit changes, compared to the case where the user inputs the length of a reference image included in the captured image.
  • An information processing apparatus according to a sixteenth aspect is the information processing apparatus according to the fifteenth aspect of the present invention, wherein the correspondence information indicates a correspondence among the actually measured distance, the temperature of the region that affects imaging by the imaging unit, the attitude of the focus lens with respect to the vertical direction, and the focal length; the acquisition unit further acquires focus lens attitude information indicating the attitude of the focus lens with respect to the vertical direction; and the deriving unit derives, using the correspondence information, the focal length corresponding to the actually measured distance, the temperature information, and the focus lens attitude information acquired by the acquisition unit.
  • Therefore, this information processing apparatus can derive the focal length with high accuracy, without time and effort, even if the attitude of the focus lens changes and the temperature of the region that affects imaging by the imaging unit changes, compared to the case where the user inputs the length of a reference image included in the captured image.
  • An information processing apparatus according to a seventeenth aspect is the information processing apparatus according to any one of the first to sixteenth aspects of the present invention, wherein the acquisition unit acquires a first captured image obtained by the imaging unit imaging the subject from a first imaging position and a second captured image obtained by imaging the subject from a second imaging position different from the first imaging position, and acquires the actually measured distance to the subject obtained by the measurement unit emitting the directional light toward the subject from a position corresponding to the first imaging position and receiving the reflected light of the directional light; and the deriving unit derives an imaging position distance, which is the distance between the first imaging position and the second imaging position, based on the focal length derived using the correspondence information.
  • Therefore, this information processing apparatus can derive the imaging position distance with high accuracy and with less time and effort than in the case where the user inputs the length of a reference image included in the captured image in order to increase the accuracy of the focal length.
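  • The patent text does not give the formula here, but the role of the focal length can be illustrated with the standard pinhole stereo relation (an assumption for illustration, not the patent's stated method): a parallax of d pixels between the two captured images of a subject at distance D corresponds to an imaging position distance (baseline) B of roughly D · d · p / f, where p is the pixel pitch and f the derived focal length:

```python
def imaging_position_distance(distance_m: float, disparity_px: float,
                              pixel_pitch_m: float, focal_len_m: float) -> float:
    # Classic stereo relation: sensor-plane disparity = f * B / D,
    # solved for the baseline B between the two imaging positions.
    return distance_m * disparity_px * pixel_pitch_m / focal_len_m
```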
  • An information processing apparatus according to an eighteenth aspect is the information processing apparatus according to any one of the first to seventeenth aspects of the present invention, wherein the deriving unit derives the size of a real space region corresponding to an interval between a plurality of pixels specified in an image obtained by the imaging unit imaging the imaging range irradiated with the directional light, based on the focal length derived using the correspondence information, the actually measured distance acquired by the acquisition unit, and the interval between the plurality of pixels.
  • Therefore, this information processing apparatus can derive the dimensions of the real space region with high accuracy and with less time and effort than in the case where the user inputs the length of a reference image included in the captured image in order to increase the accuracy of the focal length.
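  • As a hedged illustration of this dimension derivation (the pinhole model and all names are assumptions, not quoted from the patent), an interval of n pixels on the sensor for a subject at measured distance D corresponds to a real-space length of roughly D · n · p / f, where p is the pixel pitch and f the derived focal length:

```python
def real_space_length(distance_m: float, interval_px: float,
                      pixel_pitch_m: float, focal_len_m: float) -> float:
    # Pinhole projection: a sensor-plane length (interval_px * pixel_pitch_m)
    # scales by distance / focal length to a length in the subject plane.
    return distance_m * interval_px * pixel_pitch_m / focal_len_m

# Example: pixels 200 px apart, 3 um pitch, subject 10 m away, f = 14 mm
# -> real_space_length(10.0, 200, 3e-6, 14e-3) is about 0.43 m.
```

An error in the focal length scales the derived dimension directly, which is why deriving f from the measured distance matters here.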
  • An information processing method according to a nineteenth aspect of the present invention includes: acquiring an actually measured distance, which is a distance measured by a measurement unit included in a distance measuring device comprising an imaging unit that has a focus lens and images a subject, and a measurement unit that measures the distance to the subject by emitting directional light toward the subject and receiving reflected light of the directional light; and deriving the focal length corresponding to the acquired actually measured distance, using correspondence information indicating a correspondence between the actually measured distance and the focal length of the focus lens.
  • Therefore, the information processing method according to the nineteenth aspect makes it possible to derive the focal length with high accuracy and with less time and effort than in the case where the user inputs the length of a reference image included in the captured image in order to increase the accuracy of the focal length.
  • A program according to a twentieth aspect of the present invention causes a computer to function as: an acquisition unit that acquires an actually measured distance, which is a distance measured by a measurement unit included in a distance measuring device comprising an imaging unit that has a focus lens and images a subject, and a measurement unit that measures the distance to the subject by emitting directional light toward the subject and receiving reflected light of the directional light; and a deriving unit that derives the focal length corresponding to the acquired actually measured distance, using correspondence information indicating a correspondence between the actually measured distance and the focal length of the focus lens in the imaging unit.
  • Therefore, the program according to the twentieth aspect makes it possible to derive the focal length with high accuracy and with less effort than in the case where the user inputs the length of a reference image included in the captured image in order to increase the accuracy of the focal length.
  • According to one embodiment of the present invention, the effect is obtained that the focal length can be derived with high accuracy without much effort, compared to the case where the user inputs the length of a reference image included in the captured image in order to increase the accuracy of the focal length.
  • FIG. 6 is a front view showing an example of an appearance of a distance measuring device according to the first to sixth embodiments.
  • FIG. 6 is a block diagram illustrating an example of a hardware configuration of a distance measuring device according to first to fifth embodiments.
  • A time chart showing an example of a measurement sequence performed by the distance measuring device according to the first to seventh embodiments.
  • A time chart showing an example of a laser trigger, a light emission signal, a light reception signal, and a count signal required for one measurement by the distance measuring device according to the first to seventh embodiments.
  • FIG. 6 is a block diagram illustrating an example of a hardware configuration of a main control unit included in the distance measuring device according to the first to fifth embodiments.
  • A block diagram showing an example of the structure of the focal length derivation table according to the first embodiment.
  • FIG. 11 is a block diagram showing an example of main functions of a CPU according to the first to seventh embodiments.
  • FIG. 6 is a schematic plan view showing an example of a positional relationship between a distance measuring apparatus according to the first to fifth embodiments and a subject.
  • FIG. 6 is a conceptual diagram illustrating an example of a positional relationship among a part of a subject, a first captured image, a second captured image, a principal point of an imaging lens at a first imaging position, and a principal point of the imaging lens at a second imaging position.
  • FIG. 10 is a diagram for explaining a method for deriving irradiation position real space coordinates according to the first to seventh embodiments.
  • FIG. 10 is a diagram for explaining a method for deriving irradiation position pixel coordinates according to the first to seventh embodiments.
  • A flowchart showing an example of the flow of the dimension derivation process according to the first embodiment.
  • FIG. 16 is a conceptual diagram illustrating an example of a subject included in the imaging range of the imaging device according to the first to seventh embodiments.
  • A screen diagram showing an example of a screen in which an irradiation position mark and a measured distance are superimposed and displayed on a captured image obtained by imaging with the imaging device according to the first embodiment.
  • FIG. 23 is a flowchart showing an example of the flow of a first derivation process included in the imaging position distance derivation process according to the first embodiment, continued from the flowchart shown in FIG. 22.
  • A flowchart showing an example of the flow of a second derivation process included in the imaging position distance derivation process according to the first embodiment.
  • FIG. 25 is a flowchart showing an example of the flow of a second derivation process included in the imaging position distance derivation process according to the first embodiment, continued from the flowchart shown in FIG. 24.
  • A screen diagram showing an example of a screen in which a first captured image acquired by imaging with the imaging device according to the first embodiment is displayed.
  • A screen diagram showing an example of a screen in which an irradiation position mark and a first measured distance are superimposed and displayed on the first captured image obtained by imaging with the imaging device according to the first embodiment.
  • A screen diagram showing an example of a screen in which a first captured image obtained by imaging with the imaging device according to the first embodiment is displayed with a pixel of interest and first to third pixels specified.
  • A screen diagram showing an example of a screen in which the imaging position distance is superimposed and displayed on a second captured image obtained by imaging with the imaging device according to the first embodiment.
  • FIG. 23 is a flowchart showing an example of a flow of a first derivation process included in an imaging position distance derivation process according to the second embodiment, and is a continuation of the flowchart shown in FIG. 22.
  • FIG. 25 is a flowchart showing an example of the flow of a second derivation process included in the imaging position distance derivation process according to the second embodiment, continued from the flowchart shown in FIG. 24.
  • A block diagram showing a modification of the structure of the focal length derivation table.
  • FIG. 23 is a flowchart showing an example of the flow of a first derivation process included in the imaging position distance derivation process according to the fifth embodiment, continued from the flowchart shown in FIG. 22.
  • A screen diagram showing an example of a screen in which the final imaging position distance is superimposed and displayed on a second captured image acquired by the imaging device according to the fifth embodiment.
  • A flowchart showing an example of the flow of the three-dimensional coordinate derivation process according to the fifth embodiment.
  • FIG. 10 is a front view showing a modification of the appearance of the distance measuring device according to the first to sixth embodiments.
  • A block diagram showing a modification of the structure of the focal length derivation table.
  • the distance from the distance measuring device 10A to the subject to be measured is also simply referred to as “distance” or “distance to the subject”.
  • the angle of view with respect to the subject is also simply referred to as “angle of view”.
  • A distance measuring device 10A, which is an example of an information processing device according to the technology of the present disclosure, includes a distance measurement unit 12 and an imaging device 14.
  • The distance measurement unit 12 and a distance measurement control unit 68 (see FIG. 2) described later are an example of a measurement unit according to the technology of the present disclosure, and the imaging device 14 is an example of an imaging unit according to the technology of the present disclosure.
  • the imaging device 14 includes a lens unit 16 and an imaging device body 18, and the lens unit 16 is detachably attached to the imaging device body 18.
  • a hot shoe 20 is provided on the left side of the image pickup apparatus main body 18 when viewed from the front, and the distance measuring unit 12 is detachably attached to the hot shoe 20.
  • The distance measuring device 10A has a ranging system function that performs distance measurement by causing the distance measurement unit 12 to emit a laser beam for ranging, and an imaging system function that obtains a captured image by causing the imaging device 14 to image a subject.
  • the captured image is also simply referred to as “image”.
  • In the present embodiment, it is assumed that the optical axis L1 (see FIG. 2) of the laser light emitted from the distance measurement unit 12 is at the same height as the optical axis L2 (see FIG. 2) of the lens unit 16.
  • The distance measuring device 10A performs one measurement sequence (see FIG. 3) in response to one instruction by operating the ranging system function, and finally outputs one distance per measurement sequence.
  • the ranging device 10A has a still image capturing mode and a moving image capturing mode as operation modes of the image capturing system function.
  • the still image capturing mode is an operation mode for capturing a still image
  • the moving image capturing mode is an operation mode for capturing a moving image.
  • the still image capturing mode and the moving image capturing mode are selectively set according to a user instruction.
  • the distance measuring unit 12 includes an emitting unit 22, a light receiving unit 24, and a connector 26.
  • the connector 26 can be connected to the hot shoe 20, and the distance measuring unit 12 operates under the control of the imaging apparatus main body 18 with the connector 26 connected to the hot shoe 20.
  • the emission unit 22 includes an LD (Laser Diode) 30, a condenser lens (not shown), an objective lens 32, and an LD driver 34.
  • The condenser lens and the objective lens 32 are provided along the optical axis L1 of the laser light emitted from the LD 30, and are arranged in this order from the LD 30 side along the optical axis L1.
  • the LD 30 emits laser light for distance measurement, which is an example of directional light according to the technology of the present disclosure.
  • The laser light emitted by the LD 30 is colored laser light; for example, within a range of about several meters from the emission unit 22, the irradiation position of the laser light is visually recognizable in real space and is also visually recognizable in a captured image obtained by imaging with the imaging device 14.
  • the condensing lens condenses the laser light emitted by the LD 30 and passes the condensed laser light.
  • the objective lens 32 faces the subject and emits laser light that has passed through the condenser lens to the subject.
  • the LD driver 34 is connected to the connector 26 and the LD 30 and drives the LD 30 in accordance with an instruction from the imaging apparatus main body 18 to emit laser light.
  • The light receiving unit 24 includes a PD (Photo Diode) 36, an objective lens 38, and a light reception signal processing circuit 40.
  • The objective lens 38 is disposed on the light receiving surface side of the PD 36, and reflected laser light, that is, the laser light emitted by the emission unit 22 and reflected by the subject, is incident on the objective lens 38.
  • the objective lens 38 passes the reflected laser light and guides it to the light receiving surface of the PD 36.
  • the PD 36 receives the reflected laser light that has passed through the objective lens 38, and outputs an analog signal corresponding to the amount of received light as a light reception signal.
  • The light reception signal processing circuit 40 is connected to the connector 26 and the PD 36; it amplifies the light reception signal input from the PD 36 with an amplifier (not shown) and performs A/D (Analog/Digital) conversion on the amplified signal. The light reception signal processing circuit 40 then outputs the digitized light reception signal to the imaging apparatus body 18.
  • the imaging device 14 includes mounts 42 and 44.
  • the mount 42 is provided in the imaging apparatus main body 18, and the mount 44 is provided in the lens unit 16.
  • the lens unit 16 is attached to the imaging apparatus main body 18 in a replaceable manner by coupling the mount 44 to the mount 42.
  • the lens unit 16 includes a focus lens 50, a zoom lens 52, a focus lens moving mechanism 53, a zoom lens moving mechanism 54, and a motor 56.
  • Subject light that is reflected light from the subject is incident on the focus lens 50.
  • the focus lens 50 passes the subject light and guides it to the zoom lens 52.
  • a focus lens 50 is attached to the focus lens moving mechanism 53 so as to be slidable with respect to the optical axis L2.
  • a motor 57 is connected to the focus lens moving mechanism 53, and the focus lens moving mechanism 53 receives the power of the motor 57 and slides the focus lens 50 along the direction of the optical axis L2.
  • a zoom lens 52 is attached to the zoom lens moving mechanism 54 so as to be slidable with respect to the optical axis L2. Further, a motor 56 is connected to the zoom lens moving mechanism 54, and the zoom lens moving mechanism 54 slides the zoom lens 52 along the optical axis L2 direction by receiving the power of the motor 56.
  • the motors 56 and 57 are connected to the image pickup apparatus main body 18 via mounts 42 and 44, and their driving is controlled in accordance with commands from the image pickup apparatus main body 18.
  • In the present embodiment, stepping motors are used as an example of the motors 56 and 57. The motors 56 and 57 therefore operate in synchronization with pulse power supplied in accordance with commands from the imaging apparatus main body 18.
  • The imaging device main body 18 includes an imaging element 60, a main control unit 62, an image memory 64, an image processing unit 66, a distance measurement control unit 68, motor drivers 72 and 73, an imaging element driver 74, an image signal processing circuit 76, and a display control unit 78.
  • the imaging device main body 18 includes a touch panel I / F (Interface) 79, a reception I / F 80, and a media I / F 82.
  • The main control unit 62, the image memory 64, the image processing unit 66, the distance measurement control unit 68, the motor drivers 72 and 73, the imaging element driver 74, the image signal processing circuit 76, and the display control unit 78 are connected to the bus line 84.
  • a touch panel I / F 79, a reception I / F 80, and a media I / F 82 are also connected to the bus line 84.
  • the imaging element 60 is a CMOS (Complementary Metal Oxide Semiconductor) type image sensor, and includes a color filter (not shown).
  • The color filter includes a G filter corresponding to G (green), which contributes most to obtaining a luminance signal, an R filter corresponding to R (red), and a B filter corresponding to B (blue).
  • the imaging element 60 includes an imaging pixel group 60A including a plurality of imaging pixels 60A1 arranged in a matrix. Each of the imaging pixels 60A1 is assigned with any one of an R filter, a G filter, and a B filter included in the color filter, and the imaging pixel group 60A captures the subject by receiving the subject light.
  • the subject light that has passed through the zoom lens 52 is imaged on the imaging surface 60B, which is the light receiving surface of the imaging device 60, and charges corresponding to the amount of received light of the subject light are accumulated in the imaging pixel 60A1.
  • the imaging element 60 outputs the electric charge accumulated in each imaging pixel 60A1 as an image signal indicating an image corresponding to a subject image obtained by imaging subject light on the imaging surface 60B.
  • The main control unit 62 controls the entire distance measuring device 10A via the bus line 84.
  • the motor driver 72 is connected to the motor 56 via the mounts 42 and 44, and controls the motor 56 in accordance with instructions from the main control unit 62.
  • the motor driver 73 is connected to the motor 57 via the mounts 42 and 44, and controls the motor 57 in accordance with instructions from the main control unit 62.
  • the imaging device 14 has a view angle changing function.
  • the angle of view changing function is a function of changing the angle of view by moving the zoom lens 52.
  • The angle-of-view changing function is realized by the zoom lens 52, the zoom lens moving mechanism 54, the motor 56, the motor driver 72, and the main control unit 62.
  • In the present embodiment, an optical angle-of-view changing function using the zoom lens 52 is illustrated, but the technology of the present disclosure is not limited to this; an electronic angle-of-view changing function that does not use the zoom lens 52 may be employed.
  • the image sensor driver 74 is connected to the image sensor 60 and supplies drive pulses to the image sensor 60 under the control of the main control unit 62.
  • Each imaging pixel 60A1 included in the imaging pixel group 60A is driven according to a driving pulse supplied to the imaging element 60 by the imaging element driver 74.
  • the image signal processing circuit 76 is connected to the image sensor 60, and reads an image signal for one frame from the image sensor 60 for each imaging pixel 60A1 under the control of the main control unit 62.
  • the image signal processing circuit 76 performs various processes such as correlated double sampling processing, automatic gain adjustment, and A / D conversion on the read image signal.
  • The image signal processing circuit 76 outputs the image signal digitized by these processes to the image memory 64 frame by frame at a specific frame rate (for example, several tens of frames per second) defined by a clock signal supplied from the main control unit 62.
  • the image memory 64 temporarily holds the image signal input from the image signal processing circuit 76.
  • the imaging apparatus body 18 includes a display unit 86, a touch panel 88, a receiving device 90, and a memory card 92.
  • the display unit 86 is connected to the display control unit 78 and displays various information under the control of the display control unit 78.
  • the display unit 86 is realized by, for example, an LCD (Liquid Crystal Display).
  • the touch panel 88 is superimposed on the display screen of the display unit 86, and accepts contact with a user's finger or an indicator such as a touch pen.
  • the touch panel 88 is connected to the touch panel I / F 79 and outputs position information indicating the position touched by the indicator to the touch panel I / F 79.
  • the touch panel I / F 79 operates the touch panel 88 according to an instruction from the main control unit 62 and outputs position information input from the touch panel 88 to the main control unit 62.
  • In the present embodiment, the touch panel 88 is illustrated, but the technology is not limited thereto; a mouse (not shown) connected to the distance measuring device 10A may be used instead of the touch panel 88.
  • the touch panel 88 and a mouse may be used in combination.
  • the reception device 90 includes a measurement imaging button 90A, an imaging button (not shown), an imaging system operation mode switching button 90B, a wide-angle instruction button 90C, and a telephoto instruction button 90D.
  • the receiving device 90 also includes a dimension derivation button 90E, an imaging position distance derivation button 90F, a three-dimensional coordinate derivation button 90G, and the like, and accepts various instructions from the user.
  • the reception device 90 is connected to the reception I / F 80, and the reception I / F 80 outputs an instruction content signal indicating the content of the instruction received by the reception device 90 to the main control unit 62.
  • the measurement imaging button 90A is a press-type button that receives an instruction to start measurement and imaging.
  • the imaging button is a push button that receives an instruction to start imaging.
  • the imaging system operation mode switching button 90B is a push-type button that receives an instruction to switch between the still image capturing mode and the moving image capturing mode.
  • The wide-angle instruction button 90C is a press-type button that receives an instruction to change the angle of view to the wide-angle side; the amount of change to the wide-angle side depends, within an allowable range, on how long the wide-angle instruction button 90C is kept pressed.
  • The telephoto instruction button 90D is a press-type button that receives an instruction to change the angle of view to the telephoto side; the amount of change to the telephoto side depends, within an allowable range, on how long the telephoto instruction button 90D is kept pressed.
  • the dimension derivation button 90E is a push button that receives an instruction to start a dimension derivation process described later.
  • the imaging position distance derivation button 90F is a push button that receives an instruction to start an imaging position distance derivation process described later.
  • the three-dimensional coordinate derivation button 90G is a push button that receives an instruction to start an imaging position distance derivation process described later and a three-dimensional coordinate derivation process described later.
  • In the following, for convenience of explanation, when it is not necessary to distinguish between the measurement imaging button 90A and the imaging button, they are referred to as "release buttons"; likewise, when there is no need to distinguish between the wide-angle instruction button 90C and the telephoto instruction button 90D, they are referred to as "angle-of-view instruction buttons".
  • the manual focus mode and the autofocus mode are selectively set according to a user instruction via the reception device 90.
  • the release button receives a two-stage pressing operation of an imaging preparation instruction state and an imaging instruction state.
  • The imaging preparation instruction state refers to, for example, a state where the release button is pressed from the standby position to an intermediate position (half-pressed position), and the imaging instruction state refers to a state where the release button is pressed down past the intermediate position to a final pressed position (fully pressed position).
  • Hereinafter, the state where the release button is pressed from the standby position to the half-pressed position is referred to as the "half-pressed state", and the state where the release button is pressed from the standby position to the fully pressed position is referred to as the "fully pressed state".
  • the imaging condition is adjusted by pressing the release button halfway, and then the main exposure is performed when the release button is fully pressed.
  • When the release button is pressed halfway, exposure adjustment is performed by the AE (Automatic Exposure) function and then focus adjustment is performed by the AF (Auto Focus) function; when the release button is subsequently fully pressed, the main exposure is performed.
  • the main exposure refers to exposure performed to obtain a still image file described later.
  • Hereinafter, "exposure" refers not only to the main exposure but also to exposure performed to obtain a live view image described later and exposure performed to obtain a moving image file described later.
  • In the present embodiment, the main control unit 62 performs exposure adjustment by the AE function and focus adjustment by the AF function. The case where both exposure adjustment and focus adjustment are performed is illustrated, but the technology of the present disclosure is not limited to this; only exposure adjustment or only focus adjustment may be performed.
  • the image processing unit 66 acquires an image signal for each frame from the image memory 64 at a specific frame rate, and performs various processes such as gamma correction, luminance color difference conversion, and compression processing on the acquired image signal.
  • the image processing unit 66 outputs an image signal obtained by performing various processes to the display control unit 78 frame by frame at a specific frame rate. Further, the image processing unit 66 outputs an image signal obtained by performing various processes to the main control unit 62 in response to a request from the main control unit 62.
  • the display control unit 78 outputs the image signal input from the image processing unit 66 to the display unit 86 at a specific frame rate for each frame under the control of the main control unit 62.
  • the display unit 86 displays images, character information, and the like.
  • the display unit 86 displays the image indicated by the image signal input at a specific frame rate from the display control unit 78 as a live view image.
  • the live view image is a continuous frame image obtained by continuously capturing images, and is also referred to as a through image.
  • the display unit 86 also displays a still image that is a single frame image obtained by imaging in a single frame. Further, the display unit 86 displays a playback image, a menu screen, and the like in addition to the live view image.
  • In the present embodiment, the image processing unit 66 and the display control unit 78 are each realized by an ASIC (Application Specific Integrated Circuit), but the technology of the present disclosure is not limited to this.
  • each of the image processing unit 66 and the display control unit 78 may be realized by an FPGA (Field-Programmable Gate Array).
  • the image processing unit 66 may be realized by a computer including a CPU (Central Processing Unit), a ROM (Read Only Memory), and a RAM (Random Access Memory).
  • the display control unit 78 may also be realized by a computer including a CPU, a ROM, and a RAM.
  • each of the image processing unit 66 and the display control unit 78 may be realized by a combination of a hardware configuration and a software configuration.
  • the main control unit 62 controls the image sensor driver 74 to cause the image sensor 60 to perform exposure for one frame when an instruction to capture a still image is received by the release button in the still image capturing mode.
  • The main control unit 62 acquires from the image processing unit 66 an image signal obtained by performing the exposure for one frame, performs compression processing on the acquired image signal, and generates a still image file in a specific still image format.
  • the specific still image format refers to, for example, JPEG (Joint Photographic Experts Group).
  • When an instruction to capture a moving image is received by the release button in the moving image capturing mode, the main control unit 62 acquires, frame by frame at the specific frame rate, the image signal output from the image processing unit 66 to the display control unit 78 as a live view image. Then, the main control unit 62 performs compression processing on the image signal acquired from the image processing unit 66 to generate a moving image file in a specific moving image format.
  • the specific moving image format refers to, for example, MPEG (Moving Picture Experts Group).
  • the media I / F 82 is connected to the memory card 92, and records and reads image files from and to the memory card 92 under the control of the main control unit 62. Note that the image file read from the memory card 92 by the media I / F 82 is decompressed by the main control unit 62 and displayed on the display unit 86 as a reproduced image.
  • the main control unit 62 associates the distance information input from the distance measurement control unit 68 with the image file, and stores it in the memory card 92 via the media I / F 82.
  • The distance information is read from the memory card 92 through the media I/F 82 by the main control unit 62 together with the image file, and the distance indicated by the read distance information is displayed on the display unit 86 together with the reproduced image based on the associated image file.
  • the distance measurement control unit 68 controls the distance measurement unit 12 under the control of the main control unit 62.
  • the ranging control unit 68 is realized by an ASIC, but the technology of the present disclosure is not limited to this.
  • the distance measurement control unit 68 may be realized by an FPGA.
  • the distance measurement control unit 68 may be realized by a computer including a CPU, a ROM, and a RAM. Further, the distance measurement control unit 68 may be realized by a combination of a hardware configuration and a software configuration.
  • The hot shoe 20 is connected to the bus line 84, and under the control of the main control unit 62 the distance measurement control unit 68 controls the LD driver 34 to control the emission of laser light by the LD 30 and acquires a light reception signal from the light reception signal processing circuit 40.
  • the distance measurement control unit 68 derives the distance to the subject based on the timing at which the laser light is emitted and the timing at which the light reception signal is acquired, and outputs distance information indicating the derived distance to the main control unit 62.
  • the measurement of the distance to the subject by the distance measurement control unit 68 will be described in more detail.
  • one measurement sequence by the distance measuring device 10A is defined by a voltage adjustment period, an actual measurement period, and a pause period.
  • the voltage adjustment period is a period for adjusting the drive voltage of the LD 30 and the PD 36.
  • The actual measurement period is a period during which the distance to the subject is actually measured. In the actual measurement period, the operation of causing the LD 30 to emit laser light and causing the PD 36 to receive the reflected laser light is repeated several hundred times, and the distance to the subject is derived based on the timing at which the laser light is emitted and the timing at which the light reception signal is acquired.
  • the pause period is a period for stopping the driving of the LD 30 and the PD 36. Therefore, in one measurement sequence, the distance to the subject is measured several hundred times.
  • each of the voltage adjustment period, the actual measurement period, and the rest period is set to several hundred milliseconds.
  • the distance measurement control unit 68 is supplied with a count signal that defines the timing at which the distance measurement control unit 68 gives an instruction to emit laser light and the timing at which the light reception signal is acquired.
  • In the present embodiment, the count signal is generated by the main control unit 62 and supplied to the distance measurement control unit 68, but the technology is not limited thereto; the count signal may instead be generated by a dedicated circuit, such as a time counter connected to the bus line 84, and supplied to the distance measurement control unit 68.
  • the ranging control unit 68 outputs a laser trigger for emitting laser light to the LD driver 34 in accordance with the count signal.
  • the LD driver 34 drives the LD 30 to emit laser light according to the laser trigger.
  • the laser light emission time is set to several tens of nanoseconds.
  • The time until laser light emitted by the emission unit 22 toward a subject several kilometers ahead is received by the PD 36 as reflected laser light is "several kilometers × 2 / speed of light", that is, several microseconds. Therefore, in order to measure the distance to a subject several kilometers away, as shown in FIG. 3 as an example, a minimum of several microseconds is required per measurement.
  • the measurement time of one time is set to several milliseconds.
  • Since the round-trip time of the laser light differs depending on the distance to the subject, the measurement time per measurement may be varied according to the assumed distance.
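  • To make the round-trip arithmetic concrete, here is a minimal time-of-flight helper (an illustration only, not code from the patent):

```python
LIGHT_SPEED_M_S = 299_792_458.0

def round_trip_time_s(distance_m: float) -> float:
    # Time for a laser pulse to reach the subject and return.
    return 2.0 * distance_m / LIGHT_SPEED_M_S

def distance_from_round_trip_m(t_s: float) -> float:
    return LIGHT_SPEED_M_S * t_s / 2.0

# round_trip_time_s(1000.0) is about 6.7e-6 s: a subject one kilometre
# away needs several microseconds before the reflected light arrives,
# matching the minimum measurement time described above.
```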
  • The distance measurement control unit 68 derives the distance to the subject based on the measurement values obtained from the several hundred measurements in one measurement sequence, for example by analyzing a histogram of those measurement values.
  • In the histogram, the horizontal axis is the distance to the subject and the vertical axis is the number of measurements; the distance corresponding to the largest number of measurements is derived by the distance measurement control unit 68 as the distance measurement result.
  • The histogram shown in FIG. 5 is merely an example; a histogram may be generated based on the round-trip time of the laser light (the elapsed time from light emission to light reception), or half of the round-trip time, instead of the distance to the subject.
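  • A minimal sketch of this histogram-based derivation (the bin width and all names are assumptions for illustration): bin the several hundred per-pulse distance values and report the centre of the most frequent bin as the result of the measurement sequence.

```python
from collections import Counter

def distance_from_measurements(distances_m, bin_width_m: float = 0.05) -> float:
    # Histogram the per-pulse distances and return the centre of the
    # most frequent bin as the result of one measurement sequence.
    bins = Counter(round(d / bin_width_m) for d in distances_m)
    most_common_bin, _count = bins.most_common(1)[0]
    return most_common_bin * bin_width_m
```

The same code works unchanged if the inputs are round-trip times rather than distances, as noted above.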
  • The main control unit 62 includes a CPU 100, which is an example of the acquisition unit and the deriving unit according to the technology of the present disclosure, a primary storage unit 102, and a secondary storage unit 104.
  • the CPU 100 controls the entire distance measuring device 10A.
  • the primary storage unit 102 is a volatile memory used as a work area or the like when executing various programs.
  • An example of the primary storage unit 102 is a RAM.
  • the secondary storage unit 104 is a non-volatile memory that stores in advance a control program for controlling the operation of the distance measuring apparatus 10A, various parameters, or the like. Examples of the secondary storage unit 104 include an EEPROM (Electrically Erasable Programmable Read Only Memory) or a flash memory.
  • the CPU 100, the primary storage unit 102, and the secondary storage unit 104 are connected to each other via the bus line 84.
  • the secondary storage unit 104 stores a size derivation program 105A, an imaging position distance derivation program 106A, a three-dimensional coordinate derivation program 108A, and a focal length derivation table 109A.
  • the dimension derivation program 105A and the imaging position distance derivation program 106A are examples of programs according to the technique of the present disclosure.
  • the focal length derivation table 109A is an example of correspondence information according to the technology of the present disclosure.
  • the focal length derivation table 109A is a table showing the correspondence between the actually measured distance and the focal length of the focus lens 50.
  • the actually measured distance refers to the distance to the subject measured by using the ranging system function, that is, the distance to the subject measured by the distance measurement unit 12 and the distance measurement control unit 68.
  • the focal length of the focus lens 50 is associated with each of a plurality of derivation distances.
  • the derivation distance refers to the distance from the distance measuring device 10A to the subject.
  • the derivation distance is a parameter to be compared with the actually measured distance.
  • the focal length of the focus lens 50 is simply referred to as “focal length”.
  • in the focal length derivation table 109A, focal lengths are associated with the derivation distances as follows: 1 meter → 7 millimeters; 2 meters → 8 millimeters; 3 meters → 10 millimeters; 5 meters → 12 millimeters; 10 meters → 14 millimeters; 30 meters → 16 millimeters; infinity → 18 millimeters.
  • the focal length derivation table 109A is a table derived from at least one of, for example, tests using the actual distance measuring device 10A and computer simulations based on the design specifications of the distance measuring device 10A.
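  • a minimal Python sketch of such a table lookup (the table values are those listed above; the linear interpolation between derivation distances anticipates the interpolation method mentioned later for step 209 and is one of the options named there):

```python
import math

# (derivation distance in meters, focal length in millimeters),
# taken from the focal length derivation table 109A described above.
FOCAL_LENGTH_TABLE = [
    (1.0, 7.0), (2.0, 8.0), (3.0, 10.0),
    (5.0, 12.0), (10.0, 14.0), (30.0, 16.0),
    (math.inf, 18.0),
]

def derive_focal_length(measured_distance_m):
    """Return the focal length for a measured distance.

    Distances at or below the smallest derivation distance use its entry;
    distances between two derivation distances are linearly interpolated;
    distances beyond 30 m fall back to the infinity entry.
    """
    for (d0, f0), (d1, f1) in zip(FOCAL_LENGTH_TABLE, FOCAL_LENGTH_TABLE[1:]):
        if measured_distance_m <= d0:
            return f0
        if measured_distance_m <= d1:
            if math.isinf(d1):
                return f1
            ratio = (measured_distance_m - d0) / (d1 - d0)
            return f0 + ratio * (f1 - f0)
    return FOCAL_LENGTH_TABLE[-1][1]

print(derive_focal_length(4.0))  # 11.0 mm (between the 3 m and 5 m entries)
```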
  • the CPU 100 reads the dimension derivation program 105A, the imaging position distance derivation program 106A, and the three-dimensional coordinate derivation program 108A from the secondary storage unit 104.
  • the CPU 100 expands the read dimension derivation program 105A, the imaging position distance derivation program 106A, and the three-dimensional coordinate derivation program 108A in the primary storage unit 102.
  • the CPU 100 executes a dimension deriving program 105A, an imaging position distance deriving program 106A, and a three-dimensional coordinate deriving program 108A developed in the primary storage unit 102.
  • the CPU 100 operates as an acquisition unit 110A and a deriving unit 111A as illustrated in FIG. 8 as an example by executing at least one of the dimension deriving program 105A and the imaging position distance deriving program 106A.
  • the acquisition unit 110A acquires the actually measured distance measured by using the ranging system function.
  • the deriving unit 111A derives a focal length corresponding to the actually measured distance acquired by the acquiring unit 110A using the focal length deriving table 109A.
  • the distance measuring device 10A is provided with a dimension deriving function.
  • the dimension derivation function is a function realized by the CPU 100 executing the dimension derivation program 105A and thereby operating as the acquisition unit 110A and the derivation unit 111A.
  • the dimension derivation function refers to a function that derives the length LM of an area in real space included in the subject based on the addresses u1 and u2 of designated pixels, on the distance L to the subject measured by the distance measurement unit 12 and the distance measurement control unit 68, and the like, or that derives an area based on the length LM.
  • the distance L to the subject indicates an actually measured distance.
  • the distance L to the subject is simply referred to as “distance L”.
  • the length LM of the area in real space included in the subject is simply referred to as the "length LM".
  • the “designated pixel” refers to a pixel in the image sensor 60 corresponding to, for example, two points designated on the captured image by the user.
  • the length LM is calculated, for example, by the following Equation (1).
  • in Equation (1), p is the pitch between pixels included in the image sensor 60, u1 and u2 are the addresses of the two pixels designated by the user, and f0 is the focal length.
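  • Equation (1) itself is not reproduced in this text; the variables named above, however, fit the standard pinhole-camera relation LM = |u1 − u2| × p × L / f0, and the following Python sketch assumes that form:

```python
def derive_length(u1, u2, pixel_pitch_m, distance_m, focal_length_m):
    """Real-space length spanned by two designated pixel addresses.

    Assumes the standard pinhole relation: a span of |u1 - u2| pixels of
    pitch p, seen at distance L through a lens of focal length f0,
    corresponds to |u1 - u2| * p * L / f0 in the object plane.
    """
    return abs(u1 - u2) * pixel_pitch_m * distance_m / focal_length_m

# Example: 500 pixels apart at 3 um pitch, subject 2 m away, 10 mm lens.
print(derive_length(100, 600, 3e-6, 2.0, 10e-3))  # 0.3 (meters)
```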
  • Equation (1) assumes that the object whose dimension is to be derived is imaged squarely facing the focus lens 50 in front view. Therefore, in the distance measuring device 10A, when the subject including the object whose dimension is to be derived is captured without directly facing the focus lens 50 in front view, a projective transformation process is performed.
  • the projective transformation process refers to, for example, a process of converting the captured image into a facing image, based on a quadrangular image included in the captured image, using a known technique such as an affine transformation.
  • the facing image refers to an image in a state of facing the focus lens 50 in front view.
  • the pixel addresses u1 and u2 in the image sensor 60 are then designated via the facing image, and the length LM is derived by Equation (1).
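  • a minimal sketch of such a projective transformation using OpenCV (an illustration only: the patent names no library, and a perspective warp is used here because mapping a trapezoid to a facing rectangle is projective rather than purely affine):

```python
import cv2
import numpy as np

def to_facing_image(captured, quad_corners, out_w, out_h):
    """Warp a captured image so a quadrangle in it becomes a front-facing
    rectangle, as in the projective transformation process.

    quad_corners: four (x, y) points of the quadrangle in the captured
    image, ordered top-left, top-right, bottom-right, bottom-left.
    """
    src = np.float32(quad_corners)
    dst = np.float32([(0, 0), (out_w, 0), (out_w, out_h), (0, out_h)])
    matrix = cv2.getPerspectiveTransform(src, dst)
    return cv2.warpPerspective(captured, matrix, (out_w, out_h))
```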
  • the distance measuring device 10A is provided with a three-dimensional coordinate derivation function.
  • the three-dimensional coordinate derivation function is a function realized by the CPU 100 executing the three-dimensional coordinate derivation program 108A and thereby operating as the acquisition unit 110A and the derivation unit 111A.
  • the three-dimensional coordinate derivation function is a function that derives designated pixel three-dimensional coordinates, described later, by Equation (2) from first designated pixel coordinates described later, second designated pixel coordinates described later, an imaging position distance described later, the focal length of the focus lens 50, and the dimension of the imaging pixel 60A1.
  • in Equation (2), "uL" refers to the X coordinate of the first designated pixel coordinates, "vL" refers to the Y coordinate of the first designated pixel coordinates, "uR" refers to the X coordinate of the second designated pixel coordinates, "B" refers to the imaging position distance (see FIGS. 10 and 11), and "f" refers to (focal length) / (dimension of the imaging pixel 60A1). Further, in Equation (2), (X, Y, Z) refers to the designated pixel three-dimensional coordinates.
  • the first designated pixel coordinates are two-dimensional coordinates that specify a first designated pixel designated, in a first captured image described later, as a pixel corresponding to a position in real space.
  • the second designated pixel coordinates are two-dimensional coordinates that specify a second designated pixel designated, in a second captured image described later, as the pixel corresponding to the same position in real space. That is, the first designated pixel and the second designated pixel are pixels designated as pixels corresponding to the same position in real space, and are pixels that can be specified at mutually corresponding positions in the first captured image and the second captured image.
  • the first designated pixel coordinates are two-dimensional coordinates on the first captured image
  • the second designated pixel coordinates are two-dimensional coordinates on the second captured image.
  • the designated pixel three-dimensional coordinates refer to three-dimensional coordinates that are coordinates on the real space corresponding to the first designated pixel coordinates and the second designated pixel coordinates.
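  • Equation (2) itself is not reproduced in this text; its named variables (uL, vL, uR, B, and f = focal length / imaging-pixel dimension) match the standard parallel-stereo triangulation relations, and the following Python sketch assumes that form, with pixel coordinates measured relative to the principal point:

```python
def designated_pixel_3d(u_l, v_l, u_r, baseline_m, focal_length_m, pixel_dim_m):
    """Triangulate (X, Y, Z) from a pixel pair under parallel stereo.

    u_l, v_l, u_r are pixel coordinates relative to the principal point;
    baseline_m is the imaging position distance B; f is the focal length
    expressed in pixels, i.e. focal length / imaging-pixel dimension.
    v_r is not needed, matching the note that Equation (2) does not use it.
    """
    f_pixels = focal_length_m / pixel_dim_m
    disparity = u_l - u_r          # pixel offset between the two views
    z = baseline_m * f_pixels / disparity
    x = u_l * z / f_pixels
    y = v_l * z / f_pixels
    return x, y, z

# Example: 10 mm lens, 3 um pixels, 0.5 m baseline, 50-pixel disparity.
print(designated_pixel_3d(200, 100, 150, 0.5, 10e-3, 3e-6))
```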
  • the first captured image refers to a captured image obtained by capturing an image of the subject from the first imaging position by the imaging device 14.
  • the second captured image refers to a captured image obtained by the imaging device 14 imaging, from a second imaging position different from the first imaging position, a subject including the subject imaged from the first imaging position.
  • hereinafter, when it is not necessary to distinguish among the first captured image, the second captured image, and other captured images obtained by the imaging device 14, including still images and moving images, they are simply referred to as "captured images".
  • the first measurement position refers to the position of the distance measurement unit 12 when the subject is imaged by the imaging device 14 from the first imaging position with the distance measurement unit 12 correctly attached to the imaging device 14.
  • the second measurement position refers to the position of the distance measurement unit 12 when the subject is imaged by the imaging device 14 from the second imaging position in a state where the distance measurement unit 12 is correctly attached to the imaging device 14.
  • the imaging position distance refers to the distance between the first imaging position and the second imaging position.
  • in this embodiment, as shown in FIG. 11, the distance between the principal point OL of the focus lens 50 of the imaging device 14 at the first imaging position and the principal point OR of the focus lens 50 of the imaging device 14 at the second imaging position is adopted as the imaging position distance, but the technology of the present disclosure is not limited to this.
  • for example, the distance between the imaging pixel 60A1 positioned at the center of the image sensor 60 of the imaging device 14 at the first imaging position and the imaging pixel 60A1 positioned at the center of the image sensor 60 of the imaging device 14 at the second imaging position may be used as the imaging position distance.
  • in the example shown in FIG. 11, the pixel PL included in the first captured image is the first designated pixel, the pixel PR included in the second captured image is the second designated pixel, and the pixels PL and PR are pixels corresponding to the point P of the subject.
  • the first designated pixel coordinates (uL, vL), which are the two-dimensional coordinates of the pixel PL, and the second designated pixel coordinates (uR, vR), which are the two-dimensional coordinates of the pixel PR, correspond to the designated pixel three-dimensional coordinates (X, Y, Z), which are the three-dimensional coordinates of the point P.
  • in Equation (2), "vR" is not used.
  • hereinafter, for convenience of explanation, when it is not necessary to distinguish between the first designated pixel and the second designated pixel, they are referred to as "designated pixels", and when it is not necessary to distinguish between the first designated pixel coordinates and the second designated pixel coordinates, they are referred to as "designated pixel coordinates".
  • when the distance measuring device 10A derives the designated pixel three-dimensional coordinates based on Equation (2) using the three-dimensional coordinate derivation function, it is preferable to derive the imaging position distance with high accuracy, because Equation (2) includes "B", the imaging position distance.
  • the distance measuring device 10A is provided with an imaging position distance deriving function.
  • the imaging position distance derivation function is a function realized by the CPU 100 executing the imaging position distance derivation program 106A and thereby operating as the derivation unit 111A.
  • the deriving unit 111A derives the imaging position distance based on the derived focal length.
  • to derive the imaging position distance, irradiation position real space coordinates are also required.
  • the irradiation position real space coordinates are three-dimensional coordinates that specify the irradiation position of the laser light in the real space, that is, the irradiation position of the laser light on the subject in the real space.
  • the derivation unit 111A derives the irradiation position real space coordinates based on the actually measured distance acquired by the acquisition unit 110A.
  • the irradiation position real space coordinates are derived, based on Equation (3), from the distance L, the half angle of view α, the emission angle β, and the reference point distance M shown in FIG.
  • in Equation (3), (xLaser, yLaser, zLaser) refers to the irradiation position real space coordinates.
  • here, yLaser = 0, which means that the optical axis L1 is at the same height as the optical axis L2 in the vertical direction; yLaser takes a positive or negative value when the laser irradiation position in real space is vertically offset from that height, the sign indicating the direction of the offset.
  • the half angle of view α refers to half of the angle of view.
  • the emission angle β refers to the angle at which the laser light is emitted from the emitting unit 22.
  • the reference point distance M refers to the distance between a first reference point P1 defined for the imaging device 14 and a second reference point P2 defined for the distance measurement unit 12.
  • An example of the first reference point P1 is the main point of the focus lens 50.
  • an example of the second reference point P2 is a point set in advance as the origin of a coordinate system that can specify positions in three-dimensional space for the distance measurement unit 12.
  • other examples include one of the left and right ends of the objective lens 38 when viewed from the front or, when the casing (not shown) of the distance measurement unit 12 is a rectangular parallelepiped, one corner of the casing, that is, one apex.
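  • Equation (3) itself is not reproduced in this text; the following Python sketch is one plausible reading of the geometry described above (the placement of P2 at offset M along the x axis, and the measurement of the emission angle from the optical-axis direction, are assumptions):

```python
import math

def irradiation_position_coords(distance_l_m, emission_angle_rad, ref_point_dist_m):
    """One plausible reading of the Equation (3) geometry (an assumption:
    the equation itself is not reproduced in this text).

    The laser leaves the second reference point P2, offset by the
    reference point distance M from the first reference point P1 along
    the x axis, at the emission angle from the optical-axis direction
    (z axis). The irradiation point lies at distance L along that ray;
    y is 0 because the optical axes L1 and L2 are assumed to be at the
    same height.
    """
    x_laser = ref_point_dist_m + distance_l_m * math.sin(emission_angle_rad)
    y_laser = 0.0
    z_laser = distance_l_m * math.cos(emission_angle_rad)
    return x_laser, y_laser, z_laser

print(irradiation_position_coords(10.0, math.radians(2.0), 0.1))
```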
  • based on the actually measured distance acquired by the acquisition unit 110A, the derivation unit 111A also derives irradiation position pixel coordinates that specify, in each of the first captured image and the second captured image, the position of the pixel corresponding to the irradiation position specified by the irradiation position real space coordinates.
  • the irradiation position pixel coordinates are roughly divided into first irradiation position pixel coordinates and second irradiation position pixel coordinates.
  • the first irradiation position pixel coordinates are two-dimensional coordinates that specify the position of a pixel corresponding to the irradiation position specified by the irradiation position real space coordinates in the first captured image.
  • the second irradiation position pixel coordinate is a two-dimensional coordinate that specifies the position of the pixel corresponding to the irradiation position specified by the irradiation position real space coordinates in the second captured image.
  • the method of deriving the X coordinate of the first irradiation position pixel coordinates and the method of deriving its Y coordinate are the same except for the target coordinate axis: the X coordinate is derived for pixels in the row direction of the image sensor 60, whereas the Y coordinate is derived for pixels in the column direction of the image sensor 60.
  • the row direction means the front view left-right direction of the imaging surface 60B
  • the column direction means the front view vertical direction of the imaging surface 60B.
  • the X coordinate of the first irradiation position pixel coordinates is derived, based on the following Equations (4) to (6), from the distance L, the half angle of view α, the emission angle β, and the reference point distance M shown in FIG.
  • the "irradiation position row-direction pixel" refers to the pixel, among the pixels in the row direction of the image sensor 60, at the position corresponding to the irradiation position of the laser light in real space.
  • “Half the number of pixels in the row direction” refers to half of the number of pixels in the row direction in the image sensor 60.
  • the derivation unit 111A substitutes the reference point distance M and the emission angle β into Equation (4), substitutes the half angle of view α and the emission angle β into Equation (5), and substitutes the distance L into Equations (4) and (5).
  • the derivation unit 111A then derives the X coordinate that specifies the position of the "irradiation position row-direction pixel" by substituting the Δx and X thus obtained, together with "half the number of pixels in the row direction", into Equation (6).
  • the X coordinate that specifies the position of the "irradiation position row-direction pixel" is the X coordinate of the first irradiation position pixel coordinates.
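  • Equations (4) to (6) themselves are not reproduced in this text; the following Python sketch is one plausible reading of the three-step pipeline described above, and the exact forms of the three steps are assumptions:

```python
import math

def irradiation_x_pixel(distance_l_m, half_view_angle_rad,
                        emission_angle_rad, ref_point_dist_m,
                        row_pixel_count):
    """One plausible reading of the Equations (4)-(6) pipeline (the
    equations are not reproduced in this text, so the exact forms used
    here are assumptions).

    Step 1 (Eq. (4)-like): lateral real-space offset of the irradiation
    point from the imaging optical axis, from M, the emission angle, and L.
    Step 2 (Eq. (5)-like): half-width of the field of view at distance L,
    from the half angle of view and L.
    Step 3 (Eq. (6)-like): map the offset to a pixel column, with the
    optical axis at "half the number of pixels in the row direction".
    """
    delta_x = ref_point_dist_m + distance_l_m * math.tan(emission_angle_rad)
    half_width = distance_l_m * math.tan(half_view_angle_rad)
    half_pixels = row_pixel_count / 2.0
    return half_pixels + (delta_x / half_width) * half_pixels

print(irradiation_x_pixel(10.0, math.radians(20.0), math.radians(1.0), 0.1, 4000))
```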
  • the derivation unit 111A derives, as the second irradiation position pixel coordinates, coordinates that specify the pixel positions corresponding to the pixel positions specified by the first irradiation position pixel coordinates among the pixels of the second captured image.
  • hereinafter, when it is not necessary to distinguish between the first irradiation position pixel coordinates and the second irradiation position pixel coordinates, they are referred to as "irradiation position pixel coordinates". In addition, two-dimensional coordinates that specify, among the pixels of a captured image, the position of the pixel corresponding to the actual irradiation position of the laser light on the subject, derived by the same derivation method as the first irradiation position pixel coordinates or the second irradiation position pixel coordinates, are also referred to as "irradiation position pixel coordinates".
  • the derivation unit 111A selectively executes the first derivation process and the second derivation process, in accordance with instructions received via the touch panel 88, when in the position-specifiable state.
  • the position-specifiable state refers to a state in which the position of the pixel specified by the irradiation position pixel coordinates is the position of a pixel that can be specified at mutually corresponding positions in the first captured image and the second captured image.
  • the derivation unit 111A executes the first derivation process when in the position-unspecifiable state.
  • the position-unspecifiable state refers to a state in which the position of the pixel specified by the irradiation position pixel coordinates differs from the positions of pixels that can be specified at mutually corresponding positions in the first captured image and the second captured image.
  • the first derivation process refers to a process of deriving the imaging position distance based on a plurality of pixel coordinates described later, the irradiation position real space coordinates, the focal length, and the dimension of the imaging pixel 60A1.
  • the plurality of pixel coordinates refers to a plurality of two-dimensional coordinates that specify three or more pixels that exist in the same planar region as the irradiation position of the laser light in real space and that can be specified at mutually corresponding positions in the first captured image and the second captured image.
  • the parameters used for the first derivation process are not limited to the plurality of pixel coordinates, the irradiation position real space coordinates, the focal length, and the dimension of the imaging pixel 60A1.
  • for example, a plurality of parameters obtained by further adding one or more fine-adjustment parameters to the plurality of pixel coordinates, the irradiation position real space coordinates, the focal length, and the dimension of the imaging pixel 60A1 may be used in the first derivation process.
  • the second derivation process refers to a process for deriving the imaging position distance based on the irradiation position pixel coordinates, the irradiation position real space coordinates, the focal length, and the dimensions of the imaging pixel 60A1.
  • the parameters used for the second derivation process are not limited to the irradiation position pixel coordinates, the irradiation position real space coordinates, the focal length, and the dimension of the imaging pixel 60A1.
  • for example, a plurality of parameters obtained by further adding one or more fine-adjustment parameters to the irradiation position pixel coordinates, the irradiation position real space coordinates, the focal length, and the dimension of the imaging pixel 60A1 may be used in the second derivation process.
  • when the actual irradiation position of the laser light is a position in real space corresponding to the position of a pixel that can be specified at mutually corresponding positions in the first captured image and the second captured image, the second derivation process can derive the imaging position distance with higher accuracy than the first derivation process.
  • the second derivation process is a process for deriving the imaging position distance based on a plurality of parameters smaller than the number of parameters used in the derivation of the imaging position distance by the first derivation process.
  • the “plurality of parameters” referred to here refers to, for example, irradiation position pixel coordinates, irradiation position real space coordinates, focal length, and dimensions of the imaging pixel 60A1.
  • when executing the first derivation process, the derivation unit 111A first derives, based on the plurality of pixel coordinates, the focal length, and the dimension of the imaging pixel 60A1, the orientation of the plane defined by the plane equation indicating the plane that includes the three-dimensional coordinates in real space corresponding to the plurality of pixel coordinates. The derivation unit 111A then determines the plane equation based on the derived plane orientation and the irradiation position real space coordinates, and derives the imaging position distance based on the determined plane equation, the plurality of pixel coordinates, the focal length, and the dimension of the imaging pixel 60A1.
  • here, deriving the "orientation of the plane" means deriving a, b, and c in Equation (7), and determining the "plane equation" means deriving d in Equation (7).
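  • a minimal Python sketch of this two-stage determination, assuming Equation (7) is the general plane equation ax + by + cz + d = 0 (which the a, b, c, d naming suggests): the orientation (a, b, c) is fitted to the real-space points recovered from the plurality of pixel coordinates, and d is then fixed so the plane passes through the irradiation position real space coordinates:

```python
import numpy as np

def plane_orientation(points):
    """Fit (a, b, c) of ax + by + cz + d = 0 to three or more 3-D points
    by least squares: the normal is the singular vector of the centered
    point cloud with the smallest singular value."""
    pts = np.asarray(points, dtype=float)
    centered = pts - pts.mean(axis=0)
    _, _, vt = np.linalg.svd(centered)
    return vt[-1]  # unit normal (a, b, c)

def determine_plane(points, irradiation_xyz):
    """Determine the full plane equation: orientation from the
    pixel-derived points, then d from the irradiation position real
    space coordinates, mirroring the two steps described above."""
    a, b, c = plane_orientation(points)
    d = -np.dot((a, b, c), irradiation_xyz)
    return a, b, c, d

pts = [(0.0, 0.0, 5.0), (1.0, 0.0, 5.0), (0.0, 1.0, 5.0)]
print(determine_plane(pts, (0.5, 0.5, 5.0)))  # normal ~ (0, 0, 1) up to sign, d ~ -5
```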
  • the following description assumes that the imaging range 115 of the imaging device 14 of the distance measuring device 10A includes, as a subject, an area including the outer wall surface 121 of an office building 120, and that the outer wall surface 121 is the main subject and the irradiation target of the laser light.
  • the outer wall surface 121 is formed in a planar shape, and is an example of a planar region according to the technique of the present disclosure.
  • a plurality of rectangular windows 122 are provided on the outer wall surface 121.
  • a laterally long rectangular pattern 124 is drawn below each window 122 on the outer wall surface 121; however, the pattern is not limited to this, and may instead be, for example, dirt or scratches on the outer wall surface 121.
  • the "planar shape" here includes not only a perfectly flat surface but also shapes with slight unevenness due to, for example, windows or vents; any surface or shape recognized as "planar" by, for example, visual observation or an existing image analysis technique may be used.
  • the distance measurement apparatus 10A will be described on the assumption that the distance to the outer wall surface 121 is measured by irradiating the outer wall surface 121 with laser light.
  • in step 200, the acquisition unit 110A determines whether or not the measurement imaging button 90A has been turned on. If the measurement imaging button 90A has not been turned on, the determination is negative and the process proceeds to step 202; if it has been turned on, the determination is affirmative and the process proceeds to step 204.
  • step 202 the acquisition unit 110A determines whether or not a condition for ending the dimension derivation process is satisfied.
  • the condition for ending the dimension derivation process refers to, for example, a condition that the dimension derivation button 90E is turned on again, or a condition that a first predetermined time has elapsed without the determination being affirmed after execution of the process of step 200 was started.
  • the first predetermined time refers to, for example, 1 minute.
  • if it is determined in step 202 that the condition for ending the dimension derivation process is not satisfied, the determination is negative and the process returns to step 200; if the condition is satisfied, the determination is affirmative and the dimension derivation process ends.
  • in step 204, the acquisition unit 110A causes the distance measurement unit 12 and the distance measurement control unit 68 to measure the actual distance and causes the imaging device 14 to perform imaging, and then proceeds to step 206.
  • in step 206, the acquisition unit 110A acquires the actually measured distance measured by the distance measurement unit 12 and the distance measurement control unit 68 through the process of step 204.
  • in step 206, the acquisition unit 110A also acquires a captured image signal indicating the captured image obtained by the imaging device 14 through the process of step 204.
  • note that the captured image indicated by the captured image signal acquired in step 206 is a captured image obtained by imaging in the focused state through the process of step 204.
  • the acquisition unit 110A causes the display unit 86 to start displaying the captured image indicated by the acquired captured image signal, and then proceeds to step 209.
  • step 209 the deriving unit 111A derives a focal length corresponding to the actually measured distance using the focal length deriving table 109A, and then proceeds to step 210.
  • the actually measured distance used in the process of step 209 indicates the actually measured distance acquired by the acquisition unit 110A by executing the process of step 206.
  • the focal length corresponding to the actually measured distance refers to, for example, the focal length associated with the derivation distance that matches the actually measured distance among the derivation distances stored in the focal length derivation table 109A.
  • when there is no derivation distance that matches the actually measured distance, the derivation unit 111A derives the focal length from the derivation distances in the focal length derivation table 109A by an interpolation method.
  • examples of the interpolation method employed in this embodiment include linear interpolation and nonlinear interpolation.
  • in step 210, the derivation unit 111A first derives the half angle of view α from the focal length based on the following Equation (8).
  • in Equation (8), the "dimension of the imaging pixel" refers to the dimension of the imaging pixel 60A1, and f0 refers to the focal length. Note that the focal length used in the process of step 210 is the focal length derived in step 209.
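  • Equation (8) itself is not reproduced in this text; from the quantities it names (the dimension of the imaging pixel and the focal length f0), the usual pinhole form α = atan(imaging-pixel dimension × half the pixel count / f0) is assumed in the following Python sketch:

```python
import math

def half_view_angle(pixel_dim_m, row_pixel_count, focal_length_m):
    """Half angle of view of a pinhole camera: arctangent of half the
    sensor width (pixel dimension x half the pixel count) over the
    focal length."""
    half_sensor_width = pixel_dim_m * (row_pixel_count / 2.0)
    return math.atan(half_sensor_width / focal_length_m)

# Example: 3 um pixels, 4000 pixels per row, 10 mm focal length.
print(math.degrees(half_view_angle(3e-6, 4000, 10e-3)))  # ~31 degrees
```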
  • next, in step 210, the derivation unit 111A derives the irradiation position pixel coordinates from the distance L, the half angle of view α, the emission angle β, and the reference point distance M based on Equations (4) to (6), and then proceeds to step 212.
  • the distance L used in the process of step 210 refers to the actually measured distance acquired by the acquisition unit 110A by executing the process of step 206.
  • the half angle of view α used for deriving the irradiation position pixel coordinates is the half angle of view α derived from the focal length by the derivation unit 111A based on Equation (8).
  • in step 212, the derivation unit 111A causes the display unit 86 to start displaying the actually measured distance and the irradiation position mark 136 superimposed on the captured image, as shown in FIG. 17 as an example, and then proceeds to step 214.
  • the measured distance displayed by being superimposed on the captured image by executing the process of step 212 is the measured distance acquired by the acquisition unit 110A by executing the process of step 206.
  • the numerical value “1333325.0” corresponds to the actually measured distance, and the unit is millimeter.
  • the irradiation position mark 136 is a mark indicating the position of the pixel specified by the irradiation position pixel coordinates derived by the derivation unit 111A in step 210.
  • step 214 the derivation unit 111A causes the display unit 86 to start displaying a frame definition guidance message (not shown) superimposed on the captured image, and then proceeds to step 216.
  • the frame definition guidance message refers to a message that guides the user to define a quadrangular frame in the display area of the captured image.
  • the quadrangular frame is defined in accordance with an instruction from the user via the touch panel 88.
  • an example of the frame definition guidance message is a message such as "Tap four points on the screen to define a quadrangular frame that includes the irradiation position mark."
  • in step 216, the derivation unit 111A determines whether or not a quadrangular frame has been correctly defined in the display area of the captured image via the touch panel 88.
  • the correctly defined quadrangular frame refers to a quadrangular frame 117 that includes the irradiation position mark 136 in the display area of the captured image, as shown in FIG. 18 as an example.
  • the frame 117 is defined by four points 119A, 119B, 119C, and 119D in the display area of the captured image.
  • the rectangular area surrounded by the frame 117 is associated with the irradiation position pixel coordinates corresponding to the irradiation position mark 136.
  • if, in step 216, a quadrangular frame has not been correctly defined in the display area of the captured image via the touch panel 88, the determination is negative and the process proceeds to step 218; if a quadrangular frame has been correctly defined, the determination is affirmative and the process proceeds to step 220.
  • step 218 the derivation unit 111A determines whether or not a condition for ending the dimension derivation process is satisfied.
  • the condition for terminating the dimension derivation process is the same as the condition used in the process of step 202.
  • if it is determined in step 218 that the condition for ending the dimension derivation process is not satisfied, the determination is negative and the process returns to step 216; if the condition is satisfied, the determination is affirmative and the process proceeds to step 248.
  • step 220 the derivation unit 111A ends the display of the frame regulation guidance message on the display unit 86, and then proceeds to step 222.
  • in step 222, the derivation unit 111A determines whether or not a quadrangular region exists within the defined quadrangular frame.
  • the quadrangular region refers to, for example, the trapezoidal region 123 shown in FIG. 18, which is an image corresponding to part of the outer wall surface 121.
  • if it is determined in step 222 that no quadrangular region exists within the defined quadrangular frame, the determination is negative and the process proceeds to step 230; if a quadrangular region exists, the determination is affirmative and the process proceeds to step 224. In the example shown in FIG. 18, the trapezoidal region 123 exists within the frame 117, so the determination in step 222 is affirmative.
  • step 224 the derivation unit 111A ends the display of the measured distance and the irradiation position mark 136 on the display unit 86, and then proceeds to step 226.
  • in step 226, the derivation unit 111A performs the above-described projective transformation process on the captured image based on the quadrangular region existing within the defined quadrangular frame.
  • in the example shown in FIG. 18, the projective transformation process described above is performed on the captured image by the derivation unit 111A based on the trapezoidal region 123 existing within the frame 117.
  • the derivation unit 111A then causes the display unit 86 to start displaying the post-projective-transformation image 87, as shown in FIG. 19 as an example, and then proceeds to step 232.
  • the post-projective-transformation image 87 is an image obtained by performing the projective transformation process on the captured image, and includes a rectangular region 123A, which is a quadrangular region corresponding to the trapezoidal region 123.
  • step 230 the derivation unit 111A causes the display unit 86 to finish displaying the measured distance and the irradiation position mark 136, and then proceeds to step 232.
  • step 232 the derivation unit 111A starts display in which a pixel designation guidance message (not shown) is superimposed on the processing target image, and then proceeds to step 234.
  • the processing target image refers to the captured image or the post-projective-transformation image 87.
  • if the determination in step 222 is negative, the captured image indicated by the captured image signal acquired in step 206 is used as the processing target image; if the determination in step 222 is affirmative, the post-projective-transformation image 87 is used as the processing target image.
  • the pixel designation guidance message refers to a message for guiding the user to designate two points, that is, two pixels in the display area of the processing target image.
  • in step 234, the derivation unit 111A determines whether or not two pixels among the pixels of the processing target image have been designated by the user via the touch panel 88. If two pixels have not been designated, the determination is negative and the process proceeds to step 236; if two pixels have been designated, the determination is affirmative and the process proceeds to step 238.
  • step 236 it is determined whether or not a condition for terminating the dimension derivation process is satisfied.
  • the condition for terminating the dimension derivation process is the same as the condition used in the process of step 202.
  • step 236 if the condition for ending the dimension derivation process is not satisfied, the determination is negative and the process proceeds to step 234. If it is determined in step 236 that the condition for terminating the dimension derivation process is satisfied, the determination is affirmed and the process proceeds to step 248.
  • step 238 the derivation unit 111A ends the display of the pixel designation guidance message on the display unit 86, and then proceeds to step 242.
  • in step 242, the derivation unit 111A derives, by using the dimension derivation function, the length of the area in real space corresponding to the interval between the two pixels designated by the user via the touch panel 88, and then proceeds to step 244.
  • the interval between two pixels designated by the user via the touch panel 88 is an example of the interval between a plurality of pixels according to the technique of the present disclosure.
  • in step 242, the length of the area in real space corresponding to the interval between the two pixels designated by the user via the touch panel 88 is derived by Equation (1). Here, u1 and u2 in Equation (1) are the addresses of the two pixels designated by the user via the touch panel 88, L in Equation (1) is the actually measured distance acquired by the acquisition unit 110A in step 206, and f0 in Equation (1) is the focal length derived by the derivation unit 111A in step 209.
  • in step 244, the derivation unit 111A causes the display unit 86 to start displaying the length of the area and the bidirectional arrow 125 superimposed on the processing target image, as shown in FIG., and then proceeds to step 246.
  • the length of the area displayed on the display unit 86 by executing the process of step 244 is the length of the area derived by the deriving unit 111A by executing the process of step 242.
  • the numerical value “63” corresponds to the length of the area, and the unit is millimeter.
  • the bidirectional arrow 125 displayed on the display unit 86 by executing the process of step 244 is an arrow that specifies between two pixels designated by the user via the touch panel 88.
  • step 246 the derivation unit 111A determines whether or not a condition for ending the dimension derivation process is satisfied.
  • the condition for terminating the dimension derivation process is the same as the condition used in the process of step 202.
  • step 246 if the condition for ending the dimension derivation process is not satisfied, the determination is negative and the determination in step 246 is performed again. If it is determined in step 246 that the condition for terminating the dimension derivation process is satisfied, the determination is affirmed and the process proceeds to step 248.
  • step 248 the derivation unit 111A ends the display of the processing target image and the superimposed display information on the display unit 86, and then ends the dimension derivation process.
  • the superimposed display information refers to various types of information that are currently superimposed and displayed on the processing target image, such as the length of the area and the two-way arrow 125.
  • next, the imaging position distance derivation process, realized by the CPU 100 executing the imaging position distance derivation program 106A when the three-dimensional coordinate derivation button 90G is turned on, will be described.
  • hereinafter, the position of the distance measuring device 10A when the distance measurement unit 12 is located at the first measurement position and the imaging device 14 is located at the first imaging position is referred to as the "first position".
  • likewise, the position of the distance measuring device 10A when the distance measurement unit 12 is located at the second measurement position and the imaging device 14 is located at the second imaging position is referred to as the "second position".
  • in step 300, the acquisition unit 110A determines whether or not the measurement imaging button 90A has been turned on at the first position. If the measurement imaging button 90A has not been turned on, the determination is negative and the process proceeds to step 302; if it has been turned on, the determination is affirmative and the process proceeds to step 304.
  • the acquisition unit 110A determines whether or not a condition for ending the imaging position distance deriving process is satisfied.
  • the conditions for ending the imaging position distance derivation process include, for example, a condition that the 3D coordinate derivation button 90G is turned on again, a condition that an instruction to end the imaging position distance derivation process is received by the touch panel 88, and the like. .
  • if it is determined in step 302 that the condition for ending the imaging position distance derivation process is not satisfied, the determination is negative and the process returns to step 300; if the condition is satisfied, the determination is affirmative and the imaging position distance derivation process ends.
  • in step 304, the acquisition unit 110A causes the distance measurement unit 12 and the distance measurement control unit 68 to measure the actual distance and causes the imaging device 14 to perform imaging, and then proceeds to step 306.
  • hereinafter, the actual distance measured in step 304 is referred to as the "first actually measured distance".
  • in step 306, the acquisition unit 110A acquires the first actually measured distance measured by the distance measurement unit 12 and the distance measurement control unit 68 through the process of step 304.
  • in step 306, the acquisition unit 110A also acquires a first captured image signal indicating the first captured image obtained by the imaging device 14 through the process of step 304. Note that the first captured image indicated by this signal is a first captured image obtained by imaging in the focused state through the process of step 304.
  • step 308 the acquisition unit 110A starts to display the first captured image indicated by the first captured image signal acquired in the process of step 306 on the display unit 86 as shown in FIG. 26 as an example. Then, the process proceeds to step 310.
  • step 310 the deriving unit 111A derives a focal length corresponding to the first actually measured distance using the focal length deriving table 109A, and then proceeds to step 312.
  • the first measured distance used in the process of step 310 refers to the first measured distance acquired by the acquisition unit 110A by executing the process of step 306.
  • the focal length corresponding to the first actually measured distance refers to, for example, the focal length associated with the derivation distance that matches the first actually measured distance among the derivation distances stored in the focal length derivation table 109A.
  • when there is no derivation distance that matches the first actually measured distance, the derivation unit 111A derives the focal length from the derivation distances in the focal length derivation table 109A by the above-described interpolation method.
  • in step 312, the derivation unit 111A first derives the half angle of view α from the focal length based on Equation (8).
  • the focal length used in the processing of step 312 is a focal length derived by executing the processing of step 310.
  • next, in step 312, the derivation unit 111A derives the irradiation position real space coordinates from the distance L, the half angle of view α, the emission angle β, and the reference point distance M based on Equation (3), and then proceeds to step 314.
  • the distance L used in the processing of step 312 indicates the first actually measured distance acquired by the acquisition unit 110A by executing the processing of step 306.
  • the half angle of view α used for deriving the irradiation position real space coordinates is the half angle of view α derived from the focal length by the derivation unit 111A based on Equation (8).
  • in step 314, the derivation unit 111A derives the first irradiation position pixel coordinates from the distance L, the half angle of view α, the emission angle β, and the reference point distance M based on Equations (4) to (6), and then proceeds to step 316.
  • the distance L used in the process of step 314 indicates the first measured distance acquired by the acquisition unit 110A by executing the process of step 306.
  • the half angle of view α used for deriving the first irradiation position pixel coordinates is the half angle of view α derived from the focal length by the derivation unit 111A in step 312 based on Equation (8).
  • in step 316, the derivation unit 111A causes the display unit 86 to start displaying the first actually measured distance and the irradiation position mark 136 superimposed on the first captured image, as shown in FIG., and then proceeds to step 318.
  • the first actually measured distance displayed through the process of step 316 is the first actually measured distance acquired by the acquisition unit 110A in step 306.
  • the numerical value “1333325.0” corresponds to the first actually measured distance, and the unit is millimeter.
  • the irradiation position mark 136 is a mark indicating the position of the pixel specified by the first irradiation position pixel coordinates derived by executing the process of step 314.
  • in step 318, the derivation unit 111A determines whether or not the position of the pixel specified by the first irradiation position pixel coordinates derived in step 314 matches an identifiable pixel position.
  • the identifiable pixel position refers to the position of a pixel that can be identified at a position corresponding to each other in each of the first captured image and the second captured image.
  • if, in step 318, the position of the pixel specified by the first irradiation position pixel coordinates derived in step 314 matches an identifiable pixel position, the determination is affirmative and the process proceeds to step 320; if it does not match, the determination is negative and the process proceeds to step 342.
  • in step 320, the derivation unit 111A causes the display unit 86 to display the coincidence message 137A superimposed on the first captured image for a specific time (for example, 5 seconds), as shown in FIG. 28 as an example, and then proceeds to step 322.
  • the coincidence message 137A is a message indicating that the position of the pixel specified by the first irradiation position pixel coordinates derived in step 314 coincides with an identifiable pixel position. Therefore, by executing the process of step 320, the user is notified that the position of the pixel specified by the first irradiation position pixel coordinates derived in step 314 coincides with an identifiable pixel position.
  • in the example shown in FIG. 28, the coincidence message 137A is "Because the irradiation position of the laser light matches a characteristic position of the subject, the first derivation process or the second derivation process can be executed."
  • the technology of the present disclosure is not limited to this.
  • for example, only the portion "the irradiation position of the laser light matches a characteristic position of the subject" of the coincidence message 137A may be adopted and displayed.
  • any message may be used as long as it notifies the user of the coincidence between the position of the pixel specified by the first irradiation position pixel coordinates derived in step 314 and an identifiable pixel position.
  • in the example shown in FIG. 28, the coincidence message 137A is displayed visibly.
  • however, an audible indication, such as audio output by an audio playback device (not shown), or a permanent visible indication, such as printed output by a printer, may be used instead of the visible display, or in combination with it.
  • step 322 the derivation unit 111A starts a display in which the derivation process selection screen 139 is superimposed on the first captured image, as shown in FIG. 29 as an example, and then proceeds to step 324.
  • the derivation process selection screen 139 displays two soft keys, a first derivation process start button 139A and a second derivation process start button 139B.
  • the derivation process selection screen 139 also displays a message prompting to turn on either the first derivation process start button 139A or the second derivation process start button 139B.
  • the case where the first derivation process start button 139A is turned on means a case where the user desires to execute the first derivation process.
  • an example of the case where the user desires to execute the first derivation process is a case where the user doubts the content of the coincidence message 137A.
  • the case where the user doubts the content of the coincidence message 137A refers to, for example, a case where the user judges that the actual irradiation position of the laser light and the irradiation position specified by the irradiation position real space coordinates may be misaligned, for example because the distance measurement unit 12 has been replaced or the angle of view has been changed.
  • the case where the second derivation process start button 139B is turned on means a case where the user desires to execute the second derivation process.
  • an example of the case where the user desires to execute the second derivation process is a case where the user has no doubt about the content of the coincidence message 137A, for example, a case where the user judges that the actual irradiation position of the laser light and the irradiation position specified by the irradiation position real space coordinates are not misaligned.
  • the second derivation process can reduce the load required to derive the imaging position distance compared to the first derivation process.
  • in step 324, the derivation unit 111A determines whether or not the first derivation process start button 139A has been turned on. If it has been turned on, the determination is affirmative and the process proceeds to step 328; if not, the determination is negative and the process proceeds to step 332.
  • step 328 the derivation unit 111A causes the display unit 86 to end the display of the derivation process selection screen 139 and to start displaying the target pixel designation guidance message (not shown) superimposed on the first captured image. Thereafter, the process proceeds to step 330.
  • the pixel-of-interest designation guidance message refers to, for example, a message for guiding the designation of a pixel of interest via the touch panel 88 from the first captured image.
  • the attention pixel designation guidance message there is a message “Please specify one pixel to be noticed (attention point)”.
  • the pixel-of-interest designation guidance message displayed through the process of step 328 is hidden when, for example, the determination in step 330A described later is affirmative, that is, when the pixel of interest is designated.
  • in step 332, the derivation unit 111A determines whether or not the second derivation process start button 139B has been turned on. If it has been turned on, the determination is affirmative and the process proceeds to step 334; if not, the determination is negative and the process proceeds to step 338.
  • in step 334, the derivation unit 111A ends the display of the derivation process selection screen 139 on the display unit 86, causes the above-described pixel-of-interest designation guidance message (not shown) to be displayed superimposed on the first captured image, and then proceeds to step 336.
  • note that the pixel-of-interest designation guidance message displayed through the process of step 334 is hidden when, for example, the pixel of interest is designated in the process of step 336A described later.
  • step 338 the deriving unit 111A determines whether or not a condition for ending the imaging position distance deriving process is satisfied.
  • the condition for ending the imaging position distance derivation process is the same as the condition used in the process of step 302.
  • if it is determined in step 338 that the condition for ending the imaging position distance derivation process is not satisfied, the determination is negative and the process returns to step 324; if the condition is satisfied, the determination is affirmative and the process proceeds to step 340.
  • step 340 the derivation unit 111A ends the display of the first captured image and the superimposed display information on the display unit 86, and then ends the imaging position distance derivation process.
  • the superimposed display information refers to the various types of information currently displayed superimposed on the first captured image, such as the first actually measured distance, the irradiation position mark 136, and the derivation process selection screen 139.
  • step 342 the derivation unit 111A causes the display unit 86 to display a mismatch message 137B superimposed on the first captured image for a specific time (for example, 5 seconds), as shown in FIG. 30 as an example. Thereafter, the process proceeds to step 330.
  • the mismatch message 137B is a message indicating that the position of the pixel specified by the first irradiation position pixel coordinates derived by executing the processing of step 314 does not match the identifiable pixel position.
  • “the position of the pixel specified by the first irradiation position pixel coordinates does not coincide with the identifiable pixel position”, in other words, the position of the pixel specified by the first irradiation position pixel coordinates can be specified. This means that the pixel position is different from the pixel position.
  • therefore, by executing the process of step 342, the user is notified that the position of the pixel specified by the first irradiation position pixel coordinates derived in step 314 does not match an identifiable pixel position.
  • a message “The first derivation process is executed because the irradiation position of the laser beam did not match the characteristic position of the subject” is displayed as the mismatch message 137B.
  • the technology of the present disclosure is not limited to this. For example, only the message “The laser beam irradiation position did not match the characteristic position of the subject” in the mismatch message 137B may be adopted and displayed.
  • any message may be used as long as it notifies the user of the mismatch between the position of the pixel specified by the first irradiation position pixel coordinates derived in step 314 and the identifiable pixel positions.
  • the example shown in FIG. 30 shows a case where the discrepancy message 137B is displayed visually.
  • an audible display such as an audio output by an audio reproduction device (not shown) or a permanent visual display such as an output of a printed matter by a printer. May be performed instead of visible display, or may be used in combination.
  • step 330 the CPU 100 executes the first derivation process shown in FIGS. 22 and 23 as an example, and then ends the imaging position distance derivation process.
  • in step 330A, the acquisition unit 110A determines whether or not a pixel of interest has been designated by the user from the first captured image via the touch panel 88.
  • the target pixel corresponds to the first designated pixel described above.
  • the touch panel 88 receives pixel designation information for designating two-dimensional coordinates corresponding to pixels included in the first captured image among the two-dimensional coordinates assigned to the touch panel 88. Therefore, in step 330A, it is determined that the pixel of interest has been designated when pixel designation information is received by the touch panel 88. That is, the pixel corresponding to the two-dimensional coordinate designated by the pixel designation information is set as the target pixel.
  • step 330A if the target pixel is not specified from the first captured image by the user via the touch panel 88, the determination is negative and the process proceeds to step 330B.
  • step 330A when the target pixel is designated from the first captured image by the user via the touch panel 88, the determination is affirmed and the process proceeds to step 330D.
• In step 330B, the acquisition unit 110A determines whether or not a condition for ending the first derivation process is satisfied. The condition for ending the first derivation process is the same as the condition used in the process of step 302.
• If the condition for ending the first derivation process is not satisfied, the determination in step 330B is negative and the process proceeds to step 330A.
• If the condition for ending the first derivation process is satisfied, the determination in step 330B is affirmative and the process proceeds to step 330C.
• In step 330C, the acquisition unit 110A performs the same process as the process of step 340, and then ends the first derivation process.
• In step 330D, the acquisition unit 110A acquires pixel-of-interest coordinates that specify the pixel designated by the user via the touch panel 88 in the first captured image, and then proceeds to step 330E.
• An example of the pixel of interest is the pixel of interest 126 shown in FIG.
• The pixel of interest 126 is the pixel at the lower left corner of the image corresponding to the central window on the second floor of the outer wall surface in the first captured image.
• The outer-wall-surface second-floor central window refers to the window 122 at the center of the second floor of the office building 120 among the windows 122 provided on the outer wall surface 121.
• The pixel-of-interest coordinates are two-dimensional coordinates that specify the pixel of interest 126 in the first captured image.
• In step 330E, the acquisition unit 110A acquires three characteristic pixel coordinates that specify the positions of three characteristic pixels in the outer wall surface image 128 (the hatched area in the example illustrated in FIG. 32) of the first captured image, and then proceeds to step 330F.
• The outer wall surface image 128 refers to the image showing the outer wall surface 121 (see FIG. 16) in the first captured image.
• The three characteristic pixels are pixels that can be identified at mutually corresponding positions in each of the first captured image and the second captured image.
• The three characteristic pixels in the first captured image are pixels that are separated from each other by a predetermined number of pixels or more and that are present at each of three points specified according to a predetermined rule by image analysis based on the spatial frequency of the image corresponding to the pattern or building material in the outer wall surface image 128. For example, three pixels that indicate different vertices having the maximum spatial frequency within a circular area defined by a predetermined radius centered on the pixel of interest 126, and that satisfy a predetermined condition, are extracted as the three characteristic pixels (a sketch of one possible extraction procedure follows this list).
• The three characteristic pixel coordinates correspond to the plural pixel coordinates described above.
• In the example shown in FIG. 32, the three characteristic pixels are the first pixel 130, the second pixel 132, and the third pixel 134.
• The first pixel 130 is the pixel at the upper left corner of the image corresponding to the central window on the second floor of the outer wall surface in the outer wall surface image 128.
• The second pixel 132 is the pixel at the upper right corner of the image corresponding to the central window on the second floor of the outer wall surface.
• The third pixel 134 is the pixel at the lower left corner of the image corresponding to the pattern 124 close to the lower part of the central window on the third floor of the outer wall surface.
• The outer-wall-surface third-floor central window refers to the window 122 at the center of the third floor of the office building 120 among the windows 122 provided on the outer wall surface 121.
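The extraction rule above is only specified abstractly. As a rough sketch, and only under the assumption that a corner detector is an acceptable stand-in for "pixels with maximal spatial frequency", the following Python code picks three mutually separated candidate pixels within a circular area around the pixel of interest. The function name and all parameter values (radius, min_separation, maxCorners) are illustrative assumptions, not part of the disclosure.

```python
import cv2
import numpy as np

def pick_three_feature_pixels(gray_image, target_xy, radius=120, min_separation=20):
    """Pick three well-separated, distinctive pixels near the pixel of interest."""
    x0, y0 = target_xy
    height, width = gray_image.shape
    # Restrict the search to a circular area centered on the pixel of interest.
    mask = np.zeros((height, width), dtype=np.uint8)
    cv2.circle(mask, (x0, y0), radius, 255, thickness=-1)
    # Detect distinctive candidate pixels (a stand-in for spatial-frequency maxima).
    corners = cv2.goodFeaturesToTrack(
        gray_image, maxCorners=50, qualityLevel=0.01,
        minDistance=min_separation, mask=mask)
    if corners is None or len(corners) < 3:
        return None  # not enough candidates; another derivation process applies
    # Keep the three strongest, mutually separated candidates.
    return [tuple(c.ravel().astype(int)) for c in corners[:3]]
```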
• In step 330F, the acquisition unit 110A performs the same process as the process of step 340, and then proceeds to step 330G illustrated in FIG. 23.
• In step 330G, the acquisition unit 110A determines whether or not the measurement imaging button 90A has been turned on at the second position. If the measurement imaging button 90A has not been turned on, the determination in step 330G is negative and the process proceeds to step 330H. If the measurement imaging button 90A has been turned on, the determination in step 330G is affirmative and the process proceeds to step 330I.
• In step 330H, the acquisition unit 110A determines whether or not a condition for ending the first derivation process is satisfied. The condition for ending the first derivation process is the same as the condition used in the process of step 302.
• If the condition for ending the first derivation process is not satisfied, the determination in step 330H is negative and the process proceeds to step 330G.
• If the condition for ending the first derivation process is satisfied, the determination in step 330H is affirmative and the first derivation process is ended.
• In step 330I, the acquisition unit 110A causes the distance measurement unit 12 and the distance measurement control unit 68 to measure the actually measured distance and causes the imaging device 14 to perform imaging, and then proceeds to step 330J.
• Hereinafter, the actually measured distance measured by executing the process of step 330I or of step 336H (see FIG. 25) described later is referred to as the "second actually measured distance".
• In step 330J, the acquisition unit 110A acquires the second actually measured distance measured by the distance measurement unit 12 and the distance measurement control unit 68 through the process of step 330I.
• The acquisition unit 110A also acquires a second captured image signal indicating the second captured image obtained by imaging with the imaging device 14 in the process of step 330I.
• Note that the second captured image indicated by the second captured image signal acquired in step 330J is the second captured image obtained by imaging in the focused state in the process of step 330I.
• In step 330K, the acquisition unit 110A causes the display unit 86 to start displaying the second captured image indicated by the second captured image signal acquired in the process of step 330J, and then proceeds to step 330L.
• In step 330L, the derivation unit 111A derives the focal length corresponding to the second actually measured distance using the focal length derivation table 109A, and then proceeds to step 330M.
• The second actually measured distance used in the process of step 330L is the second actually measured distance acquired by the acquisition unit 110A in the process of step 330J.
• The focal length corresponding to the second actually measured distance refers to, for example, the focal length associated with the derivation distance that matches the second actually measured distance among the derivation distances stored in the focal length derivation table 109A.
• When the focal length derivation table 109A contains no derivation distance that matches the second actually measured distance, the derivation unit 111A derives the focal length from the derivation distances in the focal length derivation table 109A by the interpolation method described above (a sketch of such a lookup follows below).
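To make the table lookup concrete, the sketch below derives a focal length from a distance-to-focal-length table by exact match where possible and by linear interpolation between the two nearest derivation distances otherwise. The table values, the clamping policy at the table ends, and the function name are hypothetical; the actual contents of table 109A come from device tests or simulations.

```python
from bisect import bisect_left

# Hypothetical stand-in for the focal length derivation table 109A:
# (derivation distance in mm, focal length in mm). Values are illustrative only.
TABLE_109A = [(1000.0, 18.2), (2000.0, 18.6), (5000.0, 18.9), (10000.0, 19.0)]

def derive_focal_length_109a(measured_distance_mm):
    distances = [d for d, _ in TABLE_109A]
    i = bisect_left(distances, measured_distance_mm)
    if i < len(distances) and distances[i] == measured_distance_mm:
        return TABLE_109A[i][1]  # the measured distance matches a derivation distance
    if i == 0:
        return TABLE_109A[0][1]   # below the table range: clamp (assumed policy)
    if i == len(distances):
        return TABLE_109A[-1][1]  # above the table range: clamp (assumed policy)
    # No exact match: interpolate linearly between the neighboring entries.
    (d0, f0), (d1, f1) = TABLE_109A[i - 1], TABLE_109A[i]
    t = (measured_distance_mm - d0) / (d1 - d0)
    return f0 + t * (f1 - f0)
```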
• In step 330M, the acquisition unit 110A identifies the corresponding pixel of interest, that is, the pixel corresponding to the pixel of interest 126 among the pixels included in the second captured image, acquires corresponding pixel-of-interest coordinates that specify the identified pixel, and then proceeds to step 330N.
• The corresponding pixel-of-interest coordinates refer to two-dimensional coordinates that specify the corresponding pixel of interest in the second captured image.
• The corresponding pixel of interest is identified by executing existing image analysis, such as pattern matching, with the first and second captured images as analysis targets. Note that the corresponding pixel of interest corresponds to the second designated pixel described above; when the pixel of interest 126 is specified in the first captured image, the corresponding pixel in the second captured image is uniquely identified by executing the process of this step 330M.
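The disclosure leaves the matching method open ("existing image analysis such as pattern matching"). One conventional realization, sketched below with illustrative parameter values, crops a small patch around the pixel of interest in the first captured image and locates it in the second captured image by normalized cross-correlation; the function name and patch size are assumptions.

```python
import cv2

def find_corresponding_pixel(first_gray, second_gray, target_xy, half=15):
    """Locate, in the second image, the pixel corresponding to target_xy."""
    x, y = target_xy
    # Crop a square template around the pixel of interest in the first image.
    template = first_gray[y - half:y + half + 1, x - half:x + half + 1]
    # Normalized cross-correlation of the template against the second image.
    response = cv2.matchTemplate(second_gray, template, cv2.TM_CCOEFF_NORMED)
    _, _, _, max_loc = cv2.minMaxLoc(response)
    # max_loc is the top-left corner of the best match; recenter on the pixel.
    return (max_loc[0] + half, max_loc[1] + half)
```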
• In step 330N, the acquisition unit 110A identifies the three characteristic pixels in the outer wall surface image corresponding to the outer wall surface image 128 (see FIG. 32) in the second captured image, acquires corresponding characteristic pixel coordinates that specify the identified three characteristic pixels, and then proceeds to step 330P.
• The corresponding characteristic pixel coordinates refer to two-dimensional coordinates that specify the three characteristic pixels identified in the second captured image.
• The corresponding characteristic pixel coordinates are also the two-dimensional coordinates in the second captured image corresponding to the three characteristic pixel coordinates acquired in the process of step 330E, and they correspond to the plural pixel coordinates described above.
• The three characteristic pixels of the second captured image are identified, in the same manner as the method of identifying the corresponding pixel of interest described above, by executing existing image analysis such as pattern matching with the first and second captured images as analysis targets.
• In step 330P, the derivation unit 111A derives a, b, and c of the plane equation shown in Equation (7) from the three characteristic pixel coordinates, the corresponding characteristic pixel coordinates, the focal length, and the dimensions of the imaging pixel 60A1, thereby deriving the orientation of the plane defined by the plane equation.
• The focal length used in the process of step 330P is the focal length derived by executing the process of step 330L.
• Here, when the three characteristic pixel coordinates are (u_L1, v_L1), (u_L2, v_L2), and (u_L3, v_L3), and the corresponding characteristic pixel coordinates are (u_R1, v_R1), (u_R2, v_R2), and (u_R3, v_R3), the first to third characteristic pixel three-dimensional coordinates are defined by Equations (9) to (11) below.
• The first characteristic pixel three-dimensional coordinates refer to the three-dimensional coordinates corresponding to (u_L1, v_L1) and (u_R1, v_R1).
• The second characteristic pixel three-dimensional coordinates refer to the three-dimensional coordinates corresponding to (u_L2, v_L2) and (u_R2, v_R2).
• The third characteristic pixel three-dimensional coordinates refer to the three-dimensional coordinates corresponding to (u_L3, v_L3) and (u_R3, v_R3). Note that v_R1, v_R2, and v_R3 are not used in Equations (9) to (11).
• From the three simultaneous equations obtained by substituting each of the first to third characteristic pixel three-dimensional coordinates shown in Equations (9) to (11) into Equation (7), the derivation unit 111A derives a, b, and c of Equation (7).
• Deriving a, b, and c of Equation (7) means that the orientation of the plane defined by the plane equation shown in Equation (7) is derived.
• In step 330Q, the derivation unit 111A determines the plane equation shown in Equation (7) based on the irradiation position real space coordinates derived in the process of step 312, and then proceeds to step 330R. That is, in step 330Q, the derivation unit 111A substitutes a, b, and c derived in the process of step 330P and the irradiation position real space coordinates derived in the process of step 312 into Equation (7), thereby determining d of Equation (7). Since a, b, and c of Equation (7) have been derived in the process of step 330P, the plane equation shown in Equation (7) is determined once d of Equation (7) is determined in the process of step 330Q.
• In step 330R, the derivation unit 111A derives the imaging position distance based on the characteristic pixel three-dimensional coordinates and the plane equation, and then proceeds to step 330S.
• The characteristic pixel three-dimensional coordinates used in the process of step 330R are the first characteristic pixel three-dimensional coordinates.
• However, the characteristic pixel three-dimensional coordinates used in the process of step 330R are not limited to the first characteristic pixel three-dimensional coordinates; they may be the second or the third characteristic pixel three-dimensional coordinates.
• The plane equation used in the process of step 330R is the plane equation determined in the process of step 330Q.
• In step 330R, "B", which is the imaging position distance, is derived by substituting the characteristic pixel three-dimensional coordinates into the plane equation (the geometric core of steps 330P to 330R is sketched below).
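Steps 330P to 330R amount to fitting a plane through three recovered three-dimensional points and pinning it down with the laser irradiation point. The sketch below shows that geometry in isolation: the plane normal (a, b, c) from three points via a cross product, and d from a known on-plane point. How the three-dimensional points themselves depend on the unknown imaging position distance B (Equations (9) to (11)) is not reproduced in this excerpt, so this is a geometric illustration under that caveat, not the full step-330R solve for B.

```python
import numpy as np

def plane_orientation(p1, p2, p3):
    """Return the unit normal (a, b, c) of the plane a*x + b*y + c*z + d = 0
    passing through three 3-D points."""
    p1, p2, p3 = map(np.asarray, (p1, p2, p3))
    normal = np.cross(p2 - p1, p3 - p1)  # orientation of the plane
    return normal / np.linalg.norm(normal)

def determine_d(normal, irradiation_xyz):
    """Fix d by substituting a known on-plane point (the laser irradiation
    position in real space) into a*x + b*y + c*z + d = 0."""
    return -float(np.dot(normal, np.asarray(irradiation_xyz)))
```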
• In step 330S, the derivation unit 111A causes the display unit 86 to start displaying the imaging position distance derived in the process of step 330R superimposed on the second captured image, as shown in FIG. 33 as an example.
• In step 330S, the derivation unit 111A also stores the imaging position distance derived in the process of step 330R in a predetermined storage area, and then proceeds to step 330T.
• Examples of the predetermined storage area include a storage area of the primary storage unit 102 and a storage area of the secondary storage unit 104.
• In the example shown in FIG. 33, the numerical value "144656.1" corresponds to the imaging position distance derived in the process of step 330R, and the unit is millimeters.
• In step 330T, the derivation unit 111A determines whether or not a condition for ending the first derivation process is satisfied. The condition for ending the first derivation process is the same as the condition used in the process of step 302.
• If the condition for ending the first derivation process is not satisfied, the determination in step 330T is negative and the determination of step 330T is performed again. If the condition for ending the first derivation process is satisfied, the determination in step 330T is affirmative and the process proceeds to step 330U.
• In step 330U, the derivation unit 111A ends the display of the second captured image and the superimposed display information on the display unit 86, and then ends the first derivation process.
• The superimposed display information refers to the various types of information currently displayed superimposed on the second captured image, such as the second actually measured distance and the imaging position distance.
• In step 336 shown in FIG. 21, the CPU 100 executes the second derivation process shown in FIG. 24 as an example, and then ends the imaging position distance derivation process.
• In step 336A, the acquisition unit 110A determines whether or not a pixel of interest has been designated by the user from the first captured image via the touch panel 88.
• The pixel of interest corresponds to the first designated pixel described above.
• The touch panel 88 receives pixel designation information for designating, among the two-dimensional coordinates assigned to the touch panel 88, two-dimensional coordinates corresponding to a pixel included in the first captured image. Therefore, in step 336A, it is determined that the pixel of interest has been designated when pixel designation information has been received by the touch panel 88; that is, the pixel corresponding to the two-dimensional coordinates designated by the pixel designation information is set as the pixel of interest.
• If the pixel of interest has not been designated by the user from the first captured image via the touch panel 88, the determination in step 336A is negative and the process proceeds to step 336B.
• If the pixel of interest has been designated by the user from the first captured image via the touch panel 88, the determination in step 336A is affirmative and the process proceeds to step 336D.
• In step 336B, the acquisition unit 110A determines whether or not a condition for ending the second derivation process is satisfied. The condition for ending the second derivation process is the same as the condition used in the process of step 302.
• If the condition for ending the second derivation process is not satisfied, the determination in step 336B is negative and the process proceeds to step 336A.
• If the condition for ending the second derivation process is satisfied, the determination in step 336B is affirmative and the process proceeds to step 336C.
• In step 336C, the acquisition unit 110A performs the same process as the process of step 340, and then ends the second derivation process.
• In step 336D, the acquisition unit 110A acquires pixel-of-interest coordinates that specify the pixel designated by the user via the touch panel 88 in the first captured image, and then proceeds to step 336E.
• An example of the pixel of interest is the pixel of interest 126, and the pixel-of-interest coordinates are two-dimensional coordinates that specify the pixel of interest 126 in the first captured image.
• In step 336E, the acquisition unit 110A performs the same process as the process of step 340, and then proceeds to step 336F illustrated in FIG. 25.
• In step 336F, the acquisition unit 110A determines whether or not the measurement imaging button 90A has been turned on at the second position. If the measurement imaging button 90A has not been turned on, the determination in step 336F is negative and the process proceeds to step 336G. If the measurement imaging button 90A has been turned on, the determination in step 336F is affirmative and the process proceeds to step 336H.
• In step 336G, the acquisition unit 110A determines whether or not a condition for ending the second derivation process is satisfied. The condition for ending the second derivation process is the same as the condition used in the process of step 302.
• If the condition for ending the second derivation process is not satisfied, the determination in step 336G is negative and the process proceeds to step 336F.
• If the condition for ending the second derivation process is satisfied, the determination in step 336G is affirmative and the second derivation process is ended.
• In step 336H, the acquisition unit 110A causes the distance measurement unit 12 and the distance measurement control unit 68 to measure the second actually measured distance and causes the imaging device 14 to perform imaging, and then proceeds to step 336I.
• In step 336I, the acquisition unit 110A acquires the second actually measured distance measured by the distance measurement unit 12 and the distance measurement control unit 68 through the process of step 336H.
• The acquisition unit 110A also acquires a second captured image signal indicating the second captured image obtained by imaging with the imaging device 14 in the process of step 336H. Note that the second captured image indicated by the second captured image signal acquired in step 336I is the second captured image obtained by imaging in the focused state in the process of step 336H.
• In step 336J, the acquisition unit 110A causes the display unit 86 to start displaying the second captured image indicated by the second captured image signal acquired in the process of step 336I, and then proceeds to step 336K.
• In step 336K, the derivation unit 111A derives the focal length corresponding to the second actually measured distance using the focal length derivation table 109A, and then proceeds to step 336L.
• The second actually measured distance used in the process of step 336K is the second actually measured distance acquired by the acquisition unit 110A in the process of step 336I.
• As the focal length derivation method in step 336K, a derivation method similar to that used in the process of step 330L described above is employed.
• In step 336L, the acquisition unit 110A identifies the corresponding pixel of interest, that is, the pixel corresponding to the pixel of interest 126 among the pixels included in the second captured image, acquires corresponding pixel-of-interest coordinates that specify the identified pixel, and then proceeds to step 336M.
• In step 336L, an acquisition method similar to the acquisition method of the corresponding pixel-of-interest coordinates used in the process of step 330M described above is employed.
• In step 336M, the derivation unit 111A derives the second irradiation position pixel coordinates, and then proceeds to step 336N. That is, in step 336M, the derivation unit 111A derives, as the second irradiation position pixel coordinates, coordinates that identify, among the pixels of the second captured image, the position of the pixel corresponding to the position of the pixel specified by the first irradiation position pixel coordinates derived in the process of step 314.
• The pixel corresponding to the position of the pixel specified by the first irradiation position pixel coordinates is identified, as in the method of identifying the corresponding pixel of interest described above, by executing existing image analysis such as pattern matching with the first and second captured images as analysis targets.
• In step 336N, the derivation unit 111A derives the imaging position distance based on the irradiation position real space coordinates, the irradiation position pixel coordinates, the focal length, the dimensions of the imaging pixel 60A1, and Equation (2), and then proceeds to step 336P.
• The irradiation position real space coordinates used in the process of step 336N are the irradiation position real space coordinates derived in the process of step 312.
• The irradiation position pixel coordinates used in the process of step 336N are the first irradiation position pixel coordinates derived in the process of step 314 and the second irradiation position pixel coordinates derived in the process of step 336M.
• The focal length used in the process of step 336N is the focal length derived by the process of step 336K.
• In step 336N, the imaging position distance "B" is derived by substituting the irradiation position real space coordinates, the irradiation position pixel coordinates, the focal length, and the dimensions of the imaging pixel 60A1 into Equation (2).
• In step 336P, the derivation unit 111A causes the display unit 86 to start displaying the imaging position distance derived in the process of step 336N superimposed on the second captured image, as illustrated in FIG. 33 as an example.
• In step 336P, the derivation unit 111A also stores the imaging position distance derived in the process of step 336N in a predetermined storage area, and then proceeds to step 336Q.
• In step 336Q, the derivation unit 111A determines whether or not a condition for ending the second derivation process is satisfied. The condition for ending the second derivation process is the same as the condition used in the process of step 302.
• If the condition for ending the second derivation process is not satisfied, the determination in step 336Q is negative and the determination of step 336Q is performed again. If the condition for ending the second derivation process is satisfied, the determination in step 336Q is affirmative and the process proceeds to step 336R.
• In step 336R, the derivation unit 111A ends the display of the second captured image and the superimposed display information on the display unit 86, and then ends the second derivation process.
• The superimposed display information refers to the various types of information currently displayed superimposed on the second captured image, such as the second actually measured distance and the imaging position distance.
• In this way, the load required to derive the imaging position distance is smaller in the second derivation process than in the first derivation process.
• On the other hand, the derivation accuracy of the imaging position distance by the second derivation process becomes higher than the derivation accuracy of the imaging position distance by the first derivation process when the actual irradiation position of the laser beam corresponds to a pixel position that can be identified in both captured images.
• In step 350, the derivation unit 111A determines whether or not the imaging position distance has already been derived by the process of step 330R included in the first derivation process or the process of step 336N included in the second derivation process. If the imaging position distance has not been derived by either process, the determination in step 350 is negative and the process proceeds to step 358. If the imaging position distance has already been derived by either process, the determination in step 350 is affirmative and the process proceeds to step 352.
• In step 352, the derivation unit 111A determines whether or not a condition for starting derivation of the designated pixel three-dimensional coordinates (hereinafter referred to as the "derivation start condition") is satisfied.
• Examples of the derivation start condition include a condition that an instruction to start derivation of the designated pixel three-dimensional coordinates has been accepted by the touch panel 88, and a condition that the imaging position distance is displayed on the display unit 86.
• If the derivation start condition is not satisfied, the determination in step 352 is negative and the process proceeds to step 358. If the derivation start condition is satisfied, the determination in step 352 is affirmative and the process proceeds to step 354.
• In step 354, the derivation unit 111A derives the designated pixel three-dimensional coordinates based on the pixel-of-interest coordinates, the corresponding pixel-of-interest coordinates, the imaging position distance, the focal length, the dimensions of the imaging pixel 60A1, and Equation (2), and then proceeds to step 356.
• The pixel-of-interest coordinates used in the process of step 354 are the pixel-of-interest coordinates acquired in the process of step 330D included in the first derivation process or the process of step 336D included in the second derivation process.
• The corresponding pixel-of-interest coordinates used in the process of step 354 are the corresponding pixel-of-interest coordinates acquired in the process of step 330M included in the first derivation process or the process of step 336L included in the second derivation process.
• The imaging position distance used in the process of step 354 is the imaging position distance derived by the process of step 330R included in the first derivation process or the process of step 336N included in the second derivation process.
• The focal length used in the process of step 354 is the focal length derived by the process of step 330L included in the first derivation process or the process of step 336K included in the second derivation process.
• In step 354, the designated pixel three-dimensional coordinates are derived by substituting the pixel-of-interest coordinates, the corresponding pixel-of-interest coordinates, the imaging position distance, the focal length, and the dimensions of the imaging pixel 60A1 into Equation (2).
• In step 356, the derivation unit 111A causes the display unit 86 to display the designated pixel three-dimensional coordinates derived in the process of step 354 superimposed on the second captured image, as shown in FIG. 35 as an example.
• In step 356, the derivation unit 111A also stores the designated pixel three-dimensional coordinates derived in the process of step 354 in a predetermined storage area, and then proceeds to step 358.
• Examples of the predetermined storage area include a storage area of the primary storage unit 102 and a storage area of the secondary storage unit 104.
• In the example shown in FIG. 35, (20161, 50134, 136892) corresponds to the designated pixel three-dimensional coordinates derived in the process of step 354.
• In the example shown in FIG. 35, the designated pixel three-dimensional coordinates are displayed close to the pixel of interest 126. Note that the pixel of interest 126 may be highlighted so as to be distinguishable from other pixels.
• In step 358, the derivation unit 111A determines whether or not a condition for ending the three-dimensional coordinate derivation process is satisfied.
• An example of the condition for ending the three-dimensional coordinate derivation process is a condition that an instruction to end the three-dimensional coordinate derivation process has been received via the touch panel 88.
• Another example of the condition for ending the three-dimensional coordinate derivation process is a condition that a second predetermined time has passed without the determination in step 350 being affirmative after the determination in step 350 was negative.
• The second predetermined time refers to, for example, 30 minutes.
• If the condition for ending the three-dimensional coordinate derivation process is not satisfied, the determination in step 358 is negative and the process proceeds to step 350. If the condition for ending the three-dimensional coordinate derivation process is satisfied, the determination in step 358 is affirmative and the three-dimensional coordinate derivation process is ended.
• As described above, in the distance measuring device 10A, the derivation unit 111A derives the focal length corresponding to the actually measured distance acquired by the acquisition unit 110A, using the focal length derivation table 109A, which indicates the correspondence between the actually measured distance and the focal length.
• Therefore, according to the distance measuring device 10A, the focal length can be derived with high accuracy, and without time and effort, compared with the case where the user inputs the length of a reference image included in the captured image.
• In the distance measuring device 10A, the acquisition unit 110A acquires the first captured image, the second captured image, and the second actually measured distance (steps 330J and 336I).
• The derivation unit 111A derives the focal length corresponding to the second actually measured distance acquired by the acquisition unit 110A, using the focal length derivation table 109A (steps 330L and 336K). Then, the derivation unit 111A derives the imaging position distance based on the derived focal length (steps 330R and 336N).
• Therefore, according to the distance measuring device 10A, the imaging position distance can be derived with high accuracy, and without time and effort, compared with the case where the user inputs the length of a reference image included in the captured image.
• Further, in the distance measuring device 10A, the actually measured distance is acquired by the acquisition unit 110A (step 206), the focal length corresponding to the acquired actually measured distance is derived by the derivation unit 111A using the focal length derivation table 109A (step 209), and the length of the region in real space corresponding to the interval between two designated pixels is derived (step 242).
• Therefore, according to the distance measuring device 10A, the length of the region in real space corresponding to the interval between the two designated pixels can be derived with high accuracy, and without time and effort, compared with the case where the user inputs the length of a reference image included in the captured image.
• In the distance measuring device 10A, the first derivation process and the second derivation process are selectively executed.
• The first derivation process derives the imaging position distance with higher accuracy than the second derivation process when the actual irradiation position of the laser beam is a position in real space corresponding to a pixel different from the pixels that can be identified at mutually corresponding positions in each of the first captured image and the second captured image.
• The second derivation process derives the imaging position distance with higher accuracy than the first derivation process when the actual irradiation position of the laser beam is a position in real space corresponding to a pixel that can be identified at mutually corresponding positions in each of the first captured image and the second captured image.
• Further, the first derivation process and the second derivation process are selectively executed, in accordance with an instruction received by the touch panel 88, when the position of the pixel specified by the irradiation position pixel coordinates derived by the derivation unit 111A is the identifiable pixel position. Therefore, according to the distance measuring device 10A, the imaging position distance can be derived with higher accuracy than when the imaging position distance is derived by only one type of derivation process regardless of the irradiation position of the laser beam.
• In the distance measuring device 10A, the first derivation process is executed when the position of the pixel specified by the irradiation position pixel coordinates derived by the derivation unit 111A is a pixel position different from the identifiable pixel position. Therefore, according to the distance measuring device 10A, when the position of the pixel specified by the irradiation position pixel coordinates is a pixel position different from the identifiable pixel position, the imaging position distance can be derived with higher accuracy than when it is derived by a derivation process other than the first derivation process.
• Furthermore, in the distance measuring device 10A, the second derivation process derives the imaging position distance based on fewer parameters than the number of parameters used in the derivation of the imaging position distance by the first derivation process. Therefore, according to the distance measuring device 10A, the imaging position distance can be derived with a lower load than when the imaging position distance is derived only by the first derivation process regardless of the irradiation position of the laser beam.
• Further, in the distance measuring device 10A, the first derivation process and the second derivation process can be selected after the user recognizes whether or not the position of the pixel specified by the first irradiation position pixel coordinates derived by the derivation unit 111A is the identifiable pixel position.
• In the distance measuring device 10A, the mismatch message 137B is displayed on the display unit 86 when the position of the pixel specified by the first irradiation position pixel coordinates derived by the derivation unit 111A is a pixel position different from the identifiable pixel position. Therefore, according to the distance measuring device 10A, the first derivation process and the second derivation process can be selected after the user is made to recognize that the position of the pixel specified by the first irradiation position pixel coordinates derived by the derivation unit 111A is a pixel position different from the identifiable pixel position.
• In the distance measuring device 10A, the designated pixel three-dimensional coordinates are derived based on the imaging position distance derived by the imaging position distance derivation process (see FIG. 34). Therefore, according to the distance measuring device 10A, the designated pixel three-dimensional coordinates can be derived with higher accuracy than when the imaging position distance is derived by only one type of derivation process regardless of the irradiation position of the laser beam.
• In the distance measuring device 10A, the designated pixel three-dimensional coordinates are defined based on the pixel-of-interest coordinates, the corresponding pixel-of-interest coordinates, the imaging position distance, the focal length, and the dimensions of the imaging pixel 60A1 (see Equation (2)). Therefore, according to the distance measuring device 10A, the designated pixel three-dimensional coordinates can be derived with higher accuracy than when they are not defined based on the pixel-of-interest coordinates, the corresponding pixel-of-interest coordinates, the imaging position distance, the focal length, and the dimensions of the imaging pixel 60A1.
• In the distance measuring device 10A, the derivation unit 111A derives the orientation of the plane defined by the plane equation shown in Equation (7) based on the three characteristic pixel coordinates, the corresponding characteristic pixel coordinates, the focal length, and the dimensions of the imaging pixel 60A1 (step 330P).
• The plane equation shown in Equation (7) is then determined by the derivation unit 111A based on the orientation of the plane and the irradiation position real space coordinates derived in step 312 (step 330Q).
• The imaging position distance is derived by the derivation unit 111A based on the determined plane equation and the characteristic pixel three-dimensional coordinates (for example, the first characteristic pixel three-dimensional coordinates) (step 330R).
• Therefore, according to the distance measuring device 10A, the imaging position distance can be derived with higher accuracy than when it is derived without using the plane equation.
• In the distance measuring device 10A, the three characteristic pixel coordinates are acquired by the acquisition unit 110A (step 330E), and the corresponding characteristic pixel coordinates are acquired by the acquisition unit 110A (step 330N).
• The derivation unit 111A derives the imaging position distance based on the three characteristic pixel coordinates, the corresponding characteristic pixel coordinates, the irradiation position real space coordinates, the focal length, and the dimensions of the imaging pixel 60A1 (step 330R). Therefore, according to the distance measuring device 10A, the imaging position distance can be derived based on the three characteristic pixel coordinates and the corresponding characteristic pixel coordinates with fewer operations than when the user designates the characteristic pixels in acquiring these coordinates.
• Further, in the distance measuring device 10A, when pixel designation information is received by the touch panel 88, the pixel designated by the received pixel designation information is set as the pixel of interest 126, and the pixel-of-interest coordinates are acquired by the acquisition unit 110A (steps 330D and 336D).
• The acquisition unit 110A identifies the corresponding pixel of interest, that is, the pixel corresponding to the pixel of interest 126, and acquires the corresponding pixel-of-interest coordinates that specify the identified pixel (steps 330M and 336L). Therefore, according to the distance measuring device 10A, the designated pixels for both the first captured image and the second captured image can be determined quickly compared with the case where the user specifies the designated pixels in both images.
• The distance measuring device 10A includes the distance measurement unit 12 and the distance measurement control unit 68, and the first actually measured distance and the second actually measured distance measured by the distance measurement unit 12 and the distance measurement control unit 68 are acquired by the acquisition unit 110A. Therefore, according to the distance measuring device 10A, the first and second actually measured distances acquired by the distance measurement unit 12 and the distance measurement control unit 68 can be used for deriving the irradiation position real space coordinates and the irradiation position pixel coordinates.
• The distance measuring device 10A also includes the imaging device 14, and the acquisition unit 110A acquires the first captured image and the second captured image obtained by imaging the subject with the imaging device 14. Therefore, according to the distance measuring device 10A, the first and second captured images obtained by imaging the subject with the imaging device 14 can be used for deriving the imaging position distance.
• In the distance measuring device 10A, the derivation results of the derivation unit 111A are displayed by the display unit 86 (see FIGS. 33 and 35). Therefore, according to the distance measuring device 10A, the derivation results of the derivation unit 111A can be easily recognized by the user compared with the case where they are not displayed by the display unit 86.
• In the first embodiment described above, the focal length is derived using the focal length derivation table 109A, but the technology of the present disclosure is not limited to this; for example, the focal length may be derived using Equation (12) below.
• In Equation (12), "f_0" is the focal length.
• "F_zoom" is a nominal focal length predetermined for the position of the zoom lens 52 in the optical axis direction in the lens unit 16, and is a fixed value. The position of the zoom lens 52 in the optical axis direction is defined by the distance from the imaging surface 60B toward the subject side, and the unit is millimeters.
• "D" is the actually measured distance.
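The exact form of Equation (12) is not reproduced in this excerpt, so the following is only an assumed illustration of the general idea: a fixed nominal focal length F_zoom corrected by the measured subject distance D. The thin-lens image-distance relation used here is a hypothesis, not the disclosed equation.

```python
def focal_length_from_distance(f_zoom_mm, d_mm):
    """Sketch only: the exact form of Equation (12) is given in the source
    document and is not reproduced here. As an assumed illustration, the
    thin-lens image-distance relation f0 = F_zoom * D / (D - F_zoom) shows
    how an effective focal length can be corrected for the measured
    subject distance D (all values in millimeters)."""
    return f_zoom_mm * d_mm / (d_mm - f_zoom_mm)
```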
• In the first embodiment described above, three characteristic pixel coordinates are exemplified, but the technology of the present disclosure is not limited to this; two-dimensional coordinates that specify each of a predetermined number (four or more) of characteristic pixels may be adopted instead.
• In the first embodiment, the case where the pixel-of-interest coordinates are acquired from coordinates on the first captured image and the corresponding pixel-of-interest coordinates are acquired from coordinates on the second captured image is exemplified, but the technology of the present disclosure is not limited to this: the pixel-of-interest coordinates may be acquired from coordinates on the second captured image, and the corresponding pixel-of-interest coordinates from coordinates on the first captured image.
• Likewise, the case where the three characteristic pixel coordinates are acquired from coordinates on the first captured image and the corresponding characteristic pixel coordinates are acquired from coordinates on the second captured image is exemplified, but the technology of the present disclosure is not limited to this: the three characteristic pixel coordinates may be acquired from coordinates on the second captured image, and the corresponding characteristic pixel coordinates from coordinates on the first captured image.
• In the first embodiment, the three characteristic pixels are extracted based on spatial frequency, but the technology of the present disclosure is not limited to this. For example, two-dimensional coordinates that specify each of a first pixel 130A, a second pixel 132A, and a third pixel 134A may be acquired by the acquisition unit 110A.
• Here, the first pixel 130A, the second pixel 132A, and the third pixel 134A are the three pixels that maximize the area they enclose within the outer wall surface image 128.
• The number of such pixels is not limited to three; it may be any predetermined number of three or more pixels that maximizes the enclosed area within the outer wall surface image 128.
• In this way, the three pixels that maximize the enclosed area within the outer wall surface image 128 are identified as the three characteristic pixels, and the two-dimensional coordinates of the identified pixels are acquired by the acquisition unit 110A as the three characteristic pixel coordinates.
• The corresponding characteristic pixel coordinates corresponding to the three characteristic pixel coordinates are also acquired by the acquisition unit 110A. Therefore, according to the distance measuring device 10A, the imaging position distance can be derived with higher accuracy than when three characteristic pixel coordinates and corresponding characteristic pixel coordinates that specify pixels whose enclosed area is not the maximum are acquired.
• In the first embodiment, the case where the imaging position distance derivation process is executed when the three-dimensional coordinate derivation button 90G is turned on has been described, but the technology of the present disclosure is not limited to this; for example, the imaging position distance derivation process may be executed when the imaging position distance derivation button 90F is turned on.
• The imaging position distance derivation process described in the first embodiment is an example for the case where the ultimate purpose is the derivation of three-dimensional coordinates.
• For this reason, the pixel-of-interest coordinates and the corresponding pixel-of-interest coordinates required for deriving the three-dimensional coordinates are acquired within the imaging position distance derivation process; however, when only the imaging position distance is derived, the acquisition of these coordinates in the imaging position distance derivation process is unnecessary. Therefore, when the imaging position distance derivation button 90F is turned on, the CPU 100 may derive the imaging position distance without acquiring the pixel-of-interest coordinates and the corresponding pixel-of-interest coordinates, and may then acquire them when the three-dimensional coordinate derivation button 90G is turned on.
• In this case, the CPU 100 may acquire the pixel-of-interest coordinates and the corresponding pixel-of-interest coordinates between the process of step 352 and the process of step 354 of the three-dimensional coordinate derivation process shown in FIG. 34, and the acquired coordinates may be used in the process of step 354.
• Further, when the determination in step 318 is affirmative, the derivation unit 111A may forcibly execute the second derivation process, and when the determination in step 318 is negative, the derivation unit 111A may forcibly execute the first derivation process.
• As shown in FIG. 6 as an example, the distance measuring device 10B differs from the distance measuring device 10A in that a dimension derivation program 105B is stored in the secondary storage unit 104 in place of the dimension derivation program 105A. The distance measuring device 10B also differs in that an imaging position distance derivation program 106B is stored in the secondary storage unit 104 in place of the imaging position distance derivation program 106A, and in that a focal length derivation table 109B is stored in the secondary storage unit 104 in place of the focal length derivation table 109A.
• The CPU 100 operates as an acquisition unit 110B and a derivation unit 111B, as illustrated in FIG. 8 as an example, by executing at least one of the dimension derivation program 105B and the imaging position distance derivation program 106B.
• The acquisition unit 110B corresponds to the acquisition unit 110A described in the first embodiment, and the derivation unit 111B corresponds to the derivation unit 111A described in the first embodiment.
• Hereinafter, the acquisition unit 110B and the derivation unit 111B will be described only with respect to their differences from the acquisition unit 110A and the derivation unit 111A described in the first embodiment.
• The focal length derivation table 109B is a table showing the correspondence among the derivation distance, the position of the zoom lens 52 in the optical axis direction, and the focal length.
• Here, the optical axis direction of the zoom lens 52 indicates, for example, the direction of the optical axis L2.
• The position of the zoom lens 52 in the optical axis direction is defined by the distance from the imaging surface 60B toward the subject side, and the unit is millimeters.
• Hereinafter, for convenience of explanation, the "position of the zoom lens 52 in the optical axis direction" stored in the focal length derivation table 109B is simply referred to as the "derivation position".
• In the focal length derivation table 109B, a plurality of derivation distances are defined, and a focal length is associated with each of a plurality of derivation positions for each derivation distance.
• For example, a different focal length is associated with each of the derivation positions of 18 millimeters, 23 millimeters, 35 millimeters, and 55 millimeters for the derivation distance of 1 meter.
• A different focal length is likewise associated with each derivation position for each of the other derivation distances.
• The focal length derivation table 109B is a table derived from the results of at least one of a test using an actual distance measuring device 10B and a computer simulation based on the design specifications of the distance measuring device 10B.
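To make this two-key lookup concrete, the sketch below models table 109B as a grid over (derivation distance, derivation position) and interpolates the focal length bilinearly. The grid values, the names, and the choice of bilinear interpolation are assumptions for illustration; the disclosure states only that the table associates a focal length with each pair and that interpolation is used when no exact entry matches.

```python
import numpy as np
from scipy.interpolate import RegularGridInterpolator

# Hypothetical contents of table 109B: rows are derivation distances (mm),
# columns are derivation positions of the zoom lens (mm). Values illustrative.
DISTANCES = np.array([1000.0, 2000.0, 5000.0])
POSITIONS = np.array([18.0, 23.0, 35.0, 55.0])
FOCAL_LENGTHS = np.array([
    [18.3, 23.4, 35.6, 55.9],
    [18.2, 23.3, 35.4, 55.6],
    [18.1, 23.1, 35.2, 55.3],
])

# Linear (bilinear) interpolation between the stored grid points.
_lookup_109b = RegularGridInterpolator((DISTANCES, POSITIONS), FOCAL_LENGTHS)

def derive_focal_length_109b(measured_distance_mm, zoom_position_mm):
    return float(_lookup_109b([[measured_distance_mm, zoom_position_mm]])[0])
```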
• Next, the dimension derivation process realized by the CPU 100 executing the dimension derivation program 105B will be described with reference to FIG. 15 and FIG. 38. Note that the same steps as those included in the dimension derivation process described in the first embodiment are denoted by the same step numbers, and description thereof is omitted.
• The dimension derivation process according to the second embodiment (see FIG. 38) differs from the dimension derivation process described in the first embodiment (see FIG. 14) in that it includes the process of step 370 in place of the process of step 206, and the process of step 372 in place of the process of step 209.
• In step 370, the acquisition unit 110B acquires the actually measured distance and position information. Here, the position information refers to information indicating the position of the zoom lens 52 in the optical axis direction (the direction of the optical axis L2) in the lens unit 16.
• In the second embodiment, information indicating the current position of the zoom lens 52 in the optical axis direction in the lens unit 16 is used as an example of the position information, but the technology of the present disclosure is not limited to this.
• For example, information indicating the position of the zoom lens 52 in the optical axis direction in the lens unit 16 at the imaging timing several frames (for example, two frames) earlier can also be used as the position information.
• The process of step 372 shown in FIG. 38 differs from the process of step 209 shown in FIG. 14 in that the derivation unit 111B derives the focal length using the focal length derivation table 109B instead of the focal length derivation table 109A.
• In step 372, the derivation unit 111B derives the focal length corresponding to the actually measured distance and the position information using the focal length derivation table 109B.
• The actually measured distance used in the process of step 372 is the actually measured distance acquired by the acquisition unit 110B in the process of step 370.
• The position information used in the process of step 372 is the position information acquired by the acquisition unit 110B in the process of step 370.
• Here, the focal length corresponding to the position information refers to the focal length associated with the derivation position that matches the position of the zoom lens 52 in the optical axis direction indicated by the position information, among the plurality of derivation positions included in the focal length derivation table 109B.
• When there is no matching derivation distance or derivation position, the derivation unit 111B derives the focal length from the derivation positions of the focal length derivation table 109B by the interpolation method described above.
• Next, the imaging position distance derivation process realized by the CPU 100 executing the imaging position distance derivation program 106B will be described with reference to FIGS. 21, 22, 24, 39, and 40. Note that the same steps as those included in the imaging position distance derivation process described in the first embodiment (see FIGS. 21 to 25) are denoted by the same step numbers, and description thereof is omitted.
• The imaging position distance derivation process according to the second embodiment differs from the imaging position distance derivation process described in the first embodiment in that it includes the process of step 380 in place of the process of step 330J, the process of step 382 in place of the process of step 330L, the process of step 390 in place of the process of step 336I, and the process of step 392 in place of the process of step 336K.
• The process of step 380 shown in FIG. 39 differs from the process of step 330J shown in FIG. 23 in that the acquisition unit 110B further acquires the position information.
• The process of step 382 shown in FIG. 39 differs from the process of step 330L shown in FIG. 23 in that the derivation unit 111B derives the focal length using the focal length derivation table 109B instead of the focal length derivation table 109A.
• That is, in step 382, the derivation unit 111B derives the focal length corresponding to the second actually measured distance and the position information using the focal length derivation table 109B.
• The second actually measured distance used in the process of step 382 is the second actually measured distance acquired by the acquisition unit 110B in the process of step 380.
• The position information used in the process of step 382 is the position information acquired by the acquisition unit 110B in the process of step 380.
• In step 382, the focal length is derived by the same derivation method as the focal length derivation method in the process of step 372 shown in FIG. 38.
• The process of step 390 shown in FIG. 40 differs from the process of step 336I shown in FIG. 25 in that the acquisition unit 110B further acquires the position information.
• The process of step 392 shown in FIG. 40 differs from the process of step 336K shown in FIG. 25 in that the derivation unit 111B derives the focal length using the focal length derivation table 109B instead of the focal length derivation table 109A.
• That is, in step 392, the derivation unit 111B derives the focal length corresponding to the second actually measured distance and the position information using the focal length derivation table 109B.
• The second actually measured distance and the position information used in the process of step 392 are the second actually measured distance and the position information acquired by the acquisition unit 110B in the process of step 390.
• In step 392, the focal length is derived by the same derivation method as the focal length derivation method in the process of step 372.
• As described above, in the distance measuring device 10B, the focal length derivation table 109B, which indicates the correspondence among the actually measured distance, the position of the zoom lens 52 in the optical axis direction, and the focal length, is defined. The second actually measured distance and the position information are acquired by the acquisition unit 110B (step 380). Then, the derivation unit 111B derives the focal length corresponding to the second actually measured distance and the position information acquired by the acquisition unit 110B, using the focal length derivation table 109B (step 382).
• Therefore, according to the distance measuring device 10B, even when the position of the zoom lens 52 in the optical axis direction changes, the focal length can be derived with high accuracy, and without time and effort, compared with the case where the user inputs the length of a reference image included in the captured image.
• In the second embodiment, the case where the focal length corresponding to the second actually measured distance and the position information is derived by executing the process of step 382 (392) included in the imaging position distance derivation process has been described, but the technology of the present disclosure is not limited to this. For example, a first process and a second process described below may be executed.
• Here, the first process refers to a process in which the acquisition unit 110B acquires the first actually measured distance, the first captured image signal, and the position information.
• The second process refers to a process in which the derivation unit 111B derives the focal length corresponding to the first actually measured distance and the position information using the focal length derivation table 109B.
• The first actually measured distance and the position information used in the second process are the first actually measured distance and the position information acquired by the acquisition unit 110B by executing the first process.
• In the second embodiment, the focal length is derived using the focal length derivation table 109B, but the technology of the present disclosure is not limited to this; for example, the focal length may be derived using Equation (12) described above.
• In this case, the position of the zoom lens 52 in the optical axis direction indicated by the position information is adopted as "F_zoom" in Equation (12).
  • the focal distance derivation table 109B indicating the correspondence between the derivation distance, the derivation position, and the focal distance is exemplified, but the technology of the present disclosure is not limited to this.
  • the focal length may be derived using the focal length derivation table 109C shown in FIGS. 41A to 41E.
• The focal length derivation table 109C shown in FIGS. 41A to 41E is a table showing the correspondence relationship between the derivation distance, the derivation position, the derivation temperature, the derivation focus lens attitude, the derivation zoom lens attitude, and the focal length. That is, in the focal length derivation table 109C, the derivation distance, the derivation position, the derivation temperature, the derivation focus lens attitude, the derivation zoom lens attitude, and the focal length are associated with each other.
  • the temperature for derivation refers to the temperature of the region that affects the imaging by the imaging device 14.
  • the temperature of the region that affects imaging by the imaging device 14 refers to, for example, the temperature of the outside air, the temperature of the space inside the lens unit 16, or the temperature of the imaging element 60.
• In the present embodiment, the unit of the derivation temperature is “°C”.
  • the derivation focus lens orientation refers to the orientation of the focus lens 50 with respect to the vertical direction.
• The derivation focus lens posture is one of the parameters in the focal length derivation table 109C because the focus lens moving mechanism 53 can move under the weight of the focus lens 50 without depending on the power of the motor 57, so that the focus lens 50 moves along the optical axis L2 toward the imaging surface 60B side or the subject side, which affects the focal length.
  • the posture of the focus lens 50 with respect to the vertical direction can be defined by the angle formed by the vertical direction and the optical axis of the focus lens 50.
  • the derivation zoom lens orientation refers to the orientation of the zoom lens 52 with respect to the vertical direction.
• The derivation zoom lens posture is one of the parameters in the focal length derivation table 109C because the zoom lens moving mechanism 54 can move under the weight of the zoom lens 52 without depending on the power of the motor 56, so that the zoom lens 52 moves along the optical axis L2 toward the imaging surface 60B side or the subject side, which affects the focal length.
• The attitude of the zoom lens 52 with respect to the vertical direction can be defined by the angle formed by the vertical direction and the optical axis of the zoom lens 52.
  • a plurality of derivation positions are associated with each of a plurality of derivation distances.
  • a plurality of derivation temperatures are associated with each of the plurality of derivation positions.
  • a plurality of derivation focus lens postures are associated with each of the plurality of derivation temperatures.
  • a plurality of derivation zoom lens postures are associated with each of the plurality of derivation focus lens postures.
  • a focal length is individually associated with each of all prescribed derivation zoom lens postures.
• The focal length derivation table 109C is a table derived from, for example, the results of at least one of a test using an actual distance measuring device 10B and a computer simulation based on the design specifications of the distance measuring device 10B.
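• For concreteness, the nested association just described could be represented as follows; the dict layout and every numeric value are illustrative assumptions, not values taken from the patent.

```python
# Hypothetical fragment of the focal length derivation table 109C:
# derivation distance -> derivation position -> derivation temperature
# -> derivation focus lens attitude -> derivation zoom lens attitude
# -> focal length. Units and values are invented for illustration.
FOCAL_LENGTH_TABLE_109C = {
    1000.0: {                      # derivation distance
        5.0: {                     # derivation position of the zoom lens 52
            25.0: {                # derivation temperature (°C)
                0.0: {             # derivation focus lens attitude (deg)
                    0.0: 18.2,     # derivation zoom lens attitude -> focal length
                    90.0: 18.3,
                },
                90.0: {
                    0.0: 18.25,
                    90.0: 18.35,
                },
            },
        },
    },
    # ... entries for the remaining derivation distances, positions, and so on
}
```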
• When deriving the focal length using the focal length derivation table 109C, the CPU 100 further acquires temperature information, focus lens attitude information, and zoom lens attitude information in addition to the actually measured distance and the position information.
  • the temperature information refers to temperature information of a region that affects imaging by the imaging device 14.
• In the present embodiment, information indicating the current temperature of a region that affects imaging by the imaging device 14 is employed as the temperature information, but the technology of the present disclosure is not limited to this.
• For example, the temperature information may be a temperature detected by a sensor (not shown) or information received by the touch panel 88 as the temperature information. Further, when the distance measuring device 10B can communicate with an external device such as a server via the Internet (not shown), the temperature may be acquired from weather information provided by the external device via the Internet.
  • the focus lens attitude information refers to information indicating the attitude of the focus lens 50 with respect to the vertical direction.
  • information indicating the current posture of the focus lens 50 with respect to the vertical direction is used as an example of the focus lens posture information, but the technology of the present disclosure is not limited to this.
• For example, information indicating the attitude of the focus lens 50 with respect to the vertical direction at the imaging timing several frames (for example, two frames) earlier can also be used as the focus lens attitude information.
• The focus lens attitude information may be, for example, a detection result obtained by detecting the attitude of the focus lens 50 with respect to the vertical direction with a sensor (not shown), or may be information received by the touch panel 88 as the focus lens attitude information.
  • the zoom lens attitude information refers to information indicating the attitude of the zoom lens 52 with respect to the vertical direction.
  • information indicating the current posture of the zoom lens 52 with respect to the vertical direction is used as an example of the zoom lens posture information, but the technology of the present disclosure is not limited to this.
• For example, information indicating the attitude of the zoom lens 52 with respect to the vertical direction at the imaging timing several frames (for example, two frames) earlier can also be used as the zoom lens attitude information.
• The zoom lens attitude information may be, for example, a detection result obtained by detecting the attitude of the zoom lens 52 with respect to the vertical direction with a sensor (not shown), or may be information received by the touch panel 88 as the zoom lens attitude information.
• In this case, the CPU 100 acquires the measured distance, the position information, the temperature information, the focus lens attitude information, and the zoom lens attitude information, and uses the focal length derivation table 109C to derive the focal length corresponding to the acquired measured distance, position information, temperature information, focus lens attitude information, and zoom lens attitude information. When no parameters that match the acquired values exist in the focal length derivation table 109C, the focal length may be derived by the interpolation method described in the above embodiments.
• Therefore, compared with the case where the user inputs the length of the reference image included in the captured image in order to increase the accuracy of the focal length, the distance measuring device 10B can derive the focal length with high accuracy without taking time and effort even if the position of the zoom lens 52 in the optical axis direction changes, the temperature of the region that affects imaging by the imaging device 14 changes, the attitude of the focus lens 50 changes, and the attitude of the zoom lens 52 changes.
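• The lookup itself can be sketched as below, assuming the table has been flattened into a list of parameter tuples (for example, from a nested representation such as the one shown earlier). Nearest-entry selection with per-axis normalisation stands in for the interpolation method, and the scale factors are illustrative assumptions.

```python
def derive_focal_length_109c(entries, measured):
    """entries: list of (distance, position, temperature,
    focus_attitude, zoom_attitude, focal_length) tuples.
    measured: the five acquired values in the same order.
    Returns the focal length of the closest parameter combination."""
    scale = (1000.0, 10.0, 10.0, 90.0, 90.0)  # rough per-axis normalisation

    def squared_distance(entry):
        return sum(((entry[i] - measured[i]) / scale[i]) ** 2
                   for i in range(5))

    return min(entries, key=squared_distance)[5]
```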
• In the above, the case where the derivation distance, the derivation position, the derivation temperature, the derivation focus lens attitude, the derivation zoom lens attitude, and the focal length are associated with each other has been described, but the disclosed technique is not limited to this.
• For example, the CPU 100 may derive the focal length using a focal length derivation table in which at least one of the derivation position, the derivation temperature, the derivation focus lens attitude, and the derivation zoom lens attitude is associated with the derivation distance and the focal length.
• For example, when the focal length derivation table associates the derivation distance, the derivation position, the derivation temperature, and the focal length with each other, the distance measuring device 10B can derive the focal length with high accuracy without taking time and effort, compared with the case where the user inputs the length of the reference image included in the captured image to increase the accuracy of the focal length, even if the position of the zoom lens 52 in the optical axis direction changes and the temperature of the region that affects imaging by the imaging device 14 changes.
• Similarly, when the focal length derivation table associates the derivation distance, the derivation position, the derivation temperature, the derivation focus lens posture, and the focal length with each other, the focal length can be derived with high accuracy without taking time and effort even if the position of the zoom lens 52 in the optical axis direction changes, the temperature of the region that affects imaging by the imaging device 14 changes, and the posture of the focus lens 50 changes.
• When the focal length derivation table associates the derivation distance, the derivation position, the derivation zoom lens posture, and the focal length with each other, the focal length can be derived with high accuracy even if the position of the zoom lens 52 in the optical axis direction changes and the posture of the zoom lens 52 changes.
• When the focal length derivation table associates the derivation distance, the derivation zoom lens posture, and the focal length with each other, the focal length can be derived with high accuracy even if the posture of the zoom lens 52 changes.
• When the focal length derivation table associates the derivation distance, the derivation focus lens posture, the derivation zoom lens posture, and the focal length with each other, the focal length can be derived with high accuracy even if the posture of the focus lens 50 and the posture of the zoom lens 52 change.
• When the focal length derivation table associates the derivation distance, the derivation temperature, the derivation focus lens attitude, the derivation zoom lens attitude, and the focal length with each other, the focal length can be derived with high accuracy even if the temperature of the region that affects imaging by the imaging device 14, the attitude of the focus lens 50, and the attitude of the zoom lens 52 change.
• When the focal length derivation table associates the derivation distance, the derivation temperature, the derivation zoom lens attitude, and the focal length with each other, the focal length can be derived with high accuracy even if the temperature of the region that affects imaging by the imaging device 14 and the attitude of the zoom lens 52 change.
• When the focal length derivation table associates the derivation distance, the derivation position, the derivation focus lens posture, and the focal length with each other, the focal length can be derived with high accuracy even if the position of the zoom lens 52 in the optical axis direction and the posture of the focus lens 50 change.
• When the focal length derivation table associates the derivation distance, the derivation position, the derivation focus lens attitude, the derivation zoom lens attitude, and the focal length with each other, the focal length can be derived with high accuracy even if the position of the zoom lens 52 in the optical axis direction, the attitude of the focus lens 50, and the attitude of the zoom lens 52 change.
• When the focal length derivation table associates the derivation distance, the derivation focus lens posture, and the focal length with each other, the focal length can be derived with high accuracy without taking time and effort even if the posture of the focus lens 50 changes, compared with the case where the user inputs the length of the reference image included in the captured image.
• When the focal length derivation table associates the derivation distance, the derivation temperature, and the focal length with each other, the focal length can be derived with high accuracy even if the temperature of the region that affects imaging by the imaging device 14 changes.
• When the focal length derivation table associates the derivation distance, the derivation temperature, the derivation focus lens attitude, and the focal length with each other, the focal length can be derived with high accuracy even if the temperature of the region that affects imaging by the imaging device 14 changes and the attitude of the focus lens 50 changes. In each of these cases, the focal length can be derived without the time and effort required when the user inputs the length of the reference image included in the captured image.
• Note that the CPU 100 may derive the focal length using a prescribed focal length calculation formula instead of the focal length derivation table 109C, which defines the correspondence relationship between the derivation distance, the derivation position, the derivation temperature, the derivation focus lens attitude, the derivation zoom lens attitude, and the focal length.
• In this case, the dependent variable of the focal length calculation formula is the focal length, and the independent variables are the measured distance, the position information, the temperature information, the focus lens attitude information, and the zoom lens attitude information.
• The independent variables of the focal length calculation formula may also be the actually measured distance and at least one of the position information, the temperature information, the focus lens attitude information, and the zoom lens attitude information.
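• A focal length calculation formula of this kind might look like the following sketch; the linear-in-parameters form and the coefficient vector are assumptions made for illustration, since the patent only states which quantities serve as the dependent and independent variables.

```python
def focal_length_formula(measured_distance, position, temperature,
                         focus_attitude, zoom_attitude, coeffs):
    """Hypothetical focal length calculation formula: the focal length is
    the dependent variable and the five acquired quantities are the
    independent variables. coeffs would be fixed before shipment, e.g.
    by fitting to test or simulation results."""
    c0, c1, c2, c3, c4, c5 = coeffs
    return (c0 + c1 * measured_distance + c2 * position
            + c3 * temperature + c4 * focus_attitude + c5 * zoom_attitude)
```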
• The distance measuring device 10C according to the third embodiment differs from the distance measuring device 10A in that an imaging position distance derivation program 106C is stored in the secondary storage unit 104 in place of the imaging position distance derivation program 106A.
  • the CPU 100 operates as the acquisition unit 110C and the derivation unit 111C by executing the imaging position distance derivation program 106C (see FIG. 8).
  • the acquisition unit 110C corresponds to the acquisition unit 110A described in the first embodiment
  • the derivation unit 111C corresponds to the derivation unit 111A described in the first embodiment.
  • the acquisition unit 110C and the derivation unit 111C will be described with respect to differences from the acquisition unit 110A and the derivation unit 111A described in the first embodiment.
• The acquisition unit 110C performs control to display the first captured image on the display unit 86 and to display the outer wall surface image 128 so that it can be distinguished from other regions.
  • the touch panel 88 receives region designation information for designating a coordinate acquisition target region in a state where the outer wall surface image 128 is displayed on the display unit 86.
  • the coordinate acquisition target area refers to a part of the closed area in the outer wall surface image 128.
  • the area designation information refers to information that designates a coordinate acquisition target area.
  • the acquiring unit 110C acquires the three characteristic pixel coordinates from the coordinate acquisition target area specified by the area specifying information received by the touch panel 88.
• Next, the imaging position distance derivation process realized by the CPU 100 executing the imaging position distance derivation function by executing the imaging position distance derivation program 106C will be described.
• First, the first derivation process will be described with reference to the drawing.
  • the same steps as those in the flowchart shown in FIG. 22 are denoted by the same step numbers, and the description thereof is omitted.
• This flowchart differs from the flowchart shown in FIG. 22 in that steps 400 to 418 are provided instead of step 330E.
• In step 400, the acquisition unit 110C identifies the outer wall surface image 128 (see FIG. 32) from the first captured image, and then proceeds to step 402.
• In step 402, the acquisition unit 110C causes the display unit 86 to start display in which the outer wall surface image 128 identified in the process of step 400 is emphasized so as to be distinguishable from other regions in the display region of the first captured image, and then proceeds to step 404.
• In step 404, the acquisition unit 110C determines whether or not the area designation information has been received by the touch panel 88 and the coordinate acquisition target area has been designated by the received area designation information.
• In step 404, when the coordinate acquisition target area is not designated by the area designation information, the determination is negative and the process proceeds to step 406; when it is designated, the determination is affirmative and the process proceeds to step 410.
• In step 406, the acquisition unit 110C determines whether or not a condition for ending the first derivation process is satisfied. If the condition is not satisfied, the determination is negative and the process returns to step 404; if it is satisfied, the determination is affirmative and the process proceeds to step 408.
• In step 408, the acquisition unit 110C executes the same process as the process of step 330C shown in FIG. 22, and then ends the first derivation process.
• In step 410, the acquisition unit 110C ends the display of the redesignation message, which is displayed on the display unit 86 by executing the processing of step 414 described later, and then proceeds to step 412.
• In step 412, the acquisition unit 110C determines whether or not the characteristic three pixels described in the first embodiment are present in the coordinate acquisition target region 158 (see FIG. 43) designated by the region designation information received by the touch panel 88.
• The coordinate acquisition target area 158 includes a pattern image 160 indicating the pattern 124.
  • the coordinate acquisition target area 158 includes a first pixel 162, a second pixel 164, and a third pixel 166 as characteristic three pixels.
• The first pixel 162 is the pixel at the upper left corner of the pattern image 160 when viewed from the front, the second pixel 164 is the pixel at the lower left corner, and the third pixel 166 is the pixel at the lower right corner.
• In step 412, when the characteristic three pixels are not present in the coordinate acquisition target area 158 designated by the area designation information received by the touch panel 88, the determination is negative and the process proceeds to step 414; when they are present, the determination is affirmative and the process proceeds to step 416.
• Note that an affirmative determination in step 412 corresponds to the case where a coordinate acquisition target region 158 including the pattern image 160 is designated by the region designation information received by the touch panel 88.
• In step 414, the acquisition unit 110C causes the display unit 86 to start displaying the redesignation message superimposed on a predetermined area of the first captured image, and then returns to step 404.
• The redesignation message is, for example, the message “Please specify a closed area including a characteristic pattern or building material”.
• Note that, instead of such a visible display, an audible indication such as audio output by an audio playback device (not shown) or a permanent visible indication such as output of printed matter by a printer may be used, or these may be used in combination.
• In step 416, the acquisition unit 110C causes the display unit 86 to end the emphasized display of the outer wall surface image 128, and then proceeds to step 418.
• In step 418, the acquisition unit 110C acquires the three characteristic pixel coordinates that specify the characteristic three pixels in the coordinate acquisition target area 158 designated by the area designation information received by the touch panel 88, and then proceeds to step 330F.
• By executing the processing of step 418, the two-dimensional coordinates that specify each of the first pixel 162, the second pixel 164, and the third pixel 166 are acquired by the acquisition unit 110C as the three characteristic pixel coordinates.
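• One possible way to obtain the three characteristic pixel coordinates inside a designated coordinate acquisition target area is sketched below; OpenCV's goodFeaturesToTrack is used here purely as a stand-in, since the patent does not specify how the characteristic pixels are detected.

```python
import cv2

def three_characteristic_pixel_coordinates(image_bgr, region_mask):
    """Return three corner-like pixel coordinates inside the masked
    region, or None when fewer than three are found (analogous to the
    negative determination in step 412)."""
    gray = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2GRAY)
    corners = cv2.goodFeaturesToTrack(
        gray, maxCorners=3, qualityLevel=0.01,
        minDistance=10, mask=region_mask)
    if corners is None or len(corners) < 3:
        return None
    return [tuple(map(int, c.ravel())) for c in corners]
```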
• As described above, in the distance measuring device 10C, the outer wall surface image 128 is displayed on the display unit 86 so as to be distinguishable from other regions in the first captured image.
• The area designation information is received by the touch panel 88, and a coordinate acquisition target area that is a part of the outer wall surface image 128 is designated by the received area designation information.
• Then, the acquisition unit 110C acquires the three characteristic pixel coordinates that specify the characteristic three pixels (step 418), and also acquires the corresponding characteristic pixel coordinates that correspond to the three characteristic pixel coordinates (step 330N).
• Therefore, according to the distance measuring device 10C, the three characteristic pixel coordinates and the corresponding characteristic pixel coordinates can be acquired with a smaller load than when they are acquired over the entire outer wall surface image 128.
  • the distance measuring device 10D according to the fourth embodiment is different from the distance measuring device 10A in that an imaging position distance deriving program 106D is stored in the secondary storage unit 104 instead of the imaging position distance deriving program 106A ( (See FIG. 6).
  • the CPU 100 operates as the acquisition unit 110D and the derivation unit 111D as illustrated in FIG. 8 as an example by executing the imaging position distance derivation program 106D.
  • the acquisition unit 110D corresponds to the acquisition unit 110A described in the first embodiment
  • the derivation unit 111D corresponds to the derivation unit 111A described in the first embodiment.
  • the acquisition unit 110D and the derivation unit 111D will be described with respect to differences from the acquisition unit 110A and the derivation unit 111A described in the first embodiment.
• The touch panel 88 receives the pixel designation information described in the first embodiment when the first captured image is displayed on the display unit 86.
• The touch panel 88 also receives the pixel designation information described in the first embodiment when the second captured image is displayed on the display unit 86.
• The acquisition unit 110D acquires the first characteristic pixel coordinates, which are two-dimensional coordinates that specify each of the characteristic three pixels designated by the pixel designation information received by the touch panel 88 on the first captured image.
  • the first feature pixel coordinates are two-dimensional coordinates corresponding to the three feature pixel coordinates described in the first embodiment.
• Similarly, the acquisition unit 110D acquires the second characteristic pixel coordinates, which are two-dimensional coordinates that specify each of the characteristic three pixels designated by the pixel designation information received by the touch panel 88 on the second captured image.
  • the second feature pixel coordinates are two-dimensional coordinates corresponding to the corresponding feature pixel coordinates described in the first embodiment.
• The deriving unit 111D derives the imaging position distance based on the target pixel coordinates, the corresponding target pixel coordinates, the first characteristic pixel coordinates, the second characteristic pixel coordinates, the irradiation position real space coordinates, the focal length, and the dimensions of the imaging pixel 60A1.
• In FIGS. 45 to 47, the same steps as those in the flowcharts shown in FIGS. 22 and 23 are denoted by the same step numbers, and description thereof is omitted.
• The flowcharts shown in FIGS. 45 to 47 differ from the flowcharts shown in FIGS. 22 and 23 in that steps 450 to 474 are provided instead of step 330E, and in that steps 476 to 502 are provided instead of steps 330N to 330P.
• In step 450 shown in FIG. 45, the acquisition unit 110D executes the same processing as the processing of step 400 described in the third embodiment, and then proceeds to step 452.
• In step 452, the acquisition unit 110D executes the same processing as the processing of step 402 described in the third embodiment, and then proceeds to step 454.
• In step 454, the acquisition unit 110D determines whether or not the area designation information has been received by the touch panel 88 and the first coordinate acquisition target area 178 (see FIG. 43) has been designated by the received area designation information.
• Here, the first coordinate acquisition target area is an area corresponding to the coordinate acquisition target area 158 described in the third embodiment.
• In step 454, if the first coordinate acquisition target area 178 is not designated by the area designation information, the determination is negative and the process proceeds to step 456; if it is designated, the determination is affirmative and the process proceeds to step 458.
• In step 456, the acquisition unit 110D determines whether or not a condition for ending the first derivation process is satisfied.
• The condition for ending the first derivation process is the same as the condition used in the process of step 302.
• In step 456, if the condition for ending the first derivation process is not satisfied, the determination is negative and the process returns to step 454; if it is satisfied, the determination is affirmative and the process proceeds to step 330C.
• In step 458, the acquisition unit 110D executes the same process as the process of step 410 described in the third embodiment, and then proceeds to step 460.
• In step 460, the acquisition unit 110D causes the display unit 86 to highlight the first coordinate acquisition target region 178 designated by the region designation information received by the touch panel 88 so that it can be distinguished from other regions in the display region of the first captured image.
• In step 462, the acquisition unit 110D determines whether or not three pixels are designated by the pixel designation information received by the touch panel 88. If three pixels are not designated (for example, if the number of designated pixels is less than three), the determination is negative and the process proceeds to step 464; if three pixels are designated, the determination is affirmative and the process proceeds to step 468.
• In step 464, the acquisition unit 110D determines whether or not a condition for ending the first derivation process is satisfied.
• The condition for ending the first derivation process is the same as the condition used in the process of step 302.
• In step 464, if the condition for ending the first derivation process is not satisfied, the determination is negative and the process returns to step 462; if it is satisfied, the determination is affirmative and the process proceeds to step 330C.
• In step 468, the acquisition unit 110D causes the display unit 86 to end the display in which the first coordinate acquisition target area 178 is highlighted, and then proceeds to step 470.
• In step 470, the acquisition unit 110D determines whether or not the three pixels designated by the pixel designation information received by the touch panel 88 are the characteristic three pixels.
  • the first coordinate acquisition target area 178 includes a pattern image 160.
  • the characteristic three pixels indicate a first pixel 162, a second pixel 164, and a third pixel 166 that are pixels at three corners of the pattern image 160, as shown in FIG. 44 as an example.
• In step 470, if the three pixels designated by the pixel designation information received by the touch panel 88 are not the characteristic three pixels, the determination is negative and the process proceeds to step 472; if they are the characteristic three pixels, the determination is affirmative and the process proceeds to step 474.
• In step 472, the acquisition unit 110D causes the display unit 86 to start displaying the redesignation message superimposed on a predetermined area of the first captured image, and then returns to step 454.
• The redesignation message according to the fourth embodiment is, for example, the message “Please designate the characteristic three pixels after designating a closed region including a characteristic pattern or building material”.
• In step 474, the acquisition unit 110D acquires the first characteristic pixel coordinates that specify the characteristic three pixels designated by the pixel designation information received by the touch panel 88, and then proceeds to step 330F.
• By executing the processing of step 474, the two-dimensional coordinates that specify each of the first pixel 162, the second pixel 164, and the third pixel 166 are acquired by the acquisition unit 110D as the first characteristic pixel coordinates.
• In step 476, the acquisition unit 110D specifies, from the second captured image, a corresponding outer wall surface image that is an outer wall surface image corresponding to the outer wall surface image 128, and then proceeds to step 478.
• In step 478, the acquisition unit 110D causes the display unit 86 to display the corresponding outer wall surface image specified in the process of step 476 in a highlighted manner so as to be distinguishable from other regions in the display region of the second captured image, and then proceeds to step 480.
• In step 480, the acquisition unit 110D determines whether or not the area designation information has been received by the touch panel 88 and the second coordinate acquisition target area has been designated by the received area designation information.
  • the second coordinate acquisition target area is an area specified by the user via the touch panel 88 as an area corresponding to the first coordinate acquisition target area 178 (see FIG. 44) in the second captured image.
• In step 480, if the second coordinate acquisition target area is not designated by the area designation information, the determination is negative and the process proceeds to step 482; if it is designated, the determination is affirmative and the process proceeds to step 484.
• In step 482, the acquisition unit 110D determines whether or not a condition for ending the first derivation process is satisfied.
• The condition for ending the first derivation process is the same as the condition used in the process of step 302.
• In step 482, if the condition for ending the first derivation process is not satisfied, the determination is negative and the process returns to step 480; if it is satisfied, the determination is affirmative and the process proceeds to step 492.
• In step 484, the acquisition unit 110D ends the display of the redesignation message, which is displayed on the display unit 86 by executing the processing of step 498 described later, and then proceeds to step 486.
• In step 486, the acquisition unit 110D causes the display unit 86 to highlight the second coordinate acquisition target area designated by the area designation information received by the touch panel 88 so that it can be distinguished from other areas in the display area of the second captured image, and then proceeds to step 488.
• In step 488, the acquisition unit 110D determines whether or not three pixels are designated by the pixel designation information received by the touch panel 88. If three pixels are not designated (for example, if the number of designated pixels is less than three), the determination is negative and the process proceeds to step 490; if three pixels are designated, the determination is affirmative and the process proceeds to step 494.
• In step 490, the acquisition unit 110D determines whether or not a condition for ending the first derivation process is satisfied.
• The condition for ending the first derivation process is the same as the condition used in the process of step 302.
• In step 490, if the condition for ending the first derivation process is not satisfied, the determination is negative and the process returns to step 488; if it is satisfied, the determination is affirmative and the process proceeds to step 492.
• In step 492, the acquisition unit 110D ends the display of the second captured image on the display unit 86, and then ends the first derivation process.
• In step 494, the acquisition unit 110D causes the display unit 86 to end the display highlighting the second coordinate acquisition target region, and then proceeds to step 496.
• In step 496, the acquisition unit 110D determines whether or not the three pixels designated by the pixel designation information received by the touch panel 88 are the characteristic three pixels.
  • the second coordinate acquisition target area includes a pattern image corresponding to the pattern image 160.
  • the characteristic three pixels are pixels present at the three corners of the pattern image corresponding to the pattern image 160 in the second captured image.
• The pixels present at the three corners of the pattern image corresponding to the pattern image 160 refer to, for example, the pixel corresponding to the first pixel 162, the pixel corresponding to the second pixel 164, and the pixel corresponding to the third pixel 166 in the second captured image.
• In step 496, if the three pixels designated by the pixel designation information received by the touch panel 88 are not the characteristic three pixels, the determination is negative and the process proceeds to step 498; if they are the characteristic three pixels, the determination is affirmative and the process proceeds to step 500 shown in FIG. 47.
• In step 498, the acquisition unit 110D causes the display unit 86 to start displaying the above-described redesignation message superimposed on a predetermined area of the second captured image, and then returns to step 480.
• In step 500 shown in FIG. 47, the acquisition unit 110D acquires the second characteristic pixel coordinates that specify the characteristic three pixels designated by the pixel designation information received by the touch panel 88, and then proceeds to step 502.
• By executing the processing of step 500, for example, the two-dimensional coordinates that specify each of the pixel corresponding to the first pixel 162, the pixel corresponding to the second pixel 164, and the pixel corresponding to the third pixel 166 in the second captured image are acquired by the acquisition unit 110D as the second characteristic pixel coordinates.
• In step 502, the deriving unit 111D derives a, b, and c of the plane equation shown in Expression (7) from the first characteristic pixel coordinates, the second characteristic pixel coordinates, the focal length, and the dimensions of the imaging pixel 60A1.
  • the first feature pixel coordinates used in the process of step 502 are the first feature pixel coordinates acquired in the process of step 474, and correspond to the three feature pixel coordinates described in the first embodiment.
  • the second feature pixel coordinates used in the process of step 502 are the second feature pixel coordinates acquired in the process of step 500, and correspond to the corresponding feature pixel coordinates described in the first embodiment.
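• In the patent, a, b, and c are derived from the stereo feature pixel coordinates together with the focal length and the pixel dimensions; the simplified sketch below instead assumes the three feature points' three-dimensional coordinates are already available, and that Expression (7) has the standard plane form a·x + b·y + c·z + d = 0.

```python
import numpy as np

def plane_abc(p1, p2, p3):
    """Return (a, b, c), up to scale, of a plane through three
    non-collinear 3-D points; d is fixed later from a known point."""
    p1, p2, p3 = (np.asarray(p, dtype=float) for p in (p1, p2, p3))
    a, b, c = np.cross(p2 - p1, p3 - p1)  # normal vector of the plane
    return a, b, c
```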
• As described above, in the distance measuring device 10D, the characteristic three pixels are designated via the touch panel 88 in the first captured image, and the first characteristic pixel coordinates that specify the designated characteristic three pixels are acquired by the acquisition unit 110D (step 474). Further, characteristic three pixels corresponding to the characteristic three pixels of the first captured image are designated in the second captured image via the touch panel 88 (step 496: Y), and the second characteristic pixel coordinates that specify those pixels are acquired by the acquisition unit 110D (step 500).
• Then, the imaging position distance is derived by the deriving unit 111D based on the target pixel coordinates, the corresponding target pixel coordinates, the first characteristic pixel coordinates, the second characteristic pixel coordinates, the irradiation position real space coordinates, the focal length, and the dimensions of the imaging pixel 60A1. Therefore, according to the distance measuring device 10D, the imaging position distance can be derived based on the first characteristic pixel coordinates and the second characteristic pixel coordinates acquired according to the user's intention.
  • the distance measuring device 10E according to the fifth embodiment is different from the distance measuring device 10A in that an imaging position distance deriving program 106E is stored in the secondary storage unit 104 instead of the imaging position distance deriving program 106A. Further, the distance measuring device 10E is different from the distance measuring device 10A in that a three-dimensional coordinate derivation program 108B is stored in the secondary storage unit 104 instead of the three-dimensional coordinate derivation program 108A.
  • the CPU 100 operates as the acquisition unit 110E and the derivation unit 111E as illustrated in FIG. 8 as an example by executing the imaging position distance derivation program 106E.
  • the acquisition unit 110E corresponds to the acquisition unit 110A described in the first embodiment
  • the derivation unit 111E corresponds to the derivation unit 111A described in the first embodiment.
• Hereinafter, the acquisition unit 110E and the derivation unit 111E will be described with respect to differences from the acquisition unit 110A and the derivation unit 111A described in the first embodiment.
  • the acquisition unit 110E is different from the acquisition unit 110A in that the second measured distance is acquired as a reference distance.
• The deriving unit 111E derives the reference imaging position distance, which refers to a distance between the first imaging position and the second imaging position, based on the target pixel coordinates, the three characteristic pixel coordinates, the reference irradiation position real space coordinates, the focal length, and the dimensions of the imaging pixel 60A1.
• The deriving unit 111E then adjusts the imaging position distance with reference to the derived reference imaging position distance, thereby deriving the final imaging position distance that is finally adopted as the distance between the first imaging position and the second imaging position.
  • the deriving unit 111E derives the designated pixel three-dimensional coordinates based on the derived final imaging position distance.
  • the final designated pixel real space coordinates refer to the three-dimensional coordinates that are finally adopted as the three-dimensional coordinates that are the coordinates of the target pixel 126 in the real space.
• Next, the imaging position distance deriving process realized by the CPU 100 executing the imaging position distance deriving function by executing the imaging position distance deriving program 106E will be described.
• The first derivation process will be described with reference to the drawings. The same steps as those in the flowcharts shown in FIGS. 22 and 23 are denoted by the same step numbers, and description thereof is omitted.
• This flowchart differs in that step 550 is provided instead of step 330J, and steps 552 to 568 are provided instead of steps 330Q to 330U.
• In step 550, the acquisition unit 110E acquires, as the reference distance, the second actually measured distance measured by executing the processing of step 330I.
• In step 550, the acquisition unit 110E also acquires the second captured image signal indicating the second captured image obtained by executing the processing of step 330I, and then proceeds to step 330K.
• In step 552, the deriving unit 111E determines the first plane equation, which is the plane equation shown in Expression (7), based on the irradiation position real space coordinates derived in the process of step 312, and then proceeds to step 554.
• In step 554, the deriving unit 111E derives the imaging position distance based on the feature pixel three-dimensional coordinates and the first plane equation, and then proceeds to step 556.
• In step 556, the deriving unit 111E derives the reference irradiation position real space coordinates from Expression (3) based on the reference distance acquired by the acquisition unit 110E in the process of step 550, the half angle of view α, the emission angle β, and the reference point distance M, and then proceeds to step 558.
  • the reference distance used in the processing of step 556 is a distance corresponding to the distance L described in the first embodiment.
• In step 558, the deriving unit 111E determines the second plane equation, which is the plane equation shown in Expression (7), based on the reference irradiation position real space coordinates derived in the process of step 556, and then proceeds to step 560. That is, in step 558, the deriving unit 111E substitutes a, b, and c derived in the process of step 330P and the reference irradiation position real space coordinates derived in the process of step 556 into Expression (7) to determine d of Expression (7). Since a, b, and c of Expression (7) have already been derived in the process of step 330P, the second plane equation is determined once d of Expression (7) is determined in the process of step 558.
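• Assuming again that Expression (7) has the form a·x + b·y + c·z + d = 0, the substitution described for step 558 reduces to the one-liner below, with the reference irradiation position real space coordinates as the known point on the plane.

```python
def plane_d(a, b, c, point):
    """Determine d of the plane equation from a point known to lie on
    the plane (here, the reference irradiation position real space
    coordinates)."""
    x, y, z = point
    return -(a * x + b * y + c * z)
```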
• In step 560, the deriving unit 111E derives the reference imaging position distance based on the feature pixel three-dimensional coordinates and the second plane equation, and then proceeds to step 562.
  • the reference imaging position distance corresponds to, for example, “B” shown in Equation (9), and is derived by substituting the first feature pixel three-dimensional coordinates into the second plane equation.
• In step 562, the deriving unit 111E derives the final imaging position distance by adjusting the imaging position distance derived in the process of step 554 with reference to the reference imaging position distance derived in the process of step 560, and then proceeds to step 564.
• Here, adjusting the imaging position distance means, for example, obtaining the average value of the imaging position distance and the reference imaging position distance, multiplying that average value by a first adjustment coefficient, or multiplying the imaging position distance by a second adjustment coefficient.
  • the first adjustment coefficient and the second adjustment coefficient are both coefficients that are uniquely determined according to, for example, the reference imaging position distance.
• The first adjustment coefficient is derived from, for example, a correspondence table in which the reference imaging position distance and the first adjustment coefficient are associated in advance, or from an arithmetic expression in which the reference imaging position distance is the independent variable and the first adjustment coefficient is the dependent variable.
  • the second adjustment coefficient is similarly derived.
• The correspondence table or the arithmetic expression is derived, at a stage before shipment of the distance measuring device 10E, from the results of a test using an actual distance measuring device 10E, a computer simulation based on the design specifications of the distance measuring device 10E, or the like.
• Accordingly, examples of the final imaging position distance include the average value of the imaging position distance and the reference imaging position distance, a value obtained by multiplying that average value by the first adjustment coefficient, and a value obtained by multiplying the imaging position distance by the second adjustment coefficient.
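• The adjustment alternatives listed above can be summarised in the following sketch; how the first and second adjustment coefficients are obtained (correspondence table or arithmetic expression keyed on the reference imaging position distance) is not modelled here, and the keyword-argument interface is an assumption for illustration.

```python
def final_imaging_position_distance(imaging_distance, reference_distance,
                                    first_coeff=None, second_coeff=None):
    """Derive the final imaging position distance: the plain average,
    the average multiplied by the first adjustment coefficient, or the
    imaging position distance multiplied by the second adjustment
    coefficient."""
    average = (imaging_distance + reference_distance) / 2.0
    if first_coeff is not None:
        return average * first_coeff
    if second_coeff is not None:
        return imaging_distance * second_coeff
    return average
```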
• In step 564, the derivation unit 111E causes the display unit 86 to start display in which the final imaging position distance derived in the process of step 562 is superimposed on the second captured image, as shown in FIG. 49 as an example.
• The derivation unit 111E also stores the final imaging position distance derived in the process of step 562 in a predetermined storage area, and then proceeds to step 566.
• In the example shown in FIG. 49, the numerical value “1444621.7” corresponds to the final imaging position distance derived in the process of step 562, and the unit is millimeters.
• In step 566, the derivation unit 111E determines whether or not a condition for ending the first derivation process is satisfied.
• The condition for ending the first derivation process is the same as the condition used in the process of step 302.
• In step 566, if the condition for ending the first derivation process is not satisfied, the determination is negative and the determination of step 566 is performed again; if it is satisfied, the determination is affirmative and the process proceeds to step 568.
• In step 568, the derivation unit 111E ends the display of the second captured image and the superimposed display information on the display unit 86, and then ends the first derivation process.
  • the superimposed display information refers to various pieces of information that are currently displayed superimposed on the second captured image, for example, the final imaging position distance.
• In step 600, the derivation unit 111E determines whether or not the final imaging position distance has already been derived in the process of step 562 included in the first derivation process.
• In step 600, when the final imaging position distance has not been derived in the process of step 562 included in the first derivation process, the determination is negative and the process proceeds to step 608; when it has already been derived, the determination is affirmative and the process proceeds to step 602.
• In step 602, the derivation unit 111E determines whether or not the derivation start condition is satisfied. If the derivation start condition is not satisfied, the determination is negative and the process proceeds to step 608; if it is satisfied, the determination is affirmative and the process proceeds to step 604.
• In step 604, the deriving unit 111E derives the designated pixel three-dimensional coordinates based on the target pixel coordinates, the corresponding target pixel coordinates, the final imaging position distance, the focal length, the dimensions of the imaging pixel 60A1, and Expression (2), and then proceeds to step 606.
• That is, in step 604, the designated pixel three-dimensional coordinates are derived by substituting the target pixel coordinates, the corresponding target pixel coordinates, the final imaging position distance, the focal length, and the dimensions of the imaging pixel 60A1 into Expression (2).
• In step 606, the derivation unit 111E causes the display unit 86 to superimpose and display the designated pixel three-dimensional coordinates derived in the process of step 604 on the second captured image, as shown in FIG. 51 as an example.
• The derivation unit 111E also stores the designated pixel three-dimensional coordinates derived in the process of step 604 in a predetermined storage area, and then proceeds to step 608.
• In the example shown in FIG. 51, (20160, 50132, 137810) corresponds to the designated pixel three-dimensional coordinates derived in the process of step 604.
  • the designated pixel three-dimensional coordinates are displayed close to the target pixel 126.
• In step 608, the derivation unit 111E determines whether or not a condition for ending the three-dimensional coordinate derivation process is satisfied. If the condition is not satisfied, the determination is negative and the process returns to step 600; if it is satisfied, the determination is affirmative and the three-dimensional coordinate derivation process ends.
• As described above, in the distance measuring device 10E, the distance from the second position to the subject is measured, and the reference distance, which is the measured distance, is acquired by the acquisition unit 110E (step 550).
  • the reference irradiation position real space coordinates are derived by the deriving unit 111E based on the reference distance (step 556).
• The reference imaging position distance is derived by the deriving unit 111E based on the target pixel coordinates, the corresponding target pixel coordinates, the three characteristic pixel coordinates, the corresponding characteristic pixel coordinates, the reference irradiation position real space coordinates, the focal length, and the dimensions of the imaging pixel 60A1 (step 560).
  • the deriving unit 111E refers to the reference imaging position distance and adjusts the imaging position distance, thereby deriving the final imaging position distance (step 562). Therefore, according to the distance measuring device 10E, the distance between the first imaging position and the second imaging position can be derived with higher accuracy than when the reference imaging position distance is not used.
  • the designated pixel three-dimensional coordinates are derived based on the final imaging position distance derived by the imaging position distance deriving process (see FIG. 50). Therefore, according to the distance measuring device 10E, it is possible to derive the designated pixel three-dimensional coordinates with higher accuracy than when the final imaging position distance is not used.
• Further, the designated pixel three-dimensional coordinates are defined based on the target pixel coordinates, the corresponding target pixel coordinates, the final imaging position distance, the focal length, and the dimensions of the imaging pixel 60A1 (see Expression (2)). Therefore, according to the distance measuring device 10E, the designated pixel three-dimensional coordinates can be derived with higher accuracy than when they are not defined based on the final imaging position distance, the target pixel coordinates, the corresponding target pixel coordinates, the focal length, and the dimensions of the imaging pixel 60A1.
  • the distance measured based on the laser beam emitted from the second position is used as the reference distance, but the technology of the present disclosure is not limited to this.
  • the distance measured based on the laser light emitted from the first position may be used as the reference distance.
  • an information processing system 650 includes distance measuring devices 10F1 and 10F2 and a PC 652.
  • the PC 652 can communicate with the distance measuring devices 10F1 and 10F2.
  • the PC 652 is an example of an information processing apparatus according to the technology of the present disclosure.
  • the distance measuring device 10F1 is disposed at the first position, and the distance measuring device 10F2 is disposed at a second position different from the first position.
  • the distance measuring devices 10F1 and 10F2 have the same configuration.
• Hereinafter, the distance measuring devices 10F1 and 10F2 are referred to as the “distance measuring device 10F” when it is not necessary to distinguish between them.
  • the distance measuring device 10F differs from the distance measuring device 10A in that it includes an imaging device 15 instead of the imaging device 14.
  • the imaging device 15 is different from the imaging device 14 in that an imaging device body 19 is provided instead of the imaging device body 18.
  • the imaging device main body 19 is different from the imaging device main body 18 in having a communication I / F 83.
  • the communication I / F 83 is connected to the bus line 84 and operates under the control of the main control unit 62.
  • the communication I / F 83 is connected to a communication network (not shown) such as the Internet, and controls transmission / reception of various information to / from the PC 652 connected to the communication network.
  • the PC 652 includes a main control unit 653.
  • the main control unit 653 includes a CPU 654, a primary storage unit 656, and a secondary storage unit 658.
  • the CPU 654, the primary storage unit 656, and the secondary storage unit 658 are connected to each other via a bus line 660.
  • the PC 652 includes a communication I / F 662.
  • the communication I / F 662 is connected to the bus line 660 and operates under the control of the main control unit 653.
  • the communication I / F 662 is connected to a communication network, and controls transmission / reception of various information to / from the distance measuring device 10F connected to the communication network.
  • the PC 652 includes a reception unit 663 and a display unit 664.
• The reception unit 663 is connected to the bus line 660 via a reception I/F (not shown), and the reception I/F outputs an instruction content signal indicating the content of the instruction received by the reception unit 663 to the main control unit 653.
  • the reception unit 663 is realized by, for example, a keyboard, a mouse, and a touch panel.
  • the display unit 664 is connected to the bus line 660 via a display control unit (not shown), and displays various information under the control of the display control unit.
  • the display unit 664 is realized by an LCD, for example.
  • the secondary storage unit 658 stores the dimension derivation program 105A (105B) described in the above embodiments. Further, the secondary storage unit 658 stores the imaging position distance derivation program 106A (106B, 106C, 106D, 106E) described in the above embodiments. The secondary storage unit 658 stores the three-dimensional coordinate derivation program 108A (108B) described in the above embodiments. Further, the secondary storage unit 658 stores the focal length derivation table 109A (109B) described in the above embodiments.
• Hereinafter, when it is not necessary to distinguish between the dimension derivation programs 105A and 105B, they are referred to as the “dimension derivation program” without reference numerals.
• When it is not necessary to distinguish between the imaging position distance derivation programs 106A, 106B, 106C, 106D, and 106E, they are referred to as the “imaging position distance derivation program” without reference numerals.
• When it is not necessary to distinguish between the three-dimensional coordinate derivation programs 108A and 108B, they are referred to as the “three-dimensional coordinate derivation program” without reference numerals.
• When it is not necessary to distinguish between the focal length derivation tables 109A and 109B, they are referred to as the “focal length derivation table” without reference numerals.
  • the CPU 654 acquires the first captured image signal, the target pixel coordinate, the distance, and the like from the distance measuring device 10F1 via the communication I / F 662. In addition, the CPU 654 acquires the second captured image signal and the like from the distance measuring device 10F2 via the communication I / F 662.
  • the CPU 654 reads the dimension derivation program from the secondary storage unit 658, develops it in the primary storage unit 656, and executes the dimension derivation program.
  • the CPU 654 reads an imaging position distance deriving program from the secondary storage unit 658, develops it in the primary storage unit 656, and executes the imaging position distance deriving program.
  • the CPU 654 reads out the three-dimensional coordinate derivation program from the secondary storage unit 658, develops it in the primary storage unit 656, and executes the three-dimensional coordinate derivation program.
  • the dimension derivation program and the imaging position distance derivation program are collectively referred to as “derivation program”.
  • the CPU 654 operates as the acquisition unit 110A (110B, 110C, 110D, 110E) and the derivation unit 111A (111B, 111C, 111D, 111E) by executing the derivation program.
  • in this way, the PC 652 acquires the first captured image signal, the second captured image signal, the target pixel coordinates, the distance, and the like from the distance measuring device 10F via the communication I/F 662, and executes the derivation program (see the sketch below). As a result, the same operations and effects as those of the above embodiments are obtained.
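As an illustrative aid, the following minimal Python sketch shows one way the PC 652 side of this arrangement could be organized: pull a measurement bundle from each ranging device over the network, then run the derivation routines locally. The transport, message format, host names, and every function shown here are assumptions for illustration, not part of the disclosure.

    import json
    import socket

    def fetch_measurement(host: str, port: int = 5000) -> dict:
        # Hypothetical request/response exchange with one ranging device;
        # the disclosure specifies a communication I/F but no wire protocol.
        with socket.create_connection((host, port), timeout=5.0) as conn:
            conn.sendall(b"GET_MEASUREMENT\n")
            payload = conn.makefile("rb").readline()
        return json.loads(payload)

    def run_derivation_on_pc() -> None:
        first = fetch_measurement("ranging-10f1.local")   # first imaging position
        second = fetch_measurement("ranging-10f2.local")  # second imaging position
        # The bundles would carry the captured image signals, target pixel
        # coordinates, and measured distance; the dimension, imaging position
        # distance, and three-dimensional coordinate derivations then run here.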
  • in the above embodiments, the case where the distance measuring device 10A is realized by the distance measuring unit 12 and the imaging device 14 is exemplified, but in the seventh embodiment, a distance measuring device 10G that further includes a smart device 702 will be described. Note that in the seventh embodiment, the same components as those in the above embodiments are denoted by the same reference numerals, description thereof is omitted, and only the parts different from the above embodiments are described.
  • the distance measuring device 10G according to the seventh embodiment is different from the distance measuring device 10A according to the first embodiment in that an imaging device 700 is provided instead of the imaging device 14.
  • the distance measuring device 10G is different from the distance measuring device 10A in that it includes a smart device 702.
  • the imaging device 700 is different from the imaging device 14 in that it has an imaging device body 703 instead of the imaging device body 18.
  • the imaging device body 703 is different from the imaging device body 18 in that the imaging device body 703 includes a wireless communication unit 704 and a wireless communication antenna 706.
  • the wireless communication unit 704 is connected to the bus line 84 and the wireless communication antenna 706.
  • the main control unit 62 outputs transmission target information, which is information to be transmitted to the smart device 702, to the wireless communication unit 704.
  • the wireless communication unit 704 transmits the transmission target information input from the main control unit 62 to the smart device 702 via the wireless communication antenna 706 by radio waves.
  • the wireless communication unit 704 also receives radio waves from the smart device 702 via the wireless communication antenna 706, acquires a signal corresponding to the received radio waves, and outputs the acquired signal to the main control unit 62.
  • the smart device 702 includes a CPU 708, a primary storage unit 710, and a secondary storage unit 712.
  • the CPU 708, the primary storage unit 710, and the secondary storage unit 712 are connected to the bus line 714.
  • the CPU 708 controls the entire distance measuring device 10G including the smart device 702.
  • the primary storage unit 710 is a volatile memory used as a work area or the like when executing various programs.
  • An example of the primary storage unit 710 is a RAM.
  • the secondary storage unit 712 is a non-volatile memory that stores in advance a control program for controlling the overall operation of the distance measuring apparatus 10G including the smart device 702, various parameters, or the like.
  • An example of the secondary storage unit 712 is a flash memory or an EEPROM.
  • the smart device 702 includes a display unit 715, a touch panel 716, a wireless communication unit 718, and a wireless communication antenna 720.
  • the display unit 715 is connected to the bus line 714 via a display control unit (not shown), and displays various information under the control of the display control unit.
  • the display unit 715 is realized by an LCD, for example.
  • the touch panel 716 is overlaid on the display screen of the display unit 715, and accepts contact by an indicator.
  • the touch panel 716 is connected to the bus line 714 via a touch panel I/F (not shown), and outputs position information indicating the position touched by the indicator to the touch panel I/F.
  • the touch panel I/F operates in accordance with instructions from the CPU 708 and outputs the position information input from the touch panel 716 to the CPU 708.
  • on the display unit 715, soft keys corresponding to the measurement imaging button 90A, an imaging button (not shown), the imaging system operation mode switching button 90B, the wide-angle instruction button 90C, the telephoto instruction button 90D, the dimension derivation button 90E, the imaging position distance derivation button 90F, the three-dimensional coordinate derivation button 90G, and the like are displayed (see FIG. 56).
  • a measurement imaging button 90A1 that functions as the measurement imaging button 90A is displayed as a soft key on the display unit 715, and is pressed by the user via the touch panel 716.
  • an imaging button (not shown) that functions as the imaging button described in the first embodiment is displayed as a soft key on the display unit 715 and pressed by the user via the touch panel 716.
  • an imaging system operation mode switching button 90B1 that functions as the imaging system operation mode switching button 90B is displayed as a soft key on the display unit 715, and is pressed by the user via the touch panel 716.
  • a wide-angle instruction button 90C1 that functions as the wide-angle instruction button 90C is displayed as a soft key on the display unit 715, and is pressed by the user via the touch panel 716.
  • a telephoto instruction button 90D1 that functions as the telephoto instruction button 90D is displayed as a soft key on the display unit 715 and is pressed by the user via the touch panel 716.
  • a dimension derivation button 90E1 that functions as the dimension derivation button 90E is displayed as a soft key on the display unit 715, and is pressed by the user via the touch panel 716.
  • an imaging position distance derivation button 90F1 that functions as the imaging position distance derivation button 90F is displayed as a soft key on the display unit 715, and is pressed by the user via the touch panel 716.
  • a 3D coordinate derivation button 90G1 that functions as the 3D coordinate derivation button 90G is displayed as a soft key on the display unit 715, and is pressed by the user via the touch panel 716.
  • the wireless communication unit 718 is connected to the bus line 714 and the wireless communication antenna 720.
  • the wireless communication unit 718 transmits the signal input from the CPU 708 to the imaging apparatus main body 703 via the wireless communication antenna 720 by radio waves.
  • the wireless communication unit 718 also receives radio waves from the imaging apparatus main body 703, acquires a signal corresponding to the received radio waves, and outputs the acquired signal to the CPU 708. Accordingly, the imaging apparatus main body 703 is controlled by the smart device 702 through wireless communication (a schematic sketch of this exchange follows).
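Purely to illustrate this control relationship (the disclosure specifies no message format), the sketch below models each soft-key press as a small command that the smart device 702 radios out and the imaging apparatus main body 703 decodes. The command names and encoding are invented placeholders.

    from enum import Enum, auto

    class Command(Enum):
        MEASURE_AND_IMAGE = auto()      # measurement imaging button 90A1
        SWITCH_OPERATION_MODE = auto()  # operation mode switching button 90B1
        WIDE_ANGLE = auto()             # wide-angle instruction button 90C1
        TELEPHOTO = auto()              # telephoto instruction button 90D1
        DERIVE_DIMENSION = auto()       # dimension derivation button 90E1

    def on_soft_key_pressed(cmd: Command, send) -> None:
        # Smart-device side: transmit one command per soft-key press.
        send(cmd.name.encode())

    def handle_radio_payload(payload: bytes) -> Command:
        # Imaging-apparatus side: decode the received command for dispatch.
        return Command[payload.decode()]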
  • the secondary storage unit 712 stores a derivation program.
  • the CPU 708 reads the derivation program from the secondary storage unit 712, develops it in the primary storage unit 710, and executes the derivation program.
  • the secondary storage unit 712 stores a three-dimensional coordinate derivation program.
  • the CPU 708 reads the three-dimensional coordinate derivation program from the secondary storage unit 712, develops it in the primary storage unit 710, and executes the three-dimensional coordinate derivation program. Further, the secondary storage unit 712 stores a focal length derivation table.
  • the CPU 708 operates as the acquisition unit 110A (110B, 110C, 110D, 110E) and the derivation unit 111A (111B, 111C, 111D, 111E) by executing the derivation program.
  • the smart device 702 executes the derivation program, so that the same operations and effects as in the above embodiments can be obtained.
  • in the above embodiments, the corresponding target pixel is specified by performing image analysis with the second captured image as the analysis target, and the corresponding target pixel coordinates specifying the specified corresponding target pixel are acquired (steps 330M and 336L); however, the technology of the present disclosure is not limited to this.
  • the user may designate a pixel corresponding to the target pixel from the second captured image via the touch panel 88 as the corresponding target pixel.
  • in the above embodiments, the derivation unit 111A (111B, 111C, 111D, 111E) derives the irradiation position real space coordinates, the plane orientation, the imaging position distance, the designated pixel three-dimensional coordinates, and the like using arithmetic expressions.
  • the technology of the present disclosure is not limited to this.
  • for example, the derivation unit 111A (111B, 111C, 111D, 111E) may derive the irradiation position real space coordinates, the plane orientation, the imaging position distance, the designated pixel three-dimensional coordinates, and the like using a table that takes the independent variables of the arithmetic expressions as inputs and returns the values of their dependent variables as outputs, as in the sketch below.
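A minimal sketch of this table-driven alternative, assuming the table is precomputed over a sampling grid of the independent variable and queried with a nearest-sample fallback; the expression and grid below are illustrative stand-ins, not the formulas actually used by the derivation unit.

    # Stand-in for one of the derivation unit's arithmetic expressions.
    def expression(x: float) -> float:
        return 0.5 * x + 2.0

    # Precompute: independent-variable input -> dependent-variable output.
    GRID = [round(i * 0.1, 1) for i in range(1001)]  # 0.0 .. 100.0
    TABLE = {x: expression(x) for x in GRID}

    def derive_from_table(x: float) -> float:
        key = round(x, 1)
        if key in TABLE:                  # exact hit on the sampling grid
            return TABLE[key]
        nearest = min(TABLE, key=lambda k: abs(k - x))
        return TABLE[nearest]             # fallback: nearest tabulated sample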
  • in the above embodiments, the case where the derivation program and the three-dimensional coordinate derivation program are read from the secondary storage unit 104 (658, 712) is exemplified, but the programs do not necessarily have to be stored in the secondary storage unit 104 (658, 712) from the beginning.
  • for example, the derivation program and the three-dimensional coordinate derivation program may first be stored in an arbitrary portable storage medium 750 such as an SSD (Solid State Drive) or a USB (Universal Serial Bus) memory.
  • in this case, the derivation program of the storage medium 750 is installed in the distance measuring device 10A (10B, 10C, 10D, 10E) (hereinafter referred to as "the distance measuring device 10A etc.") or the PC 652, and the installed derivation program is executed by the CPU 100 (654, 708). Similarly, the three-dimensional coordinate derivation program of the storage medium 750 is installed in the distance measuring device 10A etc. or the PC 652, and the installed three-dimensional coordinate derivation program is executed by the CPU 100 (654, 708).
  • alternatively, the derivation program and the three-dimensional coordinate derivation program may be stored in a storage unit of another computer or server device connected to the distance measuring device 10A etc. or the PC 652 via a communication network (not shown), and may be downloaded in response to a request from the distance measuring device 10A etc. or the PC 652. In this case, the downloaded derivation program and three-dimensional coordinate derivation program are executed by the CPU 100 (654, 708).
  • in the above embodiments, the case where various types of information such as the irradiation position mark 136, the imaging position distance, and the designated pixel three-dimensional coordinates are displayed on the display unit 86 is illustrated, but the technology of the present disclosure is not limited to this.
  • for example, the various types of information may be displayed on a display unit of an external device used in connection with the distance measuring device 10A etc. or the PC 652.
  • examples of the external device include a PC and a glasses-type or watch-type wearable terminal device.
  • in the above embodiments, the irradiation position mark 136, the imaging position distance, the designated pixel three-dimensional coordinates, and the like are visibly displayed on the display unit 86, but the technology of the present disclosure is not limited to this.
  • for example, an audible indication such as sound output by a sound reproduction device, or a permanent visible indication such as printed output by a printer, may be used instead of the visible display, or in combination with it.
  • in the above embodiments, the irradiation position mark 136, the imaging position distance, the designated pixel three-dimensional coordinates, and the like are displayed on the display unit 86, but the technology of the present disclosure is not limited to this. For example, at least one of the irradiation position mark 136, the imaging position distance, and the designated pixel three-dimensional coordinates may be displayed on a display unit (not shown) different from the display unit 86, with the rest displayed on the display unit 86. The irradiation position mark 136, the imaging position distance, the designated pixel three-dimensional coordinates, and the like may also each be displayed individually on a plurality of display units including the display unit 86.
  • in the above embodiments, laser light is exemplified as the distance measurement light, but the technology of the present disclosure is not limited to this; any directional light, that is, light having directivity, may be used.
  • for example, the distance measurement light may be directional light obtained by a light emitting diode (LED: Light Emitting Diode) or a super luminescent diode (SLD).
  • the directivity of the directional light is preferably of the same degree as the directivity of the laser light, and is preferably a directivity that can be used for ranging within a range of, for example, several meters to several kilometers.
  • the dimension derivation process, the imaging position distance derivation process, and the three-dimensional coordinate derivation process described in the above embodiments are merely examples. Therefore, it goes without saying that unnecessary steps may be deleted, new steps may be added, and the processing order may be changed within a range not departing from the spirit.
  • each process included in the dimension derivation process, the imaging position distance derivation process, and the three-dimensional coordinate derivation process may be realized only by a hardware configuration such as an ASIC, or may be realized by a combination of a software configuration using a computer and a hardware configuration.
  • in each of the above embodiments, the case where the distance measuring unit 12 is attached to the side surface of the imaging apparatus main body 18 included in the distance measuring device 10A etc. is exemplified, but the technology of the present disclosure is not limited to this.
  • the distance measuring unit 12 may be attached to the upper surface or the lower surface of the imaging apparatus main body 18.
  • a distance measuring device 10H may be applied instead of the distance measuring device 10A or the like.
  • the distance measuring device 10H differs from the distance measuring device 10A etc. in that it has a distance measuring unit 12A instead of the distance measuring unit 12 and an imaging apparatus main body 18A instead of the imaging apparatus main body 18.
  • the distance measuring unit 12A is housed in the housing 18A1 of the imaging apparatus main body 18A, and the objective lenses 32 and 38 are exposed from the housing 18A1 on the front side of the distance measuring device 10H (the side where the focus lens 50 is exposed).
  • the distance measuring unit 12A is preferably arranged so that the optical axes L1 and L2 are set at the same height in the vertical direction. Note that an opening (not shown) through which the distance measuring unit 12A can be inserted into and removed from the housing 18A1 may be formed in the housing 18A1.
  • the focal length deriving table 109A that can directly derive the focal length from the deriving distance is exemplified, but the technology of the present disclosure is not limited to this.
  • a focal length derivation table 109D having a correction value corresponding to the actually measured distance may be employed.
  • in the focal length derivation table 109D, different correction values are associated with each of a plurality of derivation distances. That is, in the focal length derivation table 109D, the derivation distances and the correction values are associated with each other so that the focal length is derived by correcting the derivation distance corresponding to the actually measured distance with the correction value.
  • the focal length derivation table 109D is a table derived from, for example, a result of at least one of a test by the actual device of the distance measuring device 10A and a computer simulation based on a design specification of the distance measuring device 10A.
  • in the focal length derivation table 109D used by the CPU 100, a correction value Y1 is associated with a derivation distance of 1 m, and a focal length of 7 mm is derived by multiplying the derivation distance by Y1. Likewise, a focal length of 8 mm is derived by multiplying a derivation distance of 2 m by the correction value Y2, 10 mm by multiplying 3 m by Y3, 12 mm by multiplying 5 m by Y4, 14 mm by multiplying 10 m by Y5, 16 mm by multiplying 30 m by Y6, and 18 mm by multiplying the infinity derivation distance by Y7.
  • when the actually measured distance does not coincide with any derivation distance, a correction value may be derived from the derivation distances in the focal length derivation table 109D by the interpolation method described above, and the focal length may be derived by correcting the actually measured distance with the derived correction value.
  • in the example described here, a multiplication coefficient for the derivation distance is defined as the correction value, but the correction value is not limited to a multiplication coefficient; any numerical value may be used as long as it is used together with the operator necessary for deriving the focal length from the derivation distance.
  • since the CPU 100 corrects the actually measured distance with the correction value to derive the focal length corresponding to the actually measured distance, the distance measuring device 10A etc. can derive the focal length with high accuracy without taking time and effort (see the sketch below).
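The sketch below gives one concrete reading of this scheme: the correction values Y1 to Y6 are back-computed from the focal lengths quoted above (so that derivation distance × Y reproduces 7 mm at 1 m, 8 mm at 2 m, and so on), the correction value is linearly interpolated between bracketing entries, and measurements beyond the last finite entry fall back to the 18 mm infinity entry. The interpolation choice and all names are assumptions for illustration.

    import bisect

    # Finite derivation distances (m) and the focal lengths (mm) they map to.
    DISTANCES_M = [1.0, 2.0, 3.0, 5.0, 10.0, 30.0]
    FOCALS_MM = [7.0, 8.0, 10.0, 12.0, 14.0, 16.0]
    # Back-computed correction values Y1..Y6: distance (mm) * Y = focal (mm).
    CORRECTIONS = [f / (d * 1000.0) for d, f in zip(DISTANCES_M, FOCALS_MM)]
    FOCAL_AT_INFINITY_MM = 18.0  # the Y7 (infinity) entry

    def focal_length_mm(measured_m: float) -> float:
        if measured_m <= DISTANCES_M[0]:
            return measured_m * 1000.0 * CORRECTIONS[0]
        if measured_m >= DISTANCES_M[-1]:
            if measured_m == DISTANCES_M[-1]:
                return measured_m * 1000.0 * CORRECTIONS[-1]
            return FOCAL_AT_INFINITY_MM  # beyond the last finite entry
        # Linearly interpolate the correction value between the two
        # bracketing derivation distances, then correct the measured distance.
        i = bisect.bisect_right(DISTANCES_M, measured_m)
        d0, d1 = DISTANCES_M[i - 1], DISTANCES_M[i]
        t = (measured_m - d0) / (d1 - d0)
        y = CORRECTIONS[i - 1] + t * (CORRECTIONS[i] - CORRECTIONS[i - 1])
        return measured_m * 1000.0 * y

    print(focal_length_mm(2.0))  # 8.0, matching the tabulated entry
    print(focal_length_mm(4.0))  # interpolated between the 3 m and 5 m entries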
  • the technique of the present disclosure can also be applied when a moving image such as a live view image is captured at the second position.
  • the CPU 100 or the like may acquire the measured distance or the like while the moving image is captured, and derive the focal distance corresponding to the acquired measured distance or the like using the focal distance derivation table 109A or the like.
  • thereby, immediate derivation of the imaging position distance and the like is realized with high accuracy.
  • deriving the focal length while a moving image is being captured in this manner is particularly effective when the focal length changes as focus control is continuously performed by so-called continuous AF (Auto-Focus); the sketch below outlines this per-frame pattern.
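Under the same assumptions as the previous sketch (it reuses the illustrative focal_length_mm), the per-frame pattern might look as follows: while frames are captured and continuous AF keeps moving the focus lens, the measured distance is re-acquired and the focal length re-derived on every frame.

    import random
    import time

    def measure_distance_m() -> float:
        # Stand-in for the measurement unit; a real device would return the
        # laser-ranging result taken for the current frame.
        return 5.0 + random.uniform(-0.05, 0.05)

    def live_view_loop(num_frames: int = 90, frame_period_s: float = 1 / 30) -> None:
        for _ in range(num_frames):
            measured = measure_distance_m()
            focal_mm = focal_length_mm(measured)  # re-derived every frame
            # ... hand focal_mm to the imaging position distance and
            # three-dimensional coordinate derivations for this frame ...
            time.sleep(frame_period_s)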
  • in the above embodiments, the case where distance measurement is performed without providing a focus determination area, that is, an area whose focus state is to be determined, is exemplified, but the technology of the present disclosure is not limited to this; distance measurement may instead be performed by irradiating the focus determination area with laser light.

Landscapes

  • Physics & Mathematics (AREA)
  • Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Electromagnetism (AREA)
  • Radar, Positioning & Navigation (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Optics & Photonics (AREA)
  • Remote Sensing (AREA)
  • Studio Devices (AREA)
  • Focusing (AREA)
  • Automatic Focus Adjustment (AREA)
  • Optical Radar Systems And Details Thereof (AREA)
  • Length Measuring Devices By Optical Means (AREA)

Abstract

An information processing device comprising: an acquisition unit that acquires a measured distance, which is a distance measured by a measurement unit included in a distance measuring device, the distance measuring device comprising an imaging unit that images a subject and has a focus lens, and the measurement unit, which measures the distance to the subject by irradiating the subject with directional light, that is, light having directivity, and receiving reflected light of the directional light; and a derivation unit that uses correspondence relationship information indicating the correspondence relationship between the measured distance and the focal length of the focus lens to derive the focal length corresponding to the measured distance acquired by the acquisition unit.
PCT/JP2016/083994 2016-02-29 2016-11-16 Dispositif de traitement d'informations, procédé de traitement d'informations et programme WO2017149852A1 (fr)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
JP2016-038145 2016-02-29
JP2016038145A JP2019070529A (ja) 2016-02-29 2016-02-29 情報処理装置、情報処理方法、及びプログラム

Publications (1)

Publication Number Publication Date
WO2017149852A1 true WO2017149852A1 (fr) 2017-09-08

Family

ID=59742800

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/JP2016/083994 WO2017149852A1 (fr) 2016-02-29 2016-11-16 Dispositif de traitement d'informations, procédé de traitement d'informations et programme

Country Status (2)

Country Link
JP (1) JP2019070529A (fr)
WO (1) WO2017149852A1 (fr)


Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JPH113415A (ja) * 1997-06-12 1999-01-06 Mitsubishi Electric Corp 画像取込装置
JP2000266985A (ja) * 1999-03-15 2000-09-29 Mitsubishi Heavy Ind Ltd 監視カメラの自動焦点調整装置
JP2004205222A (ja) * 2002-12-20 2004-07-22 Matsushita Electric Works Ltd 測距装置
JP2012063751A (ja) * 2010-07-27 2012-03-29 Panasonic Corp 撮像装置
WO2015008587A1 (fr) * 2013-07-16 2015-01-22 富士フイルム株式会社 Dispositif de capture d'image, et dispositif de mesure tridimensionnelle

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113050113A (zh) * 2021-03-10 2021-06-29 广州南方卫星导航仪器有限公司 一种激光点定位方法和装置
CN113050113B (zh) * 2021-03-10 2023-08-01 广州南方卫星导航仪器有限公司 一种激光点定位方法和装置

Also Published As

Publication number Publication date
JP2019070529A (ja) 2019-05-09

Similar Documents

Publication Publication Date Title
JP6464281B2 (ja) 情報処理装置、情報処理方法、及びプログラム
US11828847B2 (en) Distance measurement device, deriving method for distance measurement, and deriving program for distance measurement
CN102724398A (zh) 图像数据提供方法、组合方法及呈现方法
US20190339059A1 (en) Information processing device, information processing method, and program
US10641896B2 (en) Distance measurement device, distance measurement method, and distance measurement program
JP5526733B2 (ja) 画像合成装置、画像再生装置、および撮像装置
JP6404482B2 (ja) 測距装置、測距用制御方法、及び測距用制御プログラム
WO2017149852A1 (fr) Dispositif de traitement d'informations, procédé de traitement d'informations et programme
JP6534456B2 (ja) 情報処理装置、情報処理方法、及びプログラム
JP6494059B2 (ja) 情報処理装置、情報処理方法、及びプログラム
JPWO2017056544A1 (ja) 測距装置、測距方法、及び測距プログラム
WO2017134882A1 (fr) Dispositif de traitement d'informations, procédé de traitement d'informations, et programme

Legal Events

Date Code Title Description
NENP Non-entry into the national phase

Ref country code: DE

121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 16892688

Country of ref document: EP

Kind code of ref document: A1

122 Ep: pct application non-entry in european phase

Ref document number: 16892688

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: JP