WO2022113482A1 - Information processing device, method, and program - Google Patents

Information processing device, method, and program

Info

Publication number
WO2022113482A1
Authority
WO
WIPO (PCT)
Prior art keywords
image
information
world coordinates
coordinate information
wall surface
Prior art date
Application number
PCT/JP2021/033765
Other languages
French (fr)
Japanese (ja)
Inventor
剛史 中村
Original Assignee
株式会社Clue
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 株式会社Clue
Priority to JP2022565078A (JPWO2022113482A1)
Publication of WO2022113482A1

Links

Images

Classifications

    • G: PHYSICS
    • G01: MEASURING; TESTING
    • G01B: MEASURING LENGTH, THICKNESS OR SIMILAR LINEAR DIMENSIONS; MEASURING ANGLES; MEASURING AREAS; MEASURING IRREGULARITIES OF SURFACES OR CONTOURS
    • G01B 11/00: Measuring arrangements characterised by the use of optical techniques
    • G01B 11/02: Measuring arrangements characterised by the use of optical techniques for measuring length, width or thickness
    • G01B 11/28: Measuring arrangements characterised by the use of optical techniques for measuring areas
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00: Image analysis
    • G06T 7/60: Analysis of geometric attributes
    • G06T 7/62: Analysis of geometric attributes of area, perimeter, diameter or volume
    • G06T 7/70: Determining position or orientation of objects or cameras

Definitions

  • This disclosure relates to information processing devices, methods and programs.
  • Patent Document 1, for example, discloses a technique of measuring the shape and dimensions of a roof, which is the object, from an image of the object captured by a camera mounted on a flying object, and of calculating the roof area from those dimensions.
  • The present disclosure has been made in view of this background, and its object is to provide an information processing device, method, and program capable of easily surveying the wall surface of an object.
  • According to the present disclosure, there is provided an information processing device comprising: a first acquisition unit that acquires, for at least two reference points displayed on a screen and each having coordinate information in world coordinates, the coordinate information on the image displayed on the screen; a first estimation unit that estimates the position, in world coordinates, of a virtual wall surface passing through the at least two reference points, based on the coordinate information of the at least two reference points in world coordinates, their coordinate information on the image, and information relating to the imaging state when the image was captured; a second acquisition unit that acquires the coordinate information on the image of an input point input on the screen; and a second estimation unit that estimates the world coordinates of the input point assuming the input point is located on the virtual wall surface.
  • According to the present disclosure, there is also provided a method in which a processor acquires, for at least two reference points displayed on a screen and each having coordinate information in world coordinates, the coordinate information on the image displayed on the screen; estimates the position, in world coordinates, of a virtual wall surface passing through the at least two reference points, based on the coordinate information of the at least two reference points in world coordinates, their coordinate information on the image, and information relating to the imaging state when the image was captured; acquires the coordinate information on the image of an input point input on the screen; and estimates the world coordinates of the input point assuming the input point is located on the virtual wall surface.
  • Further, according to the present disclosure, there is provided a program for causing a computer to function as: a first acquisition unit that acquires, for at least two reference points displayed on a screen and each having coordinate information in world coordinates, the coordinate information on the image displayed on the screen; a first estimation unit that estimates the position, in world coordinates, of a virtual wall surface passing through the at least two reference points, based on the coordinate information of the at least two reference points in world coordinates, their coordinate information on the image, and information relating to the imaging state when the image was captured; a second acquisition unit that acquires the coordinate information on the image of an input point input on the screen; and a second estimation unit that estimates the world coordinates of the input point assuming the input point is located on the virtual wall surface.
  • According to the present disclosure, the wall surface of an object can be surveyed easily.
  • FIG. 1 is a diagram showing an outline of a system 1 according to an embodiment of the present disclosure.
  • The system 1 includes an information processing terminal 10 (an example of an information processing device) and an unmanned flying object 20.
  • The system 1 according to the present embodiment can be used, for example, for surveying or inspecting a building S1 that is the object photographed by the unmanned flying object 20.
  • A user U operates the touch panel of the information processing terminal 10 and uses the unmanned flying object 20 to capture an image including the wall surface W1 of the building S1.
  • The information processing terminal 10 then defines the region of the wall surface W1 included in the image displayed on its touch panel and, based on that region and various information obtained at the time of imaging by the unmanned flying object 20, acquires the position information of the wall surface W1 in world coordinates (real space). From such position information, for example, the length between arbitrary points on the wall surface W1 and the area of an arbitrary region can be estimated.
  • The information processing terminal 10 according to the present embodiment is implemented as a small tablet-type computer.
  • In other embodiments, the information processing terminal 10 may be realized by a portable information processing terminal such as a smartphone or a game machine, or by a stationary information processing terminal such as a personal computer.
  • The information processing terminal 10 may also be realized by a plurality of pieces of hardware, with its functions distributed among them.
  • FIG. 2 is a block diagram showing the configuration of the information processing terminal 10 according to the present embodiment.
  • The information processing terminal 10 includes a control unit 11 and a touch panel 12, which is an example of a display unit.
  • The processor 11a is an arithmetic unit that controls the operation of the control unit 11, controls the transmission and reception of data between elements, and performs the processing necessary for program execution.
  • The processor 11a is, for example, a CPU (Central Processing Unit), and performs each process by executing a program stored in the storage 11c (described later) and loaded into the memory 11b.
  • The memory 11b includes a main storage device composed of a volatile storage device such as a DRAM (Dynamic Random Access Memory), and an auxiliary storage device composed of a non-volatile storage device such as a flash memory or an HDD (Hard Disk Drive). The memory 11b is used as a work area for the processor 11a, and also stores the boot loader executed when the control unit 11 starts, various setting information, and the like.
  • The storage 11c stores programs and information used for various processes. For example, when the user operates, via the information processing terminal 10, a flying object for capturing image information of the wall surface W1, the storage 11c may store a program for controlling the flight of that flying object.
  • The transmission / reception unit 11d connects the control unit 11 to a network such as the Internet, and may include interfaces for a local area network (LAN), a wide area network (WAN), infrared, wireless, WiFi, a point-to-point (P2P) network, a telecommunications network, or cloud communication, as well as short-range communication interfaces such as Bluetooth (registered trademark) or BLE (Bluetooth Low Energy).
  • The input / output unit 11e is an interface to which input / output devices are connected; in the present embodiment, the touch panel 12 is connected to it.
  • The bus 11f transmits, for example, address signals, data signals, and various control signals between the connected processor 11a, memory 11b, storage 11c, transmission / reception unit 11d, and input / output unit 11e.
  • The touch panel 12 is an example of a display unit, and includes a display surface on which acquired video and images are displayed.
  • In the present embodiment, this display surface accepts input of information through contact with the surface, and is implemented by various techniques such as the resistive film method or the capacitance method.
  • An image captured by the unmanned flying object 20 can be displayed on the display surface of the touch panel 12.
  • Buttons, objects, and the like for flight control of the unmanned flying object 20 and for control of the imaging device may also be displayed on the display surface.
  • The user can input information for the images, buttons, and the like displayed on the display surface via the touch panel 12.
  • The operation for inputting such information is, for example, a touch (tap) operation, a slide operation, or a swipe operation on a button, an object, or the like.
  • FIG. 3 is a block diagram showing an example of the functional configuration of the unmanned flying object 20 according to the present embodiment.
  • The unmanned flying object 20 includes, in its main body 21, a transmission / reception unit 22, a flight controller 23, a battery 24, an ESC 25, a motor 26, a propeller 27, and a camera 28.
  • The unmanned flying object 20 is an example of a flying object.
  • The type of flying object is not particularly limited, and may be, for example, a so-called multi-rotor drone as shown in FIG. 3.
  • The flight controller 23 can have one or more processors 23A, such as a programmable processor (e.g., a central processing unit (CPU)).
  • The flight controller 23 has a memory 23B and can access the memory 23B.
  • The memory 23B stores logic, code, and / or program instructions that the flight controller can execute to perform one or more steps.
  • The memory 23B may include, for example, a removable medium such as an SD card or random access memory (RAM), or an external storage device.
  • Data acquired from the sensors 23C may be transmitted directly to the memory 23B and stored there.
  • For example, still-image and moving-image data captured by the camera 28 are recorded in the built-in memory or an external memory.
  • The flight controller 23 includes a control module configured to control the state of the flying object.
  • For example, the control module controls the propulsion mechanism of the flying object (the motor 26 and the like) via the ESC (Electric Speed Controller) 25 in order to adjust the spatial arrangement, velocity, and / or acceleration of the flying object, which has six degrees of freedom (translations x, y, and z, and rotations θx, θy, and θz).
  • The control module can control one or more of the camera 28, the sensors 23C, and the like.
  • The flight controller 23 can also generate and retain information about the state of the flying object.
  • The information about the state of the flying object includes, for example, data acquired by the camera 28 and the sensors 23C.
  • The data acquired by the camera 28 includes, for example, image information generated by imaging with the camera 28 and information about the imaging state of the camera 28 (for example, the imaging position of the camera 28 and the imaging direction of the camera 28).
  • The imaging direction may include the pan-direction and tilt-direction angles of the camera 28, and the like.
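  • As a concrete illustration, the imaging-state information described above could be held in a simple structure such as the following sketch. The field names (position, yaw, pan, tilt) are illustrative assumptions made for this example, not the names of any actual flight-controller API.

```python
from dataclasses import dataclass
from typing import Tuple

@dataclass
class ImagingState:
    """Imaging-state information recorded alongside each captured image.

    Illustrative sketch only; the field names are assumptions, not an
    actual flight-controller API.
    """
    position: Tuple[float, float, float]  # camera position in world coordinates (x, y, height)
    yaw_deg: float   # horizontal imaging direction of the camera
    pan_deg: float   # pan angle of the gimbal-mounted camera
    tilt_deg: float  # tilt angle of the gimbal-mounted camera
```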
  • The flight controller 23 can communicate with the transmission / reception unit 22, which is configured to transmit and / or receive data from one or more external devices (e.g., a terminal such as the information processing terminal 10, a display device, or a radio transmitter or other remote controller that remotely controls the unmanned flying object 20).
  • The transmission / reception unit 22 can use, for example, one or more of a local area network (LAN), a wide area network (WAN), infrared, wireless, WiFi, a point-to-point (P2P) network, a telecommunications network, cloud communication, and the like.
  • The transmission / reception unit 22 can transmit and / or receive one or more of the data acquired by the camera 28 and the sensors 23C, processing results generated by the flight controller 23, predetermined control data, user commands from the information processing terminal 10 or a remote controller, and the like.
  • The transmission / reception unit 22 may, for example, receive input made on the information processing terminal 10 and receive flight or imaging control via a radio transmitter (not shown). It may also transmit data acquired by the camera 28 and the like to the information processing terminal via the radio transmitter (not shown).
  • The sensors 23C according to the present embodiment may include an inertial sensor (acceleration sensor, gyro sensor), a GPS sensor, a proximity sensor (e.g., lidar), or a vision / image sensor (e.g., a camera).
  • The battery 24 can be a known battery such as a lithium polymer battery.
  • The power for driving the unmanned flying object 20 is not limited to electric power supplied from the battery 24 or the like, and may come, for example, from an internal combustion engine or the like.
  • The camera 28 is an example of an imaging device.
  • The type of the camera 28 is not particularly limited, and may be, for example, an ordinary digital camera, an omnidirectional camera, an infrared camera, an image sensor such as a thermographic camera, or the like.
  • The camera 28 may be connected to the main body 21 via a gimbal or the like (not shown) so as to be independently displaceable.
  • FIG. 4 is a block diagram showing a functional configuration of the control unit 11 according to the present embodiment.
  • The control unit 11 includes an input information acquisition unit 111, a display control unit 112, an acquisition unit 113, an estimation unit 114, a calculation unit 115, and an image information DB (database) 116.
  • Each of these functional units can be realized by the processor 11a reading a program stored in the storage 11c into the memory 11b and executing it.
  • The input information acquisition unit 111 has a function of acquiring input information generated based on operations on the image displayed on the touch panel 12.
  • The input information referred to here includes, for example, information about a position on the image displayed on the touch panel 12.
  • The position on the image is, for example, the position of the pixels constituting the image. That is, the input information includes information indicating at which position on the image the user performed an operation. More specifically, the input information may include information relating to the designation of points on the image displayed on the touch panel 12, and information relating to touch operations on button objects displayed on the touch panel 12.
  • The display control unit 112 has a function of displaying the acquired image on the touch panel 12. The display control unit 112 also has a function of displaying, within the image, information such as buttons, objects, and text for presenting information to the user of the system 1 and for acquiring input information based on the user's operations. It may further have a function of displaying on the touch panel 12 a display based on the input information acquired by the input information acquisition unit 111.
  • The acquisition unit 113 has a function of acquiring input information and image information.
  • The acquisition unit 113 may acquire image information obtained by imaging an object from the image information DB 116. The acquisition unit 113 may also acquire image information in real time from the unmanned flying object 20 while it performs imaging processing.
  • The acquisition unit 113 may also acquire other information based on various kinds of information.
  • The acquisition unit 113 includes a first acquisition unit 1131 and a second acquisition unit 1132.
  • The first acquisition unit 1131 acquires, for at least two reference points that are displayed on the screen of the touch panel 12 and that each have coordinate information in world coordinates, the coordinate information on the image displayed on the screen.
  • A reference point here may carry coordinate information in world coordinates (coordinates in real space), for example latitude information, longitude information, and altitude information.
  • A reference point can be set on the screen by, for example, an operation by the user U on the touch panel 12.
  • The coordinate information in world coordinates may be attached when the reference point is set on the screen as described later, or may be attached to the reference point in advance.
  • The coordinate information on the image may be, for example, coordinate information in an XY coordinate system determined based on the pixels of the image. The specific behavior will be described later.
  • A reference point may have, for example, coordinate information in a predetermined height direction in world coordinates.
  • The coordinate information in the predetermined height direction may be coordinate information in the height direction relative to the ground.
  • The coordinate information in the plane direction in world coordinates of a reference point may be obtained based on the coordinate information in the predetermined height direction, the coordinate information of the reference point on the image, and the information relating to the imaging state when the image was captured. That is, it can be obtained by a coordinate conversion process based on a conversion formula derived from the reference point's coordinates on the image, the imaging-state information, and the coordinate information in the predetermined height direction, as illustrated in the sketch below.
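  • A minimal sketch of that conversion: the ray from the camera through the reference point's pixel is intersected with the horizontal plane at the known height, and the intersection's x and y give the plane-direction coordinates. This is an illustration, not the patent's actual formula; it assumes a helper that back-projects a pixel into a world-space ray, such as the hypothetical `pixel_to_world_ray` sketched later in the walkthrough.

```python
import numpy as np

def intersect_ray_with_height_plane(origin, direction, plane_height):
    """Intersect a camera ray with the horizontal plane z = plane_height.

    origin, direction: 3-vectors in world coordinates (direction need not
    be normalized). Returns the 3D intersection point, whose x and y give
    the plane-direction coordinates of the reference point.
    """
    origin = np.asarray(origin, dtype=float)
    direction = np.asarray(direction, dtype=float)
    if abs(direction[2]) < 1e-9:
        raise ValueError("ray is parallel to the horizontal plane")
    t = (plane_height - origin[2]) / direction[2]
    if t <= 0:
        raise ValueError("plane lies behind the camera")
    return origin + t * direction
```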
  • The second acquisition unit 1132 has a function of acquiring coordinate information on the image of at least one input point input on the screen.
  • An input point is distinct from a reference point.
  • An input point may be set on the screen by, for example, an operation by the user U on the touch panel 12.
  • The estimation unit 114 has a function of estimating positions and the like in world coordinates based on the coordinate information acquired by the acquisition unit 113. Specifically, the estimation unit 114 includes a first estimation unit 1141 and a second estimation unit 1142.
  • The first estimation unit 1141 has a function of estimating the position, in world coordinates, of a virtual wall surface passing through at least two reference points, based on the coordinate information of the at least two reference points in world coordinates, their coordinate information on the image, and the information relating to the imaging state when the image was captured. For example, if the (actual) wall surface of the object is assumed to be at a predetermined angle (for example, perpendicular) to the ground, the virtual wall surface can be treated as having the same predetermined angle. The first estimation unit 1141 can then estimate the position of the virtual wall surface in world coordinates by performing a coordinate conversion process, as described later.
  • The position of the virtual wall surface in world coordinates means, for example, the set of real-space coordinates of the region corresponding to the virtual wall surface.
  • The second estimation unit 1142 has a function of estimating the world coordinates of an input point assuming the input point is located on the virtual wall surface.
  • The second estimation unit 1142 can estimate the world coordinates of an input point from its coordinate information on the image by performing a coordinate conversion process under the assumption that the input point lies on the virtual wall surface.
  • The calculation unit 115 has a function of calculating the distance, in world coordinates, between a reference point and an input point on the virtual wall surface. The calculation unit 115 may also calculate the distance, in world coordinates, between one input point and another input point on the virtual wall surface. Further, the calculation unit 115 may calculate the area of a region on the virtual wall surface whose vertices are points on the virtual wall surface, in world coordinates, corresponding to at least one of the reference points and the input points. Such a region may have as its vertices points corresponding to reference points and input points, or points corresponding to a plurality of input points.
  • A region whose vertices are points corresponding to a plurality of input points may be, for example, a region formed by vertices consisting only of points corresponding to input points. Such a region may be bounded by straight lines, curved lines, or both. That is, the region may have a polygonal shape or a freehand-drawn shape. Such a region is set by, for example, the calculation unit 115.
  • The image information DB 116 is a database that stores information (image information) of images captured by the unmanned flying object 20 or the like.
  • Such an image may be one captured by the unmanned flying object 20, but is not limited to this example.
  • It may also be an image captured from a high place or the like using a digital camera, a smartphone, a tablet, or the like.
  • As long as the imaging position, imaging direction, and the like can be acquired by some method, such information can be used as the information relating to the imaging state.
  • FIG. 5 is a flowchart of a series of processes in the information processing terminal 10 according to the present embodiment.
  • First, the acquisition unit 113 acquires image information and information relating to the imaging state (step SQ101).
  • Specifically, the unmanned flying object 20 captures an image of the wall surface W1 of the building S1.
  • FIG. 6 is a diagram showing an example of imaging processing by the unmanned flying object 20.
  • The unmanned flying object 20 is flying in the vicinity of the building S1, and the camera 28 faces the direction PV1 toward the wall surface W1.
  • The camera 28 performs imaging so that the wall surface W1 is captured, and obtains a captured image.
  • The unmanned flying object 20 also acquires, as information relating to the imaging state, information on the height H1 of the camera 28, the horizontal imaging direction Dir1 of the camera 28, and the imaging angle Ang1 of the camera 28.
  • The obtained image information and imaging-state information are transmitted from the unmanned flying object 20 to the information processing terminal 10.
  • The acquisition unit 113 acquires this information.
  • The image information may first be stored in the image information DB 116, and the imaging-state information may be associated with the image information.
  • FIG. 7 is a display example on the screen V1 of the touch panel 12.
  • An image D11 captured by the unmanned flying object 20 is displayed on the screen V1.
  • The image D11 includes an image of the building S1.
  • The building S1 includes the wall surface W1, a window W2, and a window W3.
  • The screen V1 may be provided with a button 101 for creating an object on the virtual wall surface, a button 102 for creating an area object corresponding to a window, and a button 103 for deleting a created area.
  • Next, the input information acquisition unit 111 acquires an operation by the user U, and reference points are set (step SQ105).
  • The reference points are points provided for acquiring the world coordinates of the wall surface W1, and at least two of them are provided.
  • The reference points may be set on the image D11, for example, by an operation of the user U.
  • FIG. 8 is a diagram showing an example of setting reference points on the image D11. As shown in FIG. 8, two reference points 51 and 52 may be set on the image D11. For example, the two reference points 51 and 52 are placed so as to correspond to the positions of the lower vertices of the wall surface W1 to be surveyed.
  • Next, the first acquisition unit 1131 associates the coordinates of the reference points on the image with coordinates in world coordinates, and acquires the coordinate information of the reference points in world coordinates (step SQ107).
  • Here, the coordinates of the reference points 51 and 52 in the height direction in world coordinates (that is, the height from the ground) can be set to 0, since the points correspond to the lower vertices of the wall surface W1.
  • Alternatively, the height-direction coordinates of the lower vertices of the wall surface W1 may be, for example, an actual altitude value other than 0.
  • The horizontal coordinates of the reference points 51 and 52 in world coordinates can then be obtained from the coordinates of the reference points 51 and 52 on the image, their height-direction coordinates in world coordinates, and the imaging-state information of the camera 28, using known coordinate conversion methods. Specifically, the coordinates of the reference points 51 and 52 on the image are converted into film coordinates of the camera 28, the film coordinates are converted into camera coordinates of the camera 28, and the camera coordinates are converted into world coordinates, thereby obtaining the world coordinates of the reference points 51 and 52.
  • The conversion formulas between the respective coordinate systems can be determined from the specifications of the camera 28 and the imaging-state information of the camera 28. The coordinate conversion may use a method that takes into account camera specifications such as the F-number and lens distortion.
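  • As one concrete (and simplified) reading of this chain, the sketch below maps a pixel to film coordinates, then to a ray in camera coordinates, then rotates that ray into world coordinates. It assumes an ideal pinhole camera with no lens distortion, and the axis and angle conventions are assumptions made for the example, not values taken from the patent. Intersecting the resulting ray with the ground-height plane (see the earlier `intersect_ray_with_height_plane` sketch) yields the horizontal world coordinates of the reference points 51 and 52.

```python
import numpy as np

def pixel_to_world_ray(px, py, img_w, img_h, sensor_w, sensor_h,
                       focal_len, cam_pos, yaw_deg, tilt_deg):
    """Return (origin, direction) of the world-space ray through pixel (px, py).

    Minimal pinhole-camera sketch: image coordinates are mapped to film
    (sensor) coordinates, then to a direction in camera coordinates, then
    rotated into world coordinates. Lens distortion is ignored.
    """
    # Image coordinates -> film coordinates (sensor plane, origin at center).
    fx = (px / img_w - 0.5) * sensor_w
    fy = (0.5 - py / img_h) * sensor_h
    # Film coordinates -> ray direction in camera coordinates
    # (camera looks along +Y here; an arbitrary but consistent convention).
    d_cam = np.array([fx, focal_len, fy])
    # Camera coordinates -> world coordinates via tilt (about X) then yaw (about Z).
    t = np.radians(tilt_deg)
    y = np.radians(yaw_deg)
    rot_tilt = np.array([[1, 0, 0],
                         [0, np.cos(t), -np.sin(t)],
                         [0, np.sin(t),  np.cos(t)]])
    rot_yaw = np.array([[np.cos(y), -np.sin(y), 0],
                        [np.sin(y),  np.cos(y), 0],
                        [0, 0, 1]])
    d_world = rot_yaw @ rot_tilt @ d_cam
    return np.asarray(cam_pos, dtype=float), d_world
```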
  • FIG. 9 is a diagram for explaining the process of estimating the position of the virtual wall surface.
  • A virtual wall surface 60 passing through the reference points 51 and 52 is virtually provided on the image D11.
  • The virtual wall surface 60 may or may not actually be displayed on the screen V1.
  • The virtual wall surface 60 is obtained, for example, as follows. Here it is assumed that the virtual wall surface 60 is perpendicular to the ground, but the present technique is not limited to this example.
  • First, the normal direction (yaw direction) of the virtual wall surface is calculated on the assumption that the virtual wall surface passes through the reference points 51 and 52.
  • Next, the coordinate system formed by the line segment connecting the reference points 51 and 52 and the calculated normal direction is defined as the coordinate system of the virtual wall surface, as sketched below.
  • Thereby, the coordinates of the virtual wall surface 60 on the image are associated with world coordinates, and the position of the virtual wall surface 60 in world coordinates is associated with the position of the actual wall surface W1 in real space.
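  • Under the perpendicular-to-ground assumption stated above, the wall's coordinate system can be built from the two reference points alone. The following is a hedged sketch; the function name and frame conventions (u along the wall base, v vertical, n the normal) are illustrative assumptions.

```python
import numpy as np

def wall_frame(p1, p2):
    """Build a coordinate frame for a vertical virtual wall through p1 and p2.

    p1, p2: world coordinates of the two reference points (e.g. the lower
    vertices of the wall). Returns (origin, u, v, n): u runs along the wall
    base, v points straight up, and n is the wall normal (yaw direction).
    """
    p1 = np.asarray(p1, dtype=float)
    p2 = np.asarray(p2, dtype=float)
    u = p2 - p1
    u[2] = 0.0                     # the base line lies in the horizontal plane
    u /= np.linalg.norm(u)
    v = np.array([0.0, 0.0, 1.0])  # wall assumed perpendicular to the ground
    n = np.cross(u, v)             # normal (yaw direction) of the wall plane
    return p1, u, v, n
```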
  • Next, the input information acquisition unit 111 acquires an operation by the user U, and input points are set (step SQ111).
  • The second acquisition unit 1132 acquires the coordinate information of the input points on the image.
  • Then, the second estimation unit 1142 estimates the world coordinates of each input point assuming it is located on the virtual wall surface 60 (step SQ113).
  • Next, a region having the reference points and the input points as vertices is set (step SQ115).
  • FIG. 10 is a diagram for explaining the process related to the setting of the input point and the setting of the area.
  • Input points 53 and 54 are set in addition to the reference points 51 and 52.
  • The input points 53 and 54 are treated as being located on the virtual wall surface 60.
  • The coordinates of the input points 53 and 54 in world coordinates are obtained by the second estimation unit 1142, which converts their coordinates on the image D11 via the coordinate system of the virtual wall surface 60, as sketched below.
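  • This conversion can be sketched as a ray-plane intersection: the input point's pixel is back-projected into a world-space ray (for example with the hypothetical `pixel_to_world_ray` helper above) and intersected with the virtual wall plane, after which the hit point can be expressed in the wall's coordinate system. A minimal sketch under these assumptions:

```python
import numpy as np

def project_point_onto_wall(origin, direction, wall_origin, u, v, n):
    """Intersect a camera ray with the virtual wall plane.

    origin, direction: world-space ray through the input point's pixel.
    wall_origin, u, v, n: the wall frame (see wall_frame above).
    Returns (world_point, (s, t)) where s and t are the coordinates of the
    point within the wall plane along u (horizontal) and v (vertical).
    """
    origin = np.asarray(origin, dtype=float)
    direction = np.asarray(direction, dtype=float)
    denom = np.dot(n, direction)
    if abs(denom) < 1e-9:
        raise ValueError("ray is parallel to the virtual wall")
    t_ray = np.dot(n, wall_origin - origin) / denom
    world_point = origin + t_ray * direction
    rel = world_point - wall_origin
    return world_point, (float(np.dot(rel, u)), float(np.dot(rel, v)))
```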
  • The region 61 shown in FIG. 10 corresponds to the virtual wall surface 60, and thus to the wall surface W1 shown in the image D11.
  • The calculation unit 115 then calculates the area enclosed by the region 61 (step SQ117).
  • Specifically, the calculation unit 115 calculates the real-space area of the region enclosed by the reference points 51 and 52 and the input points 53 and 54, using their coordinate information in world coordinates. The calculation unit 115 may also calculate the real-space distance between two points; a sketch of both calculations follows. Information on such areas and distances may be output to the screen V1 or the like in any manner by, for example, the display control unit 112.
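  • With every vertex expressed in the wall's two-dimensional coordinate system, the distance between two points reduces to an ordinary Euclidean distance, and the enclosed area follows from the shoelace formula. A minimal sketch (the helper names are assumptions):

```python
import math

def distance(a, b):
    """Euclidean distance between two points in wall-plane (u, v) coordinates."""
    return math.hypot(b[0] - a[0], b[1] - a[1])

def polygon_area(vertices):
    """Area of a simple polygon given as wall-plane (u, v) vertices in order,
    computed with the shoelace formula."""
    area = 0.0
    for i, (x1, y1) in enumerate(vertices):
        x2, y2 = vertices[(i + 1) % len(vertices)]
        area += x1 * y2 - x2 * y1
    return abs(area) / 2.0
```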
  • FIG. 11 is a diagram for explaining a process related to estimation of a region according to a modification of the present embodiment.
  • Within the region 61 (corresponding to the wall surface W1) enclosed by the reference points 51 and 52 and the input points 53 and 54, a region 62 enclosed by four further input points 55, 56, 57, and 58 is set.
  • This region 62 corresponds to the window W2 provided in the wall surface W1 of the building S1.
  • The calculation unit 115 calculates the area of the region 61 and the area of the region 62; by subtracting the area of the region 62 from the area of the region 61, the area of the wall surface W1 excluding the window W2 can be calculated.
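  • Continuing the hypothetical sketch above (reusing `polygon_area`), this modification amounts to a single subtraction; the coordinates below are made-up values for illustration only:

```python
# Hypothetical wall-plane coordinates (meters) for regions 61 and 62.
wall_61 = [(0.0, 0.0), (8.0, 0.0), (8.0, 3.0), (0.0, 3.0)]    # reference + input points
window_62 = [(2.0, 1.0), (3.5, 1.0), (3.5, 2.0), (2.0, 2.0)]  # input points 55-58

net_wall_area = polygon_area(wall_61) - polygon_area(window_62)
print(net_wall_area)  # 24.0 - 1.5 = 22.5 square meters, excluding the window
```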
  • As described above, according to the present embodiment, for an image of the wall surface W1 of the building S1 captured from the air by the unmanned flying object 20, at least two reference points whose world coordinates are known can be used to estimate the position in world coordinates of the corresponding virtual wall surface, to set input points on the virtual wall surface, and to estimate the position coordinates of the reference points and input points in world coordinates.
  • If the world coordinates of the at least two reference points in the height direction (for example, the height from the ground) are known, the coordinate information of the reference points in world coordinates can be acquired from the imaging-state information by coordinate conversion.
  • For example, if the reference points are set at positions corresponding to the portion where the wall surface W1 of the building S1 meets the ground, the coordinate information of the reference points in world coordinates can be determined uniquely.
  • Likewise, the coordinate information in world coordinates of an input point set on the image, assuming the input point is located on the virtual wall surface, can be determined uniquely.
  • From these coordinates, the real-space area of the wall surface and the distance between two points can be calculated.
  • As a result, the actual position, length, and area in real space can be calculated more accurately even for a wall surface imaged from an oblique direction.
  • The devices described in the present specification may be realized as a single device, or may be realized by a plurality of devices connected in part or in whole via a network.
  • For example, the control unit and the storage of the information processing terminal 10 may be realized by different servers connected to each other via a network.
  • The series of processes performed by the devices described in the present specification may be realized using software, hardware, or a combination of software and hardware. A computer program for realizing each function of the information processing terminal 10 according to the present embodiment can be created and implemented on a PC or the like. A computer-readable recording medium storing such a computer program can also be provided.
  • The recording medium is, for example, a magnetic disk, an optical disk, a magneto-optical disk, or a flash memory. The above computer program may also be distributed, for example, via a network without using a recording medium.
  • (Item 1) An information processing device comprising: a first acquisition unit that acquires, for at least two reference points displayed on a screen and each having coordinate information in world coordinates, the coordinate information on the image displayed on the screen; a first estimation unit that estimates the position, in world coordinates, of a virtual wall surface passing through the at least two reference points, based on the coordinate information of the at least two reference points in world coordinates, their coordinate information on the image, and the information relating to the imaging state when the image was captured; a second acquisition unit that acquires the coordinate information on the image of an input point input on the screen; and a second estimation unit that estimates the world coordinates of the input point when the input point is located on the virtual wall surface.
  • (Item 2) The information processing device according to item 1, wherein the reference point has coordinate information in a predetermined height direction in world coordinates.
  • (Item 3) The information processing device according to item 2, wherein the coordinate information in the plane direction in world coordinates of the reference point is acquired based on the coordinate information in the predetermined height direction, the coordinate information of the reference point on the image, and the information relating to the imaging state when the image was captured.
  • (Item 4) The information processing device wherein the coordinate information in the predetermined height direction in world coordinates is coordinate information in the height direction of the ground.
  • (Item 5) The information processing device further comprising a calculation unit that calculates the distance, in world coordinates, between the reference point and the input point on the virtual wall surface.
  • (Item 6) The information processing device according to item 5, further comprising a calculation unit that calculates the area of a region on the virtual wall surface whose vertices are points on the virtual wall surface, in world coordinates, corresponding to at least one of the reference point and the input point.
  • (Item 7) The information processing device wherein the region includes a region whose vertices are points corresponding to the reference point and the input point.
  • (Item 8) The information processing device wherein the region includes a region whose vertices are points corresponding to a plurality of input points.
  • (Item 9) The information processing device according to any one of items 1 to 8, wherein the information relating to the imaging state includes information on the imaging position of the imaging device and the imaging direction of the imaging device.
  • (Item 10) The information processing device wherein the image includes an image captured by an unmanned flying object.
  • (Item 11) A method in which a processor acquires, for at least two reference points displayed on a screen and each having coordinate information in world coordinates, the coordinate information on the image displayed on the screen; estimates the position, in world coordinates, of a virtual wall surface passing through the at least two reference points, based on the coordinate information of the at least two reference points in world coordinates, their coordinate information on the image, and the information relating to the imaging state when the image was captured; acquires the coordinate information on the image of an input point input on the screen; and estimates the world coordinates of the input point when the input point is located on the virtual wall surface.
  • (Item 12) A program for causing a computer to function as: a first acquisition unit that acquires, for at least two reference points displayed on a screen and each having coordinate information in world coordinates, the coordinate information on the image displayed on the screen; a first estimation unit that estimates the position, in world coordinates, of a virtual wall surface passing through the at least two reference points, based on the coordinate information of the at least two reference points in world coordinates, their coordinate information on the image, and the information relating to the imaging state when the image was captured; a second acquisition unit that acquires the coordinate information on the image of an input point input on the screen; and a second estimation unit that estimates the world coordinates of the input point when the input point is located on the virtual wall surface.

Abstract

[Problem] To easily measure the wall surface of an object. [Solution] An information processing device 10 according to the present disclosure comprises: a first acquisition unit 1131 for acquiring coordinate information, on an image displayed on a screen, of at least two reference points, each of which is displayed on the screen and has coordinate information in world coordinates; a first inference unit 1141 for inferring the location of a virtual wall surface in the world coordinates which passes through the at least two reference points, on the basis of the coordinate information of the at least two reference points in the world coordinates, the coordinate information of the at least two reference points on the image, and information pertaining to an imaging condition in which the image has been captured; a second acquisition unit 1132 for acquiring coordinate information of at least one input point on the image which has been input with respect to the screen; and a second inference unit 1142 for inferring world coordinates of the input point for a case where the input point is assumed to be located on the virtual wall surface.

Description

Information processing device, method, and program
 This disclosure relates to information processing devices, methods, and programs.
 When observing an object from a high place, taking aerial photographs of the ground from the sky, or observing an area that is difficult to enter, flying objects such as so-called drones or multicopters, which fly by rotating a plurality of propellers, have come to be used in recent years. Images generated by imaging an object from a high place with such a flying object may be used for inspection or surveying of the object. For example, Patent Document 1 discloses a technique of measuring the shape and dimensions of a roof, which is the object, from an image of the object captured by a camera mounted on a flying object, and calculating the roof area from those dimensions.
Japanese Patent Application Laid-Open No. 2003-162552
 However, with the technique disclosed in the above patent document, an image of an object captured from a high place is subject to perspective due to central projection, so it is difficult to survey the wall surface of the object.
 The present disclosure has been made in view of this background, and its object is to provide an information processing device, method, and program capable of easily surveying the wall surface of an object.
 According to the present disclosure, there is provided an information processing device comprising: a first acquisition unit that acquires, for at least two reference points displayed on a screen and each having coordinate information in world coordinates, the coordinate information on the image displayed on the screen; a first estimation unit that estimates the position, in world coordinates, of a virtual wall surface passing through the at least two reference points, based on the coordinate information of the at least two reference points in world coordinates, their coordinate information on the image, and the information relating to the imaging state when the image was captured; a second acquisition unit that acquires the coordinate information on the image of an input point input on the screen; and a second estimation unit that estimates the world coordinates of the input point assuming the input point is located on the virtual wall surface.
 According to the present disclosure, there is also provided a method in which a processor acquires, for at least two reference points displayed on a screen and each having coordinate information in world coordinates, the coordinate information on the image displayed on the screen; estimates the position, in world coordinates, of a virtual wall surface passing through the at least two reference points, based on the coordinate information of the at least two reference points in world coordinates, their coordinate information on the image, and the information relating to the imaging state when the image was captured; acquires the coordinate information on the image of an input point input on the screen; and estimates the world coordinates of the input point assuming the input point is located on the virtual wall surface.
 Further, according to the present disclosure, there is provided a program for causing a computer to function as: a first acquisition unit that acquires, for at least two reference points displayed on a screen and each having coordinate information in world coordinates, the coordinate information on the image displayed on the screen; a first estimation unit that estimates the position, in world coordinates, of a virtual wall surface passing through the at least two reference points, based on the coordinate information of the at least two reference points in world coordinates, their coordinate information on the image, and the information relating to the imaging state when the image was captured; a second acquisition unit that acquires the coordinate information on the image of an input point input on the screen; and a second estimation unit that estimates the world coordinates of the input point assuming the input point is located on the virtual wall surface.
 According to the present disclosure, the wall surface of an object can be surveyed easily.
FIG. 1 is a diagram showing an outline of the system 1 according to an embodiment of the present disclosure.
FIG. 2 is a block diagram showing the configuration of the information processing terminal 10 according to the embodiment.
FIG. 3 is a block diagram showing an example of the functional configuration of the unmanned flying object 20 according to the embodiment.
FIG. 4 is a block diagram showing the functional configuration of the control unit 11 according to the embodiment.
FIG. 5 is a flowchart of a series of processes in the information processing terminal 10 according to the embodiment.
FIG. 6 is a diagram showing an example of imaging processing by the unmanned flying object 20.
FIG. 7 is a display example on the screen V1 of the touch panel 12.
FIG. 8 is a diagram showing an example of setting reference points on the image D11.
FIG. 9 is a diagram for explaining the process of estimating the position of the virtual wall surface.
FIG. 10 is a diagram for explaining processes relating to the setting of input points and the setting of a region.
FIG. 11 is a diagram for explaining a process relating to estimation of a region according to a modification of the embodiment.
 Preferred embodiments of the present disclosure will be described in detail below with reference to the accompanying drawings. In the present specification and drawings, components having substantially the same functional configuration are denoted by the same reference numerals, and duplicate description is omitted.
 FIG. 1 is a diagram showing an outline of the system 1 according to an embodiment of the present disclosure. As shown in the figure, the system 1 includes an information processing terminal 10 (an example of an information processing device) and an unmanned flying object 20. The system 1 according to the present embodiment can be used, for example, for surveying or inspecting a building S1 that is the object photographed by the unmanned flying object 20. In this system 1, a user U operates the touch panel of the information processing terminal 10 and uses the unmanned flying object 20 to capture an image including the wall surface W1 of the building S1. The information processing terminal 10 then defines the region of the wall surface W1 included in the image displayed on its touch panel and, based on that region and various information obtained at the time of imaging by the unmanned flying object 20, acquires the position information of the wall surface W1 in world coordinates (real space). From such position information, for example, the length between arbitrary points on the wall surface W1 and the area of an arbitrary region can be estimated.
 The information processing terminal 10 according to the present embodiment is implemented as a small tablet-type computer. In other embodiments, the information processing terminal 10 may be realized by a portable information processing terminal such as a smartphone or a game machine, or by a stationary information processing terminal such as a personal computer. The information processing terminal 10 may also be realized by a plurality of pieces of hardware, with its functions distributed among them.
 FIG. 2 is a block diagram showing the configuration of the information processing terminal 10 according to the present embodiment. As shown in the figure, the information processing terminal 10 includes a control unit 11 and a touch panel 12, which is an example of a display unit.
 The processor 11a is an arithmetic unit that controls the operation of the control unit 11, controls the transmission and reception of data between elements, and performs the processing necessary for program execution. In the present embodiment, the processor 11a is, for example, a CPU (Central Processing Unit), and performs each process by executing a program stored in the storage 11c (described later) and loaded into the memory 11b.
 The memory 11b includes a main storage device composed of a volatile storage device such as a DRAM (Dynamic Random Access Memory), and an auxiliary storage device composed of a non-volatile storage device such as a flash memory or an HDD (Hard Disk Drive). The memory 11b is used as a work area for the processor 11a, and also stores the boot loader executed when the control unit 11 starts, various setting information, and the like.
 The storage 11c stores programs and information used for various processes. For example, when the user operates, via the information processing terminal 10, a flying object for capturing image information of the wall surface W1, the storage 11c may store a program for controlling the flight of that flying object.
 The transmission / reception unit 11d connects the control unit 11 to a network such as the Internet, and may include interfaces for a local area network (LAN), a wide area network (WAN), infrared, wireless, WiFi, a point-to-point (P2P) network, a telecommunications network, or cloud communication, as well as short-range communication interfaces such as Bluetooth (registered trademark) or BLE (Bluetooth Low Energy).
 The input / output unit 11e is an interface to which input / output devices are connected; in the present embodiment, the touch panel 12 is connected to it.
 The bus 11f transmits, for example, address signals, data signals, and various control signals between the connected processor 11a, memory 11b, storage 11c, transmission / reception unit 11d, and input / output unit 11e.
 The touch panel 12 is an example of a display unit, and includes a display surface on which acquired video and images are displayed. In the present embodiment, this display surface accepts input of information through contact with the surface, and is implemented by various techniques such as the resistive film method or the capacitance method.
 The display surface of the touch panel 12 can display, for example, images captured by the unmanned flying object 20. Buttons, objects, and the like for flight control of the unmanned flying object 20 and for control of the imaging device may also be displayed on the display surface. The user can input information for the images, buttons, and the like displayed on the display surface via the touch panel 12. The operation for inputting such information is, for example, a touch (tap) operation, a slide operation, or a swipe operation on a button, an object, or the like.
 図3は、本実施形態に係る無人飛行体20の機能構成の一例を示すブロック図である。図3に示すように、一実施形態に係る無人飛行体20は、本体21において、送受信部22、フライトコントローラ23、バッテリ24、ESC25、モータ26、プロペラ27およびカメラ28を備える。なお、無人飛行体20は飛行体の一例である。飛行体の種類は特に限定されず、例えば、図3に示すようなマルチローター式のいわゆるドローンであってもよい。 FIG. 3 is a block diagram showing an example of the functional configuration of the unmanned aircraft 20 according to the present embodiment. As shown in FIG. 3, the unmanned aircraft 20 according to the embodiment includes a transmission / reception unit 22, a flight controller 23, a battery 24, an ESC 25, a motor 26, a propeller 27, and a camera 28 in the main body 21. The unmanned flying object 20 is an example of an flying object. The type of the flying object is not particularly limited, and may be, for example, a so-called multi-rotor type drone as shown in FIG.
 フライトコントローラ23は、プログラマブルプロセッサ(例えば、中央演算処理装置(CPU))などの1つ以上のプロセッサ23Aを有することができる。 The flight controller 23 can have one or more processors 23A such as a programmable processor (eg, central processing unit (CPU)).
 フライトコントローラ23は、メモリ23Bを有しており、メモリ23Bにアクセス可能である。メモリ23Bは、1つ以上のステップを行うためにフライトコントローラが実行可能であるロジック、コード、および/またはプログラム命令を記憶している。 The flight controller 23 has a memory 23B and can access the memory 23B. Memory 23B stores logic, code, and / or program instructions that the flight controller can execute to perform one or more steps.
 メモリ23Bは、例えば、SDカードやランダムアクセスメモリ(RAM)などの分離可能な媒体または外部の記憶装置を含んでいてもよい。センサ類23Cから取得したデータは、メモリ23Bに直接に伝達されかつ記憶されてもよい。例えば、カメラ28で撮影した静止画・動画データが内蔵メモリ又は外部メモリに記録される。 The memory 23B may include, for example, a separable medium such as an SD card or a random access memory (RAM) or an external storage device. The data acquired from the sensors 23C may be directly transmitted and stored in the memory 23B. For example, still image / moving image data taken by the camera 28 is recorded in the built-in memory or the external memory.
 フライトコントローラ23は、飛行体の状態を制御するように構成された制御モジュールを含んでいる。例えば、制御モジュールは、6自由度(並進運動x、y及びz、並びに回転運動θ、θ及びθ)を有する飛行体の空間的配置、速度、および/または加速度を調整するために飛行体の推進機構(モータ26等)をESC(Electric Speed Controller)25を介して制御する。制御モジュールは、カメラ28やセンサ類23C等のうち1つ以上を制御することができる。また、フライトコントローラ23は、飛行体の状態に関する情報を生成し、保持しうる。飛行体の状態に関する情報は、例えば、カメラ28やセンサ類23Cが取得したデータを含む。カメラ28が取得したデータは、例えば、カメラ28が撮像して生成される画像情報、カメラ28の撮像状況(例えば、カメラ28の撮像位置、カメラ28の撮像方向)に関する情報を含む。撮像方向は、カメラ28のパン方向およびチルト方向の角度等を含みうる。 The flight controller 23 includes a control module configured to control the state of the flying object. For example, the control module may adjust the spatial placement, velocity, and / or acceleration of an air vehicle with 6 degrees of freedom (translation x, y and z, and rotational motion θ x , θ y and θ z ). The propulsion mechanism (motor 26, etc.) of the air vehicle is controlled via the ESC (Electric Speed Controller) 25. The control module can control one or more of the camera 28, the sensors 23C, and the like. In addition, the flight controller 23 can generate and retain information about the state of the flying object. The information regarding the state of the flying object includes, for example, data acquired by the camera 28 and the sensors 23C. The data acquired by the camera 28 includes, for example, image information generated by the camera 28 taking an image, and information regarding an image pickup state of the camera 28 (for example, an image pickup position of the camera 28, an image pickup direction of the camera 28). The imaging direction may include angles in the pan direction and tilt direction of the camera 28 and the like.
The flight controller 23 can communicate with the transmission/reception unit 22, which is configured to transmit data to and/or receive data from one or more external devices (for example, a terminal such as the information processing terminal 10, a display device, or a remote-control transmitter or other remote controller for operating the unmanned aerial vehicle 20 remotely). For example, the transmission/reception unit 22 can use one or more of a local area network (LAN), a wide area network (WAN), infrared, radio, WiFi, a point-to-point (P2P) network, a telecommunications network, cloud communication, and the like.

The transmission/reception unit 22 can transmit and/or receive one or more of the data acquired by the camera 28 and the sensors 23C, processing results generated by the flight controller 23, predetermined control data, user commands from the information processing terminal 10 or a remote controller, and the like. For example, in response to an input to the information processing terminal 10, the transmission/reception unit 22 may receive flight and imaging control commands via a remote-control transmitter (not shown). The transmission/reception unit 22 may also transmit, for example, the data acquired by the camera 28 and the like to the information processing terminal via a remote-control transmitter (not shown).

The sensors 23C according to this embodiment may include an inertial sensor (an acceleration sensor, a gyro sensor), a GPS sensor, a proximity sensor (for example, a LiDAR), or a vision/image sensor (for example, a camera).
The battery 24 may be a known battery such as a lithium-polymer battery. The power for driving the unmanned aerial vehicle 20 is not limited to electric power supplied from the battery 24 or the like, and may come from, for example, an internal combustion engine.

The camera 28 is an example of an imaging device. The type of the camera 28 is not particularly limited, and may be, for example, an ordinary digital camera, an omnidirectional camera, an infrared camera, or an image sensor such as a thermographic camera. The camera 28 may be connected to the main body 21 via a gimbal or the like (not shown) so as to be displaceable independently of the main body 21.
FIG. 4 is a block diagram showing the functional configuration of the control unit 11 according to the present embodiment. As shown in FIG. 4, the control unit 11 includes an input information acquisition unit 111, a display control unit 112, an acquisition unit 113, an estimation unit 114, a calculation unit 115, and an image information DB (database) 116. Each of these functional units can be realized by the processor 11a reading a program stored in the storage 11c into the memory 11b and executing it.
The input information acquisition unit 111 has a function of acquiring input information generated based on an operation on an image displayed on the touch panel 12. The input information here includes, for example, information on a position on the image displayed on the touch panel 12. A position on the image is, for example, the position of a pixel constituting the image. That is, the input information includes information indicating at which position on the image the user performed an operation. More specifically, the input information may include information on the designation of a point on the image displayed on the touch panel 12, and information on a touch operation on a button object displayed on the touch panel 12.

The display control unit 112 has a function of displaying the acquired image on the touch panel 12. The display control unit 112 also has a function of displaying, together with the image, information such as buttons, objects, and text for providing information to the user of the system 1 and for acquiring input information based on the user's operations. It may further have a function of causing the touch panel 12 to display output based on the input information obtained by the input information acquisition unit 111.

The acquisition unit 113 has a function of acquiring input information and image information. For example, the acquisition unit 113 may acquire, from the image information DB 116, image information obtained by imaging an object. The acquisition unit 113 may also acquire image information from the unmanned aerial vehicle 20 while it is performing imaging in real time.

The acquisition unit 113 may also acquire other information based on various kinds of information. Specifically, the acquisition unit 113 includes a first acquisition unit 1131 and a second acquisition unit 1132.
The first acquisition unit 1131 acquires the coordinate information, on the image displayed on the screen of the touch panel 12, of at least two reference points that are displayed on the screen and that each have coordinate information in world coordinates. The reference points referred to here can be given coordinate information in world coordinates (coordinates in real space), for example latitude information, longitude information, and altitude information. A reference point can be set on the screen by, for example, an operation by the user U on the touch panel 12. The coordinate information in world coordinates may be attached when the reference point is set on the screen, as described later, or may be predetermined and attached to the reference point in advance. The coordinate information on the image may be, for example, coordinate information in an XY coordinate system defined based on the pixels of the image. The specific behavior will be described later.
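To make the data concrete, the following is a minimal sketch — not part of the disclosure, with all names hypothetical — of how such a reference point might be represented: a pixel position on the displayed image paired with (possibly partial) world coordinates.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class ReferencePoint:
    # Position in the displayed image, in pixels (the XY coordinate
    # system defined by the image raster).
    u: float
    v: float
    # World coordinates (real space), e.g. metres in a local frame.
    # The height z is often known in advance (e.g. ground level = 0);
    # x and y are recovered later by coordinate transformation.
    z: float = 0.0
    x: Optional[float] = None
    y: Optional[float] = None
```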
A reference point may have, for example, coordinate information in a predetermined height direction in world coordinates. The coordinate information in the predetermined height direction may be coordinate information of the height of the ground. The coordinate information of the reference point in the planar directions in world coordinates may be acquired based on the coordinate information in the predetermined height direction, the coordinate information of the reference point on the image, and information on the imaging conditions at the time the image was captured. In other words, the coordinate information of the reference point in the planar directions in world coordinates can be obtained by applying, to the coordinate information of the reference point on the image, a coordinate transformation based on a transformation formula derived from the information on the imaging conditions and the coordinate information in the predetermined height direction.
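In standard pinhole-camera notation — given here only to illustrate the kind of transformation formula meant, not as notation from the disclosure — fixing the height component turns the projection equation into a solvable system for the planar world coordinates:

```latex
% A world point P = (X, Y, Z)^T projects to pixel p = (u, v)^T as
%   s (u, v, 1)^T = K [R | t] (X, Y, Z, 1)^T,
% where K holds the camera intrinsics and (R, t) the pose derived from
% the imaging conditions. Substituting the known height Z = Z_0 of the
% reference point leaves three equations in the three unknowns s, X, Y,
% so the planar coordinates (X, Y) are determined uniquely.
s \begin{pmatrix} u \\ v \\ 1 \end{pmatrix}
  = K \,[\, R \mid t \,]\,
    \begin{pmatrix} X \\ Y \\ Z_0 \\ 1 \end{pmatrix}
```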
The second acquisition unit 1132 has a function of acquiring the coordinate information, on the image, of at least one input point input on the screen. An input point is a point different from a reference point. An input point can be set on the screen by, for example, an operation by the user U on the touch panel 12.

The estimation unit 114 has a function of estimating positions and the like in world coordinates based on the coordinate information acquired by the acquisition unit 113. Specifically, the estimation unit 114 includes a first estimation unit 1141 and a second estimation unit 1142.
The first estimation unit 1141 has a function of estimating the position, in world coordinates, of a virtual wall surface passing through at least two reference points, based on the coordinate information of the at least two reference points in world coordinates, their coordinate information on the image, and the information on the imaging conditions at the time the image was captured. For example, if the (actual) wall surface of the object is assumed to be at a predetermined angle (for example, perpendicular) to the ground, the virtual wall surface can be considered to be at the same predetermined angle. The first estimation unit 1141 can then estimate the position of the virtual wall surface in world coordinates by performing a coordinate transformation as described later, for example. The position of the virtual wall surface in world coordinates means, for example, the set of real-space coordinates of the region corresponding to the virtual wall surface.

The second estimation unit 1142 has a function of estimating the world coordinates of an input point on the assumption that the input point is located on the virtual wall surface. For example, the second estimation unit 1142 can estimate the world coordinates of the input point by applying, to the coordinate information of the input point on the image, a coordinate transformation under the assumption that the input point lies on the virtual wall surface.
The calculation unit 115 has a function of calculating the distance, on the virtual wall surface in world coordinates, between a reference point and an input point. The calculation unit 115 may also calculate the distance, on the virtual wall surface in world coordinates, between one input point and another. The calculation unit 115 may further calculate the area of a region on the virtual wall surface whose vertices are points on the virtual wall surface in world coordinates corresponding to at least one of the reference points and the input points. Such a region may be a region whose vertices are points corresponding to reference points and input points, or a region whose vertices are points corresponding to a plurality of input points. A region whose vertices are points corresponding to a plurality of input points may be, for example, a region formed by vertices consisting only of points corresponding to input points. Such a region may be bounded by straight lines and/or curves. That is, the region may be polygonal in shape, or may have a shape such as one drawn freehand. Such a region is set by, for example, the calculation unit 115.
The image information DB 116 is a database that stores information (image information) on images captured by the unmanned aerial vehicle 20 or the like. Such an image may be a captured image obtained by the unmanned aerial vehicle 20, but is not limited to this example. For example, the image may be one captured from a high place or the like using a digital camera, a smartphone, a tablet, or the like. In that case, as long as the imaging position, imaging direction, and so on can be acquired by some method, that information can be used as the information on the imaging conditions.
Next, an example of a method of surveying the wall surface of an object using the information processing terminal 10 according to the present embodiment will be described with reference to a flowchart. FIG. 5 is a flowchart of a series of processes in the information processing terminal 10 according to the present embodiment.

First, the acquisition unit 113 acquires image information and information on the imaging conditions (step SQ101). Here, it is assumed that the unmanned aerial vehicle 20 captures and obtains an image of the wall surface W1 of the building S1.
FIG. 6 is a diagram showing an example of imaging by the unmanned aerial vehicle 20. As shown in FIG. 6, the unmanned aerial vehicle 20 is flying in the vicinity of the building S1, and the camera 28 faces the direction PV1 toward the wall surface W1. The camera 28 performs imaging so that the wall surface W1 is captured, and obtains a captured image. At that time, the unmanned aerial vehicle 20 acquires, as information on the imaging conditions, the height H1 of the camera 28, the horizontal imaging direction Dir1 of the camera 28, and the imaging angle Ang1 of the camera 28. The obtained image information and the information on the imaging conditions are transmitted from the unmanned aerial vehicle 20 to the information processing terminal 10. The acquisition unit 113 acquires this information. The image information may first be stored in the image information DB 116. The information on the imaging conditions may be linked to the image information.
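The triplet (H1, Dir1, Ang1) fixes a camera pose for the later transformations. The following is a minimal sketch — the function name, axis conventions, and world frame are our own assumptions, not part of the disclosure — of turning height, horizontal heading, and tilt into a world-to-camera rotation and translation:

```python
import numpy as np

def camera_pose(height, heading_rad, tilt_rad, position_xy=(0.0, 0.0)):
    """Build a world-to-camera pose (R, t) from the imaging conditions.

    height:      camera height H1 above the ground plane.
    heading_rad: horizontal imaging direction Dir1 (yaw about vertical).
    tilt_rad:    imaging angle Ang1 (pitch of the optical axis).
    World frame (assumed): x east, y north, z up.
    """
    cy, sy = np.cos(heading_rad), np.sin(heading_rad)
    cp, sp = np.cos(tilt_rad), np.sin(tilt_rad)
    yaw = np.array([[cy, -sy, 0.0], [sy, cy, 0.0], [0.0, 0.0, 1.0]])
    pitch = np.array([[1.0, 0.0, 0.0], [0.0, cp, -sp], [0.0, sp, cp]])
    R_wc = yaw @ pitch                    # camera-to-world rotation
    C = np.array([*position_xy, height])  # camera centre in world coords
    R = R_wc.T                            # world-to-camera rotation
    t = -R @ C                            # world-to-camera translation
    return R, t
```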
Next, the display control unit 112 displays an image on the screen of the touch panel 12 (step SQ103). FIG. 7 shows a display example on the screen V1 of the touch panel 12. An image D11 captured by the unmanned aerial vehicle 20 is displayed on the screen V1. The image D11 includes an image of the building S1. The building S1 includes a wall surface W1, a window W2, and a window W3. The screen V1 may also be provided with a button 101 for generating a virtual wall surface object, a button 102 for generating a region object corresponding to a window, and a button 103 for deleting a region once created.

Next, the input information acquisition unit 111 acquires an operation by the user U, and reference points are set (step SQ105). The reference points are points provided for acquiring the coordinates of the wall surface W1 in world coordinates, and at least two of them are provided. The reference points may be set on the image D11 by, for example, an operation by the user U. FIG. 8 is a diagram showing an example of setting reference points on the image D11. As shown in FIG. 8, two reference points 51 and 52 can be set on the image D11. For example, the two reference points 51 and 52 are placed so as to correspond to the positions of the lower vertices of the wall surface W1 to be surveyed.
Next, the first acquisition unit 1131 associates the coordinates of the reference points on the image with their coordinates in world coordinates, and acquires the coordinate information of the reference points in world coordinates (step SQ107). As shown in FIG. 8, at the lower vertices of the wall surface W1, the coordinate in the height direction in world coordinates (that is, the height from the ground) can be assumed to be 0. The height coordinate of the lower vertices of the wall surface W1 may also be a value other than 0, for example the actual elevation. The horizontal coordinates of the reference points 51 and 52 in world coordinates can be obtained from the coordinates of the reference points 51 and 52 on the image, the height coordinate in world coordinates, and the information on the imaging conditions of the camera 28, using a known coordinate transformation technique. Specifically, the coordinates of the reference points 51 and 52 on the image are transformed into the film coordinates of the camera 28, the film coordinates are transformed into the camera coordinates of the camera 28, and the camera coordinates are transformed into world coordinates, whereby the world coordinates of the reference points 51 and 52 are obtained. By substituting the height coordinates of the reference points 51 and 52 in world coordinates into the height component of the coordinate transformation formula, the horizontal coordinates of the reference points 51 and 52 in world coordinates can be obtained. The transformation formulas between the respective coordinate systems can be determined from the specifications of the camera 28 and the information on its imaging conditions. In such a coordinate transformation, a technique that takes into account information such as the specifications of the camera 28, for example the F-number and distortion of the lens, may be used.
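As a concrete sketch of this chain of transformations — a textbook back-projection under the conventions assumed above, not code from the disclosure, and ignoring lens distortion — a pixel whose world height is known can be mapped to horizontal world coordinates as follows:

```python
import numpy as np

def pixel_to_world_at_height(u, v, K, R, t, z_ground=0.0):
    """Back-project pixel (u, v) onto the horizontal plane z = z_ground.

    K is the 3x3 intrinsic matrix; (R, t) is the world-to-camera pose,
    e.g. from camera_pose() above.
    """
    # Direction, in world coordinates, of the viewing ray through the pixel.
    d = np.linalg.inv(R) @ np.linalg.inv(K) @ np.array([u, v, 1.0])
    # Camera centre in world coordinates.
    C = -np.linalg.inv(R) @ t
    # Intersect the ray C + s * d with the plane z = z_ground.
    s = (z_ground - C[2]) / d[2]
    if s <= 0:
        raise ValueError("plane is behind the camera for this pixel")
    return C + s * d  # world point (X, Y, z_ground)
```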
Next, the first estimation unit 1141 estimates the position of the virtual wall surface in world coordinates (step SQ109). FIG. 9 is a diagram for explaining the process of estimating the position of the virtual wall surface. As shown in FIG. 9, a virtual wall surface 60 passing through the reference points 51 and 52 is virtually provided on the image D11. The virtual wall surface 60 may or may not actually be displayed on the screen V1. The virtual wall surface 60 is obtained, for example, as follows. Here, the virtual wall surface 60 is assumed to be perpendicular to the ground, but the present technique is not limited to this example. First, the normal direction (yaw direction) of the virtual wall surface is calculated on the assumption that the virtual wall surface passes through the reference points 51 and 52. Then, a coordinate system consisting of the line segment connecting the reference points 51 and 52 and the calculated normal direction is defined as the coordinate system of the virtual wall surface. As a result, the coordinates on the image and the world coordinates of the virtual wall surface 60 are associated with each other, and the position of the virtual wall surface 60 in world coordinates is associated with the position of the actual wall surface W1 in real space.
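One way to realize this wall coordinate system — a sketch under the stated assumption of a wall perpendicular to the ground, with hypothetical names — is to build an orthonormal frame from the two reference points:

```python
import numpy as np

def wall_frame(p1, p2):
    """Frame of a vertical virtual wall passing through p1 and p2.

    p1, p2: world coordinates (3-vectors) of the two reference points,
    typically on the ground line of the wall. Returns (origin, x_axis,
    y_axis, normal): x runs along the wall base, y points straight up,
    and the normal is the horizontal (yaw) direction of the wall.
    """
    origin = np.asarray(p1, dtype=float)
    x_axis = np.asarray(p2, dtype=float) - origin
    x_axis[2] = 0.0                     # the base line lies in the ground plane
    x_axis /= np.linalg.norm(x_axis)
    y_axis = np.array([0.0, 0.0, 1.0])  # vertical, since the wall is upright
    normal = np.cross(x_axis, y_axis)   # horizontal normal of the wall plane
    return origin, x_axis, y_axis, normal
```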
Next, the input information acquisition unit 111 acquires an operation by the user U, and input points are set (step SQ111). The second acquisition unit 1132 acquires the coordinate information of the input points on the image. The second estimation unit 1142 then estimates the world coordinates of the input points on the assumption that they are located on the virtual wall surface 60 (step SQ113). A region whose vertices are the reference points and the input points is then set (step SQ115).
FIG. 10 is a diagram for explaining the processes of setting input points and setting a region. As shown in FIG. 10, input points 53 and 54 are set on the image D11 in addition to the reference points 51 and 52. Here, the input points 53 and 54 are treated as being located on the virtual wall surface 60. The coordinates of the input points 53 and 54 in world coordinates are obtained by the second estimation unit 1142 converting their coordinates on the image D11 via the coordinate system of the virtual wall surface 60.
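In the same pinhole terms as before — again a sketch, not the disclosed implementation — estimating the world coordinates of an input point amounts to intersecting its viewing ray with the wall plane instead of the ground plane:

```python
import numpy as np

def pixel_to_world_on_wall(u, v, K, R, t, wall_origin, wall_normal):
    """Back-project pixel (u, v) onto the virtual wall plane.

    The wall plane is given by a point wall_origin and its horizontal
    normal wall_normal, e.g. as returned by wall_frame() above.
    """
    d = np.linalg.inv(R) @ np.linalg.inv(K) @ np.array([u, v, 1.0])
    C = -np.linalg.inv(R) @ t
    denom = wall_normal @ d
    if abs(denom) < 1e-9:
        raise ValueError("viewing ray is parallel to the wall")
    s = (wall_normal @ (np.asarray(wall_origin) - C)) / denom
    return C + s * d  # world coordinates of the input point on the wall
```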
Further, a region 61 surrounded by the reference points 51 and 52 and the input points 53 and 54 is set. The region 61 is a region on the virtual wall surface 60, and corresponds to the wall surface W1 shown in the image D11.
The calculation unit 115 then calculates the area enclosed by the region 61 and the like (step SQ117). The calculation unit 115 uses the coordinate information, in world coordinates, of the reference points 51 and 52 and the input points 53 and 54 to calculate the real-space area of the region surrounded by these points. The calculation unit 115 may also calculate the real-space distance between two points. Information on such areas and distances may be output to the screen V1 or the like in any form by, for example, the display control unit 112.
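Since every vertex now has world coordinates on a single plane, the two-point distance is an ordinary Euclidean norm, and the area follows from the shoelace formula evaluated in the wall's 2D frame. A sketch, continuing the assumed conventions above:

```python
import numpy as np

def distance(p, q):
    """Real-space distance between two points on the wall."""
    return float(np.linalg.norm(np.asarray(p, dtype=float)
                                - np.asarray(q, dtype=float)))

def wall_polygon_area(points, origin, x_axis, y_axis):
    """Shoelace area of a polygon whose vertices lie on the wall plane.

    points: world coordinates of the vertices, in order around the
    polygon; (origin, x_axis, y_axis) as returned by wall_frame().
    """
    # Project each vertex into the wall's 2D coordinate system.
    uv = [((p - origin) @ x_axis, (p - origin) @ y_axis)
          for p in (np.asarray(q, dtype=float) for q in points)]
    area = 0.0
    for (u1, v1), (u2, v2) in zip(uv, uv[1:] + uv[:1]):
        area += u1 * v2 - u2 * v1
    return abs(area) / 2.0
```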
The method of surveying the wall surface of an object using the information processing terminal 10 according to the present embodiment has been described above. As a modification, it is also possible to set (estimate) a further region inside the region 61 set on the virtual wall surface 60.
FIG. 11 is a diagram for explaining a process of estimating a region according to a modification of the present embodiment. As shown in FIG. 11, inside the region 61 (corresponding to the wall surface W1) surrounded by the reference points 51 and 52 and the input points 53 and 54, a region 62 surrounded by four other input points 55, 56, 57, and 58 is set. The region 62 corresponds to the window W2 provided in the wall surface W1 of the building S1. The calculation unit 115 calculates the area of the region 61 and the area of the region 62, and by subtracting the area of the region 62 from the area of the region 61, the area of the wall surface W1 excluding the window W2 can be calculated. Although not shown in FIG. 11, a region surrounded by a plurality of input points can be set for the window W3 in the same manner as for the window W2.
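Continuing the sketch above with hypothetical values (p51 through p58 standing for the world coordinates estimated for the corresponding points), the net wall area is simply the difference of two shoelace areas:

```python
# p51, p52: reference points; p53..p58: input points, all mapped to
# world coordinates via pixel_to_world_at_height / pixel_to_world_on_wall.
wall = [p51, p52, p54, p53]      # region 61: outline of wall surface W1
window = [p55, p56, p57, p58]    # region 62: window W2

origin, x_axis, y_axis, normal = wall_frame(p51, p52)
net_area = (wall_polygon_area(wall, origin, x_axis, y_axis)
            - wall_polygon_area(window, origin, x_axis, y_axis))
```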
As described above, according to the information processing terminal 10 of the present embodiment, for an image in which the wall surface W1 of the building S1 is captured from the sky by the unmanned aerial vehicle 20, the position of a virtual wall surface corresponding to world coordinates can be estimated using at least two reference points whose world coordinates are known, input points can be set on the virtual wall surface, and the positions of the reference points and the input points in world coordinates can be estimated. In particular, if the world coordinates of the at least two reference points in the height direction (for example, the height from the ground) are determined, the coordinate information of the reference points in world coordinates can be acquired by coordinate transformation from the information on the imaging conditions. By setting a reference point at, for example, a portion corresponding to the ground of the building S1 (that is, a point whose height is zero), the coordinate information in world coordinates of the reference point set at the position corresponding to the base of the wall surface W1 of the building S1 can be determined uniquely and automatically.

Then, by defining the virtual wall surface from at least two reference points, the coordinate information in world coordinates of an input point set on the image can be determined uniquely, on the assumption that the input point lies on that virtual wall surface. This makes it possible to calculate, for example, the area of the wall surface in real space and the distance between two points. In this way, by utilizing the relationship between the shape of a building and its position in real space, the actual position, length, and area in real space can be calculated more accurately even for a wall surface imaged from an oblique direction.
The preferred embodiments of the present disclosure have been described above in detail with reference to the accompanying drawings, but the technical scope of the present disclosure is not limited to these examples. It is clear that a person having ordinary knowledge in the technical field of the present disclosure can conceive of various changes or modifications within the scope of the technical ideas described in the claims, and it is understood that these also naturally belong to the technical scope of the present disclosure.

The devices described in this specification may be realized as a single device, or may be realized by a plurality of devices connected in part or in whole via a network. For example, the control unit and the storage of the information processing terminal 10 may be realized by different servers connected to each other via a network.

The series of processes performed by the devices described in this specification may be realized using software, hardware, or a combination of software and hardware. A computer program for realizing each function of the information processing terminal 10 according to the present embodiment can be created and implemented on a PC or the like. A computer-readable recording medium storing such a computer program can also be provided. The recording medium is, for example, a magnetic disk, an optical disk, a magneto-optical disk, or a flash memory. The above computer program may also be distributed, for example, via a network without using a recording medium.

The processes described in this specification with reference to the flowchart do not necessarily have to be executed in the illustrated order. Some processing steps may be executed in parallel. Additional processing steps may be adopted, and some processing steps may be omitted.

The effects described in this specification are merely explanatory or illustrative and are not limiting. That is, the technique according to the present disclosure may exhibit, together with or in place of the above effects, other effects that are apparent to those skilled in the art from the description of this specification.
The following configurations also belong to the technical scope of the present disclosure.
(Item 1)
An information processing device comprising:
a first acquisition unit that acquires the coordinate information, on an image displayed on a screen, of at least two reference points that are displayed on the screen and that each have coordinate information in world coordinates;
a first estimation unit that estimates the position, in the world coordinates, of a virtual wall surface passing through the at least two reference points, based on the coordinate information of the at least two reference points in the world coordinates, the coordinate information on the image, and information on the imaging conditions at the time the image was captured;
a second acquisition unit that acquires the coordinate information, on the image, of an input point input on the screen; and
a second estimation unit that estimates the world coordinates of the input point on the assumption that the input point is located on the virtual wall surface.
(Item 2)
The information processing device according to item 1, wherein
the reference points have coordinate information in a predetermined height direction in the world coordinates, and
the coordinate information of the reference points in the planar directions in the world coordinates is acquired based on the coordinate information in the predetermined height direction, the coordinate information of the reference points on the image, and the information on the imaging conditions at the time the image was captured.
(Item 3)
The information processing device according to item 2, wherein the coordinate information in the predetermined height direction in the world coordinates is coordinate information of the height of the ground.
(Item 4)
The information processing device according to any one of items 1 to 3, further comprising a calculation unit that calculates the distance between the reference point and the input point on the virtual wall surface in the world coordinates.
(Item 5)
The information processing device according to any one of items 1 to 4, further comprising a calculation unit that calculates the area of a region on the virtual wall surface whose vertices are points on the virtual wall surface in the world coordinates corresponding to at least one of the reference points and the input point.
(Item 6)
The information processing device according to item 5, wherein the region includes a region whose vertices are points corresponding to the reference points and the input point.
(Item 7)
The information processing device according to item 5 or 6, wherein the region includes a region whose vertices are points corresponding to a plurality of the input points.
(Item 8)
The information processing device according to any one of items 1 to 7, wherein the information on the imaging conditions includes information on the imaging position of an imaging device and the imaging direction of the imaging device.
(Item 9)
The information processing device according to any one of items 1 to 8, wherein the image includes a captured image obtained by imaging by an unmanned aerial vehicle.
(Item 10)
A method in which a processor:
acquires the coordinate information, on an image displayed on a screen, of at least two reference points that are displayed on the screen and that each have coordinate information in world coordinates;
estimates the position, in the world coordinates, of a virtual wall surface passing through the at least two reference points, based on the coordinate information of the at least two reference points in the world coordinates, the coordinate information on the image, and information on the imaging conditions at the time the image was captured;
acquires the coordinate information, on the image, of an input point input on the screen; and
estimates the world coordinates of the input point on the assumption that the input point is located on the virtual wall surface.
(Item 11)
A program for causing a computer to function as:
a first acquisition unit that acquires the coordinate information, on an image displayed on a screen, of at least two reference points that are displayed on the screen and that each have coordinate information in world coordinates;
a first estimation unit that estimates the position, in the world coordinates, of a virtual wall surface passing through the at least two reference points, based on the coordinate information of the at least two reference points in the world coordinates, the coordinate information on the image, and information on the imaging conditions at the time the image was captured;
a second acquisition unit that acquires the coordinate information, on the image, of an input point input on the screen; and
a second estimation unit that estimates the world coordinates of the input point on the assumption that the input point is located on the virtual wall surface.
1 System
10 Information processing terminal
11 Control unit
12 Touch panel
20 Unmanned aerial vehicle
28 Camera
111 Input information acquisition unit
112 Display control unit
113 Acquisition unit
114 Estimation unit
115 Calculation unit
1131 First acquisition unit
1132 Second acquisition unit
1141 First estimation unit
1142 Second estimation unit

Claims (11)

1.  An information processing device comprising:
    a first acquisition unit that acquires the coordinate information, on an image displayed on a screen, of at least two reference points that are displayed on the screen and that each have coordinate information in world coordinates;
    a first estimation unit that estimates the position, in the world coordinates, of a virtual wall surface passing through the at least two reference points, based on the coordinate information of the at least two reference points in the world coordinates, the coordinate information on the image, and information on the imaging conditions at the time the image was captured;
    a second acquisition unit that acquires the coordinate information, on the image, of an input point input on the screen; and
    a second estimation unit that estimates the world coordinates of the input point on the assumption that the input point is located on the virtual wall surface.
2.  The information processing device according to claim 1, wherein
    the reference points have coordinate information in a predetermined height direction in the world coordinates, and
    the coordinate information of the reference points in the planar directions in the world coordinates is acquired based on the coordinate information in the predetermined height direction, the coordinate information of the reference points on the image, and the information on the imaging conditions at the time the image was captured.
3.  The information processing device according to claim 2, wherein the coordinate information in the predetermined height direction in the world coordinates is coordinate information of the height of the ground.
4.  The information processing device according to any one of claims 1 to 3, further comprising a calculation unit that calculates the distance between the reference point and the input point on the virtual wall surface in the world coordinates.
5.  The information processing device according to any one of claims 1 to 4, further comprising a calculation unit that calculates the area of a region on the virtual wall surface whose vertices are points on the virtual wall surface in the world coordinates corresponding to at least one of the reference points and the input point.
6.  The information processing device according to claim 5, wherein the region includes a region whose vertices are points corresponding to the reference points and the input point.
7.  The information processing device according to claim 5 or 6, wherein the region includes a region whose vertices are points corresponding to a plurality of the input points.
8.  The information processing device according to any one of claims 1 to 7, wherein the information on the imaging conditions includes information on the imaging position of an imaging device and the imaging direction of the imaging device.
9.  The information processing device according to any one of claims 1 to 8, wherein the image includes a captured image obtained by imaging by an unmanned aerial vehicle.
10.  A method in which a processor:
    acquires the coordinate information, on an image displayed on a screen, of at least two reference points that are displayed on the screen and that each have coordinate information in world coordinates;
    estimates the position, in the world coordinates, of a virtual wall surface passing through the at least two reference points, based on the coordinate information of the at least two reference points in the world coordinates, the coordinate information on the image, and information on the imaging conditions at the time the image was captured;
    acquires the coordinate information, on the image, of an input point input on the screen; and
    estimates the world coordinates of the input point on the assumption that the input point is located on the virtual wall surface.
11.  A program for causing a computer to function as:
    a first acquisition unit that acquires the coordinate information, on an image displayed on a screen, of at least two reference points that are displayed on the screen and that each have coordinate information in world coordinates;
    a first estimation unit that estimates the position, in the world coordinates, of a virtual wall surface passing through the at least two reference points, based on the coordinate information of the at least two reference points in the world coordinates, the coordinate information on the image, and information on the imaging conditions at the time the image was captured;
    a second acquisition unit that acquires the coordinate information, on the image, of an input point input on the screen; and
    a second estimation unit that estimates the world coordinates of the input point on the assumption that the input point is located on the virtual wall surface.
PCT/JP2021/033765 2020-11-30 2021-09-14 Information processing device, method, and program WO2022113482A1 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
JP2022565078A JPWO2022113482A1 (en) 2020-11-30 2021-09-14

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
JP2020-199019 2020-11-30
JP2020199019 2020-11-30

Publications (1)

Publication Number Publication Date
WO2022113482A1 true WO2022113482A1 (en) 2022-06-02

Family

ID=81755528

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/JP2021/033765 WO2022113482A1 (en) 2020-11-30 2021-09-14 Information processing device, method, and program

Country Status (2)

Country Link
JP (1) JPWO2022113482A1 (en)
WO (1) WO2022113482A1 (en)

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2000221037A (en) * 1999-01-29 2000-08-11 Topcon Corp Automatic surveying machine and three-dimensional measuring method
JP2001033245A (en) * 1999-07-19 2001-02-09 Maeda Science:Kk Position measuring method of point on flat surface
JP2004163292A (en) * 2002-11-13 2004-06-10 Topcon Corp Survey system and electronic storage medium

Also Published As

Publication number Publication date
JPWO2022113482A1 (en) 2022-06-02

Similar Documents

Publication Publication Date Title
JP6765512B2 (en) Flight path generation method, information processing device, flight path generation system, program and recording medium
US20200012756A1 (en) Vision simulation system for simulating operations of a movable platform
CN111344644A (en) Techniques for motion-based automatic image capture
US11556681B2 (en) Method and system for simulating movable object states
JP6829513B1 (en) Position calculation method and information processing system
WO2019230604A1 (en) Inspection system
WO2021251441A1 (en) Method, system, and program
JP2023100642A (en) inspection system
US20210404840A1 (en) Techniques for mapping using a compact payload in a movable object environment
US20230177707A1 (en) Post-processing of mapping data for improved accuracy and noise-reduction
WO2020019175A1 (en) Image processing method and apparatus, and photographing device and unmanned aerial vehicle
US20210185235A1 (en) Information processing device, imaging control method, program and recording medium
US20220187828A1 (en) Information processing device, information processing method, and program
JP6681101B2 (en) Inspection system
JP7004374B1 (en) Movement route generation method and program of moving object, management server, management system
WO2022113482A1 (en) Information processing device, method, and program
US20220113421A1 (en) Online point cloud processing of lidar and camera data
JP2020012774A (en) Method for measuring building
JP6684012B1 (en) Information processing apparatus and information processing method
WO2021130980A1 (en) Aircraft flight path display method and information processing device
WO2022070851A1 (en) Method, system, and program
JP2023083072A (en) Method, system and program
JP2020095519A (en) Shape estimation device, shape estimation method, program, and recording medium
US20240013460A1 (en) Information processing apparatus, information processing method, program, and information processing system
JP6681102B2 (en) Inspection system

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application (Ref document number: 21897460; Country of ref document: EP; Kind code of ref document: A1)
ENP Entry into the national phase (Ref document number: 2022565078; Country of ref document: JP; Kind code of ref document: A)
NENP Non-entry into the national phase (Ref country code: DE)
122 Ep: pct application non-entry in european phase (Ref document number: 21897460; Country of ref document: EP; Kind code of ref document: A1)