WO2019146551A1 - Information processing device - Google Patents

Information processing device

Info

Publication number
WO2019146551A1
Authority
WO
WIPO (PCT)
Prior art keywords
imaging
required time
processing
calculation unit
size
Prior art date
Application number
PCT/JP2019/001696
Other languages
French (fr)
Japanese (ja)
Inventor
中川 宏
山田 和宏
陽平 大野
雄一朗 瀬川
Original Assignee
株式会社Nttドコモ
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 株式会社Nttドコモ filed Critical 株式会社Nttドコモ
Priority to JP2019567059A priority Critical patent/JP6957651B2/en
Publication of WO2019146551A1 publication Critical patent/WO2019146551A1/en

Classifications

    • G: PHYSICS
    • G05: CONTROLLING; REGULATING
    • G05D: SYSTEMS FOR CONTROLLING OR REGULATING NON-ELECTRIC VARIABLES
    • G05D1/00: Control of position, course or altitude of land, water, air, or space vehicles, e.g. automatic pilot
    • G05D1/10: Simultaneous control of position or course in three dimensions

Definitions

  • The present invention relates to a technique for estimating the time required for processing that uses a flying object.
  • Patent Document 1 describes generating a map in which a plurality of cells are arranged over a work area, calculating the area S of the work area based on the map, and calculating the required work time from the calculated area S.
  • The calculation accuracy of the NDVI varies depending on the speed, altitude, and other conditions of the flying object flying above the field. For example, the higher the flight speed of the flying object, the worse the accuracy of the calculated NDVI. Likewise, the higher the flight altitude, the worse the accuracy of the calculated NDVI.
  • An information processing apparatus comprising: a specification unit that specifies a target accuracy, which is a target value of accuracy for processing performed by a flying object on a processing target area, and the size of the processing target area; and a calculation unit that calculates the time required for the flying object to perform the processing with the specified target accuracy on the processing target area of the specified size.
  • The processing is performed based on imaging of the ground by the flying object, and the calculation unit may calculate the required time based on the size of the range covered by one imaging operation.
  • The calculation unit may change the size of the effective range in the captured image according to conditions.
  • The calculation unit may change the size of the effective range according to the amount of light at the time of imaging or the imaging timing.
  • The calculation unit may correct the required time according to the amount of light at the time of imaging or the imaging timing.
  • The calculation unit may compare the lower limit of the required time when calibration for the processing is executed with the target required time, and generate information according to the comparison result.
  • According to the present invention, when the flying object performs processing with a required accuracy, the time required for that processing can be known.
  • FIG. 1 is a diagram showing an example of the configuration of a flight control system 1.
  • FIG. 2 is a view showing an example of the appearance of a flying object 10.
  • FIG. 3 is a diagram showing the hardware configuration of the flying object 10.
  • FIG. 4 is a diagram showing the hardware configuration of a server device 20.
  • FIG. 1 is a diagram showing an example of the configuration of a flight control system 1.
  • The flight control system 1 is a system that controls the flight of the flying objects 10.
  • The flight control system 1 includes a plurality of flying objects 10 and a server device 20.
  • The flying objects 10 and the server device 20 can communicate with each other via a network.
  • Each flying object 10 performs a process of imaging the plants in a processing target area such as a field.
  • The server device 20 is an example of the information processing apparatus according to the present invention, and performs processing to calculate the normalized difference vegetation index (NDVI) from the spectral reflectance characteristics of the plants in the processing target area, using the imaging results of the flying objects 10.
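The NDVI mentioned above is the standard normalized difference of near-infrared and red reflectance. The patent does not spell out the computation, so the following is only a minimal illustrative sketch, with hypothetical reflectance values:

```python
def ndvi(nir: float, red: float) -> float:
    """Normalized Difference Vegetation Index from near-infrared (nir)
    and red spectral reflectance values, each assumed to lie in [0, 1]."""
    return (nir - red) / (nir + red)
```

Healthy vegetation reflects strongly in the near-infrared band, so its NDVI approaches +1, while bare soil or stressed plants yield values near zero.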
  • The server device 20 also performs processing for calculating the time required for the flying object 10 to perform the imaging process with a required accuracy.
  • FIG. 2 is a view showing an example of the appearance of the flying object 10.
  • the flying object 10 is, for example, a so-called drone, and includes a propeller 101, a drive device 102, and a battery 103.
  • the propeller 101 rotates about an axis. As the propeller 101 rotates, the flying object 10 flies.
  • the driving device 102 powers and rotates the propeller 101.
  • the drive device 102 includes, for example, a motor and a transmission mechanism that transmits the power of the motor to the propeller 101.
  • the battery 103 supplies power to each part of the aircraft 10 including the drive device 102.
  • FIG. 3 is a diagram showing the hardware configuration of the aircraft 10.
  • the flying object 10 is physically configured as a computer device including a processor 11, a memory 12, a storage 13, a communication device 14, a positioning device 15, an imaging device 16, a beacon device 17, a bus 18, and the like.
  • the term “device” can be read as a circuit, a device, a unit, or the like.
  • the processor 11 operates an operating system, for example, to control the entire computer.
  • the processor 11 may be configured by a central processing unit (CPU) including an interface with a peripheral device, a control device, an arithmetic device, a register, and the like.
  • CPU central processing unit
  • the processor 11 reads a program (program code), a software module or data from the storage 13 and / or the communication device 14 to the memory 12 and executes various processing according to these.
  • As the program, a program that causes a computer to execute at least a part of the operations of the flying object 10 is used.
  • the various processes performed in the aircraft 10 may be performed by one processor 11 or may be performed simultaneously or sequentially by two or more processors 11.
  • the processor 11 may be implemented by one or more chips.
  • the program may be transmitted from the network via a telecommunication line.
  • The memory 12 is a computer readable recording medium, and may include, for example, at least one of a ROM (Read Only Memory), an EPROM (Erasable Programmable ROM), an EEPROM (Electrically Erasable Programmable ROM), and a RAM (Random Access Memory).
  • the memory 12 may be called a register, a cache, a main memory (main storage device) or the like.
  • the memory 12 can store a program (program code), a software module, and the like that can be executed to implement the flight control method according to the embodiment of the present invention.
  • The storage 13 is a computer readable recording medium, and may be, for example, an optical disc such as a CD-ROM (Compact Disc ROM), a hard disk drive, a flexible disk, a magneto-optical disk (for example, a compact disc, a digital versatile disc, or a Blu-ray (registered trademark) disc), a smart card, a flash memory (for example, a card, a stick, or a key drive), a floppy (registered trademark) disk, a magnetic strip, or the like.
  • the storage 13 may be called an auxiliary storage device.
  • the communication device 14 is hardware (transmission / reception device) for performing communication between computers via a wired and / or wireless network, and is also called, for example, a network device, a network controller, a network card, a communication module, or the like.
  • the positioning device 15 measures the three-dimensional position of the aircraft 10.
  • the positioning device 15 is, for example, a GPS (Global Positioning System) receiver, and measures the current position of the aircraft 10 based on GPS signals received from a plurality of satellites.
  • the imaging device 16 captures an image around the flying object 10.
  • the imaging device 16 is, for example, a camera, and captures an image by forming an image on an imaging element using an optical system.
  • the imaging device 16 captures an image of a predetermined range, for example, below the flying object 10.
  • the beacon device 17 transmits a beacon signal of a predetermined frequency, and also receives a beacon signal transmitted from another flying object 10.
  • the reach of this beacon signal is a predetermined distance such as 100 m.
  • the beacon signal includes aircraft identification information that identifies the aircraft 10 that transmits the beacon signal. This flying body identification information is used to prevent collision of the flying bodies 10 with each other.
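The patent does not specify the avoidance logic itself, only that the identification information in received beacons is used to prevent collisions. A purely illustrative sketch of that check, with hypothetical identifiers, might be:

```python
def detects_other_aircraft(own_id: str, received_beacon_ids: list[str]) -> bool:
    """True when a beacon from a different flying object is received,
    i.e. another aircraft is within the ~100 m beacon range and some
    collision-avoidance action is needed."""
    return any(beacon_id != own_id for beacon_id in received_beacon_ids)
```

A flying object would ignore its own beacon (which it may also receive) and react only to identifiers that differ from its own.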
  • the devices such as the processor 11 and the memory 12 described above are connected by a bus 18 for communicating information.
  • the bus 18 may be configured as a single bus or may be configured as different buses among the devices.
  • FIG. 4 is a diagram showing a hardware configuration of the server device 20.
  • the server device 20 is physically configured as a computer device including a processor 21, a memory 22, a storage 23, a communication device 24, a bus 25 and the like.
  • the processor 21, the memory 22, the storage 23, the communication device 24, and the bus 25 are similar to the processor 11, the memory 12, the storage 13, the communication device 14, and the bus 18 described above, and thus the description thereof is omitted.
  • FIG. 5 is a diagram showing an example of a functional configuration of the server device 20.
  • Each function of the server device 20 is realized by loading predetermined software (a program) onto hardware such as the processor 21 and the memory 22, causing the processor 21 to perform operations, and controlling communication by the communication device 24 and the reading and/or writing of data in the memory 22 and the storage 23.
  • the tracking unit 200 records the flying object identification information of the flying object 10 under control of the server device 20 and the flight status thereof.
  • the flight status includes the position where the flying object 10 is flying and the date and time at that position.
  • the tracking unit 200 records position information and date and time information notified from the aircraft 10.
  • the tracking unit 200 determines whether the position information and the date and time information are within a previously planned flight plan, and records the determination result.
  • the input unit 201 inputs, to the server device 20, the target accuracy for the processing performed on the field which is the processing target area by the flying object 10 and the size of the processing target area.
  • the target accuracy here is, for example, a value predetermined as the accuracy of the NDVI.
  • the input unit 201 inputs image data indicating an image captured by the flying object 10 to the server device 20.
  • the identifying unit 202 identifies, based on the information input by the input unit 201, the target accuracy in the process performed on the processing target area by the aircraft 10 and the size of the processing target area.
  • The calculation unit 203 calculates the time required for the flying object 10 to perform the imaging process so that the target accuracy specified by the specifying unit 202 can be achieved for the processing target area of the size specified by the specifying unit 202. This process is performed based on the imaging of the ground (field) by the flying object 10, and the calculation unit 203 calculates the required time based on the size of the effective range in the captured image. In addition, the calculation unit 203 changes the size of the effective range according to conditions. More specifically, the calculation unit 203 determines the size of the effective range using the amount of light at the time of imaging: the larger the amount of light, the larger the effective range, and the smaller the amount of light, the smaller the effective range.
  • FIG. 6 is a diagram for explaining this effective range.
  • the lens used when the flying object 10 captures an image for NDVI calculation is a fisheye lens.
  • The image captured by this fisheye lens is a two-dimensional circular image, and the NDVI is calculated from the spectral reflectance characteristics of the plants in this image. Since the NDVI calculation result varies depending on the elevation angle at the time of imaging, the result near the edge of the captured image in particular changes significantly; for this reason, only a central effective range of the captured image is used for the calculation.
  • the effective range varies depending on the flight altitude of the flying object 10. That is, when the flight altitude is high, the imaging range P in one imaging becomes wide, and as a result, the effective range Pu also widens. On the other hand, when the flight altitude is low, the imaging range P in one imaging also becomes narrow, and as a result, the effective range Pu also becomes narrow.
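The scaling of the imaging range P and effective range Pu with flight altitude can be sketched geometrically. The field of view and the light-dependent scaling factor below are hypothetical, since the patent gives no concrete numbers:

```python
import math

def imaging_range(h: float, fov_deg: float = 90.0) -> float:
    """Diameter P of the ground range covered in one image at altitude h
    (metres), for an assumed camera field of view fov_deg."""
    return 2.0 * h * math.tan(math.radians(fov_deg) / 2.0)

def effective_range(h: float, fov_deg: float = 90.0,
                    light_factor: float = 1.0) -> float:
    """Effective range Pu: the usable central part of the imaging range,
    scaled down (light_factor < 1) when the amount of light is small."""
    return imaging_range(h, fov_deg) * light_factor
```

With these assumptions, doubling the altitude doubles both P and Pu, matching the qualitative relationship described above.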
  • the calculation unit 203 calculates the required time based on the size of the range covered by one imaging.
  • When calculating the NDVI of the field that is the processing target area, it is desirable to calculate the NDVI from the spectral reflectance characteristics of the plants over a predetermined ratio (for example, 10%) of the size of the whole area.
  • Depending on the size of the effective range, the number of imaging operations also differs. Specifically, when the effective range in the captured image is large (larger than a certain value), the number of imaging operations decreases, and when the effective range is small (smaller than a certain value), the number of imaging operations increases.
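The relationship between effective range and number of imaging operations can be made concrete with a small sketch. The 10% coverage ratio follows the example above; the area figures are hypothetical:

```python
import math

def num_images_required(total_area: float, coverage_ratio: float,
                        effective_area_per_image: float) -> int:
    """Number of images needed so that the effective ranges together cover
    the desired fraction (e.g. 0.10) of the whole processing target area."""
    target_area = total_area * coverage_ratio
    return math.ceil(target_area / effective_area_per_image)
```

For a 10,000 m² field sampled at 10% with 50 m² of effective area per image, 20 imaging operations are needed; a larger effective range per image reduces that count, as the text states.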
  • When the calculated required time exceeds the upper limit of the target required time, the generation unit 204 generates information on the size of the processing target area that can be processed within that upper limit, for example, the ratio of the size of the processing target area that can be processed within the upper limit to the size of the entire processing target area.
  • From this information, the degree of processing completion over the entire processing target area can be estimated.
  • the evaluation value calculation unit 206 calculates NDVI from the spectral reflection characteristics of the plants in the image based on the image data input by the input unit 201.
  • the output unit 205 outputs the required time calculated by the calculation unit 203, the information generated by the generation unit 204, or the NDVI calculated by the evaluation value calculation unit 206.
  • In the flying object 10, each process is executed by loading predetermined software (a program) onto hardware such as the processor 11 and the memory 12, causing the processor 11 to perform operations, and controlling communication by the communication device 14 and the reading and/or writing of data in the memory 12 and the storage 13. The same applies to the server device 20.
  • FIG. 8 is a flowchart showing an example of the required time calculation operation of the server device 20.
  • the input unit 201 inputs, to the server device 20, the target accuracy for the process performed on the field which is the processing target area by the flying object 10 and the size of the processing target area (step S11). This input may be performed by an input device connected to the server device 20 or may be performed by a remote control device of the aircraft 10.
  • the identifying unit 202 identifies, based on the information input by the input unit 201, the target accuracy for the process performed by the flying object 10 on the processing target area and the size of the processing target area (step S12).
  • The calculation unit 203 calculates the time required for the flying object 10 to perform the imaging process so as to achieve the specified target accuracy for the processing target area of the specified size, as described below (step S13).
  • The accuracy A is expressed by a function f of the flight altitude h of the flying object 10, as shown in the following equation.
  • A = f(h)
  • This function f is designed such that the accuracy A becomes lower as the flight altitude h of the flying object 10 becomes higher.
  • The required time T is expressed by a function g of the flight altitude h and the flight speed v of the flying object 10, as shown in the following equation.
  • T = g(h, v)
  • The flight speed v is the speed at which the flying object 10 moves between the respective imaging positions.
  • The function g may also include the area and shape of the processing target area (field), the effective range Pu, the imaging stop time (the time for which the flying object temporarily stops at each imaging position), the number of imaging operations, and so on as variables.
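The patent gives no concrete form for g, so the following is only a sketch of one plausible T = g(h, v) under stated assumptions: a square ground footprint per image, a hypothetical 90° field of view, a fixed stop time per imaging position, and travel of one footprint width between positions:

```python
import math

def required_time(area: float, h: float, v: float,
                  fov_deg: float = 90.0, stop_time: float = 2.0) -> float:
    """Illustrative required-time model T = g(h, v): total mission time is
    travel between imaging positions plus a brief stop at each position.
    area is the processing target area in m^2, h the altitude in m,
    v the flight speed in m/s."""
    # Side length of the (assumed square) ground range covered per image.
    footprint = 2.0 * h * math.tan(math.radians(fov_deg) / 2.0)
    # Images needed to tile the processing target area.
    n_images = math.ceil(area / (footprint ** 2))
    # Time spent flying one footprint-width between successive positions.
    travel = n_images * footprint / v
    return travel + n_images * stop_time
```

Consistent with the text, lowering h shrinks the footprint, multiplies the number of imaging positions, and lengthens T, while raising h (at the cost of accuracy A) shortens T.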
  • First, the flight altitude h corresponding to the target accuracy A is determined from the function f.
  • Once the flight altitude h is determined, the imaging range to be covered by the flying object 10 in one imaging operation is determined.
  • A flight plan including various flight conditions is then determined based on this single imaging range and the size of the processing target area.
  • The flight plan determines the time T required for the flying object 10 to perform the processing. Therefore, as shown in FIG. 7, the required time T becomes long when the imaging process is performed at a low flight altitude with a dense flight path over the processing target area, and becomes short when it is performed at a high flight altitude with a sparse flight path.
  • If the required time T calculated by the calculation unit 203 exceeds the upper limit of the target required time (step S14; NO), the generation unit 204 generates information on the size of the processing target area that can be processed within that upper limit.
  • Specifically, where S is the size of the entire processing target area and Sl is the size of the processing target area that can be processed within the upper limit, the generation unit 204 generates (Sl / S) × 100 (%) as this information.
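The generated value is elementary; a one-line sketch with illustrative names (Sl and S as in the text) is:

```python
def processable_ratio(area_at_limit: float, total_area: float) -> float:
    """(Sl / S) x 100: percentage of the whole processing target area
    that can be processed within the target time's upper limit."""
    return area_at_limit / total_area * 100.0
```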
  • the output unit 205 outputs the required time T calculated by the calculation unit 203 or information ((Sl / S) ⁇ 100 (%)) generated by the generation unit 204 (step S16).
  • Modifications: The present invention is not limited to the embodiment described above, and may be modified as follows.
  • the calculation unit 203 may compare the lower limit of the required time according to the execution of the calibration for the process with the target required time, and generate information according to the comparison result. This makes it possible to determine whether the processing can be completed within the target time when calibration is performed. Specifically, before the imaging process, for example, an object of a predetermined color such as a white board is imaged to perform calibration of the imaging apparatus. By this calibration, the lower limit of the time required for the aircraft 10 to perform processing at a certain flight speed and flight altitude with a predetermined target accuracy can be determined. The calculation unit 203 compares the lower limit of the required time according to the execution of the calibration for the process with the target required time, and generates information (for example, the ratio of the former to the latter) according to the comparison result.
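The patent does not give a formula for this comparison; as one possible form of "information according to the comparison result", the ratio of the former (the calibration-constrained lower bound) to the latter (the target time) could be sketched as:

```python
def calibration_margin(lower_bound_time: float, target_time: float) -> float:
    """Ratio of the post-calibration lower bound on required time to the
    target required time. A value <= 1.0 means the processing can finish
    within the target time even with calibration; > 1.0 means it cannot."""
    return lower_bound_time / target_time
```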
  • information for example, the ratio of the former to the latter
  • the calculation unit 203 may change the effective range Pu in accordance with the imaging timing.
  • Modification 4: The process performed by the flying object is not limited to the imaging of a field or the like. Further, each equation described above is merely an example, and constants or coefficients may be added to them as desired.
  • Each functional block may be realized by one physically and/or logically coupled device, or by two or more physically and/or logically separated devices connected directly and/or indirectly (for example, by wire and/or wirelessly).
  • at least a part of the functions of the server device 20 may be implemented on the aircraft 10.
  • at least part of the functions of the aircraft 10 may be implemented on the server device 20.
  • Each aspect/embodiment described in this specification may be applied to systems using LTE (Long Term Evolution), LTE-A (LTE-Advanced), SUPER 3G, IMT-Advanced, 4G, 5G, FRA (Future Radio Access), W-CDMA (registered trademark), GSM (registered trademark), CDMA2000, UMB (Ultra Mobile Broadband), IEEE 802.11 (Wi-Fi), IEEE 802.16 (WiMAX), IEEE 802.20, UWB (Ultra-Wide Band), Bluetooth (registered trademark), other appropriate systems, and/or next-generation systems extended based on these.
  • The terms "system" and "network" as used herein are used interchangeably.
  • Radio resources may be indicated by an index.
  • As used herein, the terms "determining" and "deciding" may encompass a wide variety of operations. For example, "determining" and "deciding" may include judging, calculating, computing, processing, deriving, investigating, looking up (for example, searching in a table, a database, or another data structure), and ascertaining. Also, "determining" and "deciding" may include receiving (for example, receiving information), transmitting (for example, transmitting information), inputting, outputting, and accessing (for example, accessing data in a memory).
  • the present invention may be provided as a flight control method or an information processing method including the steps of processing performed in the flight control system 1 or the server device 20. Also, the present invention may be provided as a program executed on the airframe 10 or the server device 20. Such a program may be provided in the form of being recorded in a recording medium such as an optical disk, or may be provided in the form of being downloaded to a computer via a network such as the Internet and installed and made available. It is possible.
  • Software, instructions, etc. may be sent and received via a transmission medium.
  • When software is transmitted from a website, a server, or another remote source using wired technology such as coaxial cable, optical fiber cable, twisted pair, and digital subscriber line (DSL) and/or wireless technology such as infrared, radio, and microwave, these wired and/or wireless technologies are included within the definition of a transmission medium.
  • Data, instructions, commands, information, signals, bits, symbols, chips, and the like may be represented by voltages, currents, electromagnetic waves, magnetic fields or particles, optical fields or photons, or any combination of these.
  • the channels and / or symbols may be signals.
  • the signal may be a message.
  • the component carrier (CC) may be called a carrier frequency, a cell or the like.
  • Any reference to an element using a designation such as "first" or "second" as used herein does not generally limit the quantity or order of those elements. These designations may be used herein as a convenient way of distinguishing between two or more elements. Thus, reference to first and second elements does not mean that only two elements may be employed, or that the first element must in some way precede the second.
  • each device described above may be replaced with a “unit”, a “circuit”, a “device” or the like.
  • 1: Flight control system, 10: Flying object, 20: Server device, 21: Processor, 22: Memory, 23: Storage, 24: Communication device, 25: Bus, 200: Tracking unit, 201: Input unit, 202: Identification unit, 203: Calculation unit, 204: Generation unit, 205: Output unit, 206: Evaluation value calculation unit.

Abstract

The present invention addresses the problem of knowing the time required for processing when a flying object performs the processing with a required accuracy. A server device (20) is one example of an information processing device according to the present invention; the server device (20) performs processing for calculating the normalized difference vegetation index (NDVI) from the spectral reflectance properties of vegetation using the imaging results from a flying object (10). In addition, the server device (20) performs processing for calculating the time required for the flying object (10) to perform the processing with the required accuracy.

Description

Information processing device
 The present invention relates to a technique for estimating the time required for processing that uses a flying object.
 For example, Patent Document 1 describes generating a map in which a plurality of cells are arranged over a work area, calculating the area S of the work area based on the map, and calculating the required work time from the calculated area S.
JP 2017-176115 A
 For example, when a flying object images the plants in a field and calculates the normalized difference vegetation index (NDVI) from the spectral reflectance characteristics of those plants, the calculation accuracy of the NDVI varies depending on the speed, altitude, and other conditions of the flying object flying above the field. For example, the higher the flight speed of the flying object, the worse the accuracy of the calculated NDVI. Likewise, the higher the flight altitude, the worse the accuracy of the calculated NDVI.
 In view of this background, it is an object of the present invention to make it possible to know the time required for processing when a flying object performs the processing with a required accuracy.
 In order to solve the above problems, the present invention provides an information processing apparatus comprising: a specification unit that specifies a target accuracy, which is a target value of accuracy for processing performed by a flying object on a processing target area, and the size of the processing target area; and a calculation unit that calculates the time required for the flying object to perform the processing with the specified target accuracy on the processing target area of the specified size.
 The processing may be performed based on imaging of the ground by the flying object, and the calculation unit may calculate the required time based on the size of the range covered by one imaging operation.
 The calculation unit may change the size of the effective range in the captured image according to conditions.
 The calculation unit may change the size of the effective range according to the amount of light at the time of imaging or the imaging timing.
 The calculation unit may correct the required time according to the amount of light at the time of imaging or the imaging timing.
 The apparatus may further include a generation unit that, when the required time calculated by the calculation unit exceeds the upper limit of the target required time, generates information on the size of the processing target area that can be processed within that upper limit.
 The calculation unit may compare the lower limit of the required time when calibration for the processing is executed with the target required time, and generate information according to the comparison result.
 According to the present invention, when a flying object performs processing with a required accuracy, the time required for that processing can be known.
 FIG. 1 is a diagram showing an example of the configuration of a flight control system 1. FIG. 2 is a view showing an example of the appearance of a flying object 10. FIG. 3 is a diagram showing the hardware configuration of the flying object 10. FIG. 4 is a diagram showing the hardware configuration of a server device 20. FIG. 5 is a diagram showing an example of the functional configuration of the server device 20. FIG. 6 is a diagram illustrating the effective range of a captured image. FIG. 7 is a diagram explaining the significance of the functions f and g. FIG. 8 is a flowchart showing an example of the required time calculation operation of the server device 20.
Configuration
 FIG. 1 is a diagram showing an example of the configuration of a flight control system 1. The flight control system 1 is a system that controls the flight of flying objects 10. The flight control system 1 includes a plurality of flying objects 10 and a server device 20. The flying objects 10 and the server device 20 can communicate with each other via a network. Each flying object 10 performs a process of imaging the plants in a processing target area such as a field. The server device 20 is an example of the information processing apparatus according to the present invention, and performs processing to calculate the normalized difference vegetation index (NDVI) from the spectral reflectance characteristics of the plants in the processing target area, using the imaging results of the flying objects 10. In addition, the server device 20 performs processing for calculating the time required for the flying object 10 to perform the processing with a required accuracy.
 図2は、飛行体10の外観の一例を示す図である。飛行体10は、例えばドローンと呼ばれるものであり、プロペラ101と、駆動装置102と、バッテリー103とを備える。 FIG. 2 is a view showing an example of the appearance of the flying object 10. The flying object 10 is, for example, a so-called drone, and includes a propeller 101, a drive device 102, and a battery 103.
 プロペラ101は、軸を中心に回転する。プロペラ101が回転することにより、飛行体10が飛行する。駆動装置102は、プロペラ101に動力を与えて回転させる。駆動装置102は、例えばモーターとモーターの動力をプロペラ101に伝達する伝達機構とを含む。バッテリー103は、駆動装置102を含む飛行体10の各部に電力を供給する。 The propeller 101 rotates about an axis. As the propeller 101 rotates, the flying object 10 flies. The driving device 102 powers and rotates the propeller 101. The drive device 102 includes, for example, a motor and a transmission mechanism that transmits the power of the motor to the propeller 101. The battery 103 supplies power to each part of the aircraft 10 including the drive device 102.
FIG. 3 is a diagram showing the hardware configuration of the flying object 10. Physically, the flying object 10 is configured as a computer device including a processor 11, a memory 12, a storage 13, a communication device 14, a positioning device 15, an imaging device 16, a beacon device 17, a bus 18, and the like. In the following description, the word "device" can be read as a circuit, unit, or the like.
The processor 11 controls the computer as a whole, for example by running an operating system. The processor 11 may be configured as a central processing unit (CPU) including interfaces with peripheral devices, a control device, an arithmetic device, registers, and the like.
The processor 11 also reads programs (program code), software modules, and data from the storage 13 and/or the communication device 14 into the memory 12 and executes various processes according to them. The programs include a program that causes the computer to execute at least part of the operations of the flying object 10. The various processes performed in the flying object 10 may be executed by a single processor 11, or by two or more processors 11 simultaneously or sequentially. The processor 11 may be implemented as one or more chips. The programs may be transmitted from a network via a telecommunication line.
The memory 12 is a computer-readable recording medium and may be composed of, for example, at least one of ROM (Read Only Memory), EPROM (Erasable Programmable ROM), EEPROM (Electrically Erasable Programmable ROM), and RAM (Random Access Memory). The memory 12 may also be called a register, a cache, or a main memory (main storage device). The memory 12 can store executable programs (program code), software modules, and the like for implementing the flight control method according to an embodiment of the present invention.
The storage 13 is a computer-readable recording medium and may be composed of, for example, at least one of an optical disk such as a CD-ROM (Compact Disc ROM), a hard disk drive, a flexible disk, a magneto-optical disk (e.g., a compact disc, a digital versatile disc, or a Blu-ray (registered trademark) disc), a smart card, a flash memory (e.g., a card, stick, or key drive), a floppy (registered trademark) disk, and a magnetic strip. The storage 13 may also be called an auxiliary storage device.
The communication device 14 is hardware (a transmitting/receiving device) for communication between computers via a wired and/or wireless network, and is also called, for example, a network device, a network controller, a network card, or a communication module.
The positioning device 15 measures the three-dimensional position of the flying object 10. The positioning device 15 is, for example, a GPS (Global Positioning System) receiver, and measures the current position of the flying object 10 based on GPS signals received from a plurality of satellites.
The imaging device 16 captures images of the surroundings of the flying object 10. The imaging device 16 is, for example, a camera, and captures an image by focusing light onto an image sensor through an optical system. The imaging device 16 captures, for example, an image of a predetermined range below the flying object 10.
The beacon device 17 transmits a beacon signal of a predetermined frequency and receives beacon signals transmitted from other flying objects 10. The reach of a beacon signal is a predetermined distance, for example 100 m. A beacon signal includes flying object identification information that identifies the flying object 10 transmitting it. This identification information is used, for example, to prevent collisions between flying objects 10.
The devices described above, such as the processor 11 and the memory 12, are connected by a bus 18 for communicating information. The bus 18 may be configured as a single bus, or as different buses between different pairs of devices.
FIG. 4 is a diagram showing the hardware configuration of the server device 20. Physically, the server device 20 is configured as a computer device including a processor 21, a memory 22, a storage 23, a communication device 24, a bus 25, and the like. The processor 21, memory 22, storage 23, communication device 24, and bus 25 are similar to the processor 11, memory 12, storage 13, communication device 14, and bus 18 described above, so their description is omitted.
FIG. 5 is a diagram showing an example of the functional configuration of the server device 20. Each function of the server device 20 is realized by loading predetermined software (a program) onto hardware such as the processor 21 and the memory 22, whereby the processor 21 performs computations and controls communication by the communication device 24 and the reading and/or writing of data in the memory 22 and the storage 23.
In FIG. 5, the tracking unit 200 records the flying object identification information and flight status of each flying object 10 under the control of the server device 20. The flight status includes the position at which the flying object 10 is flying and the date and time at that position. The tracking unit 200 records the position information and date-and-time information notified by the flying object 10. The tracking unit 200 also determines whether that position and date-and-time information falls within a previously planned flight plan, and records the result of the determination.
The input unit 201 inputs to the server device 20 the target accuracy for the processing that the flying object 10 performs on the field serving as the processing target area, and the size of the processing target area. The target accuracy here is, for example, a value predetermined as the accuracy of the NDVI. The input unit 201 also inputs to the server device 20 image data representing images captured by the flying object 10.
Based on the information input by the input unit 201, the identifying unit 202 identifies the target accuracy of the processing that the flying object 10 performs on the processing target area, and the size of that area.
The calculation unit 203 calculates the time required for the flying object 10 to perform the imaging process so as to achieve the target accuracy identified by the identifying unit 202 over the processing target area of the size identified by the identifying unit 202. This process is performed based on imaging of the ground (the field) by the flying object 10, and the calculation unit 203 calculates the required time based on the size of the effective range in each captured image. The calculation unit 203 also varies the size of this effective range according to conditions. More specifically, the calculation unit 203 determines the size of the effective range using the amount of light at the time of imaging as the condition: the greater the amount of light at the time of imaging, the larger the effective range, and the smaller the amount of light, the smaller the effective range.
Here, the effective range in a captured image will be described. FIG. 6 is a diagram for explaining this effective range. For one imaging range P, a part of it becomes the effective range Pu. In general, the lens used when the flying object 10 captures images for NDVI calculation is a fisheye lens. The image captured through the fisheye lens is a two-dimensional circular image, and the NDVI is calculated from the spectral reflectance characteristics of the plants in this image. Since the NDVI calculation result varies with the elevation angle at the time of imaging, the result changes markedly near the edges of the captured image in particular. It is therefore desirable to treat a circular region of a predetermined extent around the center of the imaging range as the effective range, and to use only this effective range for NDVI calculation. The effective range also depends on the flight altitude of the flying object 10: when the flight altitude is high, the imaging range P of a single shot is wide, and as a result the effective range Pu is also wide; when the flight altitude is low, the imaging range P of a single shot is narrow, and as a result the effective range Pu is also narrow.
In this way, the calculation unit 203 calculates the required time based on the size of the range covered by a single shot. When calculating the NDVI of the field serving as the processing target area, it is desirable to calculate the NDVI from the spectral reflectance characteristics of the plants over a predetermined proportion (for example, 10%) of the size of the whole area; as described above, if the effective range within each captured image differs, the number of imaging operations also differs. Specifically, when the effective range within each image is large (larger than a certain value), fewer imaging operations are needed, and when it is small (smaller than a certain value), more imaging operations are needed.
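As an illustration of the relationship just described between the effective range per shot and the number of imaging operations, a minimal sketch follows. This is not part of the patent: the function name, the circular effective range, and the constants are our assumptions.

```python
import math

def shots_needed(area_m2: float, coverage_ratio: float, effective_radius_m: float) -> int:
    """Estimate how many shots are needed so that the effective ranges
    (circles of radius effective_radius_m) cover a given proportion of the area."""
    target_m2 = area_m2 * coverage_ratio            # e.g. 10% of the field
    per_shot_m2 = math.pi * effective_radius_m ** 2  # area of one effective range Pu
    return math.ceil(target_m2 / per_shot_m2)

# A larger effective range (more light, or higher altitude) means fewer shots.
assert shots_needed(10_000.0, 0.10, 5.0) > shots_needed(10_000.0, 0.10, 10.0)
```

For a 1 ha field and 10% coverage, halving the effective radius roughly quadruples the number of shots, which is why the required time grows so quickly at low altitude.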
When the required time calculated by the calculation unit 203 exceeds the upper limit of the target required time, the generation unit 204 generates information on the size of the processing target area that can be processed within that upper limit (for example, the ratio of the size of the area that can be processed within the upper limit to the size of the whole processing target area). This makes it possible, when the whole processing target area cannot be processed within the target required time, to estimate the degree of completion that can be achieved over the whole area.
Based on the image data input by the input unit 201, the evaluation value calculation unit 206 calculates the NDVI from the spectral reflectance characteristics of the plants in the images.
The output unit 205 outputs the required time calculated by the calculation unit 203, the information generated by the generation unit 204, or the NDVI calculated by the evaluation value calculation unit 206.
Operation
Next, the operation of this embodiment will be described. In the following description, when the flying object 10 is described as the subject of a process, this specifically means that the process is executed by loading predetermined software (a program) onto hardware such as the processor 11 and the memory 12, whereby the processor 11 performs computations and controls communication by the communication device 14 and the reading and/or writing of data in the memory 12 and the storage 13. The same applies to the server device 20.
FIG. 8 is a flowchart showing an example of the required-time calculation operation of the server device 20. First, the input unit 201 inputs to the server device 20 the target accuracy for the processing that the flying object 10 performs on the field serving as the processing target area, and the size of the processing target area (step S11). This input may be performed via an input device connected to the server device 20, or via a remote control device of the flying object 10.
Based on the information input by the input unit 201, the identifying unit 202 identifies the target accuracy of the processing that the flying object 10 performs on the processing target area, and the size of that area (step S12).
The calculation unit 203 calculates, as follows, the time required for the flying object 10 to perform the imaging process so as to achieve the identified target accuracy over the processing target area of the identified size (step S13).
The accuracy A is expressed as a function f of the flight altitude h of the flying object 10:

A = f(h)
When the flying object 10 images the processing target area, the higher its flight altitude, the more it is affected by, for example, reflected light from a wide ground area beyond the area directly below it. The function f is therefore designed so that the accuracy A decreases as the flight altitude h of the flying object 10 increases.
The required time T is expressed as a function g of the flight altitude h and flight speed v of the flying object 10:

T = g(h, v)
Here, since imaging is performed while the flying object 10 is stationary, the flight speed v is the speed at which the flying object 10 moves between imaging positions. The function g may also include as variables the area and shape of the processing target area (the field), the effective range Pu, the stop time per shot (the time during which the flying object temporarily stops to take a shot), the number of shots, and so on.
Here, the significance of the functions f and g will be described. First, the flight altitude h is determined from the accuracy A via the function f. Next, the imaging range that the flying object 10 should cover in a single shot is determined based on the flight altitude h and the effective range Pu. A flight plan including various flight conditions is then determined based on this per-shot imaging range and the size of the processing target area, and this flight plan determines the time T required for the flying object 10 to complete the processing. Accordingly, as illustrated in FIG. 7, when the flight altitude is low and the flight path over the processing target area is dense, the required time T is long; conversely, when the flight altitude is high and the flight path is sparse (that is, the processing range per unit section is wide), the required time T is short.
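The chain from target accuracy to required time described above (accuracy → altitude → effective range → flight plan → time) can be sketched as follows. This is illustrative only: the patent leaves f and g as abstract functions, so the concrete linear forms and all constants below are our assumptions.

```python
def altitude_for_accuracy(accuracy: float) -> float:
    """Inverse of f: a higher target accuracy forces a lower flight altitude.
    The linear form and constants are illustrative assumptions."""
    return max(5.0, 100.0 * (1.0 - accuracy))        # altitude h in metres

def required_time(accuracy: float, area_m2: float, speed_mps: float,
                  stop_s: float = 2.0) -> float:
    """g(h, v): travel time between shots plus the per-shot stop time."""
    h = altitude_for_accuracy(accuracy)
    effective_radius = 0.3 * h                        # effective range Pu grows with altitude
    per_shot_m2 = 3.141592653589793 * effective_radius ** 2
    shots = max(1, round(0.10 * area_m2 / per_shot_m2))  # cover 10% of the area
    spacing = 2 * effective_radius                    # distance flown between shots
    return shots * (spacing / speed_mps + stop_s)

# Higher accuracy -> lower altitude -> denser flight path -> longer required time.
assert required_time(0.9, 10_000.0, 5.0) > required_time(0.5, 10_000.0, 5.0)
```

The final assertion mirrors FIG. 7: a low-altitude, high-density plan takes far longer than a high-altitude, low-density one for the same field.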
When the required time T calculated by the calculation unit 203 exceeds the upper limit of the target required time (step S14; NO), the generation unit 204 generates information on the size of the processing target area that can be processed within that upper limit (step S15). Specifically, where Sl is the size of the area that can be processed within the upper limit, S is the size of the processing target area, and Tt is the upper limit of the target required time, this is expressed as follows.

Sl = (Tt / T) × S
The generation unit 204 generates (Sl / S) × 100 (%) as the information on the size of the processing target area that can be processed within the upper limit.
The output unit 205 outputs the required time T calculated by the calculation unit 203 or the information ((Sl / S) × 100 (%)) generated by the generation unit 204 (step S16).
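Steps S14 through S16 amount to a simple proportional calculation, which can be sketched as follows (the function name and the early-return behavior when T is within the limit are our assumptions):

```python
def processable_fraction(required_time_t: float, target_upper_tt: float,
                         area_s: float) -> tuple[float, float]:
    """If T exceeds the target upper limit Tt, return the processable size
    Sl = (Tt / T) * S and the percentage (Sl / S) * 100."""
    if required_time_t <= target_upper_tt:
        return area_s, 100.0                    # the whole area fits within the limit
    sl = (target_upper_tt / required_time_t) * area_s
    return sl, (sl / area_s) * 100.0

# A required time of 90 min against a 60 min limit leaves 2/3 of the area processable.
sl, pct = processable_fraction(90.0, 60.0, 12_000.0)
assert abs(sl - 8_000.0) < 1e-6 and abs(pct - 100 * 2 / 3) < 1e-6
```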
According to the embodiment above, when the flying object 10 performs processing with a required accuracy, the time required for that processing can be known in advance.
Modifications
The present invention is not limited to the embodiment described above. The embodiment may be modified as follows, and two or more of the following modifications may be combined.
Modification 1
The solar altitude differs according to the time of imaging, for example morning versus afternoon, the time of day, or the month or season, and consequently the amount of light at the time of imaging differs. The calculation unit 203 may therefore correct the required time according to the time of imaging. Specifically, since the correction for the amount of light L at the time of imaging can be expressed as a function e(t) of the time t, the required time T is corrected by the formula T × e(t). The calculation unit 203 may also correct the required time according to the amount of light itself: using a function d(L) of the amount of light L at the time of imaging, the required time T is corrected by the formula T × d(L).
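Modification 1's corrections are simple multiplicative factors. A sketch follows; the concrete shape of e(t) (factor 1.0 at solar noon, growing toward morning and evening when less light shrinks the effective range) is our illustrative assumption, not specified by the patent.

```python
def light_correction_factor(hour: float) -> float:
    """e(t): correction factor that is 1.0 at solar noon (12:00) and
    increases toward morning/evening, when a smaller effective range
    means more shots and thus a longer required time."""
    return 1.0 + 0.05 * abs(hour - 12.0)

def corrected_required_time(base_t: float, hour: float) -> float:
    """T x e(t)."""
    return base_t * light_correction_factor(hour)

assert corrected_required_time(100.0, 12.0) == 100.0   # no correction at noon
assert corrected_required_time(100.0, 8.0) > 100.0     # morning flight takes longer
```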
Modification 2
The calculation unit 203 may compare the lower limit of the required time given that calibration is performed with the target required time, and generate information according to the comparison result. This makes it possible to determine whether the processing can be completed within the target time when calibration is performed. Specifically, before the imaging process, an object of a predetermined color, such as a white board, is imaged to calibrate the imaging device. This calibration determines the lower limit of the time required for the flying object 10 to perform the processing at a given flight speed and flight altitude with the predetermined target accuracy. The calculation unit 203 compares this lower limit with the target required time and generates information according to the comparison result (for example, the ratio of the former to the latter).
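Modification 2 reduces to comparing a calibration-derived lower bound against the target time. A sketch under that reading (the boolean-plus-ratio return shape is our choice; the ratio of the former to the latter follows the text):

```python
def calibration_check(lower_bound_t: float, target_t: float) -> tuple[bool, float]:
    """Compare the lower limit of the required time (assuming calibration is
    performed) with the target required time. Returns whether the processing
    can finish within the target, and the ratio of the former to the latter."""
    return lower_bound_t <= target_t, lower_bound_t / target_t

# A 45 min lower bound against a 60 min target: feasible, at 75% of the budget.
ok, ratio = calibration_check(45.0, 60.0)
assert ok and abs(ratio - 0.75) < 1e-9
```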
Modification 3
As described above, since the amount of light at the time of imaging and the time of imaging have a fixed relationship, the calculation unit 203 may vary the effective range Pu according to the time of imaging.
Modification 4
In the present invention, the process performed by the flying object is not limited to imaging of a field or the like. The equations given above are merely examples; constants or coefficients may be added to them as desired.
Other Modifications
The block diagrams used in the description of the above embodiment show blocks in functional units. These functional blocks (components) are realized by any combination of hardware and/or software, and the means of realizing each functional block is not particularly limited. That is, each functional block may be realized by one physically and/or logically coupled device, or by two or more physically and/or logically separate devices connected directly and/or indirectly (for example, by wire and/or wirelessly).
In addition, at least some of the functions of the server device 20 may be implemented in the flying object 10. Similarly, at least some of the functions of the flying object 10 may be implemented in the server device 20.
Each aspect/embodiment described in this specification may be applied to systems using LTE (Long Term Evolution), LTE-A (LTE-Advanced), SUPER 3G, IMT-Advanced, 4G, 5G, FRA (Future Radio Access), W-CDMA (registered trademark), GSM (registered trademark), CDMA2000, UMB (Ultra Mobile Broadband), IEEE 802.11 (Wi-Fi), IEEE 802.16 (WiMAX), IEEE 802.20, UWB (Ultra-WideBand), Bluetooth (registered trademark), or other appropriate systems, and/or to next-generation systems extended based on them.
The order of the processing procedures, sequences, flowcharts, and the like of each aspect/embodiment described in this specification may be changed as long as no contradiction arises. For example, the methods described in this specification present the elements of the various steps in an exemplary order and are not limited to the specific order presented.
Each aspect/embodiment described in this specification may be used alone, in combination, or switched between as execution proceeds. Notification of predetermined information (for example, notification of "being X") is not limited to being performed explicitly, and may be performed implicitly (for example, by not notifying the predetermined information).
The terms "system" and "network" used in this specification are used interchangeably.
The information, parameters, and the like described in this specification may be represented by absolute values, by values relative to a predetermined value, or by other corresponding information. For example, radio resources may be indicated by an index.
The names used for the parameters described above are not limiting in any respect. Furthermore, the equations and the like that use these parameters may differ from those explicitly disclosed in this specification. The various channels (for example, PUCCH and PDCCH) and information elements (for example, TPC) can be identified by any suitable names, so the various names assigned to these channels and information elements are not limiting in any respect.
The terms "judging" and "determining" used in this specification may encompass a wide variety of operations. "Judging" and "determining" may include, for example, regarding judging, calculating, computing, processing, deriving, investigating, looking up (for example, looking up in a table, database, or another data structure), or ascertaining as "judging" or "determining". They may also include regarding receiving (for example, receiving information), transmitting (for example, transmitting information), input, output, or accessing (for example, accessing data in memory) as "judging" or "determining". They may further include regarding resolving, selecting, choosing, establishing, comparing, and the like as "judging" or "determining". In other words, "judging" and "determining" may include regarding some operation as "judging" or "determining".
The present invention may be provided as a flight control method or an information processing method including the steps of the processing performed in the flight control system 1 or the server device 20. The present invention may also be provided as a program executed in the flying object 10 or the server device 20. Such a program may be provided in a form recorded on a recording medium such as an optical disc, or in a form in which it is downloaded to a computer via a network such as the Internet and installed for use.
Software, instructions, and the like may be transmitted and received via a transmission medium. For example, when software is transmitted from a website, server, or other remote source using wired technologies such as coaxial cable, optical fiber cable, twisted pair, and digital subscriber line (DSL), and/or wireless technologies such as infrared, radio, and microwave, these wired and/or wireless technologies are included within the definition of a transmission medium.
The information, signals, and the like described in this specification may be represented using any of a variety of different technologies. For example, the data, instructions, commands, information, signals, bits, symbols, chips, and the like that may be mentioned throughout the above description may be represented by voltages, currents, electromagnetic waves, magnetic fields or magnetic particles, optical fields or photons, or any combination of these.
The terms described in this specification and/or the terms necessary for understanding this specification may be replaced by terms having the same or similar meanings. For example, a channel and/or a symbol may be a signal, and a signal may be a message. A component carrier (CC) may also be called a carrier frequency, a cell, or the like.
 Any reference to elements using designations such as "first" and "second" as used herein does not generally limit the quantity or order of those elements. These designations may be used herein as a convenient way of distinguishing between two or more elements. Thus, a reference to first and second elements does not mean that only two elements may be employed, or that the first element must in some way precede the second element.
 The "means" in the configuration of each of the above devices may be replaced with "unit", "circuit", "device", or the like.
 To the extent that "including", "comprising", and variations thereof are used in this specification or the claims, these terms are intended to be inclusive, in the same way as the term "provided with". Furthermore, the term "or" as used in this specification or the claims is not intended to be an exclusive OR.
 Throughout this disclosure, where articles such as "a", "an", and "the" in English are added by translation, these articles include the plural unless the context clearly indicates otherwise.
 Although the present invention has been described above in detail, it is apparent to those skilled in the art that the present invention is not limited to the embodiments described herein. The present invention can be implemented with modifications and alterations without departing from the spirit and scope of the present invention as defined by the claims. Accordingly, the description herein is for illustrative purposes only and is not intended to limit the present invention in any way.
1: flight control system, 10: flying object, 20: server device, 21: processor, 22: memory, 23: storage, 24: communication device, 25: bus, 200: tracking unit, 201: input unit, 202: specifying unit, 203: calculation unit, 204: generation unit, 205: output unit, 206: evaluation value calculation unit.

Claims (7)

  1.  An information processing apparatus comprising:
     a specifying unit that specifies a target accuracy, which is a target value of the accuracy of processing performed by a flying object on a processing target area, and the size of the processing target area; and
     a calculation unit that calculates a required time for the flying object to perform the processing on the processing target area of the specified size with the specified target accuracy.
  2.  The information processing apparatus according to claim 1, wherein
     the processing is performed based on imaging of the ground by the flying object, and
     the calculation unit calculates the required time based on the size of the range covered by a single imaging operation.
  3.  The information processing apparatus according to claim 2, wherein the calculation unit varies the size of an effective range in a captured image according to conditions.
  4.  The information processing apparatus according to claim 3, wherein the calculation unit varies the size of the effective range according to the amount of light at the time of imaging or the imaging time.
  5.  The information processing apparatus according to claim 2, wherein the calculation unit corrects the required time according to the amount of light at the time of imaging or the imaging time.
  6.  The information processing apparatus according to any one of claims 1 to 5, further comprising a generation unit that, when the required time calculated by the calculation unit exceeds an upper limit of a target required time, generates information on the size of a processing target area for which the processing can be performed within that upper limit.
  7.  The information processing apparatus according to any one of claims 1 to 6, wherein the calculation unit compares a lower limit of the required time, which reflects execution of calibration for the processing, with a target required time, and generates information according to the comparison result.
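The calculation described in claims 1–6 can be illustrated with a minimal sketch. All function names, constants, and the scan model below are hypothetical illustrations, not taken from the specification: the area is divided by the effective per-shot coverage, the effective range is shrunk under low light (claims 3–4), and the resulting required time is compared against a target upper limit to derive the largest area that fits within it (claim 6).

```python
import math

# Hypothetical sketch of the required-time calculation in claims 1-6.
# None of these names or constants come from the specification itself.

def effective_coverage(base_coverage_m2: float, light_level: float) -> float:
    """Per-shot coverage, shrunk when light is low (claims 3-4).

    light_level is an illustrative 0.0-1.0 value; below 0.5 only the
    well-lit central part of each image is treated as usable.
    """
    usable_fraction = 1.0 if light_level >= 0.5 else 0.6  # illustrative values
    return base_coverage_m2 * usable_fraction

def required_time_s(area_m2: float, base_coverage_m2: float,
                    time_per_shot_s: float, light_level: float = 1.0) -> float:
    """Required time to image the whole area (claims 1-2)."""
    shots = math.ceil(area_m2 / effective_coverage(base_coverage_m2, light_level))
    return shots * time_per_shot_s

def max_area_within(limit_s: float, base_coverage_m2: float,
                    time_per_shot_s: float, light_level: float = 1.0) -> float:
    """Largest area processable within the target upper limit (claim 6)."""
    shots = int(limit_s // time_per_shot_s)
    return shots * effective_coverage(base_coverage_m2, light_level)

# Example: 10,000 m2 field, 100 m2 per shot, 5 s per shot.
t = required_time_s(10_000, 100, 5)            # 100 shots -> 500 s
t_dim = required_time_s(10_000, 100, 5, 0.3)   # low light: 167 shots -> 835 s
cap = max_area_within(500, 100, 5)             # 10,000 m2 fits within 500 s
```

Note how the low-light case lengthens the required time without changing the area, which is the effect claims 3–5 describe; the calibration lower bound of claim 7 would simply clamp the result of `required_time_s` from below.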
PCT/JP2019/001696 2018-01-26 2019-01-21 Information processing device WO2019146551A1 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
JP2019567059A JP6957651B2 (en) 2018-01-26 2019-01-21 Information processing device

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
JP2018011500 2018-01-26
JP2018-011500 2018-01-26

Publications (1)

Publication Number Publication Date
WO2019146551A1 true WO2019146551A1 (en) 2019-08-01

Family

ID=67394925

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/JP2019/001696 WO2019146551A1 (en) 2018-01-26 2019-01-21 Information processing device

Country Status (2)

Country Link
JP (1) JP6957651B2 (en)
WO (1) WO2019146551A1 (en)

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20160132057A1 (en) * 2013-07-09 2016-05-12 Duretek Inc. Method for constructing air-observed terrain data by using rotary wing structure
JP2016197980A (en) * 2015-04-06 2016-11-24 株式会社Nttファシリティーズ Diagnostic system, diagnostic method, and program
JP2017077879A (en) * 2015-07-17 2017-04-27 パナソニック インテレクチュアル プロパティ コーポレーション オブ アメリカPanasonic Intellectual Property Corporation of America Unmanned flight vehicle, flight control method, and flight control program
WO2017073310A1 (en) * 2015-10-27 2017-05-04 三菱電機株式会社 Image capture system for shape measurement of structure, method of capturing image of stucture used for shape measurement of structure, onboard control device, remote control device, program, and recording medium
JP2017176115A (en) * 2016-03-31 2017-10-05 本田技研工業株式会社 Control device for autonomously travelling work vehicle
JP2017216524A (en) * 2016-05-30 2017-12-07 パナソニックIpマネジメント株式会社 Imaging apparatus
KR20180000767A (en) * 2016-06-23 2018-01-04 서울대학교산학협력단 Unmanned Aerial Vehicle anti-collision method by sharing routes and flight scheduling via Ground Control Station software

Also Published As

Publication number Publication date
JPWO2019146551A1 (en) 2021-01-07
JP6957651B2 (en) 2021-11-02

Similar Documents

Publication Publication Date Title
US11532094B2 (en) Systems and methods for three-dimensional pose determination
US20130212094A1 (en) Visual signatures for indoor positioning
US20160214533A1 (en) Autonomous vehicle cameras used for near real-time imaging
JP7341991B2 (en) monitoring device
CN111045023A (en) Vehicle tracking method and system based on light detection and distance measurement
CN111045025A (en) Vehicle tracking method and system based on light detection and distance measurement
US20210343036A1 (en) Position estimation system
CN112400346A (en) Server apparatus and method for collecting location information of other apparatus
CN110910445A (en) Object size detection method and device, detection equipment and storage medium
CN112215887B (en) Pose determining method and device, storage medium and mobile robot
JP6857250B2 (en) Flight control device and flight control system
WO2019087891A1 (en) Information processing device and flight control system
JP7246388B2 (en) Aircraft controller
WO2019146551A1 (en) Information processing device
JP7050809B2 (en) Information processing equipment
JP7060624B2 (en) Information processing equipment
CN112212851B (en) Pose determination method and device, storage medium and mobile robot
CN109242782A (en) Noise processing method and processing device
JP2019101451A (en) Information processing device
WO2019146577A1 (en) Information processing device
JP7058290B2 (en) Information processing equipment and information processing method
WO2024057746A1 (en) Correction device
JP6903535B2 (en) Information processing device
JP7153089B2 (en) Position estimation system
WO2023042551A1 (en) Information processing device

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 19743814

Country of ref document: EP

Kind code of ref document: A1

ENP Entry into the national phase

Ref document number: 2019567059

Country of ref document: JP

Kind code of ref document: A

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 19743814

Country of ref document: EP

Kind code of ref document: A1