WO2020022215A1 - Information processing device, information processing method, and program - Google Patents

Information processing device, information processing method, and program

Info

Publication number
WO2020022215A1
Authority
WO
WIPO (PCT)
Prior art keywords
image
feature
target objects
area
estimation
Prior art date
Application number
PCT/JP2019/028464
Other languages
French (fr)
Japanese (ja)
Inventor
Tatsuya Yamamoto (竜也 山本)
Original Assignee
Canon Inc. (キヤノン株式会社)
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Priority claimed from JP2019097874A (external priority; published as JP2020024672A)
Application filed by Canon Inc. (キヤノン株式会社)
Priority to AU2019309839A (published as AU2019309839A1)
Publication of WO2020022215A1
Priority to US17/156,267 (published as US20210142484A1)

Links

Images

Classifications

    • A: HUMAN NECESSITIES
    • A01: AGRICULTURE; FORESTRY; ANIMAL HUSBANDRY; HUNTING; TRAPPING; FISHING
    • A01G: HORTICULTURE; CULTIVATION OF VEGETABLES, FLOWERS, RICE, FRUIT, VINES, HOPS OR SEAWEED; FORESTRY; WATERING
    • A01G7/00: Botany in general
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00: Image analysis

Definitions

  • the present invention relates to an information processing device, an information processing method, and a program.
  • Patent Document 1 proposes a method of detecting a flower area from a captured image and calculating the number of flowers by using an image processing technique. Further, by using the partial detector of Patent Document 2, the object can be detected even when it is partially hidden (for example, when a part of the crop that is the object is hidden by leaves or the like). Thus, even when the object is partially hidden, the number of objects can be obtained with higher accuracy.
  • However, Patent Documents 1 and 2 cannot support the realization of a mechanism for estimating the total number of objects when some or all of them cannot be detected.
  • An information processing apparatus includes: a feature acquisition unit configured to acquire, from an image of a region that is a part of a field where a crop is grown, a feature amount of the region related to the number of target objects detected from the image; a number acquisition unit configured to acquire the actual number of the target objects existing in a set area of the field; and a learning unit configured to learn an estimation parameter for estimating the actual number of the target objects present in a designated area of the field, using as learning data the feature amount acquired by the feature acquisition unit from an image obtained by capturing the set area and the actual number acquired by the number acquisition unit.
  • According to the present invention, it is possible to support the realization of a mechanism for estimating the total number of objects even when some or all of the objects whose number is to be obtained cannot be detected.
  • FIG. 3 is a diagram illustrating an example of a hardware configuration of an estimation device. It is a figure showing an example of functional composition of an estimation device.
  • FIG. 4 is a diagram illustrating an example of a table for managing learning data.
  • FIG. 7 is a diagram illustrating an example of a state in which a part of an object is hidden by a leaf.
  • FIG. 4 is a diagram illustrating an example of a table for managing estimation data. It is a flowchart which shows an example of a learning process. It is a flowchart which shows an example of an estimation process.
  • FIG. 3 is a diagram illustrating an example of a hardware configuration of an estimation device. It is a figure showing an example of functional composition of an estimation device.
  • FIG. 4 is a diagram illustrating an example of a table for managing learning data.
  • FIG. 4 is a diagram illustrating an example of a table for managing estimation data. It is a flowchart which shows an example of a learning process. It is a flowchart which shows an example of an estimation process. It is a figure showing an example of functional composition of an estimation device.
  • FIG. 4 is a diagram illustrating an example of a table for managing correction information. It is a flowchart which shows an example of an estimation process.
  • FIG. 2 is a diagram illustrating an example of a system configuration of an information processing system. It is a figure showing an example of a display screen of an estimation result. It is a figure showing an example of a display screen of an estimation result. It is a figure showing an example of a display screen of an estimation result.
  • In the present embodiment, a process in which the estimation device 100 learns an estimation parameter, which is a parameter used for estimating the number of objects included in a designated area, and a process in which the estimation device 100 estimates the number of objects included in the designated area based on the learned estimation parameter will be described.
  • FIG. 1 is a diagram illustrating an example of a hardware configuration of the estimation device 100 according to the present embodiment.
  • the estimation device 100 is an information processing device such as a personal computer, a server device, and a tablet device that estimates the number of objects included in a designated area.
  • the estimation device 100 includes a CPU 101, a RAM 102, a ROM 103, a network I / F 104, a VRAM 105, an input controller 107, an HDD 109, and an input I / F 110.
  • the components are communicably connected to each other via a system bus 111.
  • The CPU 101 is a central processing unit that controls the estimation device 100 as a whole.
  • the RAM 102 is a Random Access Memory, and functions as a main memory of the CPU 101, a work memory necessary for loading an execution program and executing a program, and the like.
  • The ROM 103 is a Read Only Memory and stores, for example, various programs and various setting information.
  • the ROM 103 includes a program ROM in which basic software (OS), which is a system program for controlling equipment of the computer system, is stored, and a data ROM in which information necessary for operating the system is stored. Also, the HDD 109 may store programs and information stored in the ROM 103.
  • the network I / F 104 is a network interface, and is used for input / output control of data such as image data transmitted and received via a network such as a local area network (LAN). It is assumed that the network I / F 104 is an interface corresponding to a network medium such as a wired or wireless network.
  • the VRAM 105 is a video RAM in which image data displayed on the screen of the display 106 as a display device is expanded.
  • the display 106 is a display device, for example, a liquid crystal display or a liquid crystal panel.
  • the input controller 107 is a controller used for controlling an input signal from the input device 108.
  • the input device 108 is an external input device for receiving an operation instruction from a user, and is, for example, a touch panel, a keyboard, a pointing device, a remote controller, or the like.
  • the HDD 109 is a hard disk drive and stores application programs and data such as moving image data and image data.
  • the application program stored in the HDD 109 is, for example, a highlight moving image creation application or the like.
  • the input I / F 110 is an interface used for connection with an external device such as a CD (DVD) -ROM drive, a memory card drive, etc., and is used, for example, for reading image data captured by a digital camera.
  • the system bus 111 is an input / output bus for connecting the respective hardware components of the estimation device so as to be able to communicate with each other, and is, for example, an address bus, a data bus, a control bus, or the like.
  • The CPU 101 executes processing based on programs stored in the ROM 103, the HDD 109, or the like, and thereby realizes the functions of the estimation device 100 described later with reference to FIGS. 2, 8, and 15, the processing of the flowcharts described later with reference to FIGS. 6, 7, 13, 14, and 17, and the like.
  • the object whose number is to be estimated is a crop (for example, a bunch of fruits, flowers, grapes, and the like).
  • the object whose number is to be estimated is referred to as a target object.
  • an object that can hinder detection of the target object is referred to as an obstruction.
  • In the present embodiment, the obstruction is a leaf.
  • However, obstructions may be not only leaves but also trees and stems.
  • the target object is not limited to agricultural products, and may be a person or a car. In that case, the obstruction may be, for example, a building.
  • In the present embodiment, the estimation device 100 detects the target objects from a captured image of a target region for which the number of target objects is to be estimated, and acquires a feature amount indicating a feature of the region determined based on the number of detected target objects. Then, based on the acquired feature amount and the number of target objects actually included in the region, the estimation device 100 learns an estimation parameter used for estimating the actual number of target objects included in a region.
  • the actual number of target objects included in a region is defined as the actual number of target objects in the region.
  • The estimation device 100 also detects target objects from an image of a designated region for which the number of target objects is to be estimated, and obtains a feature amount indicating a feature of the region based on the number of detected target objects. Then, the estimation device 100 estimates the actual number of target objects included in the region based on the obtained feature amount and the learned estimation parameter.
  • FIG. 2 is a diagram illustrating an example of a functional configuration of the estimation device 100 according to the present embodiment.
  • the estimation device 100 includes a number acquisition unit 201, an image acquisition unit 202, a learning unit 203, a feature amount acquisition unit 204, a parameter management unit 205, an estimation unit 206, and a display control unit 207.
  • The number acquisition unit 201 acquires the actual number of target objects included in a preset region, obtained by manual counting or the like.
  • the number acquisition unit 201 acquires the actual number by reading, for example, a text file in which the actual number of target objects in a preset area is recorded from the HDD 109 or the like. Further, the number obtaining unit 201 may receive an input of the actual number via the input device 108.
  • the image acquisition unit 202 acquires, for example, from an external imaging device or the like, an image in which a preset area including a target object has been photographed, and stores the acquired image in the HDD 109 or the like. In the present embodiment, it is assumed that each of the preset regions is the entire region photographed in the corresponding image.
  • The feature amount acquisition unit 204 detects target objects from the image acquired by the image acquisition unit 202 using an object detection technique, and acquires, based on the number of detected target objects, a feature amount indicating a feature of the preset region in which the detected target objects exist. In the following, the number of target objects detected from a certain area by the feature amount acquisition unit 204 is referred to as the number of detections for that area. In the present embodiment, the feature amount acquisition unit 204 acquires the number of detections for a region as the feature amount indicating the feature of that region.
  • the process of acquiring a feature by the feature acquiring unit 204 is an example of a feature acquiring process.
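  • The detection-count feature can be pictured with the short sketch below, which is an illustration only and not part of the patent: an arbitrary object detector is applied to an image and the number of returned detections is used as the feature amount of the photographed area. The detector interface and the stub detector are assumptions made for the example.
      # Minimal sketch: the feature amount of an area is simply the number of
      # target objects an object detector finds in the image of that area.
      # `detect_objects` is a hypothetical detector returning one bounding box
      # per detected crop (e.g. per grape bunch); any real detector could be
      # substituted.
      from typing import Callable, List, Tuple

      Box = Tuple[int, int, int, int]  # x, y, width, height (assumed format)

      def detection_count_feature(image_path: str,
                                  detect_objects: Callable[[str], List[Box]]) -> int:
          """Return the number of detections used as the feature amount."""
          return len(detect_objects(image_path))

      # Example with a stub detector that "finds" three objects:
      stub = lambda path: [(0, 0, 10, 10), (20, 5, 12, 9), (40, 8, 11, 11)]
      print(detection_count_feature("IMG_0001.jpg", stub))  # -> 3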
  • The learning unit 203 performs the following processing for each image for which the number acquisition unit 201 has acquired the actual number. That is, the learning unit 203 obtains the actual number of target objects included in the preset region corresponding to the image, acquired by the number acquisition unit 201, and the feature amount indicating the feature of that preset region, acquired by the feature amount acquisition unit 204. Then, based on the acquired actual number and feature amount, the learning unit 203 learns, by machine learning, the estimation parameter used for estimating the actual number of target objects included in a designated area. In the present embodiment, the learning unit 203 uses linear regression as the machine learning method and learns the parameters used for linear regression as the estimation parameter. However, the learning unit 203 may instead learn, as the estimation parameter, the parameters of another method such as a support vector machine.
  • the parameter management unit 205 stores the estimated parameters learned by the learning unit 203 in the HDD 109 or the like and manages them.
  • The estimation unit 206 performs the following processing based on the feature amount acquired by the feature amount acquisition unit 204 from an image obtained by photographing an area for which the number of target objects is to be estimated, and on the learned estimation parameter managed by the parameter management unit 205. That is, the estimation unit 206 estimates the actual number of target objects included in the target area for which the number of target objects is to be estimated.
  • FIG. 3 is a diagram illustrating an example of a table that manages the actual number of target objects acquired by the number acquiring unit 201 and the number of target objects detected by the feature amount acquiring unit 204 as learning data.
  • the table 301 includes items of ID, image file, number of detections, and actual number.
  • the item of ID indicates identification information for identifying learning data.
  • the item of the image file indicates which image the corresponding learning data is generated using.
  • the item of the number of detections indicates the number of target objects detected from the image indicated by the item of the corresponding image file.
  • The item of the actual number indicates the number of target objects actually included in the specific region photographed in the image indicated by the corresponding image file item (that is, a count that includes target objects hidden by leaves and not appearing in the image).
  • the table 301 is stored in, for example, the HDD 109 or the like.
  • An image (IMG_0001.jpg) indicated by the image file corresponding to the learning data with an ID of 1 will be described with reference to the figure showing a state in which a part of an object is hidden by a leaf.
  • the number acquisition unit 201 acquires the actual number of target objects in one or more specific areas in advance.
  • The feature amount acquisition unit 204 detects target objects from each of a plurality of images in which one of the one or more specific regions is captured, and obtains the number of detections in advance. Then, the number acquisition unit 201 and the feature amount acquisition unit 204 store the acquired actual numbers and numbers of detections in the HDD 109 or the like as learning data in the format of the table 301 shown in FIG. 3. In this way, the learning data used for learning is prepared in advance.
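  • As a concrete illustration (not part of the patent text), learning data in the table 301 format can be written out as simple rows of ID, image file, number of detections, and actual number. The CSV layout, file names, and numeric values below are placeholders.
      # Hedged sketch of preparing learning data in the table 301 format.
      import csv

      learning_rows = [
          # (ID, image file, number of detections, manually counted actual number)
          (1, "IMG_0001.jpg", 3, 5),
          (2, "IMG_0002.jpg", 7, 9),
      ]

      with open("learning_data.csv", "w", newline="") as f:
          writer = csv.writer(f)
          writer.writerow(["id", "image_file", "detections", "actual_number"])
          writer.writerows(learning_rows)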
  • the image file or its feature amount is referred to as input data.
  • the actual number corresponding to the input data is called correct data.
  • the learned estimation parameter is also called a learned model.
  • FIG. 5 is a diagram showing an example of a table for managing the number of target objects detected by the feature amount acquisition unit 204 from images of regions for which the actual number of target objects is to be estimated, and the estimated values of the actual number of target objects in those regions obtained by the estimation unit 206.
  • the table 401 includes items of ID, image file, number of detections, and estimated value.
  • the item of ID indicates identification information for identifying an area in which the actual number of target objects is estimated.
  • the item of the image file indicates the image used for estimating the actual number.
  • the item of the number of detections indicates the number of target objects (the number of detections) detected by the feature amount acquiring unit 204 from the image indicated by the item of the corresponding image file.
  • the item of the estimated value indicates the number of target objects estimated by the estimating unit 206.
  • the table 401 is stored in, for example, the HDD 109 or the like.
  • FIG. 6 is a flowchart showing an example of the estimation parameter learning process.
  • In step S501, the number acquisition unit 201 acquires, for example from a text file stored in the HDD 109, the file name of an image file in which a predetermined area is captured and the actual number of target objects included in that area. Then, the number acquisition unit 201 registers the acquired file name and actual number in the table 301 stored in the HDD 109. It is assumed that the HDD 109 stores in advance a text file in which image file names and actual numbers are recorded in association with each other in a format such as CSV.
  • In the present embodiment, the number acquisition unit 201 acquires, for each of a plurality of preset regions, the file name of an image file in which the region is photographed and the actual number of target objects included in the region. Then, the number acquisition unit 201 registers, in the table 301, each set of the file name and the actual number acquired for the plurality of regions.
  • In step S502, the feature amount acquisition unit 204 detects target objects from the image indicated by each image file name registered in the table 301 in step S501, and acquires the number of detections as the feature amount of the area captured in that image.
  • step S503 the feature amount acquiring unit 204 registers, for example, the number of detections (feature amounts) acquired in step S502 in the table 301 stored in the HDD 109.
  • In step S504, the learning unit 203 learns the estimation parameter (in this embodiment, the parameters of linear regression) using the sets of the number of detections (feature amount) and the actual number registered in the table 301.
  • In the present embodiment, the linear regression is represented by the following Expression (1).
  • Actual number (estimated value) = w0 + (w1 × number of detections)   ... Expression (1)
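  • A minimal sketch of how the Expression (1) parameters w0 and w1 could be fitted by ordinary least squares is shown below; it is an illustration only, and the detection counts and actual numbers are placeholder values standing in for the pairs registered in the table 301.
      import numpy as np

      detections = np.array([3.0, 7.0, 5.0, 10.0])  # feature amounts (numbers of detections)
      actual = np.array([5.0, 9.0, 8.0, 13.0])      # manually counted actual numbers

      # Fit actual = w0 + w1 * detections by least squares.
      X = np.column_stack([np.ones_like(detections), detections])
      (w0, w1), *_ = np.linalg.lstsq(X, actual, rcond=None)

      # Estimation for a new area, as in Expression (1):
      print(w0 + w1 * 6.0)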
  • step S505 the parameter management unit 205 starts management by, for example, storing the estimated parameters learned in step S504 in the HDD 109.
  • FIG. 7 is a flowchart showing an example of the estimation process using the estimation parameters learned by the process of FIG.
  • step S601 the estimation unit 206 requests the parameter management unit 205 for the estimation parameters learned in the process of FIG.
  • the parameter management unit 205 acquires the estimated parameters learned in S504 stored in S505 from the HDD 109, and transmits the acquired estimated parameters to the estimation unit 206.
  • In step S602, the feature amount acquisition unit 204 detects target objects from an image in which an area designated as a target for estimating the number of target objects is captured, and acquires the number of detections.
  • Supplying an image obtained by photographing at least a part of the field as the processing target in S602 is equivalent to designating the area photographed in that image as the target of the process of estimating the number of target objects. If a plurality of images are designated, the same processing is performed for all of them.
  • the feature amount acquiring unit 204 registers the acquired number of detections in the table 401 stored in the HDD 109, for example, in association with the image file name of the image.
  • In step S603, the estimation unit 206 estimates the number of target objects included in the target region based on the estimation parameter acquired in step S601 and the number of detections acquired in step S602.
  • For example, the estimation unit 206 obtains an estimated value of the number of target objects included in the area using Expression (1), based on w0 and w1, which are the estimation parameters acquired in S601, and on the number of detections acquired in S602.
  • the estimating unit 206 outputs the obtained estimated value by registering it in the table 401.
  • the estimating unit 206 may output the obtained estimated value by displaying it on the display 106.
  • The estimated values registered in the table 401 may be used, for example, for predicting the yield of the crop that is the target object and for visualizing on a map the areas predicted to have high or low yields.
  • As described above, the estimation device 100 detects target objects from an image in which a preset region is captured, and acquires a feature amount indicating a feature of the region based on the number of detected target objects (the number of detections). Then, the estimation device 100 learns the estimation parameter based on the acquired feature amount and the actual number of target objects included in the region. By using the learned estimation parameter, the actual number of target objects included in a region can be estimated from a feature amount corresponding to the number of target objects detected in that region. That is, by learning the estimation parameter, the estimation device 100 can support the realization of a mechanism for estimating the total number of objects even when some or all of the target objects cannot be detected.
  • Further, the estimation device 100 performs the following processing based on the learned estimation parameter and on the feature amount of the target region obtained from the number of target objects detected from a captured image of that region. That is, the estimation device 100 estimates the actual number of target objects included in the region. In this way, the estimation device 100 can estimate the actual number of target objects included in a region from a feature amount of the region based on the number of target objects detected from it. Accordingly, the estimation device 100 can support the realization of a mechanism for estimating the total number of objects even when some or all of the target objects cannot be detected.
  • the estimating apparatus 100 generates learning data used for learning the estimation parameters in advance, and stores the learning data in the HDD 109 as the table 301.
  • the estimation device 100 can support the realization of a mechanism for estimating the total number of objects even when some or all of the target objects cannot be detected by preparing learning data used for learning the estimation parameters.
  • the number of detections is the number of target objects detected from the image.
  • The number of detections may be, for example, the number of target objects detected visually by a person.
  • the estimation device 100 accepts the designation of the number of detections, for example, based on a user operation via the input device 108.
  • Further, the estimation device 100 may use the number of people detected via a human sensor as the number of detections and the number of people actually present in the area as the actual number. For example, the estimation device 100 may learn, based on sets of the number of detections and the actual number at each of a plurality of time points in a region, an estimation parameter for estimating the actual number of people included in the region from the feature amount of the region acquired from the number of detections. In addition, the estimation device 100 may estimate the actual number of people included in the area at a designated time using the learned estimation parameter. The estimation device 100 may also generate, in advance, sets of the number of detections and the actual number at each of a plurality of time points as learning data used for learning the estimation parameter.
  • <Example of use> An example of use of a system that presents to the user the number of target objects obtained by the processing of the estimation device 100 of the present embodiment, and the crop yield that can be predicted based on that number, will be described.
  • This system includes an estimation device 100.
  • the user of this system can utilize the number of target objects estimated by the estimating apparatus 100 in a work to be performed later and a production plan of a processed product.
  • the processing of the present embodiment can be suitably applied to a case where grapes for wine production are cultivated as agricultural products.
  • the production control of grapes for wine production will be described as an example.
  • Sampling surveys have conventionally been performed at a plurality of locations in a field or on a plurality of trees.
  • However, the growth condition of a cultivated tree may vary depending on the location or the year because, for example, the geographical and climatic conditions are not uniform.
  • The learned model is trained such that, when the number of target objects detected in an image is small, the estimated actual number also tends to be small. Therefore, even if there is a tree whose growth state is worse than that of the trees on which the sampling survey was performed, for example because of geographical conditions, the number of target objects estimated from an image of that tree will be smaller than for the trees on which the sampling survey was performed.
  • the processing of the present embodiment enables more accurate estimation processing irrespective of the position where the sampling investigation is performed.
  • FIGS. 19 to 21 are diagrams each showing an example of a display screen showing an estimation result output by the estimation device 100 when the system according to this usage example is introduced at a wine production grape production site.
  • the display control unit 207 generates the display screens of FIGS. 19 to 21 based on the estimation result in S603 and displays the display screens on the display 106.
  • Alternatively, the display control unit 207 may generate the display screens of FIGS. 19 to 21 based on the estimation result in S603, transmit the generated screens to an external device, and control the display unit (a display or the like) of the destination device so that the screens are displayed.
  • the screen 1900 is a screen showing, for each of the seven blocks included in the field, an identifier (ID), an area, and an estimated value of the grape harvest amount for the corresponding block.
  • The display control unit 207 obtains an estimated value of the weight (unit: t) of the grapes to be harvested, based on the total of the results of the processing in S603 (the estimated numbers of grape bunches to be harvested) for all the images of the corresponding block. Then, the display control unit 207 includes the obtained weight in the screen 1900 as the estimated value of the grape harvest amount. Expressing the grape yield by weight, rather than by the number of bunches, makes it easier to use the figure for estimating wine production volume. In the example of FIG. 19, a predicted value indicating that 19.5 t of grapes will be harvested in the block B-01 represented by the area 1901 is shown.
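  • A hedged sketch of how such a weight figure could be derived from the per-image estimates follows; the per-bunch average weight and the numeric inputs are assumptions, since this excerpt only states that the total estimated bunch count is converted to weight for display.
      AVG_BUNCH_WEIGHT_KG = 0.15  # assumed average weight of one grape bunch

      def estimated_yield_tonnes(estimated_bunches_per_image):
          """Sum the per-image estimates (results of S603) and convert to tonnes."""
          return sum(estimated_bunches_per_image) * AVG_BUNCH_WEIGHT_KG / 1000.0

      # e.g. three images of a block with estimated bunch counts 17.1, 15.0, 20.3:
      print(round(estimated_yield_tonnes([17.1, 15.0, 20.3]), 3))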
  • When the display control unit 207 detects a pointing operation (for example, a selection operation such as a click or a tap) on the area 1901 via the input device 108, the display control unit 207 switches the screen displayed on the display 106 to a screen 2000 shown in FIG. 20.
  • the screen 2000 is a screen for presenting information that is the basis of the prediction of the yield for the block B-01.
  • Screen 2000 includes area 2001 at a position corresponding to area 1901 on screen 1900.
  • Each of the 66 squares in the area 2001 is a marker indicating one unit (unit) to be subjected to the counting survey.
  • the target object is a grape cluster.
  • The pattern of each marker corresponds to the average number of bunches detected in that unit. That is, the area 2001 shows the geographic distribution of the number of detected target objects existing in the block B-01.
  • An area 2002 surrounded by a broken line shows detailed information on the selected block B-01.
  • The area 2002 includes information indicating that the average number of detected bunches over all units in the block B-01 is 8.4.
  • In addition, information indicating that the average estimated number of bunches over all units in the block B-01 is 17.1 is shown.
  • the number of detected target objects detected by the feature amount acquisition unit 204 does not include the number of target objects whose detection is hindered by obstacles.
  • the actual number acquired by the number acquiring unit 201 is a number including the number of target objects whose detection is inhibited by the obstacle. That is, in the present embodiment, there may be a difference between the actual number serving as a pair of learning data and the number of detections.
  • Therefore, the number of target objects estimated by the estimation unit 206 may be larger than the number of detections, as indicated by the area 2002 in FIG. 20, by roughly the number of target objects whose detection was hindered by obstructions.
  • The area 2003 indicates, for each of a plurality of stages into which the sets of the detected bunch count and the estimated bunch count are divided, the total number of markers belonging to that stage.
  • the method of expressing the information shown in the area 2003 may be a histogram format as in the example of FIG. 20 or may be various graph formats.
  • the area 2003 shows a histogram in which the pattern of the bin is changed for each stage.
  • the pattern of the bins in this histogram corresponds to the pattern of the marker in the area 2001.
  • the display control unit 207 assigns different patterns to the bins and the markers.
  • the bins and the markers may be colored in different colors.
  • In the area 2001, the distribution of the number of detected bunches is thus represented as a pseudo heat map.
  • the display in the heat map format allows the user to intuitively understand the magnitude of the number of detections and the distribution thereof.
  • In the area 2002, the estimated bunch count is also shown in association with the detected bunch count. A user who actually looks at the field sometimes sees the vines before the leaves grow, and may therefore have an intuitive sense of how many bunches are hidden behind the leaves. For such a user, the number of bunches detected from the image may feel smaller than the number the user knows from experience.
  • Therefore, as the basis of the predicted yield value, the display control unit 207 displays not only the number of actually detected bunches but also the actual number estimated by the learned model, in association with the number of detections. For example, the user first looks at the screen 1900 and learns the predicted yield value. Then, when making a subsequent plan for each block, if the user wants to confirm the basis of the predicted value, he or she clicks the target block. Using the screen 2000 corresponding to the clicked block, the user can then confirm both the number of bunches detected from the image (bunches that are certainly present) and the estimated value that also accounts for bunches not detected from the image.
  • In this way, when, for example, a predicted value is smaller than expected, it is possible to quickly determine whether the cause is a small number of detections or a small estimated number (the estimation processing).
  • the virtual button 2004 in the area 2002 is a button used to clearly indicate the position where the sampling survey was actually performed among the markers shown in the area 2001.
  • The display control unit 207 then switches the screen displayed on the display 106 to a screen 2100 shown in FIG. 21.
  • In the area 2001 of the screen 2100, the 66 markers included in the block B-01 continue to be displayed as on the screen 2000. The display control unit 207 then highlights with a thick line, like the marker 2101, the ten markers, out of the sixty-six, that correspond to the positions where the sampling survey was actually performed.
  • the display control unit 207 also changes the display state of the virtual button 2004 on the screen 2000 by changing the color or the like. Thereby, the display state of the virtual button 2004 is associated with whether or not the virtual button 2004 is selected. By confirming the virtual button 2004, the user can easily determine whether or not the virtual button 2004 has been selected.
  • The function related to the virtual button 2004 is particularly effective when learning data from the sampling survey of that year is used as the basis of the estimation processing. For example, when a learned model trained only on data obtained up to the previous year is used, there is no need to confirm the sampling positions, and the virtual button 2004 may therefore be omitted. For example, once it is determined in a given year that sufficient learning data has been accumulated from past sampling survey results, the per-year sampling survey may be omitted thereafter and only the counting survey by the estimation processing according to the present embodiment may be performed.
  • <Embodiment 2> In the present embodiment, a case will be described in which the estimation device 100 specifies the area of a predetermined section set in advance and estimates the actual number of target objects included in the specified area.
  • the hardware configuration of the estimation device 100 of the present embodiment is the same as that of the first embodiment.
  • FIG. 8 is a diagram illustrating an example of a functional configuration of the estimation device 100 according to the present embodiment.
  • the estimating apparatus 100 of the present embodiment is different from the case of the first embodiment shown in FIG. 2 in that it includes a section specifying unit 801 for specifying a preset area of a section. Further, in the present embodiment, the image acquired by the image acquisition unit 202 is an image indicating a region of a preset section.
  • The section specifying unit 801 detects, from an image, objects indicating the area of a preset section (for example, the area of a section set in the field), and identifies the area of the section based on the positions of the detected objects.
  • the image acquisition unit 202 cuts out the area of the section specified by the section specifying unit 801 from the input image and stores the cut out area in the HDD 109 or the like.
  • An image 901 in FIG. 9 is an image obtained by photographing the area of a section, and an image 902 is an image obtained by cutting out the area of the section.
  • the section specifying unit 801 detects an object indicating the area of the section from the image 901 and specifies an area surrounded by the detected object in the image 901 as the area of the section.
  • the image obtaining unit 202 cuts out the area specified by the section specifying unit 801 from the image 901, obtains the image 902, and stores the obtained image 902 in the HDD 109.
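  • The cut-out step can be sketched as follows (an illustration only): once the section specifying unit has located the markers delimiting the section, the rectangle between them is cropped and saved, which is what the image acquisition unit 202 does to obtain the image 902. The pixel coordinates in the comment are hypothetical, and the marker detection itself is not shown.
      from PIL import Image

      def cut_out_section(src_path, dst_path, left, top, right, bottom):
          """Crop the area delimited by the detected section markers and save it."""
          image = Image.open(src_path)
          section = image.crop((left, top, right, bottom))
          section.save(dst_path)

      # e.g. markers detected at x = 120 and x = 980 in a 1080-pixel-high photo:
      # cut_out_section("section_photo.jpg", "section_crop.jpg", 120, 0, 980, 1080)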
  • A section may not fit within one image; in such a case, one section is captured divided across a plurality of images.
  • the section specifying unit 801 arranges a plurality of images of the section, detects objects indicating both ends of the section from the plurality of images, and specifies the area of the section for each image.
  • the image acquiring unit 202 combines the areas of the sections in each of the plurality of images specified by the section specifying unit 801 and stores the combined image in the HDD 109 or the like.
  • a plurality of images 1001 in FIG. 10 are images obtained by capturing the area of the section, and an image 1002 is an image showing the area of the combined section.
  • the section specifying unit 801 detects an object indicating the area of the section from the plurality of images 1001, and specifies an area sandwiched by the detected objects in the plurality of images 1001 as the area of the section.
  • the image obtaining unit 202 synthesizes the area specified by the partition specifying unit 801 from the image 1001, obtains the image 1002, and stores the obtained image 1002 in the HDD 109.
  • Alternatively, the section specifying unit 801 may combine a plurality of images obtained by capturing a section into one combined image, detect the objects indicating the area of the section from the combined image, and specify the area of the section in the combined image based on the detected objects.
  • the feature amount acquiring unit 204 detects a target object from the image of the area of the preset section specified by the section specifying unit 801 obtained by the image obtaining unit 202. Then, the feature amount acquiring unit 204 acquires the feature amount of the area of this section based on the number of detected target objects (the number of detections).
  • each of the feature amounts included in the learning data used for learning the estimation parameter and the feature amounts used for the estimation process is a feature amount of any of the preset sections. That is, in the present embodiment, the estimation parameter is learned for each section defined in the field.
  • the image acquiring unit 202 may acquire an image similar to that of the first embodiment without acquiring an image of the area of the section specified by the section specifying unit 801.
  • the feature amount acquiring unit 204 detects the target object from the region of the preset section specified by the section specifying unit 801 in the image obtained by the image obtaining unit 202. Then, the feature amount acquiring unit 204 acquires the feature amount of the area of this section based on the number of detected target objects (the number of detections).
  • the estimating apparatus 100 can specify an area of a preset section and estimate the actual number of target objects included in the specified area of the section.
  • the estimation device 100 can reduce the influence of the target object that can be detected from an area other than the area in which the actual number of the target objects is to be estimated.
  • In the present embodiment, the section specifying unit 801 specifies the area of a section by detecting objects indicating the preset area of the section. However, the area of the section may instead be specified using position information such as GPS data or an image measurement technique. Furthermore, so that the user can confirm the area of the section specified at the time of shooting, the estimation device 100 may display a virtual frame on the finder of the camera that generates the input image, or may generate a composite image in which such a frame is superimposed. The estimation device 100 may also store the frame information in the HDD 109 or the like as metadata of the image.
  • the estimation device 100 acquires a feature amount indicating a feature of the area based on other attributes of the area in addition to the number of target objects detected from the area.
  • the hardware configuration and the functional configuration of the estimation device 100 of the present embodiment are the same as those of the first embodiment.
  • the estimation device 100 uses a set of the number of target objects detected from a region and other attributes of the region as a feature amount of the region. Then, the estimation device 100 performs a learning process of the estimation parameter using the feature amount, and a process of estimating the number of target objects using the estimation parameter.
  • The table 1101 in FIG. 11 is a table used for registering learning data, and is stored in, for example, the HDD 109 or the like.
  • The table 1101 is a table in which information used for learning the estimation parameter is added to the table 301 shown in FIG. 3.
  • the table 1101 includes items of ID, image file, number of detections, number of adjacent detections, soil, leaf volume, and actual number.
  • The item of the number of adjacent detections indicates the average number of target objects detected from the one or more areas adjacent to the area indicated by the corresponding ID.
  • the target object is a crop.
  • Therefore, the estimation device 100 includes, in the feature amount of a region, a feature of the regions around that region (for example, a statistic such as the average, total, or variance of their numbers of detections).
  • the item of soil indicates an index value indicating the goodness (easiness of fruiting) of the soil in the area indicated by the corresponding ID.
  • the larger the index value the better the soil.
  • the estimation device 100 causes the feature value to include an index value indicating the goodness of the soil in which the crop, which is the target object, is planted.
  • Accordingly, the estimation device 100 can learn an estimation parameter capable of estimating the actual number of target objects in consideration of the characteristics of the soil, and can estimate the actual number of target objects in consideration of the characteristics of the soil by using that estimation parameter.
  • The item of the leaf amount indicates an index value indicating the amount of leaves detected from the area.
  • the larger the index value the larger the amount of leaves.
  • In this way, the estimation device 100 includes the amount of the detected obstruction in the feature amount. Accordingly, the estimation device 100 can learn an estimation parameter capable of estimating the actual number of target objects in consideration of the amount of the obstruction, and can estimate the actual number of target objects in consideration of the amount of the obstruction by using that estimation parameter.
  • the feature amount of the region is a set of the number of detections, the number of adjacent detections, the amount of leaves, and an index value indicating the goodness of soil.
  • The feature amount of the region may instead be a set of the number of detections and only some of the number of adjacent detections, the leaf amount, and the index value indicating the goodness of the soil.
  • the feature amount of the region may include an attribute of the region other than the number of detections, the number of adjacent detections, the amount of leaves, and the index value indicating the goodness of the soil.
  • The table 1201 in FIG. 12 is a table for managing the feature amount of each area for which the actual number of target objects is to be estimated, and the estimated value of the actual number of target objects in that area obtained by the estimation unit 206.
  • The table 1201 is a table in which information used for estimating the actual number of target objects is added to the table 401 in FIG. 5.
  • the table 1201 includes items of ID, image file, number of detections, number of adjacent detections, soil, leaf volume, and estimated value. The items of the number of adjacent detections, soil, and leaf amount are the same items as in the table 1101.
  • FIG. 13 is a flowchart showing an example of the estimation parameter learning process.
  • a table 1101 is used instead of the table 301.
  • Steps S1301 to S1304 are processes in which a feature amount is acquired by the feature amount acquiring unit 204 for each image file registered in the table 1101, and the acquired result is registered.
  • the feature amount acquiring unit 204 detects a target object and leaves from the images registered in the table 1101.
  • the feature amount acquiring unit 204 may detect the leaf using the object detection technique, or may detect the leaf simply by detecting a pixel having a leaf color.
  • In step S1302, the feature amount acquisition unit 204 registers, in the table 1101, the number of detected target objects and the leaf amount detected in step S1301.
  • the feature amount obtaining unit 204 obtains a leaf amount, which is an index value indicating a leaf amount, based on a ratio between the number of pixels of the detected leaf region and the number of pixels of the entire image.
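  • The leaf-amount index can be pictured with the sketch below (not taken from the patent): the ratio of leaf-coloured pixels to all pixels is computed, using a crude green-dominance test as a stand-in for whatever leaf-colour detection is actually used.
      import numpy as np

      def leaf_amount_index(rgb):
          """rgb: H x W x 3 uint8 array; returns the leaf-pixel ratio in [0, 1]."""
          r = rgb[..., 0].astype(int)
          g = rgb[..., 1].astype(int)
          b = rgb[..., 2].astype(int)
          leaf_mask = (g > r + 20) & (g > b + 20)  # assumed "leaf colour" test
          return float(leaf_mask.mean())

      # Example on a synthetic 2 x 2 image with one green pixel:
      img = np.array([[[10, 200, 10], [200, 10, 10]],
                      [[10, 10, 200], [128, 128, 128]]], dtype=np.uint8)
      print(leaf_amount_index(img))  # -> 0.25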
  • In step S1303, the feature amount acquisition unit 204 acquires position information such as GPS data from the metadata of the images registered in the table 1101. Then, based on the acquired position information, the feature amount acquisition unit 204 acquires the number of adjacent detections and the information on the goodness of the soil. The feature amount acquisition unit 204 acquires from the table 1101 the position information of the images corresponding to the IDs immediately before and after the target ID, and determines whether each is an image of an adjacent area. The feature amount acquisition unit 204 then obtains the numbers of detections of the images determined to be of adjacent areas and takes their average value as the number of adjacent detections. For example, the feature amount acquisition unit 204 sets the number of adjacent detections of ID2 to 3.5, the average of the number of detections of ID1 and the number of detections of ID3.
  • In the present embodiment, the feature amount acquisition unit 204 determines whether a region is actually adjacent by using the position information.
  • However, the estimation device 100 may instead operate as follows: assuming that the table 1101 includes the position information of the area of each ID, the feature amount acquisition unit 204 may specify, from the table 1101, the detection-count data of the areas around a certain area based on that position information.
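  • A small sketch of such a position-based variant follows (an assumption-laden illustration, not the patent's code): areas whose recorded positions lie within some distance of the target area are treated as adjacent, and their detection counts are averaged. The record layout and the distance threshold are placeholders.
      from dataclasses import dataclass
      from math import hypot

      @dataclass
      class AreaRecord:
          area_id: int
          x: float  # position, e.g. projected GPS coordinates
          y: float
          detections: int

      def adjacent_detection_average(target, records, max_dist=10.0):
          neighbours = [r.detections for r in records
                        if r.area_id != target.area_id
                        and hypot(r.x - target.x, r.y - target.y) <= max_dist]
          return sum(neighbours) / len(neighbours) if neighbours else 0.0

      rows = [AreaRecord(1, 0.0, 0.0, 3), AreaRecord(2, 5.0, 0.0, 6), AreaRecord(3, 10.0, 0.0, 4)]
      print(adjacent_detection_average(rows[1], rows))  # -> 3.5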
  • the feature amount acquiring unit 204 acquires, for example, an index value indicating the goodness of the soil corresponding to the shooting position from the database or the like.
  • In step S1304, the feature amount acquisition unit 204 registers, in the table 1101, the information on the number of adjacent detections acquired in step S1303 and the index value indicating the goodness of the soil.
  • the learning unit 203 learns an estimation parameter using the number of detections, the number of adjacent detections, an index value indicating soil goodness, and the amount of leaves in the table 1101.
  • the estimation parameter is a parameter of linear regression.
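  • Expression (2), referred to later, is not reproduced in this excerpt; a natural reading, sketched below, is a multivariate extension of Expression (1) with one weight per feature in the table 1101. The specific rows and the fitted form are assumptions made for illustration.
      import numpy as np

      # columns: detections, adjacent detections, soil index, leaf amount (placeholder rows)
      features = np.array([[3, 3.5, 0.8, 0.30],
                           [7, 6.0, 0.6, 0.45],
                           [5, 5.5, 0.9, 0.25],
                           [9, 8.0, 0.7, 0.50],
                           [4, 4.5, 0.5, 0.60]], dtype=float)
      actual = np.array([5.0, 10.0, 8.0, 13.0, 9.0])

      X = np.column_stack([np.ones(len(features)), features])  # prepend intercept w0
      w, *_ = np.linalg.lstsq(X, actual, rcond=None)

      def estimate(detections, adjacent, soil, leaf):
          # estimate = w0 + w1*detections + w2*adjacent + w3*soil + w4*leaf
          return float(w @ np.array([1.0, detections, adjacent, soil, leaf]))

      print(estimate(6, 5.0, 0.8, 0.35))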
  • step S505 of FIG. 13 the parameter management unit 205 stores and manages the estimated parameters learned in step S504 in the HDD 109 or the like.
  • FIG. 14 is a flowchart illustrating an example of an estimation process for estimating the actual number of target objects using the estimation parameters learned in the process of FIG.
  • In step S601 in FIG. 14, the estimation unit 206 acquires the estimation parameter learned in step S504 of FIG. 13.
  • In step S1401, the feature amount acquisition unit 204 detects the target objects and the leaves from the image in which the region for which the actual number of target objects is to be estimated is photographed.
  • step S1402 the feature amount acquiring unit 204 registers the number of detected target objects and the leaf amount detected in step S1401 in the table 1201 as in step S1302.
  • the feature amount obtaining unit 204 obtains a leaf amount, which is an index value indicating a leaf amount, based on a ratio between the number of pixels of the detected leaf region and the number of pixels of the entire image.
  • the feature amount obtaining unit 204 obtains position information such as GPS data from metadata of an image of a region in which the actual number of target objects is to be estimated, as in step S1303. Then, the feature amount obtaining unit 204 obtains information on the number of adjacent detections and the goodness of the soil based on the obtained position information. Then, the feature amount acquiring unit 204 registers information on the acquired number of detected neighbors and an index value indicating the goodness of soil in the table 1201.
  • step S603 of FIG. 14 the estimating unit 206 executes the following process based on the estimation parameters acquired in step S601 and the feature amounts of the regions registered in the table 1201, using equation (2). That is, the estimating unit 206 estimates the number of target objects that are actually included in the region in which the actual number of target objects is to be estimated.
  • the number of target objects detected (the number of detections) of ID836 is smaller than that of ID835 or ID837.
  • However, the estimated value for ID836 is similar to those for ID835 and ID837. In the region of ID836, the number of target objects that happened to be hidden by leaves may have been larger than in ID835 or ID837, and the number of detections may therefore have been smaller than in ID835 or ID837.
  • the estimating apparatus 100 can supplement the information on the number of detections by using the number of adjacent detections as the feature quantity, so that the estimated value does not become too small even if the number of detections decreases.
  • Similarly, by using the leaf amount as a feature amount, the estimation device 100 can prevent the estimated value from becoming too small even when many target objects are hidden by leaves.
  • As described above, in the present embodiment, the estimation device 100 uses, as the feature amount, other attributes of the region, such as the positional variation in yield (goodness of the soil) and the amount of leaves, in addition to the number of detections. Thereby, the estimation device 100 can estimate the actual number of target objects with higher accuracy than in the first embodiment.
  • The estimation device 100 may also use, as a feature amount of the region, information on the size of the detected target objects, on the assumption that target objects tend to be large in locations where they grow easily.
  • In addition, the estimation device 100 may use, as feature amounts, the crop variety, the fertilizer application state, the presence or absence of disease, and the like, which are factors affecting the yield.
  • The degree to which agricultural crops bear fruit tends to depend on the weather.
  • In the present embodiment, a process of estimating the actual number of the crops that are the target objects and correcting the estimate in consideration of weather conditions will be described.
  • the hardware configuration of the estimation device 100 is the same as that of the first embodiment.
  • FIG. 15 is a diagram illustrating an example of a functional configuration of the estimation device 100 according to the present embodiment.
  • the functional configuration of the estimation device 100 of the present embodiment is different from the functional configuration of FIG. 2 in that a correction information acquisition unit 1501 that acquires correction information is included.
  • The correction information acquisition unit 1501 acquires the learned estimation parameter used by the estimation unit 206 for estimating the actual number of target objects, and correction information (for example, a coefficient by which the estimated value is multiplied).
  • the estimating unit 206 estimates the actual number of target objects using the estimation parameters and the correction information.
  • FIG. 16 is a diagram showing an example of a table for managing learned parameters and coefficients prepared in advance.
  • the table 1601 includes a year, an average number of detections, and parameters.
  • the item of the year is the year corresponding to the learning data registered in the table 301.
  • the item of the average number of detections is the average number of detections in the corresponding year.
  • the parameter is an estimated parameter learned by the learning unit 203 using the learning data corresponding to the corresponding year.
  • the table 1601 is a table for managing a plurality of estimation parameters learned by the learning unit 203, each of which is associated with an index value of a preset index called an average detection number.
  • the table 1601 is stored in, for example, the HDD 109 or the like.
  • the table 1602 includes items of the percentage of fine weather and the coefficient.
  • the item of sunny ratio indicates the ratio of sunny days during the period in which the crop, which is the target object actually included in a certain area, grew.
  • the item of the coefficient indicates a value for correcting the estimated value, and the larger the ratio of the corresponding sunny day, the larger the value.
  • FIG. 17 is a flowchart illustrating an example of an estimation process using estimation parameters.
  • step S1701 the correction information acquisition unit 1501 averages the number of detections in the table 401 to acquire the average number of detections for the year corresponding to the estimation target of the actual number of target objects.
  • the correction information acquisition unit 1501 acquires, from the estimation parameters registered in the table 1601, the estimation parameter whose corresponding average detection number value is closest to the acquired average detection number.
  • the correction information acquisition unit 1501 selects the acquired estimation parameter as the estimation parameter used for the estimation processing.
  • Through the processing of S1701, the estimation device 100 can acquire an estimation parameter learned under conditions close to those of the region subjected to the estimation processing, without learning the estimation parameter again. Accordingly, the estimation device 100 can prevent the accuracy of the estimation processing from decreasing while reducing the processing load related to learning.
  • The correction information acquisition unit 1501 acquires, using for example weather information obtained from an external weather service or the like, the ratio of sunny days during the growing period of the crop that is the target object, for the year corresponding to the region to be estimated.
  • the correction information acquisition unit 1501 acquires, from the table 1602, a coefficient corresponding to the acquired percentage of a sunny day.
  • the estimating unit 206 obtains an estimated value of the actual number of target objects by using Expression (1), for example, using the estimation parameter selected in S1701. Then, the estimating unit 206 corrects the estimated value by multiplying the obtained estimated value by the coefficient obtained in S1702, and sets the corrected estimated value as a final estimated value.
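  • The selection-and-correction flow of this embodiment can be summarised in the sketch below (an illustration only): the parameter pair whose stored average detection count is closest to the current year's average is chosen, the Expression (1) estimate is computed, and the result is multiplied by the weather coefficient. All table values are placeholders.
      param_table = [  # (average number of detections when learned, w0, w1), as in table 1601
          (4.2, 1.8, 1.5),
          (7.9, 2.3, 1.4),
      ]
      coef_table = [  # (minimum sunny-day ratio, coefficient), as in table 1602
          (0.0, 0.9),
          (0.5, 1.0),
          (0.7, 1.1),
      ]

      def corrected_estimate(avg_detections_this_year, detections, sunny_ratio):
          # S1701: pick the parameters learned under the closest average detection count.
          _, w0, w1 = min(param_table, key=lambda p: abs(p[0] - avg_detections_this_year))
          # S1702: pick the coefficient for the highest threshold the sunny ratio reaches.
          _, coef = max((lo, c) for lo, c in coef_table if sunny_ratio >= lo)
          # Expression (1) estimate, then multiply by the correction coefficient.
          return (w0 + w1 * detections) * coef

      print(corrected_estimate(avg_detections_this_year=8.4, detections=8.0, sunny_ratio=0.72))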
  • the estimation device 100 acquires a coefficient used for correcting the estimated value of the object based on the weather information (the ratio of a sunny day). Then, the estimation device 100 uses the obtained coefficient to correct the estimated value of the actual number of target objects. Thereby, the estimating apparatus 100 can obtain an estimated value of the actual number of target objects with higher accuracy than in the first embodiment.
  • In the present embodiment, it is assumed that the agricultural product is harvested once a year.
  • However, the present embodiment is not limited to crops harvested once a year and may also be applied to agricultural products harvested a plurality of times a year.
  • In that case, the data managed for each year may instead be managed for each growing period.
  • In the present embodiment, the estimation device 100 uses the ratio of sunny days to obtain the coefficient.
  • However, the estimation device 100 may instead use an average value, an integrated value, or the like of sunshine hours, precipitation, temperature, and so on.
  • The estimation device 100 may also use, as feature amounts, the average number of detections used for acquiring the estimation parameter and the ratio of sunny days used for acquiring the correction information, described in the present embodiment. Further, the estimation device 100 may acquire the estimation parameter and the correction information by using some of the feature amounts described in the third embodiment.
In the embodiments described above, the estimation device 100 performs the process of generating the learning data used for learning the estimation parameters, the process of learning the estimation parameters, and the process of estimating the actual number of target objects. However, these processes need not be executed by a single device.

FIG. 18 illustrates an example of the system configuration of an information processing system that, in the present embodiment, executes the process of generating the learning data used for learning the estimation parameters, the process of learning the estimation parameters, and the process of estimating the actual number of target objects using the estimation parameters. The information processing system includes a generation device 1801, a learning device 1802, and an estimation device 1803. The hardware configuration of each of the generation device 1801, the learning device 1802, and the estimation device 1803 is the same as the hardware configuration of the estimation device 100 of the first embodiment illustrated in FIG. 1.

The functions and processing of the generation device 1801 illustrated in FIG. 18 are realized by the CPU of the generation device 1801 executing processing based on programs stored in the ROM, the HDD, or the like of the generation device 1801. Likewise, the functions and processing of the learning device 1802 and the estimation device 1803 are realized by their respective CPUs executing processing based on programs stored in the ROM, the HDD, or the like of each device.
The generation device 1801 includes a number acquisition unit 1811, an image acquisition unit 1812, a feature amount acquisition unit 1813, and a generation unit 1814. The number acquisition unit 1811, the image acquisition unit 1812, and the feature amount acquisition unit 1813 are functional components similar to the number acquisition unit 201, the image acquisition unit 202, and the feature amount acquisition unit 204 in FIG. 2, respectively. The generation unit 1814 generates learning data, for example by executing processing similar to S501 to S503 in FIG. 6, and stores the generated learning data in the HDD or the like of the generation device 1801 in a format such as the table 301 or CSV.

The learning device 1802 includes a learning unit 1821 and a parameter management unit 1822. The learning unit 1821 and the parameter management unit 1822 are functional components similar to the learning unit 203 and the parameter management unit 205 in FIG. 2, respectively. That is, the learning unit 1821 obtains the learning data generated by the generation device 1801 and learns the estimation parameters by executing processing similar to S504 to S505 in FIG. 6 based on the obtained learning data (the information in the table 301). The parameter management unit 1822 then stores the estimation parameters learned by the learning unit 1821 in the HDD or the like of the learning device 1802.

The estimation device 1803 includes an image acquisition unit 1831, a feature amount acquisition unit 1832, an estimation unit 1833, and a display control unit 1834. These are functional components similar to the image acquisition unit 202, the feature amount acquisition unit 204, the estimation unit 206, and the display control unit 207 of FIG. 2, respectively. That is, the image acquisition unit 1831, the feature amount acquisition unit 1832, and the estimation unit 1833 estimate the actual number of target objects included in the target region by executing processing similar to that of FIG. 7.

In this way, separate devices execute the process of generating the learning data used for learning the estimation parameters, the process of learning the estimation parameters, and the process of estimating the actual number of target objects, which makes it possible to distribute the load of each process across a plurality of devices.
The present invention can also be realized by processing in which a program that realizes one or more of the functions of the above-described embodiments is supplied to a system or an apparatus via a network or a storage medium, and one or more processors in a computer of the system or apparatus read and execute the program. It can also be realized by a circuit (for example, an ASIC) that realizes one or more of the functions.

The functional configuration of the estimation device 100 described above may also be implemented as hardware in the estimation device 100, the generation device 1801, the learning device 1802, the estimation device 1803, and the like.

Abstract

The present invention learns an estimation parameter for estimating the actual number of target objects present in a designated area of a farm field, using as learning data a feature quantity acquired from an image of a set area of the field and the actual number of target objects present in that set area.

Description

Information processing apparatus, information processing method, and program

The present invention relates to an information processing apparatus, an information processing method, and a program.

There is a technique for calculating the number of specific objects in a specific area.

For example, Patent Document 1 proposes a method of detecting flower areas in a captured image by using an image processing technique and calculating the number of flowers. Further, by using the partial detector of Patent Document 2, an object can be detected even when it is partially hidden (for example, when a part of the crop, which is the object, is hidden by leaves or the like). This makes it possible to obtain the number of objects with higher accuracy even when objects are partially hidden.

However, there are cases where an object whose number is to be obtained cannot be detected. For example, when an object is completely hidden (for example, when the crop that is the object is completely hidden by leaves), the object cannot be detected using the techniques of Patent Documents 1 and 2.

JP 2017-77238 A (Patent Document 1); JP 2012-108785 A (Patent Document 2)

There is a demand to support the realization of a mechanism for estimating the total number of objects even when some or all of the objects whose number is to be obtained cannot be detected. However, Patent Documents 1 and 2 cannot support the realization of such a mechanism.

An information processing apparatus according to the present invention includes: feature acquisition means for acquiring, from an image of a region that is a part of a field where a crop is grown, a feature amount of the region related to the number of target objects detected from the image; number acquisition means for acquiring the actual number of the target objects present in a set area of the field; and learning means for learning an estimation parameter for estimating the actual number of the target objects present in a designated area of the field, using as learning data the feature amount acquired by the feature acquisition means from an image of the set area and the actual number acquired by the number acquisition means.

According to the present invention, it is possible to support the realization of a mechanism for estimating the total number of objects even when some or all of the objects whose number is to be obtained cannot be detected.
FIG. 1 is a diagram illustrating an example of the hardware configuration of the estimation device.
FIG. 2 is a diagram illustrating an example of the functional configuration of the estimation device.
FIG. 3 is a diagram illustrating an example of a table for managing learning data.
FIG. 4 is a diagram illustrating an example of a state in which a part of an object is hidden by leaves.
FIG. 5 is a diagram illustrating an example of a table for managing estimation data.
FIG. 6 is a flowchart illustrating an example of the learning process.
FIG. 7 is a flowchart illustrating an example of the estimation process.
FIG. 8 is a diagram illustrating an example of the functional configuration of the estimation device.
FIG. 9 is a diagram illustrating an example of the area of a section.
FIG. 10 is a diagram illustrating an example of the area of a section.
FIG. 11 is a diagram illustrating an example of a table for managing learning data.
FIG. 12 is a diagram illustrating an example of a table for managing estimation data.
FIG. 13 is a flowchart illustrating an example of the learning process.
FIG. 14 is a flowchart illustrating an example of the estimation process.
FIG. 15 is a diagram illustrating an example of the functional configuration of the estimation device.
FIG. 16 is a diagram illustrating an example of a table for managing correction information.
FIG. 17 is a flowchart illustrating an example of the estimation process.
FIG. 18 is a diagram illustrating an example of the system configuration of the information processing system.
FIG. 19 is a diagram illustrating an example of a display screen of an estimation result.
FIG. 20 is a diagram illustrating an example of a display screen of an estimation result.
FIG. 21 is a diagram illustrating an example of a display screen of an estimation result.
Embodiments of the present invention will now be described in detail with reference to the drawings.

<First Embodiment>
In the present embodiment, a process is described in which the estimation device 100 learns an estimation parameter, which is a parameter used for estimating the number of objects included in a designated area, and estimates the number of objects included in the designated area based on the learned estimation parameter.
FIG. 1 is a diagram illustrating an example of the hardware configuration of the estimation device 100 according to the present embodiment. The estimation device 100 is an information processing apparatus, such as a personal computer, a server apparatus, or a tablet device, that estimates the number of objects included in a designated area.

The estimation device 100 includes a CPU 101, a RAM 102, a ROM 103, a network I/F 104, a VRAM 105, an input controller 107, an HDD 109, and an input I/F 110. These components are communicably connected to each other via a system bus 111.

The CPU 101 is a central processing unit that controls the entire estimation device 100. The RAM 102 is a random access memory and functions as the main memory of the CPU 101 and as the work memory needed for loading and executing programs.

The ROM 103 is a read-only memory and stores, for example, various programs and various setting information. The ROM 103 includes a program ROM storing basic software (an OS), which is a system program for controlling the devices of the computer system, and a data ROM storing information necessary for operating the system. The HDD 109 may store the programs and information stored in the ROM 103 instead.

The network I/F 104 is a network interface and is used for input/output control of data, such as image data, transmitted and received via a network such as a local area network (LAN). The network I/F 104 is an interface suited to the network medium, whether wired or wireless.

The VRAM 105 is a video RAM into which data of images displayed on the screen of the display 106 is expanded. The display 106 is a display device such as a liquid crystal display or a liquid crystal panel. The input controller 107 is a controller used for controlling input signals from the input device 108. The input device 108 is an external input device for receiving operation instructions from a user, such as a touch panel, a keyboard, a pointing device, or a remote controller.

The HDD 109 is a hard disk drive and stores application programs and data such as moving image data and image data. The application program stored in the HDD 109 is, for example, a highlight moving image creation application. The input I/F 110 is an interface used for connection with external devices such as a CD (DVD)-ROM drive or a memory card drive, and is used, for example, for reading image data captured by a digital camera. The system bus 111 is an input/output bus, such as an address bus, a data bus, and a control bus, that communicably connects the hardware components of the estimation device to each other.
The CPU 101 executes processing based on programs stored in the ROM 103, the HDD 109, and the like, thereby realizing the functions of the estimation device 100 described later with reference to FIGS. 2, 8, and 15 and the processing of the flowcharts described later with reference to FIGS. 6, 7, 13, 14, and 17.
In the present embodiment, the objects whose number is to be estimated are agricultural products (for example, fruits, flowers, or bunches of grapes). Hereinafter, an object whose number is to be estimated is referred to as a target object. Also, in the present embodiment, an object that can hinder detection of a target object is referred to as an obstruction. In the present embodiment, the obstruction is assumed to be a leaf, but the obstruction may also be a tree or a stem. Further, the target object is not limited to agricultural products and may be a person or a car, in which case the obstruction may be, for example, a building.
In the present embodiment, the estimation device 100 detects target objects from an image of a region for which the number of target objects is to be estimated, and obtains a feature amount representing the features of that region, determined based on the number of detected target objects. The estimation device 100 then learns an estimation parameter, which is a parameter used for estimating the actual number of target objects included in the region, based on the obtained feature amount and the number of target objects actually included in the region. Hereinafter, the actual number of target objects included in a region is referred to as the actual number of target objects in that region.

The estimation device 100 also detects target objects from an image of a designated region for which the number of target objects is to be estimated, and obtains a feature amount representing the features of that region based on the number of detected target objects. The estimation device 100 then estimates the actual number of target objects included in the region based on the obtained feature amount and the learned estimation parameter.
FIG. 2 is a diagram illustrating an example of the functional configuration of the estimation device 100 of the present embodiment. The estimation device 100 includes a number acquisition unit 201, an image acquisition unit 202, a learning unit 203, a feature amount acquisition unit 204, a parameter management unit 205, an estimation unit 206, and a display control unit 207.

The number acquisition unit 201 acquires the actual number of target objects included in a preset region, obtained by, for example, counting them manually. The number acquisition unit 201 acquires the actual number by, for example, reading from the HDD 109 or the like a text file in which the actual number of target objects in the preset region is recorded. The number acquisition unit 201 may also accept input of the actual number via the input device 108.

The image acquisition unit 202 acquires, for example from an external imaging device, an image of a preset region including target objects, and stores the acquired image in the HDD 109 or the like. In the present embodiment, each preset region is assumed to be the entire region captured in the corresponding image.

The feature amount acquisition unit 204 detects target objects from the image acquired by the image acquisition unit 202 using an object detection technique, and acquires, based on the number of detected target objects, a feature amount indicating the features of the preset region in which the detected target objects exist. Hereinafter, the number of target objects detected from a region by the feature amount acquisition unit 204 is referred to as the detection count for that region. In the present embodiment, the feature amount acquisition unit 204 acquires the detection count of a region as the feature amount indicating the features of the region. The processing by which the feature amount acquisition unit 204 acquires the feature amount is an example of feature acquisition processing.

The learning unit 203 performs the following processing for each image received by the number acquisition unit 201. That is, the learning unit 203 acquires the actual number of target objects included in the preset region corresponding to the image, acquired by the number acquisition unit 201, and the feature amount of the preset region corresponding to the image, acquired by the feature amount acquisition unit 204. The learning unit 203 then learns, by machine learning, an estimation parameter used for estimating the actual number of target objects included in a designated region, based on the acquired number and feature amount. In the present embodiment, linear regression is used as the machine learning method, and the parameters used in the linear regression are learned as the estimation parameters. However, the learning unit 203 may learn parameters of another method, such as a support vector machine, as the estimation parameters.

The parameter management unit 205 stores the estimation parameters learned by the learning unit 203 in the HDD 109 or the like and manages them.

The estimation unit 206 estimates the actual number of target objects included in the region for which the number of target objects is to be estimated, based on the feature amount acquired by the feature amount acquisition unit 204 from an image of that region and the learned estimation parameters managed by the parameter management unit 205.
FIG. 3 is a diagram illustrating an example of a table that manages, as learning data, the actual number of target objects acquired by the number acquisition unit 201 and the number of target objects detected by the feature amount acquisition unit 204. The table 301 includes ID, image file, detection count, and actual number items. The ID item indicates identification information for identifying the learning data. The image file item indicates which image the corresponding learning data was generated from. The detection count item indicates the number of target objects detected from the image indicated by the corresponding image file item. The actual number item indicates the number of target objects actually included in the specific region captured in the image indicated by the corresponding image file item (for example, including target objects hidden by leaves and not visible in the image). The table 301 is stored in, for example, the HDD 109.

For example, the image (IMG_0001.jpg) indicated by the image file corresponding to the learning data with ID 1 will be described with reference to FIG. 4. In the image 701, which is IMG_0001.jpg, three target objects are visible and four are hidden by leaves and not visible. Therefore, the detection count and the actual number corresponding to the learning data with ID 1 in the table 301 are 3 and 7 (= 3 + 4), respectively.

The number acquisition unit 201 acquires in advance the actual number of target objects for one or more specific regions. The feature amount acquisition unit 204 also detects target objects in advance from each of a plurality of images in which any of the one or more specific regions is captured, and obtains the detection counts. The number acquisition unit 201 and the feature amount acquisition unit 204 then store the acquired actual numbers and detection counts as learning data in the HDD 109 or the like in the format of the table 301 shown in FIG. 3. In this way, the learning data used for learning is prepared in advance.
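As a concrete illustration, the rows of the table 301 could be kept in a CSV file along the following lines. The column names are assumptions for illustration; the values in the first row follow the example of FIG. 4 (three detected and seven actually present target objects for IMG_0001.jpg).

ID,image_file,detections,actual
1,IMG_0001.jpg,3,7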
Of the learning data in the present embodiment, the image file or its feature amount is called input data. The actual number corresponding to the input data is called correct (ground-truth) data. The learned estimation parameters are also called a trained model.
FIG. 5 is a diagram illustrating an example of a table that manages the number of target objects detected by the feature amount acquisition unit 204 from an image of a region whose actual number of target objects is to be estimated, and the value of the actual number of target objects in that region estimated by the estimation unit 206. The table 401 includes ID, image file, detection count, and estimated value items. The ID item indicates identification information for identifying the region whose actual number of target objects was estimated. The image file item indicates the image used for estimating the actual number. The detection count item indicates the number of target objects (the detection count) detected by the feature amount acquisition unit 204 from the image indicated by the corresponding image file item. The estimated value item indicates the number of target objects estimated by the estimation unit 206. The table 401 is stored in, for example, the HDD 109.
FIG. 6 is a flowchart illustrating an example of the estimation parameter learning process.

In S501, the number acquisition unit 201 acquires, for example from a text file stored in the HDD 109, the file name of an image file in which a preset region is captured and the actual number of target objects included in that region. The number acquisition unit 201 then registers the acquired file name and actual number in the table 301 stored in the HDD 109. It is assumed that the HDD 109 stores in advance a text file in which image file names and actual numbers are recorded in association with each other in a format such as CSV.

In the present embodiment, the number acquisition unit 201 acquires, from the text file stored in the HDD 109, for each of a plurality of preset regions, the file name of the image file in which the region is captured and the actual number of target objects included in the region. The number acquisition unit 201 then registers each pair of file name and actual number acquired for the plurality of regions in the table 301.

In S502, for each image file name registered in the table 301 in S501, the feature amount acquisition unit 204 detects target objects from the image indicated by the image file name and acquires the detection count as the feature amount of the region captured in that image.

In S503, the feature amount acquisition unit 204 registers the detection counts (feature amounts) acquired in S502 in the table 301 stored in the HDD 109.
In S504, the learning unit 203 learns the estimation parameters (in the present embodiment, the parameters of linear regression) using the pairs of detection count (feature amount) and actual number registered in the table 301. For example, the linear regression is expressed by the following Expression (1), and the learning unit 203 learns the parameters w0 and w1 in Expression (1) as the estimation parameters. For example, values such as w0 = 7.0 and w1 = 1.2 are learned.

Actual number (estimated value) = w0 + (w1 × detection count)   ... Expression (1)
In S505, the parameter management unit 205 starts managing the estimation parameters learned in S504 by, for example, storing them in the HDD 109.
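A minimal Python sketch of S501 to S505 might look as follows, assuming the learning data has already been gathered into a CSV file with the layout shown earlier. The file name and the use of ordinary least squares are assumptions for illustration.

import csv

def load_learning_data(path="table_301.csv"):
    # S501-S503: read the (detection count, actual number) pairs of table 301.
    with open(path, newline="") as f:
        rows = list(csv.DictReader(f))
    return [(int(r["detections"]), int(r["actual"])) for r in rows]

def fit_expression_1(pairs):
    # S504: fit Expression (1), actual = w0 + w1 * detections, by least squares.
    n = len(pairs)
    mean_x = sum(x for x, _ in pairs) / n
    mean_y = sum(y for _, y in pairs) / n
    sxx = sum((x - mean_x) ** 2 for x, _ in pairs)
    sxy = sum((x - mean_x) * (y - mean_y) for x, y in pairs)
    w1 = sxy / sxx
    w0 = mean_y - w1 * mean_x
    return w0, w1

# S505: the learned (w0, w1) would then be stored, for example in a parameter file
# managed by the parameter management unit 205.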
FIG. 7 is a flowchart illustrating an example of the estimation process using the estimation parameters learned in the process of FIG. 6.

In S601, the estimation unit 206 requests the estimation parameters learned in the process of FIG. 6 from the parameter management unit 205. In response to this request, the parameter management unit 205 acquires from the HDD 109 the estimation parameters learned in S504 and stored in S505, and transmits the acquired estimation parameters to the estimation unit 206.

In S602, the feature amount acquisition unit 204 detects target objects from an image of the region designated as the target for estimating the number of target objects, and acquires the detection count. In the present embodiment, supplying an image of at least a part of the field as a processing target in S602 corresponds to designating the region captured in the image as the target of the process of estimating the number of target objects. If a plurality of images are designated, the same processing is performed for all of them. The feature amount acquisition unit 204 then registers the acquired detection count in the table 401 stored in the HDD 109, for example in association with the image file name of the image.

In S603, the estimation unit 206 estimates the number of target objects included in the target region based on the estimation parameters acquired in S601 and the detection count acquired in S602. For example, the estimation unit 206 obtains an estimated value of the number of target objects included in the region by using Expression (1) with the estimation parameters w0 and w1 acquired in S601 and the detection count acquired in S602. The estimation unit 206 outputs the obtained estimated value by registering it in the table 401. The estimation unit 206 may also output the obtained estimated value by displaying it on the display 106.
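Continuing the sketch above, S603 itself reduces to applying Expression (1) with the learned parameters; the object detector that produces the detection count in S602 is out of scope here, so the count is assumed to be given.

def estimate_actual_number(detection_count, w0, w1):
    # S603: apply Expression (1) with the learned estimation parameters.
    return w0 + w1 * detection_count

# With the illustrative values mentioned for S504 (w0 = 7.0, w1 = 1.2), an image
# with 10 detected target objects gives an estimate of 7.0 + 1.2 * 10 = 19.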
The estimated values registered in the table 401 may be used, for example, for predicting the yield of the crop, which is the target object, or for visualizing on a map which areas are predicted to have high or low yields.
As described above, in the present embodiment, the estimation device 100 detects target objects from an image of a preset region and acquires a feature amount indicating the features of the region based on the number of detected target objects (the detection count). The estimation device 100 then learns the estimation parameters based on the acquired feature amount and the actual number of target objects included in the region. By using the learned estimation parameters, the actual number of target objects included in a region can be estimated from the feature amount corresponding to the detection count of target objects in that region. That is, by learning the estimation parameters, the estimation device 100 can support the realization of a mechanism for estimating the total number of objects even when some or all of the target objects cannot be detected.

Further, in the present embodiment, the estimation device 100 estimates the actual number of target objects included in a target region based on the learned estimation parameters and the feature amount of the region acquired from the number of target objects detected in an image of that region. In this way, the estimation device 100 can estimate the actual number of target objects included in a region from the feature amount of the region based on the number of target objects detected from it, and can thereby support the realization of a mechanism for estimating the total number of objects even when some or all of the target objects cannot be detected.

In the present embodiment, the estimation device 100 also generates in advance the learning data used for learning the estimation parameters and stores it in the HDD 109 as the table 301. By preparing the learning data used for learning the estimation parameters in this way, the estimation device 100 can support the realization of a mechanism for estimating the total number of objects even when some or all of the target objects cannot be detected.

In the present embodiment, the detection count is the number of target objects detected from an image. However, the detection count may instead be, for example, the number of target objects detected visually by a person. In that case, the estimation device 100 accepts designation of the detection count based on, for example, a user operation via the input device 108.

Alternatively, suppose that the target object is a person and that a human presence sensor is installed in a part of a preset region. In that case, the estimation device 100 may use the number of people detected via the sensor as the detection count and the number of people actually present in the region as the actual number. For example, the estimation device 100 may learn, based on pairs of the detection count and the actual number at each of a plurality of points in time in the region, an estimation parameter used for estimating the actual number of people included in the region from the feature amount of the region acquired from the detection count. The estimation device 100 may then estimate the actual number of people included in the region at a designated time using the learned estimation parameters. The estimation device 100 may also obtain the pairs of detection count and actual number at each of a plurality of points in time in advance to generate the learning data used for learning the estimation parameters.
<Example of Use>
An example of use of a system that presents to the user the number of target objects obtained by the processing of the estimation device 100 of the present embodiment, and the crop yield that can be predicted from it, will now be described. This system includes the estimation device 100. The user of this system can make use of the number of target objects estimated by the estimation device 100 in subsequent work and in planning the production of processed goods. For example, the processing of the present embodiment can be suitably applied to cultivating grapes for wine production. In the following, the production management of grapes for wine production is described as an example.
When cultivating grapes for wine production, the yield must be predicted accurately even during the cultivation stage in order to manage the types and amounts of wine that can be produced in a given year (also called a vintage). Therefore, in grape fields, grape buds, flowers, and bunches are counted at predetermined timings in a plurality of growth stages, so that the grape yield finally obtained that year is predicted repeatedly. When the yield is smaller or larger than the predicted value, the work performed in the field is changed, or the plan for the types and amounts of wine to be produced or the sales plan is adjusted.

To count the actual number of target objects, a person (worker) counts the target objects visually while avoiding obstructions such as grape leaves. However, when the field is vast, the human load of performing such counting work, in which timing is important, on every tree in the field becomes excessive.

Therefore, sampling surveys at a plurality of locations or a plurality of trees in the field have conventionally been performed. However, because geographical and climatic conditions within a field are not uniform, trees cared for in the same way may vary in their growth state depending on the location or the year. To perform a more accurate sampling survey in consideration of such variation, it is necessary to determine, for each survey, sampling positions that reflect the differences in geographical and climatic conditions within the field.

On the other hand, even when the field is vast, photographing the vines in the whole field or in a partial region under fixed conditions imposes a lower workload than counting the target objects while turning over the leaves of every tree. Even when the actual number of target objects can be counted only in a part of the field, if images covering a wider range can be obtained, the processing of the present embodiment makes it possible to estimate the number of target objects with higher accuracy. As described above, in the present embodiment, a trained model that has learned, for at least a part of the field, pairs of the result of counting the actual number of target objects and an image of that part is used to estimate the actual number of target objects from images. In this case, the trained model is learned so as to reflect the tendency that when the number of target objects detected in an image is small, the estimated actual number is also small. Therefore, even if, for example, geographical conditions cause some trees to be in a worse growth state than the trees subjected to the sampling survey, the number of target objects estimated from images of those trees will be smaller than for the surveyed trees. In this way, the processing of the present embodiment enables more accurate estimation regardless of the positions where the sampling survey was performed.
FIGS. 19 to 21 are diagrams each illustrating an example of a display screen showing an estimation result output by the estimation device 100 when the system according to this example of use is introduced at a production site of grapes for wine production. In the present embodiment, the display control unit 207 generates the display screens of FIGS. 19 to 21 based on the estimation results of S603 and displays them on the display 106. Alternatively, the display control unit 207 may generate the display screens of FIGS. 19 to 21 based on the estimation results of S603, transmit the generated screens to an external device, and control the display unit (a display or the like) of the destination device to display them.

The screen 1900 shows, for each of the seven blocks included in the field, an identifier (ID), an area, and an estimated value of the grape yield for the corresponding block. The display control unit 207 estimates the weight (in tons) of grapes to be harvested based on the sum of the results of the processing of S603 (the estimated number of grape bunches to be harvested) over all images of the corresponding block. The display control unit 207 then includes the obtained weight in the screen 1900 as the estimated grape yield. Expressing the grape yield by weight rather than by the number of bunches makes it easier to use for estimating wine production. In the example of FIG. 19, a predicted value of 19.5 t of grapes to be harvested is shown for the block B-01 represented by the area 1901.
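A minimal sketch of this conversion from bunch counts to weight is given below. The average weight of one bunch is a hypothetical value introduced only for illustration; the description does not specify how the weight is derived from the bunch count.

AVG_BUNCH_WEIGHT_KG = 0.5  # assumed average weight of a single grape bunch

def block_yield_tons(estimated_bunches_per_image, avg_bunch_weight_kg=AVG_BUNCH_WEIGHT_KG):
    # Sum the per-image estimates from S603 for the block and convert bunches to tons.
    total_bunches = sum(estimated_bunches_per_image)
    return total_bunches * avg_bunch_weight_kg / 1000.0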
Further, in the present embodiment, when the display control unit 207 detects a pointing operation (for example, a selection operation such as a click or a tap) on the area 1901 via the input device 108, it switches the screen displayed on the display 106 to the screen 2000 shown in FIG. 20.

The screen 2000, which is displayed when the area 1901 (corresponding to block B-01) is selected on the screen 1900, will now be described with reference to FIG. 20. The screen 2000 presents, for the block B-01, the information on which the yield prediction is based.
The screen 2000 includes an area 2001 at the position corresponding to the area 1901 on the screen 1900. Each of the 66 squares in the area 2001 is a marker indicating one unit of the counting survey. In the example of FIG. 20, the target object is a grape bunch. The pattern of each marker corresponds to the average number of bunches detected in that unit. That is, the area 2001 shows the geographic distribution of the detection counts of target objects present in the block B-01.

The area 2002, surrounded by a broken line, shows detailed information on the selected block B-01. For example, the area 2002 shows information indicating that the average number of detected bunches over all units in the block B-01 is 8.4, and that the average estimated number of bunches over all units in the block B-01 is 17.1.

Here, the detection count of target objects detected by the feature amount acquisition unit 204 does not include the target objects whose detection is hindered by obstructions, whereas the actual number acquired by the number acquisition unit 201 does include them. That is, in the present embodiment, the actual number and the detection count that form a pair of learning data may differ. As a result, the number of target objects estimated by the estimation unit 206 may be larger than the detection count, as shown in the area 2002 of FIG. 20, by the number of target objects whose detection is hindered by obstructions.

The area 2003 shows, for the pairs of detected bunch count and estimated bunch count divided into a plurality of stages, the total number of markers belonging to each stage. The information shown in the area 2003 may be expressed in a histogram format, as in the example of FIG. 20, or in various graph formats.

In the example of FIG. 20, the area 2003 shows a histogram in which the pattern of the bins differs for each stage. The bin patterns of this histogram correspond to the marker patterns in the area 2001.

Further, in the example of FIG. 20, the lower part of the area 2002 shows the stages of the detected bunch count and the estimated bunch count corresponding to the patterns used in the areas 2003 and 2001. In the example of FIG. 20, the display control unit 207 assigns different patterns to the bins and the markers, but they may instead be distinguished by different colors, for example.

In this way, the area 2001 expresses the distribution of the number of detected bunches as a pseudo heat map. The heat-map display allows the user to intuitively understand the magnitude of the detection counts and their distribution.
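A minimal sketch of assigning each unit to a stage for this pseudo heat map is given below. The stage boundaries are hypothetical; the description only states that the counts are divided into a plurality of stages, each associated with a marker pattern.

STAGE_LOWER_BOUNDS = [0, 4, 8, 12, 16]  # assumed lower bounds of the stages

def stage_of(avg_detected_bunches, bounds=STAGE_LOWER_BOUNDS):
    # Return the index of the stage (and thus the marker/bin pattern) for a unit.
    stage = 0
    for i, lower in enumerate(bounds):
        if avg_detected_bunches >= lower:
            stage = i
    return stage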
In the example of FIG. 20, the area 2002 also shows the estimated bunch count in association with the detected bunch count. Users who actually see the field may have seen the vines before the leaves grew and may therefore have an intuitive sense of how many bunches are hidden by the leaves. For such users, the number of bunches detected from the images may feel smaller than the number their own experience suggests.

Therefore, in this example of use, the display control unit 207 displays, as the basis of the predicted yield, not only the number of actually detected bunches but also the actual number estimated by the trained model, in association with the detection count. For example, the user first looks at the screen 1900 to learn the predicted yield. Then, when making subsequent plans for each block and wishing to confirm the basis of the predicted value, the user clicks the target block. On the screen 2000 corresponding to the clicked block, the user can check both the number of bunches detected from the images (bunches that certainly exist) and the estimated number of bunches, which includes bunches not detected from the images. For example, if a user who knows the actual state of the field has the impression on the screen 1900 that the predicted yield is too small, checking the screen 2000 allows the user to quickly determine whether the cause is a small detection count or a small estimated count (the estimation process).
The virtual button 2004 in the area 2002 is a button used to explicitly indicate which of the markers shown in the area 2001 correspond to positions where the sampling survey was actually performed. When the display control unit 207 detects a pointing operation on the virtual button 2004, it switches the screen displayed on the display 106 to the screen 2100 shown in FIG. 21.

In the example of FIG. 21, the area 2001 of the screen 2100 continues to display the 66 markers included in the block B-01 from the screen 2000. The display control unit 207 highlights, with thick lines like the marker 2101, the ten markers among the 66 that correspond to the positions where the sampling survey was actually performed.

At the same time, the display control unit 207 also changes the display state of the virtual button 2004 from its state on the screen 2000, for example by changing its color. The display state of the virtual button 2004 is thereby associated with whether the virtual button 2004 is selected, so the user can easily determine whether the virtual button 2004 is in the selected state by looking at it.

The function of the virtual button 2004 is particularly effective when learning data from the sampling survey of that year is used as the basis of the estimation process. For example, when a trained model learned only from data obtained in previous years is used, there is little need to confirm the sampling positions, so the virtual button 2004 may be omitted. For example, after a year in which it is determined that sufficient learning data has been obtained from past sampling surveys, the yearly sampling survey may be omitted and only the counting survey based on the estimation processing of the present embodiment may be performed.
<Second Embodiment>
In the present embodiment, a case is described in which the estimation device 100 identifies the area of a preset specific section and estimates the actual number of target objects included in the identified area.

The hardware configuration of the estimation device 100 of the present embodiment is the same as in the first embodiment.

The following description focuses on the differences from the first embodiment.
 FIG. 8 is a diagram illustrating an example of the functional configuration of the estimation device 100 of the present embodiment. The estimation device 100 of the present embodiment differs from Embodiment 1 shown in FIG. 2 in that it includes a section identification unit 801 that identifies the area of a preset section. In the present embodiment, the image acquired by the image acquisition unit 202 is an image showing the area of a preset section.
 The section identification unit 801 detects, from an image, objects indicating the area of a preset section (for example, a section set in the field), and identifies the area of the section in the image based on the positions of the detected objects. The image acquisition unit 202 cuts out the area of the section identified by the section identification unit 801 from the input image and stores it in the HDD 109 or the like.
 The processing of the section identification unit 801 will be described with reference to FIG. 9. An image 901 in FIG. 9 is an image in which the area of a section is captured, and an image 902 is an image obtained by cutting out the area of the section. The section identification unit 801 detects the objects indicating the area of the section from the image 901 and identifies the region enclosed by the detected objects in the image 901 as the area of the section. The image acquisition unit 202 then cuts out the region identified by the section identification unit 801 from the image 901, obtains the image 902, and stores the obtained image 902 in the HDD 109.
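 The following is a minimal sketch, under stated assumptions, of the cropping step described above; it is not the implementation of the section identification unit 801 itself. The marker bounding boxes are assumed to come from some object detector that is not specified in this document.

```python
import numpy as np

def crop_section(image: np.ndarray, marker_boxes: list[tuple[int, int, int, int]]) -> np.ndarray:
    """Cut out the region enclosed by detected section markers.

    marker_boxes: (x, y, w, h) boxes returned by some object detector for the
    posts or signs that delimit the section (an assumption for illustration).
    """
    xs = [x for x, _, w, _ in marker_boxes] + [x + w for x, _, w, _ in marker_boxes]
    ys = [y for _, y, _, h in marker_boxes] + [y + h for _, y, _, h in marker_boxes]
    # Keep only the area bounded by the outermost markers (image 902 in FIG. 9).
    return image[min(ys):max(ys), min(xs):max(xs)]

# Usage: crop_section(image_901, detector(image_901)) would yield image 902.
```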
 When a section is wide, the section may not fit in a single image, and one section may be captured over a plurality of images. In that case, the section identification unit 801 arranges the plurality of images in which the section is captured, detects the objects indicating both ends of the section from the plurality of images, and identifies the area of the section in each image. The image acquisition unit 202 combines the areas of the section in each of the plurality of images identified by the section identification unit 801 and stores the combined image in the HDD 109 or the like.
 The processing of the section identification unit 801 in this case will be described with reference to FIG. 10. A plurality of images 1001 in FIG. 10 are images in which the area of a section is captured, and an image 1002 is an image showing the combined area of the section. The section identification unit 801 detects the objects indicating the area of the section from the plurality of images 1001 and identifies the region sandwiched between the detected objects across the plurality of images 1001 as the area of the section. The image acquisition unit 202 then combines the regions identified by the section identification unit 801 from the images 1001, obtains the image 1002, and stores the obtained image 1002 in the HDD 109.
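 A minimal sketch of this compositing step, assuming the per-image section crops run left to right and can simply be joined side by side (the source does not specify how the combination is performed):

```python
import numpy as np

def composite_section(crops: list[np.ndarray]) -> np.ndarray:
    """Join the per-image section regions (left to right) into one image,
    corresponding to image 1002 in FIG. 10. Assumes roughly equal heights;
    a real system would register or resize the crops before joining them."""
    height = min(c.shape[0] for c in crops)
    aligned = [c[:height] for c in crops]   # crude height alignment
    return np.concatenate(aligned, axis=1)
```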
 Alternatively, the section identification unit 801 may combine the plurality of images in which the section is captured into a single composite image, detect the objects indicating the area of the section from the composite image, and identify the area of the section in the composite image based on the detected objects.
 In the present embodiment, the feature amount acquisition unit 204 detects target objects from the image, acquired by the image acquisition unit 202, of the area of the preset section identified by the section identification unit 801. The feature amount acquisition unit 204 then acquires the feature amount of the area of the section based on the number of detected target objects (detection count).
 Therefore, the feature amounts included in the learning data used for learning the estimation parameter and the feature amounts used for the estimation processing are each the feature amount of one of the preset sections. That is, in the present embodiment, the estimation parameter is learned for each section defined in the field.
 The image acquisition unit 202 may also acquire an image similar to that of Embodiment 1 without acquiring an image of the area of the section identified by the section identification unit 801. In that case, the feature amount acquisition unit 204 detects target objects from the area of the preset section identified by the section identification unit 801 within the image acquired by the image acquisition unit 202, and acquires the feature amount of the area of the section based on the number of detected target objects (detection count).
 As described above, in the present embodiment, the estimation device 100 can identify the area of a preset section and estimate the actual number of target objects included in the identified area of the section.
 When it is desired to estimate the actual number of target objects included in the area of a certain section, it is desirable to reduce the influence of target objects that may be detected from areas other than that section. By the processing of the present embodiment, the estimation device 100 can reduce the influence of target objects that may be detected from areas other than the area for which the actual number of target objects is to be estimated.
 Although the section identification unit 801 identifies the area of a section by detecting objects indicating the preset section, it may instead identify the area of the section by measuring distances using position information such as GPS data or an image measurement technique. Furthermore, the estimation device 100 may display a virtual frame on the camera finder that generates the input image so that the user can confirm the area of the section identified at the time of shooting, or may generate a composite image on which the frame is superimposed. The estimation device 100 may also store the frame information in the HDD 109 or the like as metadata of the image.
 <Embodiment 3>
 In the present embodiment, a case will be described in which the estimation device 100 acquires a feature amount representing the features of an area based not only on the number of target objects detected from the area but also on other attributes of the area.
 The hardware configuration and functional configuration of the estimation device 100 of the present embodiment are the same as those of Embodiment 1.
 In the present embodiment, the estimation device 100 uses, as the feature amount of an area, the combination of the number of target objects detected from the area and other attributes of the area. Using this feature amount, the estimation device 100 performs the learning processing of the estimation parameter and the processing of estimating the number of target objects using the estimation parameter.
 A table 1101 in FIG. 11 is a table used for registering learning data, and is stored in, for example, the HDD 109. The table 1101 is the table 301 shown in FIG. 3 with information used for learning the estimation parameter added.
 The table 1101 includes items of ID, image file, detection count, adjacent detection count, soil, leaf amount, and actual number. The adjacent detection count item is the average of the numbers of target objects detected from each of one or more areas adjacent to the area indicated by the corresponding ID. In the present embodiment, the target object is a crop. In the present embodiment, the estimation device 100 includes in the feature amount of an area the features of its surrounding areas (for example, a statistic such as an average, sum, or variance determined based on the numbers of target objects detected from the surrounding areas). As a result, the estimation device 100 can learn an estimation parameter capable of estimating the actual number of target objects while taking the surrounding areas into account, and can use that estimation parameter to estimate the actual number of target objects while taking the surrounding areas into account.
 The soil item is an index value indicating the quality of the soil (how readily crops bear fruit) in the area indicated by the corresponding ID. The larger the index value, the better the soil. In the present embodiment, the estimation device 100 includes in the feature amount an index value indicating the quality of the soil in which the crop, which is the target object, is planted. As a result, the estimation device 100 can learn an estimation parameter capable of estimating the actual number of target objects while taking the characteristics of the soil into account, and can use that estimation parameter to estimate the actual number of target objects while taking the characteristics of the soil into account.
 The leaf amount item is an index value indicating the amount of leaves detected from the area. The larger the index value, the larger the amount of leaves. In the present embodiment, the obstruction that may hinder detection of the target objects is assumed to be leaves. In the present embodiment, the estimation device 100 includes the amount of detected obstruction in the feature amount. As a result, the estimation device 100 can learn an estimation parameter capable of estimating the actual number of target objects while taking the amount of obstruction into account, and can use that estimation parameter to estimate the actual number of target objects while taking the amount of obstruction into account.
 In the present embodiment, the feature amount of an area is the combination of the detection count, the adjacent detection count, the leaf amount, and the index value indicating soil quality. However, the feature amount of an area may be the combination of the detection count and only some of the adjacent detection count, the leaf amount, and the soil quality index value. The feature amount of an area may also include attributes of the area other than the detection count, the adjacent detection count, the leaf amount, and the soil quality index value.
 A table 1201 in FIG. 12 is a table for managing the feature amount of an area for which the actual number of target objects is to be estimated and the value of the actual number of target objects in that area estimated by the estimation unit 206. The table 1201 is the table 401 in FIG. 5 with information used for estimating the actual number of target objects added. The table 1201 includes items of ID, image file, detection count, adjacent detection count, soil, leaf amount, and estimated value. The adjacent detection count, soil, and leaf amount items are the same as in the table 1101.
 FIG. 13 is a flowchart showing an example of the learning processing of the estimation parameter. In the processing of FIG. 13, unlike the processing of FIG. 6, the table 1101 is used instead of the table 301.
 The processing of S1301 to S1304 acquires a feature amount with the feature amount acquisition unit 204 for each image file registered in the table 1101 and registers the acquired result.
 In S1301, the feature amount acquisition unit 204 detects target objects and leaves from an image registered in the table 1101. The feature amount acquisition unit 204 may detect the leaves using an object detection technique, or may detect them simply by detecting pixels having the color of leaves.
 In S1302, the feature amount acquisition unit 204 registers in the table 1101 the detection count of the target objects and the leaf amount detected in S1301. The feature amount acquisition unit 204 acquires the leaf amount, an index value indicating the amount of leaves, based on the ratio of the number of pixels in the detected leaf regions to the number of pixels in the entire image.
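 As a rough illustration of the leaf amount described for S1302, the following sketch computes the ratio of leaf-colored pixels to all pixels using simple color thresholding (one of the detection options mentioned for S1301). The HSV green range is an assumption for illustration, not a value given in this document.

```python
import numpy as np
import cv2  # OpenCV, assumed available

def leaf_amount(image_bgr: np.ndarray) -> float:
    """Index of leaf quantity: fraction of pixels whose color falls in an
    assumed 'leaf green' HSV range."""
    hsv = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2HSV)
    leaf_mask = cv2.inRange(hsv, (35, 40, 40), (85, 255, 255))  # assumed range
    return float(np.count_nonzero(leaf_mask)) / leaf_mask.size
```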
 In S1303, the feature amount acquisition unit 204 acquires position information such as GPS data from the metadata of the images registered in the table 1101. Based on the acquired position information, the feature amount acquisition unit 204 acquires the adjacent detection count and the soil quality information. The feature amount acquisition unit 204 acquires from the table 1101 the position information of the images corresponding to the IDs before and after the target ID and determines whether they are images of adjacent areas. The feature amount acquisition unit 204 then acquires the detection counts of the images determined to be of adjacent areas and takes their average as the adjacent detection count. For example, the feature amount acquisition unit 204 averages the detection count 3 of ID1 and the detection count 4 of ID3 to obtain 3.5 as the adjacent detection count of ID2.
 In the present embodiment, it is assumed that images captured at successive positions are registered in the table 1101 with consecutive IDs, so it is presupposed that images with close IDs were captured at close positions. However, when the end of the area to be captured is photographed, capturing is interrupted and the preceding and following IDs are not necessarily adjacent; the feature amount acquisition unit 204 therefore uses the position information to determine whether the areas are actually adjacent.
 To handle cases in which close IDs do not imply close positions, for example because a plurality of locations are photographed in parallel, the estimation device 100 may operate as follows. That is, with the table 1101 containing the position information of the area of each ID, the feature amount acquisition unit 204 may identify, from the table 1101 and based on this position information, the detection count data of the areas surrounding a given area.
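 A minimal sketch of the adjacency handling in S1303, assuming each record carries a (latitude, longitude) position from the image metadata and that "adjacent" means within some distance threshold; the threshold and the flat-earth distance approximation are illustrative assumptions, not part of the described processing.

```python
import math
from dataclasses import dataclass

@dataclass
class Record:
    record_id: int
    position: tuple[float, float]  # (latitude, longitude) from image metadata
    detection_count: int

def adjacent_detection_count(target: Record, records: list[Record],
                             max_dist_m: float = 15.0) -> float:
    """Average detection count over records whose position lies within
    max_dist_m of the target (e.g. (3 + 4) / 2 = 3.5 for ID2 in table 1101)."""
    def dist_m(a, b):
        # Crude equirectangular approximation, adequate for nearby points.
        lat = math.radians((a[0] + b[0]) / 2)
        dy = (a[0] - b[0]) * 111_320.0
        dx = (a[1] - b[1]) * 111_320.0 * math.cos(lat)
        return math.hypot(dx, dy)
    neighbours = [r.detection_count for r in records
                  if r.record_id != target.record_id
                  and dist_m(r.position, target.position) <= max_dist_m]
    return sum(neighbours) / len(neighbours) if neighbours else 0.0
```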
 It is assumed that information indicating the relationship between position and soil quality, obtained empirically from past growth states, is managed in a database or the like stored in advance in the HDD 109. The feature amount acquisition unit 204 acquires, for example, the index value indicating the soil quality corresponding to the shooting position from that database.
 In S1304, the feature amount acquisition unit 204 registers in the table 1101 the adjacent detection count acquired in S1303 and the index value indicating soil quality.
 In S504 of FIG. 13, the learning unit 203 learns the estimation parameter using the detection count, the adjacent detection count, the soil quality index value, and the leaf amount in the table 1101. In the present embodiment, the estimation parameter is assumed to consist of the parameters of a linear regression. For example, the linear regression is expressed by the following equation (2).
 Actual number (estimated value) = w0 + (w1 × detection count) + (w2 × adjacent detection count) + (w3 × soil quality index value) + (w4 × leaf amount)  ... Equation (2)
 In this case, the learning unit 203 learns w0, w1, w2, w3, and w4 as the estimation parameters. For example, values such as w0 = 7.0, w1 = 0.7, w2 = 0.5, w3 = 1.6, and w4 = 1.2 are learned.
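 The following sketch fits and applies a linear regression of the form of equation (2); the use of scikit-learn and the toy numbers are illustrative assumptions and not part of the described apparatus.

```python
import numpy as np
from sklearn.linear_model import LinearRegression

# Columns: detection count, adjacent detection count, soil index, leaf amount
# (the feature amounts of table 1101); y holds the actual counts from the
# sampling survey. The numbers below are made up for illustration.
X_train = np.array([[3, 3.5, 1.0, 0.20],
                    [4, 3.0, 1.2, 0.15],
                    [6, 5.5, 1.5, 0.30],
                    [2, 2.5, 0.8, 0.40]])
y_train = np.array([11, 12, 18, 10])

model = LinearRegression()
model.fit(X_train, y_train)   # learns w1..w4 (coef_) and w0 (intercept_)

# Estimation for a designated area, as in S603: equation (2) applied to its
# feature amount.
x_new = np.array([[5, 4.0, 1.3, 0.25]])
estimated_actual_count = model.predict(x_new)[0]
print(model.intercept_, model.coef_, estimated_actual_count)
```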
 In S505 of FIG. 13, the parameter management unit 205 stores the estimation parameter learned in S504 in the HDD 109 or the like and manages it.
 FIG. 14 is a flowchart showing an example of the estimation processing for estimating the actual number of target objects using the estimation parameter learned in the processing of FIG. 13.
 In S601 of FIG. 14, the estimation unit 206 acquires the estimation parameter learned in S504 of FIG. 13.
 In S1401, as in S1301, the feature amount acquisition unit 204 detects target objects and leaves from the image in which the area whose actual number of target objects is to be estimated is captured.
 In S1402, as in S1302, the feature amount acquisition unit 204 registers in the table 1201 the detection count of the target objects and the leaf amount detected in S1401. The feature amount acquisition unit 204 acquires the leaf amount, an index value indicating the amount of leaves, based on the ratio of the number of pixels in the detected leaf regions to the number of pixels in the entire image.
 In S1403, as in S1303, the feature amount acquisition unit 204 acquires position information such as GPS data from the metadata of the image in which the area whose actual number of target objects is to be estimated is captured. Based on the acquired position information, the feature amount acquisition unit 204 acquires the adjacent detection count and the soil quality information, and registers the acquired adjacent detection count and the soil quality index value in the table 1201.
 In S603 of FIG. 14, the estimation unit 206 performs the following processing using equation (2) based on the estimation parameter acquired in S601 and the feature amount of the area registered in the table 1201. That is, the estimation unit 206 estimates the number of target objects actually included in the area whose actual number of target objects is to be estimated.
 In the example of FIG. 12, the number of detected target objects (detection count) of ID836 is smaller than those of ID835 and ID837. However, the estimated value of ID836 is about the same as those of ID835 and ID837. In the area of ID836, more target objects may simply happen to be hidden by leaves than in ID835 and ID837, causing the detection count to be smaller than those of ID835 and ID837.
 Even within the same field, crops bear fruit more readily in some places than in others, and it can be assumed that the actual numbers of target objects at nearby positions are correlated. By using the adjacent detection count as a feature amount, the estimation device 100 can supplement the detection count information, so that the estimated value does not become too small even when the detection count is small. In addition, the larger the leaf amount, the more likely target objects are hidden; by using the leaf amount as a feature amount, the estimation device 100 can prevent the estimated value from becoming too small even when many target objects are hidden by a large amount of leaves.
 As described above, in the present embodiment, the estimation device 100 uses, as feature amounts, not only the detection count but also other attributes of the area, such as the positional bias in how readily crops bear fruit (soil quality) and the leaf amount. This allows the estimation device 100 to estimate the actual number of target objects more accurately than in Embodiment 1.
 Besides the feature amounts used in the present embodiment, the estimation device 100 may use, as a feature amount of an area, information on the size of the detected target objects, assuming that target objects grow larger in locations where crops bear fruit readily. The estimation device 100 may also use, as feature amounts, the crop variety, the fertilizer application status, the presence or absence of disease, and other factors that affect how readily crops bear fruit.
 <Embodiment 4>
 How readily crops bear fruit varies with the weather. In the present embodiment, processing will be described in which the actual number of the crops, which are the target objects, is estimated and then corrected in consideration of weather conditions.
 In the present embodiment, the hardware configuration of the estimation device 100 is the same as that of Embodiment 1.
 Hereinafter, differences between the present embodiment and Embodiment 1 will be described.
 FIG. 15 is a diagram illustrating an example of the functional configuration of the estimation device 100 of the present embodiment. The functional configuration of the estimation device 100 of the present embodiment differs from that of FIG. 2 in that it includes a correction information acquisition unit 1501 that acquires correction information. The correction information acquisition unit 1501 acquires the learned estimation parameter used by the estimation unit 206 to estimate the actual number of target objects, and correction information used to correct the estimated value (for example, a coefficient by which the estimated value is multiplied). In the present embodiment, the estimation unit 206 estimates the actual number of target objects using the estimation parameter and the correction information.
 FIG. 16 is a diagram showing an example of tables that manage learned parameters and coefficients prepared in advance. A table 1601 includes a year, an average detection count, and a parameter. The year item is the year corresponding to the learning data registered in the table 301. The average detection count item is the average detection count in the corresponding year. The parameter is the estimation parameter learned by the learning unit 203 using the learning data of the corresponding year. The table 1601 manages a plurality of estimation parameters learned by the learning unit 203, each associated with an index value of a preset index, namely the average detection count. The table 1601 is stored in, for example, the HDD 109.
 A table 1602 includes items of the proportion of sunny days and a coefficient. The proportion-of-sunny-days item indicates the proportion of sunny days in the period during which the crops, which are the target objects actually included in an area, grew. The coefficient item indicates a value for correcting the estimated value, and takes a larger value as the corresponding proportion of sunny days is higher.
 FIG. 17 is a flowchart showing an example of the estimation processing using the estimation parameter.
 In S1701, the correction information acquisition unit 1501 averages the detection counts in the table 401 to acquire the average detection count of the year corresponding to the target of the estimation of the actual number of target objects. From the estimation parameters registered in the table 1601, the correction information acquisition unit 1501 acquires the estimation parameter whose associated average detection count is closest to the acquired average detection count, and selects it as the estimation parameter used for the estimation processing.
 Through the processing of S1701, the estimation device 100 can acquire an estimation parameter learned under conditions close to those of the area targeted by the estimation processing, without learning an estimation parameter anew. The estimation device 100 can thereby reduce the processing load of learning while preventing a decrease in the accuracy of the estimation processing.
 In S1702, the correction information acquisition unit 1501 acquires, based on weather information obtained for example from an external weather service, the proportion of sunny days in the growing period of the crops, which are the target objects, in the year corresponding to the area targeted by the estimation processing. The correction information acquisition unit 1501 acquires from the table 1602 the coefficient corresponding to the acquired proportion of sunny days.
 In S1703, the estimation unit 206 obtains an estimated value of the actual number of target objects by equation (1), for example, using the estimation parameter selected in S1701. The estimation unit 206 then corrects the estimated value by multiplying it by the coefficient acquired in S1702, and takes the corrected value as the final estimated value.
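 A compact sketch of the S1701 to S1703 flow follows; the parameter table, the sunny-day coefficient table, and the simple single-feature form standing in for equation (1) are all invented for illustration and are not taken from this document.

```python
def select_parameters(param_table, current_avg_detections):
    """S1701: pick the learned parameters whose associated average detection
    count is closest to the current year's average."""
    return min(param_table,
               key=lambda row: abs(row["avg_detections"] - current_avg_detections))

def sunny_day_coefficient(coeff_table, sunny_ratio):
    """S1702: look up the correction coefficient for the observed ratio of
    sunny days (take the row with the largest threshold not exceeding it)."""
    eligible = [row for row in coeff_table if row["min_sunny_ratio"] <= sunny_ratio]
    return max(eligible, key=lambda row: row["min_sunny_ratio"])["coefficient"]

def corrected_estimate(detections, params, coefficient):
    """S1703: apply a simple linear stand-in for equation (1), then multiply
    by the weather coefficient."""
    return (params["w0"] + params["w1"] * detections) * coefficient

# Illustrative tables and values (not from the source document).
param_table = [{"year": 2016, "avg_detections": 3.2, "w0": 6.5, "w1": 0.8},
               {"year": 2017, "avg_detections": 4.1, "w0": 7.0, "w1": 0.7}]
coeff_table = [{"min_sunny_ratio": 0.0, "coefficient": 0.9},
               {"min_sunny_ratio": 0.6, "coefficient": 1.1}]

params = select_parameters(param_table, current_avg_detections=3.9)
coeff = sunny_day_coefficient(coeff_table, sunny_ratio=0.65)
print(corrected_estimate(detections=5, params=params, coefficient=coeff))
```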
 As described above, in the present embodiment, the estimation device 100 acquires, based on the weather information (the proportion of sunny days), a coefficient used to correct the estimated value, and corrects the estimated value of the actual number of target objects using the acquired coefficient. This allows the estimation device 100 to obtain a more accurate estimate of the actual number of target objects than in Embodiment 1.
 In the present embodiment, the crop is harvested once a year, but the harvest is not limited to once a year; crops harvested multiple times a year may also be targeted. In that case, the data managed per year may instead be managed per growing period.
 In the present embodiment, the estimation device 100 uses the proportion of sunny days to acquire the coefficient, but it may instead use sunshine hours, precipitation, the average or accumulated temperature, or the like.
 Furthermore, the estimation device 100 may use the average detection count used to acquire the estimation parameter and the proportion of sunny days used to acquire the correction information, described in the present embodiment, as feature amounts. The estimation device 100 may also acquire the estimation parameter and the correction information using some of the feature amounts described in Embodiment 3.
 <Embodiment 5>
 In Embodiment 1, the estimation device 100 performs the processing of generating the learning data used for learning the estimation parameter, the processing of learning the estimation parameter, and the processing of estimating the actual number of target objects. However, these processes do not have to be executed by a single device.
 In the present embodiment, a case will be described in which the processing of generating the learning data used for learning the estimation parameter, the processing of learning the estimation parameter, and the processing of estimating the actual number of target objects are each executed by a separate device.
 FIG. 18 is a diagram showing an example of the system configuration of an information processing system that executes, in the present embodiment, the processing of generating the learning data used for learning the estimation parameter, the processing of learning the estimation parameter, and the processing of estimating the actual number of target objects using the estimation parameter.
 The information processing system includes a generation device 1801, a learning device 1802, and an estimation device 1803. The hardware configuration of each of the generation device 1801, the learning device 1802, and the estimation device 1803 is the same as the hardware configuration of the estimation device 100 of Embodiment 1 shown in FIG. 1.
 The functions and processing of the generation device 1801 shown in FIG. 18 are realized by the CPU of the generation device 1801 executing processing based on programs stored in the ROM, HDD, or the like of the generation device 1801. The functions and processing of the learning device 1802 shown in FIG. 18 are realized by the CPU of the learning device 1802 executing processing based on programs stored in the ROM, HDD, or the like of the learning device 1802. The functions and processing of the estimation device 1803 shown in FIG. 18 are realized by the CPU of the estimation device 1803 executing processing based on programs stored in the ROM, HDD, or the like of the estimation device 1803.
 The functional configurations of the generation device 1801, the learning device 1802, and the estimation device 1803 will be described.
 The generation device 1801 includes a number acquisition unit 1811, an image acquisition unit 1812, a feature amount acquisition unit 1813, and a generation unit 1814. The number acquisition unit 1811, the image acquisition unit 1812, and the feature amount acquisition unit 1813 are the same as the number acquisition unit 201, the image acquisition unit 202, and the feature amount acquisition unit 204 of FIG. 2, respectively. The generation unit 1814 generates learning data and stores the generated learning data in the HDD or the like of the generation device 1801 in a format such as the table 301 or CSV. The generation unit 1814 generates the learning data by executing, for example, processing similar to S501 to S503 of FIG. 6.
 The learning device 1802 includes a learning unit 1821 and a parameter management unit 1822. The learning unit 1821 and the parameter management unit 1822 are functional components similar to the learning unit 203 and the parameter management unit 205 of FIG. 2, respectively. That is, the learning unit 1821 acquires from the generation device 1801 the learning data generated by the generation device 1801, and learns the estimation parameter by executing processing similar to S504 to S505 of FIG. 6 based on the acquired learning data (the information of the table 301). The parameter management unit 1822 stores the estimation parameter learned by the learning unit 1821 in the HDD or the like of the learning device 1802.
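 To make the division of roles concrete, here is a minimal sketch in which the generation device writes the learning data as CSV and the learning device reads it back; the file name and the two-column format are illustrative assumptions, since only "a format such as the table 301 or CSV" is specified.

```python
import csv

def write_learning_data(rows, path="learning_data.csv"):
    """Generation device 1801: persist (detection count, actual count) pairs
    as CSV, one of the formats mentioned for the generation unit 1814."""
    with open(path, "w", newline="") as f:
        writer = csv.writer(f)
        writer.writerow(["detection_count", "actual_count"])
        writer.writerows(rows)

def read_learning_data(path="learning_data.csv"):
    """Learning device 1802: load the learning data produced above."""
    with open(path, newline="") as f:
        reader = csv.DictReader(f)
        return [(int(r["detection_count"]), int(r["actual_count"])) for r in reader]

write_learning_data([(3, 11), (4, 12), (6, 18)])
print(read_learning_data())
```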
 The estimation device 1803 includes an image acquisition unit 1831, a feature amount acquisition unit 1832, an estimation unit 1833, and a display control unit 1834. The image acquisition unit 1831, the feature amount acquisition unit 1832, the estimation unit 1833, and the display control unit 1834 are the same as the image acquisition unit 202, the feature amount acquisition unit 204, the estimation unit 206, and the display control unit 207 of FIG. 2, respectively. That is, the image acquisition unit 1831, the feature amount acquisition unit 1832, and the estimation unit 1833 execute processing similar to that of FIG. 7 to estimate the actual number of target objects included in the area for which the number of target objects is to be estimated.
 As described above, in the present embodiment, separate devices execute the processing of generating the learning data used for learning the estimation parameter, the processing of learning the estimation parameter, and the processing of estimating the actual number of target objects, respectively. This makes it possible to distribute the load of each process among a plurality of devices.
 <Other Embodiments>
 The present invention can also be realized by processing in which a program that realizes one or more functions of the above-described embodiments is supplied to a system or a device via a network or a storage medium, and one or more processors in a computer of that system or device read and execute the program. It can also be realized by a circuit (for example, an ASIC) that realizes one or more functions.
 For example, part or all of the functional configuration of the estimation device 100 described above may be implemented as hardware in the estimation device 100, the generation device 1801, the learning device 1802, the estimation device 1803, or the like.
 Although an example of the embodiments of the present invention has been described in detail above, the present invention is not limited to these specific embodiments. For example, the above-described embodiments may be combined arbitrarily.
 The present invention is not limited to the above embodiments, and various changes and modifications can be made without departing from the spirit and scope of the present invention. Therefore, the following claims are appended to make the scope of the present invention public.
 This application claims priority based on Japanese Patent Application No. 2018-141430 filed on July 27, 2018 and Japanese Patent Application No. 2019-097874 filed on May 24, 2019, the entire contents of which are incorporated herein by reference.

Claims (20)

  1.  An information processing apparatus comprising:
     a feature acquisition unit configured to acquire, from an image of an area that is part of a field where a crop is grown, a feature amount of the area relating to the number of target objects detected from the image;
     a number acquisition unit configured to acquire the actual number of the target objects present in a set area of the field; and
     a learning unit configured to learn an estimation parameter for estimating the actual number of the target objects present in a designated area of the field, using as learning data the feature amount acquired by the feature acquisition unit from an image of the set area and the actual number acquired by the number acquisition unit.
  2.  The information processing apparatus according to claim 1, wherein the target object is at least one of a bud, a fruit, a flower, and a bunch of the crop.
  3.  The information processing apparatus according to claim 1, further comprising an estimation unit configured to estimate the actual number of the target objects included in the designated area based on the feature amount acquired by the feature acquisition unit from an image of the designated area and the estimation parameter learned by the learning unit.
  4.  The information processing apparatus according to claim 3, wherein the estimation unit estimates the actual number of the target objects included in the designated area based on the feature amount of the designated area and an estimation parameter selected, based on the index value of a preset index corresponding to the feature amount, from a plurality of estimation parameters learned by the learning unit, each of which is associated with an index value of the preset index.
  5.  The information processing apparatus according to claim 4, wherein the estimation unit estimates the actual number of the target objects included in the designated area based on the feature amount of the designated area, the estimation parameter learned by the learning unit, and correction information used to correct a value estimated using the estimation parameter.
  6.  An information processing apparatus comprising:
     a feature acquisition unit configured to acquire, from an image of an area that is part of a field where a crop is grown, a feature amount of the area relating to the number of target objects detected from the image; and
     an estimation unit configured to estimate the actual number of the target objects included in a designated area of the field based on the feature amount acquired by the feature acquisition unit from an image of the designated area and an estimation parameter, which is a previously learned parameter used in estimation processing for estimating, from the feature amount acquired by the feature acquisition unit from an image of an area of the field, the actual number of the target objects included in that area.
  7.  The information processing apparatus according to claim 5, wherein the estimation unit estimates the actual number of the target objects included in the set area based on the feature amount acquired by the feature acquisition unit from an image of the designated area and an estimation parameter selected, based on the index value of a preset index corresponding to the feature amount, from a plurality of the estimation parameters, each of which is associated with an index value of the preset index.
  8.  The information processing apparatus according to claim 5, wherein the estimation unit estimates the actual number of the target objects included in the set area based on the feature amount acquired by the feature acquisition unit from an image of the designated area, the estimation parameter, and correction information used to correct a value estimated using the estimation parameter.
  9.  The information processing apparatus according to any one of claims 3 to 8, further comprising a display control unit configured to display, on a predetermined display, the harvest amount of the crop predicted based on the actual number of the target objects estimated by the estimation unit.
  10.  The information processing apparatus according to claim 9, wherein the display control unit further performs control to display on the predetermined display, in response to an operation by a user, the number of the target objects detected in a predetermined range that served as the basis for the prediction of the harvest amount of the crop and the number of the target objects estimated by the estimation unit.
  11.  An information processing apparatus comprising:
     a feature acquisition unit configured to acquire, from an image of an area that is part of a field where a crop is grown, a feature amount of the area relating to the number of target objects detected from the image;
     a number acquisition unit configured to acquire the actual number of the target objects present in a set area of the field; and
     a generation unit configured to generate, by associating the feature amount acquired by the feature acquisition unit from an image of the set area with the actual number acquired by the number acquisition unit, learning data used for learning an estimation parameter used in estimation processing for estimating, from the feature amount acquired by the feature acquisition unit from an image of a designated area of the field, the actual number of the target objects included in the designated area.
  12.  The information processing apparatus according to any one of claims 1 to 11, wherein the feature acquisition unit acquires the feature amount based on the detected number, which is the number of the target objects detected from a region included in the image that is determined based on a predetermined object appearing in the image of the area that is the part of the field.
  13.  The information processing apparatus according to claim 12, wherein the feature acquisition unit acquires the feature amount based on the detected number, which is the number of the target objects detected from regions included in a plurality of images of the area that is the part of the field, the regions being determined based on a single preset object partially appearing in the plurality of images.
  14.  The information processing apparatus according to any one of claims 1 to 13, wherein the feature acquisition unit acquires the feature amount based on the detected number and information indicating the amount of a preset obstruction that may hinder detection of the target objects present in the area that is the part of the field.
  15.  The information processing apparatus according to any one of claims 1 to 14, wherein the feature acquisition unit acquires the feature amount based on the detected number and a feature of the soil in the area that is the part of the field.
  16.  The information processing apparatus according to any one of claims 1 to 15, wherein the feature acquisition unit acquires the feature amount based on the detected number and a feature of an area set as an area surrounding the area that is the part of the field.
  17.  An information processing method executed by an information processing apparatus, the method comprising:
     a feature acquisition step of acquiring, from an image of an area that is part of a field where a crop is grown, a feature amount of the area relating to the number of target objects detected from the image;
     a number acquisition step of acquiring the actual number of the target objects present in a set area of the field; and
     a learning step of learning an estimation parameter for estimating the actual number of the target objects present in a designated area of the field, using as learning data the feature amount acquired in the feature acquisition step from an image of the set area and the actual number acquired in the number acquisition step.
  18.  An information processing method executed by an information processing apparatus, the method comprising:
     a feature acquisition step of acquiring, from an image of an area that is part of a field where a crop is grown, a feature amount of the area relating to the number of target objects detected from the image; and
     an estimation step of estimating the actual number of the target objects included in a designated area of the field based on the feature amount acquired in the feature acquisition step from an image of the designated area and an estimation parameter, which is a previously learned parameter used in estimation processing for estimating, from the feature amount acquired in the feature acquisition step from an image of an area of the field, the actual number of the target objects included in that area.
  19.  An information processing method comprising:
     a feature acquisition step of acquiring, from an image of an area that is part of a field where a crop is grown, a feature amount of the area relating to the number of target objects detected from the image;
     a number acquisition step of acquiring the actual number of the target objects present in a set area of the field; and
     a generation step of generating, by associating the feature amount acquired in the feature acquisition step from an image of the set area with the actual number acquired in the number acquisition step, learning data used for learning an estimation parameter used in estimation processing for estimating, from the feature amount acquired in the feature acquisition step from an image of a designated area of the field, the actual number of the target objects included in the designated area.
  20.  A program for causing a computer to function as each unit of the information processing apparatus according to any one of claims 1 to 16.
PCT/JP2019/028464 2018-07-27 2019-07-19 Information processing device, information processing method, and program WO2020022215A1 (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
AU2019309839A AU2019309839A1 (en) 2018-07-27 2019-07-19 Information processing device, information processing method, and program
US17/156,267 US20210142484A1 (en) 2018-07-27 2021-01-22 Information processing apparatus, information processing method, and storage medium

Applications Claiming Priority (4)

Application Number Priority Date Filing Date Title
JP2018141430 2018-07-27
JP2018-141430 2018-07-27
JP2019-097874 2019-05-24
JP2019097874A JP2020024672A (en) 2018-07-27 2019-05-24 Information processor, information processing method and program

Related Child Applications (1)

Application Number Title Priority Date Filing Date
US17/156,267 Continuation US20210142484A1 (en) 2018-07-27 2021-01-22 Information processing apparatus, information processing method, and storage medium

Publications (1)

Publication Number Publication Date
WO2020022215A1 true WO2020022215A1 (en) 2020-01-30

Family

ID=69181562

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/JP2019/028464 WO2020022215A1 (en) 2018-07-27 2019-07-19 Information processing device, information processing method, and program

Country Status (1)

Country Link
WO (1) WO2020022215A1 (en)

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20170228475A1 (en) * 2016-02-05 2017-08-10 The Climate Corporation Modeling trends in crop yields
US20180158207A1 (en) * 2015-05-29 2018-06-07 Université De Bordeaux System and method for estimating a harvest volume in a vineyard operation

Similar Documents

Publication Publication Date Title
JP2020024672A (en) Information processor, information processing method and program
US10719787B2 (en) Method for mapping crop yields
US11793119B2 (en) Information processing device and information processing method
JP5729476B2 (en) Imaging device and imaging support program
US11935282B2 (en) Server of crop growth stage determination system, growth stage determination method, and storage medium storing program
JP5756374B2 (en) Growth management method
JP7229864B2 (en) REMOTE SENSING IMAGE ACQUISITION TIME DETERMINATION SYSTEM AND CROPT GROWTH ANALYSIS METHOD
JP2007310463A (en) Farm field management support method and system
JP5657901B2 (en) Crop monitoring method, crop monitoring system, and crop monitoring device
US20200311915A1 (en) Growth status prediction system and method and computer-readable program
JP6760068B2 (en) Information processing equipment, information processing methods, and programs
JPWO2016039176A1 (en) Information processing apparatus, information processing method, and program
US20220405863A1 (en) Information processing device, information processing method, and program
Lootens et al. High-throughput phenotyping of lateral expansion and regrowth of spaced Lolium perenne plants using on-field image analysis
US20140009600A1 (en) Mobile device, computer product, and information providing method
JP7313056B2 (en) Fertilizer application amount determination device and fertilizer application amount determination method
CN107437262B (en) Crop planting area early warning method and system
KR102114384B1 (en) Image-based crop growth data measuring mobile app. and device therefor
WO2020022215A1 (en) Information processing device, information processing method, and program
JP7191785B2 (en) agricultural support equipment
WO2019163249A1 (en) Color index value calculation system and color index value calculation method
CN108363851B (en) Planting control method and control device, computer equipment and readable storage medium
CN115379150A (en) System and method for automatically generating dynamic video of rice growth process in remote way
WO2021124815A1 (en) Prediction device
JP6931418B2 (en) Image processing methods, image processing devices, user interface devices, image processing systems, servers, and image processing programs

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application (Ref document number: 19841848; Country of ref document: EP; Kind code of ref document: A1)
NENP Non-entry into the national phase (Ref country code: DE)
ENP Entry into the national phase (Ref document number: 2019309839; Country of ref document: AU; Date of ref document: 20190719; Kind code of ref document: A)
122 Ep: pct application non-entry in european phase (Ref document number: 19841848; Country of ref document: EP; Kind code of ref document: A1)