WO2020044510A1 - Computer system, object sensing method, and program - Google Patents

Computer system, object sensing method, and program Download PDF

Info

Publication number
WO2020044510A1
Authority
WO
WIPO (PCT)
Prior art keywords
weight
image
module
size
computer
Prior art date
Application number
PCT/JP2018/032207
Other languages
French (fr)
Japanese (ja)
Inventor
Shunji Sugaya
Original Assignee
OPTiM Corporation
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by OPTiM Corporation
Priority to JP2020539962A (patent JP7068746B2)
Priority to PCT/JP2018/032207
Publication of WO2020044510A1

Links

Images

Classifications

    • GPHYSICS
    • G01MEASURING; TESTING
    • G01BMEASURING LENGTH, THICKNESS OR SIMILAR LINEAR DIMENSIONS; MEASURING ANGLES; MEASURING AREAS; MEASURING IRREGULARITIES OF SURFACES OR CONTOURS
    • G01B11/00Measuring arrangements characterised by the use of optical techniques
    • G01B11/24Measuring arrangements characterised by the use of optical techniques for measuring contours or curvatures
    • GPHYSICS
    • G01MEASURING; TESTING
    • G01GWEIGHING
    • G01G9/00Methods of, or apparatus for, the determination of weight, not provided for in groups G01G1/00 - G01G7/00
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/60Analysis of geometric attributes

Definitions

  • the present invention relates to a computer system for estimating the weight of an object, an object detection method, and a program.
  • As a technique for estimating the weight of such an object, a technique has been disclosed in which an elevator doorway is photographed and the weight of an object carried into the elevator is estimated (see Patent Document 1).
  • In that method, an image of the object carried into the elevator car is photographed by a photographing device or the like, and the image is analyzed to estimate the contour of the object. The image is also divided into predetermined blocks, and whether or not the weight is equal to or greater than a predetermined value is determined based on the number of blocks, the variation of the block positions, and the estimated contour of the object.
  • In the configuration of Patent Document 1, each of a plurality of images is divided into a plurality of blocks, and the similarity of motion vectors between the corresponding blocks of the images is calculated to estimate the contour of the object; the technique only judges whether a preset weight is exceeded, and does not estimate the weight of the object itself.
  • An object of the present invention is to provide a computer system, an object detection method, and a program that make it easy to estimate the weight of an object accurately.
  • the present invention provides the following solutions.
  • The present invention provides a computer system comprising: acquisition means for acquiring a captured image; detection means for extracting a feature amount from the image and detecting an object; and estimation means for estimating the weight of the detected object from the size of the object shown in the image.
  • According to the present invention, the computer system acquires a captured image, extracts a feature amount from the image, detects an object, and estimates the weight of the detected object from the size of the object shown in the image.
  • the present invention is in the category of computer systems.
  • However, in other categories such as methods and programs, the invention exhibits the same actions and effects according to each category.
  • FIG. 1 is a diagram illustrating an outline of the object detection system 1.
  • FIG. 2 is an overall configuration diagram of the object detection system 1.
  • FIG. 3 is a flowchart illustrating a first object detection process executed by the computer 10.
  • FIG. 4 is a flowchart illustrating a second object detection process executed by the computer 10.
  • FIG. 5 is a flowchart illustrating a third object detection process executed by the computer 10.
  • FIG. 6 is a flowchart illustrating a learning process performed by the computer 10.
  • FIG. 7 is a diagram illustrating an example of the object table.
  • FIG. 8 is a diagram illustrating an example of an image.
  • FIG. 9 is a diagram illustrating an example of an image.
  • FIG. 10 is a diagram illustrating an example of an image.
  • FIG. 11 is a diagram illustrating an example of the notification screen.
  • FIG. 12 is a diagram illustrating an example of a state where a predetermined area is superimposed on an image.
  • FIG. 13 is a diagram illustrating an example of a state where a predetermined area is superimposed on an image.
  • FIG. 14 is a diagram illustrating an example of the notification screen.
  • FIG. 15 is a diagram illustrating an example of an image.
  • FIG. 16 is a diagram illustrating an example of a complemented image obtained by complementing an image.
  • FIG. 17 is a diagram illustrating an example of the notification screen.
  • FIG. 1 is a diagram for describing an outline of an object detection system 1 according to a preferred embodiment of the present invention.
  • the object detection system 1 is a computer system that includes a computer 10 and estimates the weight of an object.
  • The object detection system 1 may include, in addition to the computer 10, other devices such as a photographing device that photographs an object, a terminal device that displays the estimated weight, and a user terminal that receives predetermined inputs from a user.
  • the computer 10 is connected to other devices (not shown) so as to be able to perform data communication via a public line network or the like, and transmits and receives necessary data.
  • the computer 10 acquires, as image data, an image of an object (for example, a heavy machine such as a shovel car, a crop such as a vegetable, or a person) photographed by a photographing device (not shown).
  • the image data includes the position information of the shooting point.
  • the position information of the photographing point is obtained by the photographing apparatus acquiring its own current position from a GPS (Global Positioning System) or the like, and using the acquired current position as the position information of the photographing point.
  • the computer 10 analyzes the image included in the image data, and extracts its feature amount (for example, statistical values such as the average, variance, and histogram of pixel values, and the shape and contour of an object).
  • the computer 10 detects an object appearing in the image based on the extracted feature amount.
  • The computer 10 estimates the weight of the detected object. For example, the computer 10 estimates the distance from the shooting point to the object based on the position information of the shooting point, and estimates the size (volume) of the object based on that distance and the size (area) of the object in the image. The computer 10 then estimates the weight of the object based on the size of the object and the weight density of the detected object.
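  • As a rough illustration of this estimation chain (distance, area, volume, density, weight), the following is a minimal sketch. It is not taken from the patent: the function names, the pinhole-style scaling between pixel area and real area, and the volume heuristic real_area ** 1.5 are all assumptions for illustration.

```python
# Minimal sketch of the estimation chain described above.
# Assumptions (not fixed by the patent): pixel area scales to real
# area by (distance / focal_length)^2, and volume is approximated
# from the real area as real_area ** 1.5.

def estimate_weight(pixel_area: float, distance_m: float,
                    focal_px: float, density_kg_per_m3: float) -> float:
    """Estimate the weight of an object from its pixel area."""
    real_area_m2 = pixel_area * (distance_m / focal_px) ** 2
    volume_m3 = real_area_m2 ** 1.5        # rough isotropic-body heuristic
    return volume_m3 * density_kg_per_m3

# Example: a 40,000 px object at 12 m with a 1,500 px focal length
# and a weight density of 1,800 kg/m^3.
print(round(estimate_weight(40_000, 12.0, 1_500.0, 1_800.0), 1))  # ~7372.8
```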
  • The computer 10 also estimates the weight of the object by learning the correlation between the actual weight of the detected object and the acquired image of the object. For example, when newly acquiring image data, the computer 10 estimates the weight of the object appearing in the newly acquired image data, taking the learning result into account. At this time, the computer 10 uses at least one of the density, name, or size of the object, or the distance to the object, as the correlation to be learned.
  • the computer 10 acquires image data (Step S01).
  • the computer 10 acquires an image photographed by a photographing device (not shown) and position information of the photographing device (position information of a photographing point) as image data.
  • the imaging device acquires its own position information from GPS or the like, and the computer 10 acquires this position information as the position information of the imaging point.
  • the computer 10 performs image analysis on the basis of the image data, and extracts an image feature amount in the image data (step S02).
  • the computer 10 extracts statistical numerical values such as the average, variance, and histogram of pixel values, and the shape, contour, and the like of an object as the feature amount of an image.
  • the computer 10 detects an object appearing in the image based on the extracted feature amount (Step S03).
  • The computer 10 detects the object by referring to an object table in which identifiers (names, model numbers, product numbers, etc.) of various objects are associated with the feature amounts of the respective objects, and specifying the identifier of the object corresponding to the feature amounts extracted this time.
  • the computer 10 estimates the weight of the detected object from the size shown in the image of the object (step S04).
  • the computer 10 estimates the weight of the detected object based on, for example, the size shown in the image of the object (that is, the area of the object in the image).
  • The computer 10 estimates the distance from the shooting point of the image to the object by a method such as triangulation, and estimates the weight of the object based on that distance and the size of the object shown in the image.
  • The computer 10 also refers to a weight table in which the identifiers (names, model numbers, product numbers, etc.) of objects, the sizes (volumes) of the objects, and the weight densities of the objects are associated, and estimates the weight of the object detected this time.
  • The computer 10 specifies the size and the weight density associated with the identifier of the object detected this time by referring to the weight table, and estimates the weight of the object based on the specified size and weight density and the estimated size.
  • The computer 10 learns the correlation between the actual weight of the detected object and the image acquired this time (a correlation involving at least one of the density, name, or size of the object, or the distance to the object).
  • the computer 10 estimates the weight of the object in consideration of the learning result.
  • FIG. 2 is a diagram illustrating a system configuration of an object detection system 1 according to a preferred embodiment of the present invention.
  • an object detection system 1 is a computer system that includes a computer 10 and estimates the weight of an object.
  • the computer 10 is connected to other devices (not shown) such as the above-described photographing device, terminal device, and user terminal via a public line network or the like so as to be able to perform data communication.
  • The computer 10 includes, as a control unit, a CPU (Central Processing Unit), a RAM (Random Access Memory), a ROM (Read Only Memory), and the like, and, as a communication unit, a device for enabling communication with other terminals and devices, for example, a Wi-Fi (Wireless Fidelity) compliant device conforming to IEEE 802.11.
  • the computer 10 includes a data storage unit such as a hard disk, a semiconductor memory, a storage medium, and a memory card as a storage unit.
  • the computer 10 includes, as a processing unit, various devices that execute various processes.
  • In the computer 10, when the control unit reads a predetermined program, an image data acquisition module 20, a notification module 21, an object designation data acquisition module 22, an area designation data acquisition module 23, an object data acquisition module 24, and an actual weight data acquisition module 25 are realized in cooperation with the communication unit. Further, in the computer 10, the control unit reads a predetermined program, thereby realizing a storage module 30 in cooperation with the storage unit. Further, in the computer 10, the control unit reads a predetermined program, thereby realizing, in cooperation with the processing unit, a feature amount extraction module 40, an object detection module 41, a distance estimation module 42, a size estimation module 43 (area), a size estimation module 44 (volume), a weight estimation module 45, an object complementing module 46, and a learning module 47.
  • FIG. 3 is a diagram illustrating a flowchart of the first object detection process executed by the computer 10. The processing executed by each module described above will be described together with this processing.
  • the image data acquisition module 20 acquires an image photographed by a photographing device (not shown) and position information of the photographing device as image data (step S10).
  • the image capturing apparatus transmits the image captured by the image capturing apparatus and its own position information acquired from GPS or the like to the computer 10 as image data.
  • the position information of the image capturing apparatus itself is the position information of the image capturing point.
  • the image data acquisition module 20 acquires the image photographed by the photographing device and the position information of the photographing point of the image.
  • the feature extracting module 40 analyzes the image included in the image data based on the acquired image data, and extracts the feature of the image (step S11).
  • the feature value extraction module 40 extracts, as feature values, statistical values such as the average, variance, and histogram of pixel values, and the shape and contour of an object.
  • the object detection module 41 determines whether an object is included in the image based on the extracted feature amount (Step S12).
  • The object detection module 41 determines whether an object corresponding to the extracted feature amounts appears in the image by referring to an object table, stored in advance in the storage module 30, in which the identifiers (names, model numbers, product numbers, and the like) of various objects are associated with one or more feature amounts of each object.
  • The storage module 30 stores, as the object table, the identifier of a piece of heavy equipment (the name of the heavy equipment, its name classified by manufacturer, its model number, product number, etc.) in association with the feature amounts of that heavy equipment.
  • Similarly, the storage module 30 stores, as the object table, crop identifiers (crop names, crop names classified by producer, identification numbers that can identify individual crops, etc.) in association with the feature amounts of the crops.
  • the storage module 30 stores an identifier of a person (age, gender, race, height, weight, and the like) in association with a feature amount of the person as an object table.
  • The object detection module 41 compares the feature amounts extracted this time with the object table, and determines whether an object corresponding to those feature amounts is registered in the object table, that is, whether such an object appears in the image.
  • In step S12, when the object detection module 41 determines that no object appears in the image (step S12: NO), the computer 10 ends the processing.
  • In step S12, when the object detection module 41 determines that an object appears in the image (step S12: YES), the object detection module 41 detects the object appearing in the image (step S13).
  • In step S13, the object detection module 41 compares the extracted feature amounts with the object table, and detects the identifier of the object corresponding to the extracted feature amounts as the object appearing in this image.
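  • The patent does not fix how the extracted feature amounts are matched against the object table. One plausible realization, sketched below with made-up identifiers and feature vectors, is a nearest-neighbour comparison with a distance threshold, the threshold acting as the NO branch of step S12.

```python
import numpy as np

# Hypothetical object table: identifier -> stored feature vector.
# Entries and values are illustrative stand-ins, not patent data.
OBJECT_TABLE = {
    "shovel car (small)": np.array([0.8, 0.1, 0.3]),
    "cabbage":            np.array([0.2, 0.9, 0.5]),
    "person (male, 20s)": np.array([0.5, 0.4, 0.9]),
}

def detect_object(features: np.ndarray, threshold: float = 0.5):
    """Return the identifier of the closest table entry, or None."""
    best_id, best_dist = None, float("inf")
    for identifier, stored in OBJECT_TABLE.items():
        dist = float(np.linalg.norm(features - stored))
        if dist < best_dist:
            best_id, best_dist = identifier, dist
    # None corresponds to the NO branch of step S12 (no object found).
    return best_id if best_dist <= threshold else None

print(detect_object(np.array([0.75, 0.15, 0.35])))  # -> shovel car (small)
```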
  • The distance estimation module 42 estimates the distance between the imaging device and the object based on the position information of the imaging point included in the image data (step S14). In step S14, the distance estimation module 42 estimates the distance from the shooting position to the object by, for example, triangulation. At this time, the distance estimation module 42 knows in advance the points at both ends of a baseline passing through the imaging point, and estimates the distance from the imaging point to the object from the angles measured at those points.
  • the method by which the distance estimating module 42 estimates the distance between the imaging device and the object is not limited to the above-described example, and can be appropriately changed.
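  • As one concrete form of the triangulation mentioned above (a judgment call, since the patent leaves the survey method open), the distance can be computed from a known baseline and the bearing angles measured at its two endpoints:

```python
import math

# Sketch of baseline triangulation: endpoints A and B of the baseline
# are known in advance, and the bearing angle toward the object is
# measured at each endpoint. Names and values are illustrative.

def triangulate_distance(baseline_m: float,
                         angle_a_deg: float,
                         angle_b_deg: float) -> float:
    """Distance from endpoint A to the object via the law of sines."""
    a = math.radians(angle_a_deg)
    b = math.radians(angle_b_deg)
    c = math.pi - a - b                   # angle at the object
    # Side A-to-object is opposite angle B: AO / sin(B) = AB / sin(C).
    return baseline_m * math.sin(b) / math.sin(c)

# 20 m baseline, bearings of 60 and 70 degrees toward the object.
print(round(triangulate_distance(20.0, 60.0, 70.0), 2))  # ~24.53 m
```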
  • The size estimation module 43 estimates the size (area) of the detected object in the image (step S15). In step S15, the size estimation module 43 estimates the area where the object exists as the size of the object based on, for example, the detected contour or shape of the object, and takes, for example, the total number of pixels in this area as the size.
  • the method of estimating the size of the object by the size estimating module 43 is not limited to the example described above, and can be changed as appropriate.
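  • Counting pixels inside the contour, as described above, can be sketched directly with a binary mask (the mask here is synthetic; in practice it would come from the extracted contour or shape):

```python
import numpy as np

# Sketch of step S15: represent the region enclosed by the object's
# contour as a binary mask and take the pixel count as the size (area).
mask = np.zeros((480, 640), dtype=bool)
mask[100:300, 200:450] = True          # synthetic object region

pixel_area = int(np.count_nonzero(mask))
print(pixel_area)                      # 200 * 250 = 50000 pixels
```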
  • the size estimation module 44 estimates the size (volume) of the object based on the estimated distance from the shooting point to the object and the estimated size of the object (step S16). In step S16, the size estimation module 44 estimates the size of the object based on the ratio of the size of the object to the image and the distance.
  • the method of estimating the size of the object by the size estimating module 44 is not limited to the example described above, and can be appropriately changed.
  • The weight estimation module 45 estimates the weight of the object by referring to the weight table, stored in advance in the storage module 30, in which the identifier (name, model number, product number, etc.) of an object, the size of the object, and its weight density are associated (step S17).
  • the weight estimation module 45 refers to the weight table based on the identifier of the object detected this time and the size of the object, and specifies the weight density corresponding to the detected object.
  • the weight estimation module 45 estimates the weight of the object based on the specified weight density and the estimated size of the object.
  • FIG. 7 is a diagram schematically illustrating a weight table stored in the storage module 30.
  • The storage module 30 associates the name of the object, which is the identifier of the object, the size (volume) of the object, and the weight density of the object, registers them in the weight table, and stores the registered weight table.
  • the storage module 30 associates a shovel car (small), which is the name of the object, V1, which is the size of the object, and D1, which is the weight density of the object, and registers it as a weight table.
  • Similarly, the storage module 30 registers the shovel car (large), V2, and D2 in association with each other in the weight table. Similarly, the storage module 30 registers the cabbage, V3, and D3 in association with each other in the weight table. Similarly, the storage module 30 associates a person (male, 20s), V4, and D4 and registers them in the weight table. These data are obtained from an input from a user terminal or through an external computer or the like, and the storage module 30 registers them in the weight table and stores the weight table.
  • The identifier of the object registered in the weight table is not limited to the name and may be something else.
  • In the case of a person, for example, the identifier may be one or a combination of age, gender, race, height, weight, and the like, associated with the size and the weight density.
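  • The weight table of FIG. 7 can be sketched as a simple mapping from identifier to (size, weight density). V1..V4 and D1..D4 are symbolic in the patent; the numbers below are stand-ins for illustration only.

```python
from dataclasses import dataclass

@dataclass
class WeightEntry:
    size_m3: float        # V: registered size (volume) of the object
    density_kg_m3: float  # D: registered weight density

# Illustrative stand-in values; the patent only names V1..V4, D1..D4.
WEIGHT_TABLE = {
    "shovel car (small)": WeightEntry(4.0, 1_900.0),   # V1, D1
    "shovel car (large)": WeightEntry(12.0, 1_900.0),  # V2, D2
    "cabbage":            WeightEntry(0.004, 450.0),   # V3, D3
    "person (male, 20s)": WeightEntry(0.07, 985.0),    # V4, D4
}

def weight_from_table(identifier: str, estimated_size_m3: float) -> float:
    """Weight from the density registered for the detected identifier."""
    entry = WEIGHT_TABLE[identifier]
    return estimated_size_m3 * entry.density_kg_m3

print(round(weight_from_table("cabbage", 0.0045), 3))  # ~2.025 kg
```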
  • the weight estimation module 45 may estimate the weight of the object by a method other than the method of referring to the weight table when estimating the weight of the object. For example, the weight estimation module 45 may estimate the weight of the object based on a function of the identifier, the size, and the density of the object.
  • the notification module 21 notifies the estimated weight to the user terminal or the like (step S18).
  • The notification module 21 generates a weight notification in which the identifier of the detected object and the estimated weight are superimposed on the acquired image, and sends the generated weight notification to a user terminal or the like.
  • the user terminal or the like acquires this notification and displays it on its own display unit or the like, thereby notifying the user of the weight of the object.
  • the computer 10 displays the estimated weight of the object on the user terminal or the like, thereby notifying the user of the weight of the object.
  • the above is the first object detection processing.
  • FIGS. 8, 9, and 10 are diagrams illustrating examples of image data acquired by the image data acquisition module 20. Each image data includes position information of each shooting point in addition to the image.
  • FIG. 8 is an image of a shovel car as a heavy machine.
  • FIG. 9 is an image of cabbage as a crop.
  • FIG. 10 is an image of a person.
  • the image data obtaining module 20 obtains the image data shown in FIGS. 8, 9 and 10 by the processing in step S10 described above.
  • the feature value extraction module 40 extracts a feature value from each image data by the process of step S11 described above.
  • the object detection module 41 detects an object appearing in this image by the processing in steps S12 and S13 described above.
  • The object detection module 41 detects the shovel car (small) 100 as an object based on the feature amounts extracted from the image shown in FIG. 8.
  • The object detection module 41 detects the cabbage 110 as an object based on the feature amounts extracted from the image shown in FIG. 9.
  • The object detection module 41 detects a person (male, 20's) 120 as an object based on the feature amounts extracted from the image shown in FIG. 10.
  • the distance estimation module 42 estimates the distance between the imaging device and the object by the processing in step S14 described above.
  • the size estimation module 43 estimates the size (area) in the image of the object by the processing in step S15 described above. That is, the size estimation module 43 estimates the size of the image of the shovel car (small) 100, estimates the size of the image of the cabbage 110, and estimates the size of the image of the person (male, 20's) 120.
  • The size estimation module 44 estimates the size (volume) of the object by the processing in step S16 described above. That is, the size estimation module 44 estimates the size of the shovel car (small) 100, estimates the size of the cabbage 110, and estimates the size of the person (male, 20's) 120.
  • The weight estimation module 45 estimates the weight of the object by the processing in step S17 described above. Since the identifier of the object is "shovel car (small)", the weight estimation module 45 refers to the weight table and specifies the weight density D1 associated with "shovel car (small)". The weight estimation module 45 estimates the weight W1 of the shovel car (small) 100 based on the specified weight density D1 and the estimated size. Similarly, since the identifier of the object is "cabbage", the weight estimation module 45 refers to the weight table and specifies the weight density D3 associated with "cabbage". The weight estimation module 45 estimates the weight W3 of the cabbage 110 based on the specified weight density D3 and the estimated size.
  • Similarly, since the identifier of the object is "person (male, 20s)", the weight estimation module 45 refers to the weight table and specifies the weight density D4 associated with "person (male, 20s)". The weight estimation module 45 estimates the weight W4 of the person (male, 20's) 120 based on the specified weight density D4 and the estimated size.
  • the notification module 21 notifies the estimated weight by the processing in step S18 described above.
  • a notification screen that the notification module 21 notifies the user terminal or the like will be described with reference to FIG.
  • FIG. 11 is a diagram illustrating an example of a notification screen in which the notification module 21 notifies a user terminal or the like of the weight of an object.
  • Of the three objects described above, the shovel car (small) will be described as an example; the notification module 21 notifies the cabbage and the person in the same manner.
  • The notification module 21 displays on the user terminal or the like, as the notification screen, a screen in which the identifier of the object (here, the name "shovel car (small)") and the estimated weight W1 are superimposed on the acquired image.
  • the notification module 21 performs processing such as surrounding the specified object, highlighting, and changing the color as the notification screen to clarify which object has been specified.
  • In this case, the notification module 21 displays on the user terminal or the like, as the notification screen, the acquired image with an enclosing line surrounding the shovel car (small) 100, the identifier of the object, and the weight superimposed on it.
  • Similarly, for the cabbage 110 and the person 120, the notification module 21 displays on the user terminal or the like, as the notification screen, the acquired image with an enclosing line surrounding the cabbage 110, the identifier, and the weight superimposed on it, and the acquired image with an enclosing line surrounding the person 120, the identifier, and the weight superimposed on it.
  • FIG. 4 is a diagram illustrating a flowchart of the second object detection process executed by the computer 10. The processing executed by each module described above will be described together with this processing.
  • the object designation data acquisition module 22 determines whether or not object designation data for designating an object to be detected has been acquired (Step S20).
  • The object designation data acquisition module 22 acquires, as object designation data, data that designates an object extending over a predetermined range, such as soil or earth and sand (resulting from a disaster, excavation during construction, or the like), water, or a whole crop.
  • the user terminal or the like receives an input for designating an object to be detected, and transmits the accepted object to the computer 10 as object designation data.
  • the object designation data acquisition module 22 acquires the object designation data by receiving the object designation data transmitted by the user terminal.
  • The source of the object designation data acquired by the object designation data acquisition module 22 can be changed as appropriate. This process does not necessarily need to be executed; in that case, the computer 10 may execute the processing from step S21 described later.
  • In step S20, if the object designation data acquisition module 22 determines that the object designation data has not been acquired (step S20: NO), the computer 10 ends the processing.
  • the computer 10 may execute the above-described first object detection processing.
  • In step S20, when the object designation data acquisition module 22 determines that the object designation data has been acquired (step S20: YES), the image data acquisition module 20 acquires image data (step S21).
  • the processing in step S21 is the same as the processing in step S10 described above.
  • the imaging device transmits image data to a user terminal (not shown) in addition to the computer 10.
  • the user terminal receives the image data, and uses an image included in the image data for a process described below.
  • the area designation data acquisition module 23 acquires area designation data for designating a predetermined area for the acquired image (Step S22).
  • As a result, the user terminal and the computer 10 receive the same image data.
  • the user terminal displays an image based on the image data on its own display unit, receives an input such as a tap operation from the user, and receives an input specifying a predetermined area for the image from the user.
  • The user terminal specifies the coordinates of the predetermined area in the image. For example, when the area is a rectangle, the user terminal specifies the coordinates of each vertex and specifies the area enclosed by the rectangle as the predetermined area.
  • When the area is circular, the user terminal specifies the coordinates of the center and the radius from the center to the circumference, and specifies the area enclosed by the circle as the predetermined area.
  • the user terminal transmits the specified predetermined area to the computer 10 as area specification data.
  • the area designation data acquisition module 23 receives this area designation data to acquire area designation data for designating a predetermined area for an image.
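  • The patent describes rectangles (vertex coordinates) and circles (center plus radius) as area designation data. A minimal sketch of such data and a point-in-region test follows; the dict encoding is an assumption for illustration.

```python
# Sketch of area designation data (step S22) and a membership test.

def point_in_region(x: float, y: float, region: dict) -> bool:
    """True if pixel (x, y) falls inside the designated region."""
    if region["shape"] == "rect":
        (x1, y1), (x2, y2) = region["top_left"], region["bottom_right"]
        return x1 <= x <= x2 and y1 <= y <= y2
    if region["shape"] == "circle":
        cx, cy = region["center"]
        return (x - cx) ** 2 + (y - cy) ** 2 <= region["radius"] ** 2
    raise ValueError(f"unknown shape: {region['shape']}")

rect = {"shape": "rect", "top_left": (100, 50), "bottom_right": (400, 300)}
circle = {"shape": "circle", "center": (250, 175), "radius": 120}
print(point_in_region(260, 180, rect), point_in_region(260, 180, circle))
```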
  • The feature amount extraction module 40 analyzes the image based on the acquired image data, and extracts the feature amounts of the image (step S23).
  • the processing in step S23 is the same as the processing in step S11 described above.
  • the object detection module 41 detects an object reflected in a predetermined area based on the extracted feature amount (Step S24).
  • The object detection module 41 specifies the area corresponding to the predetermined area based on the acquired area designation data. For example, when the predetermined area is a rectangle, the object detection module 41 specifies the coordinates of each vertex of the rectangle based on the area designation data, and identifies the rectangular area connecting those vertices as the designated predetermined area.
  • When the predetermined area is a circle, the object detection module 41 specifies the center coordinates and the radius of the circle based on the area designation data, and identifies the area enclosed by the circle as the designated predetermined area. The object detection module 41 then detects the object appearing within the predetermined area.
  • the method by which the object detection module 41 detects an object is the same as the processing in steps S12 and S13 described above.
  • the distance estimation module 42 estimates the distance between the imaging device and the object based on the location information of the imaging point included in the image data (Step S25).
  • the processing in step S25 is the same as the processing in step S14 described above.
  • The size estimation module 43 estimates the size (area), in the image, of the object detected within the designated predetermined area (step S26).
  • the process in step S26 is the same as the process in step S15 described above.
  • the size estimation module 44 estimates the size (volume) of the object based on the estimated distance and the estimated size (step S27).
  • the processing in step S27 is the same as the processing in step S16 described above.
  • The weight estimation module 45 estimates the weight of the object by referring to the weight table, stored in advance in the storage module 30, in which the identifier (name, model number, product number, etc.) of an object, the size of the object, and its weight density are associated (step S28).
  • the process in step S28 is the same as the process in step S17 described above.
  • The notification module 21 notifies the user terminal of the estimated weight (step S29).
  • the processing in step S29 is the same as the processing in step S18 described above.
  • the above is the second object detection processing.
  • FIGS. 12 and 13 are diagrams illustrating examples of a state in which the predetermined area designated based on the area designation data acquired by the area designation data acquisition module 23 is superimposed on the image acquired by the image data acquisition module 20.
  • Each image data includes position information of each shooting point in addition to the image.
  • FIG. 12 shows an image including earth and sand and a designated predetermined area superimposed on the image.
  • FIG. 13 shows an image of a crop and a specified predetermined area superimposed on the image.
  • the object designation data acquisition module 22 acquires the object designation data by the processing in step S20 described above.
  • the object designation data acquisition module 22 acquires “earth and sand” as object designation data in FIG. 12, and acquires “cabbage” as object designation data in FIG.
  • the image data acquisition module 20 acquires the image data by the processing in step S21 described above.
  • the area designation data acquisition module 23 acquires the area designation data by the processing in step S22 described above.
  • the feature amount extraction module 40 extracts the feature amount of the image by the process of step S23 described above.
  • The object detection module 41 detects the object appearing in the designated predetermined area based on the extracted feature amounts, by the processing in step S24 described above.
  • In FIG. 12, the object detection module 41 detects "earth and sand" as the object and, because the designated predetermined area 200 exists, detects the earth and sand existing within the predetermined area 200.
  • In FIG. 13, the object detection module 41 detects "cabbage" as the object and, because the designated predetermined area 210 exists, detects the cabbages existing within the predetermined area 210.
  • the distance estimation module 42 estimates the distance between the imaging device and the object by the processing in step S25 described above.
  • the size estimation module 43 estimates the size (area) of the image of the object reflected in the predetermined area by the processing in step S26 described above.
  • the size estimation module 43 estimates the size of the earth and sand existing in the predetermined area 200 and the size of the cabbage existing in the predetermined area 210.
  • At this time, the size estimation module 43 estimates the size of the earth and sand existing in the predetermined area 200 by adding, to the visible size of the earth and sand, the size of the area occupied by the heavy equipment existing in the predetermined area 200. This is effective when a large object such as a heavy machine hides part of the target object.
  • That is, the portion of the target object's image corresponding to the other object is estimated as if the other object were not present.
  • The size estimation module 43 excludes, from the cabbages existing in the predetermined area 210, any cabbage whose head is not completely contained in the predetermined area 210, and estimates the size of the cabbages existing in the predetermined area 210 based only on the cabbages completely contained in the area 210. This is effective when only a part of an object such as a crop lies within the predetermined area. A containment filter of this kind is sketched below.
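  • A minimal sketch of the exclusion rule above: only objects whose bounding boxes lie completely inside the designated region contribute to the size estimate. Boxes are (x1, y1, x2, y2); the values are illustrative, not from the patent.

```python
# Only fully contained boxes are counted, mirroring the cabbage rule.

def fully_inside(box, region):
    bx1, by1, bx2, by2 = box
    rx1, ry1, rx2, ry2 = region
    return rx1 <= bx1 and ry1 <= by1 and bx2 <= rx2 and by2 <= ry2

region_210 = (0, 0, 500, 400)
cabbage_boxes = [(20, 30, 80, 90),      # fully inside -> counted
                 (460, 350, 540, 430)]  # crosses the edge -> excluded
counted = [b for b in cabbage_boxes if fully_inside(b, region_210)]
total_area_px = sum((x2 - x1) * (y2 - y1) for x1, y1, x2, y2 in counted)
print(len(counted), total_area_px)     # 1 box, 60 * 60 = 3600 px
```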
  • the size estimation module 44 estimates the size (volume) of this object by the processing in step S27 described above. That is, the size estimation module 44 estimates the size of the earth and sand in the predetermined region 200 and estimates the size of the cabbage in the predetermined region 210.
  • the weight estimation module 45 estimates the weight of the object by the processing in step S28 described above. Since the identifier of the object is “earth and sand”, the weight estimating module 45 refers to the weight table and specifies the weight density D5 associated with the “earth and sand”. The weight estimation module 45 estimates the weight W5 of “earth and sand” existing in the predetermined region 200 based on the specified weight density D5 and the estimated size. Similarly, since the identifier of the object is “cabbage”, the weight estimation module 45 refers to the weight table and specifies the weight density D3 associated with the “cabbage”. The weight estimation module 45 estimates the weight W6 of the “cabbage” existing in the predetermined area 210 based on the specified weight density D3 and the estimated size.
  • the notification module 21 notifies the estimated weight by the processing in step S29 described above.
  • the notification screen that the notification module 21 notifies the user terminal or the like will be described with reference to FIG.
  • FIG. 14 is a diagram illustrating an example of a notification screen in which the notification module 21 notifies a user terminal or the like of a weight of an object reflected in a predetermined area. Of the two objects described above, earth and sand will be described as an example.
  • the notification module 21 notifies the cabbage similarly.
  • The notification module 21 displays on the user terminal or the like, as the notification screen, a screen in which the identifier of the object (here, "earth and sand"), the estimated weight W5 kg, and the predetermined area 200 are superimposed on the acquired image.
  • Similarly, for the cabbage, the notification module 21 causes the user terminal to display, as the notification screen, the acquired image with the predetermined area 210, the identifier of the object, and the weight superimposed on it.
  • At this time, the notification module 21 performs processing such as highlighting the predetermined area or changing its color on the notification screen, to clarify where in the image the detected object was identified. The notification module 21 then causes the user terminal to display, as the notification screen, the acquired image with the predetermined area, the identifier of the object, and the weight superimposed on it.
  • FIG. 5 is a diagram illustrating a flowchart of the third object detection process executed by the computer 10. The processing executed by each module described above will be described together with this processing.
  • the image data acquisition module 20 acquires, as image data, an image photographed by a photographing device (not shown) and position information of the photographing device (step S30).
  • the processing in step S30 is the same as the processing in step S10 described above.
  • the feature amount extraction module 40 analyzes the image included in the image data based on the acquired image data, and extracts the feature amount of the image (step S31).
  • the process in step S31 is the same as the process in step S11 described above.
  • the object detection module 41 determines whether an object is included in this image based on the extracted feature amount (Step S32).
  • the processing in step S32 is the same as the processing in step S12 described above.
  • In step S32, when the object detection module 41 determines that no object appears in the image (step S32: NO), the computer 10 ends this processing.
  • In step S32, when the object detection module 41 determines that an object appears in the image (step S32: YES), the object detection module 41 detects the object appearing in this image (step S33).
  • the processing in step S33 is the same as the processing in step S13 described above.
  • The object detection module 41 determines whether the entire object has been detected (step S34). In step S34, the object detection module 41 determines whether a part of the detected object lies at an edge of the image (for example, at one of the four sides if the image is rectangular). The object detection module 41 also determines whether a part of the contour or shape of the detected object is cut off partway.
  • the object detection module 41 may determine whether the entire object has been detected by a method other than the method described above.
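  • The edge test described above can be sketched as a bounding-box check: an object whose box touches the image border is treated as partially out of frame. The margin parameter is an assumption for illustration.

```python
# Sketch of the completeness check in step S34.

def whole_object_detected(box, image_w, image_h, margin=1):
    """False if the detected box touches (or nearly touches) an edge."""
    x1, y1, x2, y2 = box
    return (x1 >= margin and y1 >= margin
            and x2 <= image_w - margin and y2 <= image_h - margin)

print(whole_object_detected((0, 120, 310, 400), 640, 480))   # False: cut off
print(whole_object_detected((50, 120, 310, 400), 640, 480))  # True
```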
  • In step S34, if the object detection module 41 determines that the entire object has been detected (step S34: YES), the computer 10 ends this processing. In this case, the computer 10 may execute the above-described first object detection processing.
  • In step S34, when the object detection module 41 determines that the entire object has not been detected (step S34: NO), the object data acquisition module 24 acquires object data, which is data on the image, size, volume, and the like of the object corresponding to the identifier of the detected object, from an external computer or from the storage module 30 in which it is stored in advance (step S35).
  • In step S35, the object data acquisition module 24 acquires the object data corresponding to the identifier of the detected object by referring to the external computer or to various tables stored in the storage module 30.
  • the object complementing module 46 complements a missing part in the detected object based on the acquired object data (step S36).
  • The object complementing module 46 compares the image in the acquired object data with the image of the object detected this time, and estimates the scale ratio between them.
  • The object complementing module 46 corrects the image in the acquired object data by reducing or enlarging it based on the estimated ratio.
  • The object complementing module 46 compares the corrected image with the detected object.
  • The object complementing module 46 specifies the portion of the corrected image that is missing from the detected object.
  • The object complementing module 46 complements the missing portion by joining the specified portion to the detected image of the object, thereby reconstructing the entire object as a pseudo image (a sketch of this rescale-and-complete idea follows).
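  • A minimal sketch of the rescale-and-complete idea: the stored reference image of the detected identifier is rescaled to the visible part and used to estimate the full extent of the object. Fixing the scale from the visible height (assuming only the width is cut off) is an assumption for illustration, not the patent's method.

```python
# Sketch of complementing a partially visible object (steps S35-S36).

def complete_area_px(visible_w: float, visible_h: float,
                     ref_w: float, ref_h: float) -> float:
    """Estimate the full pixel area of a partially visible object."""
    # Assumption: the full height is visible, only the width is cut off.
    scale = visible_h / ref_h            # rescale reference to the image
    full_w = ref_w * scale               # width the whole object would have
    return full_w * visible_h            # area after complementing

# Reference image is 300x200 px; only 180 px of width is visible
# at a visible height of 200 px.
print(complete_area_px(180.0, 200.0, 300.0, 200.0))  # 300 * 200 = 60000
```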
  • the distance estimation module 42 estimates the distance between the imaging device and the object based on the position information of the imaging point included in the image data (step S37).
  • the processing in step S37 is the same as the processing in step S14 described above.
  • the size estimation module 43 estimates the size (area) of the object shown in the complemented image based on the complemented object image and the acquired object data (step S38).
  • the processing in step S38 is the same as the processing in step S15 described above.
  • the size estimation module 44 estimates the size (volume) of the object based on the estimated distance from the shooting point to the object and the estimated size of the object (step S39).
  • the processing in step S39 is the same as the processing in step S16 described above.
  • the weight estimation module 45 estimates the weight of this object by referring to the weight table (step S40).
  • the processing in step S40 is the same as the processing in step S17 described above.
  • the notification module 21 notifies the estimated weight to the user terminal (step S41).
  • the processing in step S41 is the same as the processing in step S18 described above.
  • the above is the third object detection processing.
  • a method for estimating the weight of heavy equipment in the third object detection processing executed by the computer 10 will be described. It should be noted that the weight can be estimated for agricultural products and humans in the same manner.
  • FIG. 15 is a diagram illustrating an example of image data acquired by the image data acquisition module 20.
  • the image data includes, in addition to the image, positional information of the shooting location.
  • FIG. 15 is an image of a shovel car as a heavy machine.
  • the image data acquisition module 20 acquires the image data shown in FIG. 15 by the processing in step S30 described above.
  • the feature extraction module 40 extracts the feature from the image data by the above-described process of step S31.
  • the object detection module 41 detects an object appearing in this image by the processes in steps S32 and S33 described above.
  • the object detection module 41 detects the shovel car (small) 300 as an object based on the feature amount extracted from the image shown in FIG.
  • The object detection module 41 determines, by the processing in step S34 described above, that part of the detected object is missing, and the object data acquisition module 24 acquires the object data relating to the detected object by the processing in step S35 described above.
  • the object complementing module 46 complements a missing part in the detected object by the processing in step S36 described above.
  • FIG. 16 is a diagram showing a shovel car (small) 310 in which the object complementing module 46 has complemented the part missing from the shovel car (small) 300.
  • The object complementing module 46 generates the shovel car (small) 310 by adding the complemented portion 320 based on the object data.
  • the distance estimation module 42 estimates the distance between the imaging device and the object by the processing in step S37 described above.
  • the size estimation module 43 estimates the size (area) of the complemented object by the processing in step S38 described above. In other words, the size estimation module 43 estimates the size of the image of the shovel car (small) 310 after the complementation.
  • the size estimation module 44 estimates the size (volume) of the object by the processing in step S39 described above. That is, the size estimating module 44 estimates the size of the shovel car (small) 310 after the complementation.
  • The weight estimation module 45 estimates the weight of the object by the processing in step S40 described above. Since the identifier of the object is "shovel car (small)", the weight estimation module 45 refers to the weight table and specifies the weight density D1 associated with "shovel car (small)". The weight estimation module 45 estimates the weight W7 of the shovel car (small) 300 based on the specified weight density D1 and the estimated size.
  • the notification module 21 notifies the estimated weight by the processing in step S41 described above.
  • a notification screen that the notification module 21 notifies the user terminal or the like will be described with reference to FIG.
  • FIG. 17 is a diagram illustrating an example of a notification screen in which the notification module 21 notifies a user terminal or the like of the weight of an object.
  • The notification module 21 displays on the user terminal or the like, as the notification screen, a screen in which the identifier of the object (here, the name "shovel car (small)") and the estimated weight are superimposed on the acquired image (the image before complementing).
  • the notification module 21 performs processing such as surrounding the specified object, highlighting, and changing the color as the notification screen to clarify which object has been specified.
  • In this case, the notification module 21 displays on the user terminal or the like, as the notification screen, the acquired image with an enclosing line surrounding the shovel car (small) 300, the identifier of the object, and the weight superimposed on it.
  • the notification module 21 displays the same notification on the user terminal as a notification screen for other crops and people.
  • FIG. 6 is a diagram illustrating a flowchart of the learning process performed by the computer 10. The processing executed by each module described above will be described together with this processing.
  • The actual weight data acquisition module 25 acquires actual weight data indicating the actual weight of an object whose weight was estimated by the first, second, or third object detection process described above (step S50).
  • The terminal device receives as input or otherwise obtains the result of actually measuring the weight of the object, and transmits data relating to the identifier, image, and actual weight of the object to the computer 10 as actual weight data.
  • The actual weight data acquisition module 25 acquires the actual weight of the object whose weight was estimated, by receiving this actual weight data.
  • The learning module 47 learns the correlation between the acquired actual weight of the object and the detected image of the object (step S51). In step S51, the learning module 47 learns, as the correlation between the actual weight and the image, at least one of the density, name, or size of the object, or the distance to the object.
  • the storage module 30 stores the learning result (Step S52).
  • The weight estimation module 45 estimates the weight of the object in consideration of the learning result in the processing of steps S17, S28, and S40 described above. That is, when estimating the weight of the object, the weight estimation module 45 refers to the weight table and applies a correction based on the learned correlation.
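  • One simple way to realize such a correction (an assumption; the patent only states that the learned correlation is used as a correction) is to keep, per identifier, the mean ratio of actual to estimated weight and apply it to new estimates:

```python
from collections import defaultdict

# Sketch of a learning-based correction for steps S17/S28/S40.
# The per-identifier actual/estimated ratio model is illustrative.

_ratios = defaultdict(list)

def learn(identifier: str, estimated_kg: float, actual_kg: float) -> None:
    """Record how far the estimate was from the measured weight."""
    _ratios[identifier].append(actual_kg / estimated_kg)

def corrected_estimate(identifier: str, estimated_kg: float) -> float:
    """Apply the mean learned correction factor to a new estimate."""
    history = _ratios[identifier]
    factor = sum(history) / len(history) if history else 1.0
    return estimated_kg * factor

learn("shovel car (small)", 7_000.0, 7_700.0)   # actual was 10% heavier
print(corrected_estimate("shovel car (small)", 6_500.0))  # 7150.0
```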
  • the means and functions described above are implemented when a computer (including a CPU, an information processing device, and various terminals) reads and executes a predetermined program.
  • the program is provided, for example, in the form of being provided from a computer via a network (SaaS: Software as a Service).
  • the program is provided in a form stored in a computer-readable storage medium such as a flexible disk, a CD (eg, a CD-ROM), and a DVD (eg, a DVD-ROM, a DVD-RAM).
  • the computer reads the program from the storage medium, transfers the program to an internal storage device or an external storage device, stores and executes the program.
  • the program may be stored in a storage device (storage medium) such as a magnetic disk, an optical disk, or a magneto-optical disk in advance, and may be provided to the computer from the storage device via a communication line.

Landscapes

  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Engineering & Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Theoretical Computer Science (AREA)
  • Geometry (AREA)
  • Image Analysis (AREA)
  • Image Processing (AREA)

Abstract

[Problem] An objective of the present invention is to provide a computer system, object sensing method, and program, which facilitate accurate inference of the weight of an object. [Solution] This computer system: acquires a captured image; extracts a feature value from the image; senses an object; and infers the weight of the sensed object from the size shown in the image of the object. The computer system infers the size (volume) of the object from the size shown in the image of the object, and infers the weight of the object by using the inferred size and referring to the weight density of the sensed object. The computer system also infers the weight of the object by learning a correlation between the actual weight of the object whose weight was inferred and the sensed object image (a correlation involving at least one of the density, name, or size of, or distance to, the object).

Description

Computer system, object detection method, and program
The present invention relates to a computer system for estimating the weight of an object, an object detection method, and a program.
In recent years, the weight of an object has been estimated by analyzing an image in which the object appears.
As a technique for estimating the weight of such an object, a technique has been disclosed in which an elevator doorway is photographed and the weight of an object carried into the elevator is estimated (see Patent Document 1). In this case, as the method of estimating the weight of the object, an image of the object carried into the car is photographed by a photographing device or the like, and the image is analyzed to estimate the contour of the object. Further, the image is divided into predetermined blocks, and whether or not the weight is equal to or greater than a predetermined value is determined based on the number of blocks, the variation of the block positions, and the estimated contour of the object.
JP 2016-169072 A
However, in the configuration of Patent Document 1, each of a plurality of images is divided into a plurality of blocks, and the similarity of motion vectors between the corresponding blocks of the images is calculated to estimate the contour of the object. In addition, the technique merely determines, based on the number of blocks and the variation of the block positions, whether or not the weight of the object is equal to or greater than a predetermined value; it does not estimate the weight of the object itself, but only judges whether a preset weight is exceeded. It has therefore been difficult to estimate the weight of the object accurately.
An object of the present invention is to provide a computer system, an object detection method, and a program that make it easy to estimate the weight of an object accurately.
The present invention provides the following solutions.
The present invention provides a computer system comprising:
acquisition means for acquiring a captured image;
detection means for extracting a feature amount from the image and detecting an object; and
estimation means for estimating the weight of the detected object from the size of the object shown in the image.
According to the present invention, the computer system acquires a captured image, extracts a feature amount from the image, detects an object, and estimates the weight of the detected object from the size of the object shown in the image.
The present invention is in the category of computer systems, but in other categories such as methods and programs it exhibits the same actions and effects according to each category.
According to the present invention, it is possible to provide a computer system, an object detection method, and a program that make it easy to estimate the weight of an object accurately.
FIG. 1 is a diagram illustrating an outline of the object detection system 1.
FIG. 2 is an overall configuration diagram of the object detection system 1.
FIG. 3 is a flowchart illustrating a first object detection process executed by the computer 10.
FIG. 4 is a flowchart illustrating a second object detection process executed by the computer 10.
FIG. 5 is a flowchart illustrating a third object detection process executed by the computer 10.
FIG. 6 is a flowchart illustrating a learning process executed by the computer 10.
FIG. 7 is a diagram illustrating an example of the object table.
FIG. 8 is a diagram illustrating an example of an image.
FIG. 9 is a diagram illustrating an example of an image.
FIG. 10 is a diagram illustrating an example of an image.
FIG. 11 is a diagram illustrating an example of the notification screen.
FIG. 12 is a diagram illustrating an example of a state where a predetermined area is superimposed on an image.
FIG. 13 is a diagram illustrating an example of a state where a predetermined area is superimposed on an image.
FIG. 14 is a diagram illustrating an example of the notification screen.
FIG. 15 is a diagram illustrating an example of an image.
FIG. 16 is a diagram illustrating an example of a complemented image obtained by complementing an image.
FIG. 17 is a diagram illustrating an example of the notification screen.
Hereinafter, the best mode for carrying out the present invention will be described with reference to the drawings. Note that this is merely an example, and the technical scope of the present invention is not limited thereto.
[Overview of Object Detection System 1]
An outline of a preferred embodiment of the present invention will be described with reference to FIG. 1. FIG. 1 is a diagram for describing an outline of an object detection system 1 according to a preferred embodiment of the present invention. The object detection system 1 is a computer system that comprises a computer 10 and estimates the weight of an object.
In addition to the computer 10, the object detection system 1 may include other devices such as a photographing device that photographs objects, a terminal device that displays the estimated weight, and a user terminal that receives predetermined input from a user.
The computer 10 is connected to other devices (not shown) so as to be capable of data communication via a public network or the like, and transmits and receives necessary data.
The computer 10 acquires, as image data, an image of an object (for example, a heavy machine such as a shovel car, a crop such as a vegetable, or a person) photographed by a photographing device (not shown). The image data includes position information of the shooting point. The photographing device acquires its own current position from GPS (Global Positioning System) or the like, and this acquired current position serves as the position information of the shooting point.
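The structure of such image data can be pictured as a simple record pairing the image with the GPS-derived shooting-point position. The Python sketch below is purely illustrative; the field names and types are assumptions, not definitions from this disclosure.

```python
from dataclasses import dataclass

@dataclass
class ImageData:
    """One set of image data: the captured image plus the position
    information of the shooting point taken from GPS or the like.
    Field names here are illustrative assumptions."""
    pixels: bytes      # the captured image, e.g. an encoded JPEG
    latitude: float    # shooting-point latitude from GPS
    longitude: float   # shooting-point longitude from GPS

sample = ImageData(pixels=b"...", latitude=35.6812, longitude=139.7671)
print(sample.latitude, sample.longitude)
```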
The computer 10 analyzes the image included in this image data and extracts its feature amounts (for example, statistical values such as the average, variance, and histogram of pixel values, and the shape and contour of an object). Based on the extracted feature amounts, the computer 10 detects the object appearing in the image.
The computer 10 estimates the weight of the detected object. For example, the computer 10 estimates the distance from the shooting point to the object based on the position information of the shooting point, and estimates the size (volume) of the object based on this distance and the object's size (area) in the image. The computer 10 then estimates the weight of the object based on this volume and the weight density of the detected object.
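As a rough illustration of this chain (distance and image area to volume, then volume times density to weight), the following Python sketch shows one possible calculation. The pinhole-camera scaling, the area-to-volume proxy, and all numeric values are assumptions made for the example, not the method fixed by this disclosure.

```python
def estimate_object_weight(area_px: float, distance_m: float,
                           focal_px: float, density_kg_m3: float) -> float:
    """Sketch: pixel area at a known distance -> physical area ->
    crude volume proxy -> weight via the object's weight density."""
    m_per_px = distance_m / focal_px      # pinhole-camera meters per pixel
    area_m2 = area_px * m_per_px ** 2     # size (area) of the object in m^2
    volume_m3 = area_m2 ** 1.5            # approximate area -> volume (size)
    return volume_m3 * density_kg_m3      # weight = volume x weight density

# Example: a 12,000-pixel object 10 m away, focal length 1,000 px, density 1,800 kg/m^3
print(estimate_object_weight(12_000, 10.0, 1_000.0, 1_800.0))
```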
The computer 10 also estimates the weight of the object by learning the correlation between the actual weight of a detected object and the acquired image of that object. For example, when the computer 10 newly acquires image data, it takes the learning result into account when estimating the weight of the object appearing in the newly acquired image data. Here, the computer 10 uses at least one of the density, name, or size of the object, or the distance to the object, as the correlation to be learned.
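One way to picture this learning step is an ordinary least-squares fit between image-derived features and measured weights. The disclosure does not specify the model, so the feature choice and all data values below are illustrative assumptions.

```python
import numpy as np

# Hypothetical training data: (object size in px, distance in m) -> actual weight in kg
features = np.array([[12_000, 10.0], [8_500, 12.0], [20_000, 8.0], [5_000, 15.0]])
weights = np.array([5_200.0, 3_900.0, 8_800.0, 2_300.0])

# Fit weight ~ a*size + b*distance + c by least squares
X = np.column_stack([features, np.ones(len(features))])
coef, *_ = np.linalg.lstsq(X, weights, rcond=None)

# Apply the learned correlation to an object in a newly acquired image
print(float(np.array([10_000, 11.0, 1.0]) @ coef))
```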
An outline of the processing executed by the object detection system 1 will now be described.
First, the computer 10 acquires image data (step S01). The computer 10 acquires, as image data, an image photographed by a photographing device (not shown) together with the position information of that photographing device (the position information of the shooting point). The photographing device acquires its own position information from GPS or the like, and the computer 10 acquires this position information as the position information of the shooting point.
The computer 10 performs image analysis on this image data and extracts the feature amounts of the image it contains (step S02). As the feature amounts of the image, the computer 10 extracts statistical values such as the average, variance, and histogram of pixel values, as well as the shape and contour of the object.
Based on the extracted feature amounts, the computer 10 detects the object appearing in the image (step S03). The computer 10 refers to an object table in which identifiers of various objects (name, model number, product number, etc.) are associated with the feature amounts of each object, identifies the identifier of the object corresponding to the feature amounts extracted this time, and thereby detects the object.
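The object-table lookup might be sketched as a nearest-neighbour match in feature space, as below. The table contents, the feature vectors, and the distance threshold are all invented for the example; the actual table simply associates identifiers with feature amounts as described above.

```python
import math

# Hypothetical object table: identifier -> stored feature vector
OBJECT_TABLE = {
    "shovel car (small)":  [0.82, 0.31, 0.55],
    "cabbage":             [0.12, 0.77, 0.40],
    "person (male, 20s)":  [0.45, 0.50, 0.66],
}

def detect_object(feature, threshold=0.2):
    """Return the identifier whose stored feature vector is closest to
    `feature`, or None when nothing is close enough (no object detected)."""
    best_id, best_dist = None, float("inf")
    for identifier, ref in OBJECT_TABLE.items():
        dist = math.dist(feature, ref)  # Euclidean distance in feature space
        if dist < best_dist:
            best_id, best_dist = identifier, dist
    return best_id if best_dist <= threshold else None

print(detect_object([0.80, 0.33, 0.52]))  # -> "shovel car (small)"
```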
The computer 10 estimates the weight of the detected object from the size of the object shown in the image (step S04). For example, the computer 10 estimates the weight of the object based on the size shown in the image of the detected object (that is, the area of the object in the image).
The computer 10 also estimates the distance from the shooting point of the image to the object by a method such as three-point surveying, and estimates the weight based on this distance and the size of the object shown in the image. In doing so, the computer 10 refers to a weight table in which the identifiers of objects (name, model number, product number, etc.), the size (volume) of each object, and the weight density of each object are associated with one another, and estimates the weight of the object detected this time. The computer 10 identifies the size and weight density associated with the identifier of the detected object by referring to the weight table, and estimates the weight of the object based on the identified size and weight density and the estimated size.
Furthermore, the computer 10 learns the correlation between the actual weight of the detected object and the image acquired this time (a correlation involving at least one of the density, name, or size of the object, or the distance to the object). From the next time onward, when estimating the weight of an object appearing in an acquired image, the computer 10 takes this learning result into account.
The above is the outline of the object detection system 1.
[System Configuration of Object Detection System 1]
The system configuration of the object detection system 1 according to a preferred embodiment of the present invention will be described with reference to FIG. 2. FIG. 2 is a diagram illustrating the system configuration of the object detection system 1. In FIG. 2, the object detection system 1 is a computer system that comprises a computer 10 and estimates the weight of an object. The computer 10 is connected to the above-described photographing device, terminal device, user terminal, and other devices (not shown) via a public network or the like so as to be capable of data communication.
As its control unit, the computer 10 includes a CPU (Central Processing Unit), a RAM (Random Access Memory), a ROM (Read Only Memory), and the like. As its communication unit, it includes a device for enabling communication with other terminals and devices, for example a Wi-Fi (Wireless Fidelity) device compliant with IEEE 802.11. As its storage unit, the computer 10 includes a data storage such as a hard disk, semiconductor memory, storage medium, or memory card. As its processing unit, the computer 10 includes various devices that execute various kinds of processing.
In the computer 10, the control unit reads a predetermined program and, in cooperation with the communication unit, realizes an image data acquisition module 20, a notification module 21, an object designation data acquisition module 22, an area designation data acquisition module 23, an object data acquisition module 24, and an actual weight data acquisition module 25. Likewise, the control unit reads a predetermined program and, in cooperation with the storage unit, realizes a storage module 30. The control unit also reads a predetermined program and, in cooperation with the processing unit, realizes a feature amount extraction module 40, an object detection module 41, a distance estimation module 42, a size estimation module 43, a volume estimation module 44, a weight estimation module 45, an object complementing module 46, and a learning module 47.
[First Object Detection Process]
The first object detection process executed by the object detection system 1 will be described with reference to FIG. 3. FIG. 3 is a flowchart of the first object detection process executed by the computer 10. The processing executed by each of the modules described above is explained together with this process.
The image data acquisition module 20 acquires, as image data, an image photographed by a photographing device (not shown) and the position information of that photographing device (step S10). In step S10, the photographing device transmits the image it has captured and its own position information, acquired from GPS or the like, to the computer 10 as image data. The position information the photographing device acquires for itself serves as the position information of the shooting point. By receiving this image data, the image data acquisition module 20 acquires the image photographed by the photographing device and the position information of the shooting point of that image.
Based on the acquired image data, the feature amount extraction module 40 analyzes the image included in the image data and extracts the feature amounts of the image (step S11). In step S11, the feature amount extraction module 40 extracts, as feature amounts, statistical values such as the average, variance, and histogram of pixel values, as well as the shape and contour of the object.
The object detection module 41 determines, based on the extracted feature amounts, whether an object appears in the image (step S12). In step S12, the object detection module 41 refers to an object table, stored in advance by the storage module 30, in which identifiers of various objects (name, model number, product number, etc.) are associated with one or more feature amounts of each object, and determines whether an object corresponding to the extracted feature amounts appears in the image. For example, the storage module 30 stores, as the object table, identifiers of heavy machines (machine name, machine name classified by manufacturer, model number, product number, etc.) associated with the feature amounts of each machine. Likewise, the storage module 30 stores, as the object table, identifiers of crops (crop name, crop name classified by producer, identification numbers that can identify individual crops, etc.) associated with the feature amounts of each crop, and identifiers of persons (age, gender, race, height, weight, etc.) associated with the feature amounts of each person.
The object detection module 41 compares the feature amounts extracted this time with the object table and determines whether an object corresponding to these feature amounts is stored in the object table, thereby determining whether an object appears in the image.
If the object detection module 41 determines in step S12 that no object appears in the image (step S12: NO), the computer 10 ends this process.
On the other hand, if the object detection module 41 determines in step S12 that an object appears in the image (step S12: YES), the object detection module 41 detects the object appearing in the image (step S13). In step S13, as a result of comparing the extracted feature amounts with the object table, the object detection module 41 detects the identifier of the object corresponding to the extracted feature amounts as the object appearing in the image.
The distance estimation module 42 estimates the distance from the photographing device to the object based on the position information of the shooting point included in the image data (step S14). In step S14, the distance estimation module 42 estimates the distance from the shooting position to the object by, for example, triangulation. In doing so, the distance estimation module 42 knows in advance the points at both ends of a baseline passing through the shooting point, and thereby estimates the distance from the shooting point to the object.
The method by which the distance estimation module 42 estimates the distance from the photographing device to the object is not limited to the above example and can be changed as appropriate.
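A textbook triangulation over a known baseline, one plausible reading of step S14, is sketched below; the baseline length and bearing angles are example values, not data from this disclosure.

```python
import math

def triangulate_distance(baseline_m: float,
                         angle_left_deg: float,
                         angle_right_deg: float) -> float:
    """Distance from a known baseline to a target, given the bearing
    angles toward the target measured at the two baseline endpoints."""
    a = math.radians(angle_left_deg)
    b = math.radians(angle_right_deg)
    c = math.pi - a - b                                 # angle at the target
    side_left = baseline_m * math.sin(b) / math.sin(c)  # law of sines
    return side_left * math.sin(a)                      # height above the baseline

print(triangulate_distance(baseline_m=50.0, angle_left_deg=72.0, angle_right_deg=78.0))
```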
The size estimation module 43 estimates the size (area) of the detected object in the image (step S15). In step S15, the size estimation module 43 estimates the region in which the object exists as the size of the object, based on, for example, the detected contour or shape of the object. The size estimation module 43 estimates, for example, the total number of pixels in this region as the size.
The method by which the size estimation module 43 estimates the size of the object is not limited to the above example and can be changed as appropriate.
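Counting the pixels of the detected region, as described for step S15, reduces to summing a boolean mask; the toy mask below is only for illustration.

```python
import numpy as np

def object_area_px(mask: np.ndarray) -> int:
    """Size (area) of the object: total number of pixels in the region
    derived from the detected contour or shape (step S15)."""
    return int(mask.sum())

mask = np.zeros((5, 5), dtype=bool)
mask[1:3, 1:4] = True        # a 2x3 object region
print(object_area_px(mask))  # -> 6
```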
The volume estimation module 44 estimates the size (volume) of the object based on the estimated distance from the shooting point to the object and the estimated size of the object (step S16). In step S16, the volume estimation module 44 estimates the volume of the object based on the ratio of the object's size to the image and the distance.
The method by which the volume estimation module 44 estimates the volume of the object is not limited to the above example and can be changed as appropriate.
The weight estimation module 45 estimates the weight of the object by referring to a weight table, stored in advance by the storage module 30, in which object identifiers (name, model number, product number, etc.), object sizes, and weight densities are associated with one another (step S17). In step S17, the weight estimation module 45 refers to this weight table based on the identifier of the object detected this time and the volume of the object, and identifies the weight density corresponding to the detected object. The weight estimation module 45 estimates the weight of the object based on the identified weight density and the estimated volume of the object.
The weight table stored by the storage module 30 will be described with reference to FIG. 7. FIG. 7 is a diagram schematically illustrating the weight table stored by the storage module 30. In FIG. 7, the storage module 30 registers, in the weight table, the name of an object as its identifier, the size (volume) of the object, and the weight density of the object in association with one another, and stores the registered weight table. For example, the storage module 30 registers the object name shovel car (small), the object size V1, and the object weight density D1 in association with one another. Likewise, the storage module 30 registers shovel car (large), V2, and D2; cabbage, V3, and D3; and person (male, 20s), V4, and D4, each set in association with one another. These data are obtained through input from the user terminal, via an external computer, or the like; the storage module 30 registers them in the weight table and stores the weight table.
The identifier of an object registered in the weight table is not limited to a name and may be something else. When the object is a person, one or a combination of age, gender, race, height, weight, and the like may be associated, as the identifier, with the size and the weight density.
When estimating the weight of an object, the weight estimation module 45 may also use a method other than referring to the weight table. For example, the weight estimation module 45 may estimate the weight of the object based on a function of the object's identifier, size, and weight density.
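A minimal sketch of the table lookup in step S17 follows; the identifiers mirror FIG. 7, but every numeric value (stand-ins for V1..V4 and D1..D4) is an assumed placeholder.

```python
# Hypothetical weight table mirroring FIG. 7: identifier -> (size V in m^3, density D in kg/m^3)
WEIGHT_TABLE = {
    "shovel car (small)":  (4.0, 1_800.0),   # stands in for (V1, D1)
    "shovel car (large)":  (9.0, 1_900.0),   # stands in for (V2, D2)
    "cabbage":             (0.008, 380.0),   # stands in for (V3, D3)
    "person (male, 20s)":  (0.07, 985.0),    # stands in for (V4, D4)
}

def estimate_weight(identifier: str, estimated_volume_m3: float) -> float:
    """Step S17 sketch: look up the weight density for the detected
    identifier and multiply it by the volume estimated in step S16."""
    _, density = WEIGHT_TABLE[identifier]
    return estimated_volume_m3 * density

print(estimate_weight("cabbage", 0.008))  # roughly 3 kg
```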
The notification module 21 notifies the user terminal or the like of the estimated weight (step S18). In step S18, the notification module 21 generates a weight notification in which the identifier and estimated weight of the object are superimposed on the acquired image, and sends this weight notification to the user terminal or the like. The user terminal or the like receives this notification and displays it on its own display unit or the like, thereby informing the user of the weight of the object. In this way, the computer 10 notifies the user of the estimated weight of the object by having it displayed on the user terminal or the like.
The above is the first object detection process.
For the first object detection process executed by the computer 10 described above, the method of estimating the weight of each of a heavy machine, a crop, and a person will now be described.
The method by which the computer 10 estimates the weight of each of a heavy machine, a crop, and a person as objects will be described with reference to FIGS. 8, 9, and 10. FIGS. 8, 9, and 10 are diagrams illustrating examples of image data acquired by the image data acquisition module 20. In addition to the image, each set of image data includes the position information of its shooting point. FIG. 8 is an image of a shovel car as a heavy machine. FIG. 9 is an image of cabbage as a crop. FIG. 10 is an image of a person.
The image data acquisition module 20 acquires the image data shown in FIGS. 8, 9, and 10 through the processing of step S10 described above.
The feature amount extraction module 40 extracts feature amounts from each set of image data through the processing of step S11 described above.
The object detection module 41 detects the objects appearing in these images through the processing of steps S12 and S13 described above. Based on the feature amounts extracted from the image shown in FIG. 8, the object detection module 41 detects a shovel car (small) 100 as the object. Based on the feature amounts extracted from the image shown in FIG. 9, it detects a cabbage 110 as the object. Based on the feature amounts extracted from the image shown in FIG. 10, it detects a person (male, 20s) 120 as the object.
When a plurality of objects at close range, such as people or crops, are detected, each object is detected individually.
The distance estimation module 42 estimates the distance from the photographing device to each object through the processing of step S14 described above.
The size estimation module 43 estimates the size (area) of each object in its image through the processing of step S15 described above. That is, the size estimation module 43 estimates the size of the shovel car (small) 100 in its image, the size of the cabbage 110 in its image, and the size of the person (male, 20s) 120 in its image.
The volume estimation module 44 estimates the size (volume) of each object through the processing of step S16 described above. That is, the volume estimation module 44 estimates the volume of the shovel car (small) 100, the volume of the cabbage 110, and the volume of the person (male, 20s) 120.
The weight estimation module 45 estimates the weight of each object through the processing of step S17 described above. Since the identifier of the first object is "shovel car (small)", the weight estimation module 45 refers to the weight table and identifies the weight density D1 associated with "shovel car (small)". Based on the identified weight density D1 and the estimated volume, it estimates the weight W1 of the shovel car (small) 100. Similarly, since the identifier of the second object is "cabbage", the weight estimation module 45 refers to the weight table, identifies the weight density D3 associated with "cabbage", and estimates the weight W3 of the cabbage 110 based on the identified weight density D3 and the estimated volume. Likewise, since the identifier of the third object is "person (male, 20s)", the weight estimation module 45 refers to the weight table, identifies the weight density D4 associated with "person (male, 20s)", and estimates the weight W4 of the person (male, 20s) 120 based on the identified weight density D4 and the estimated volume.
The notification module 21 notifies the estimated weights through the processing of step S18 described above. The notification screen that the notification module 21 sends to the user terminal or the like will be described with reference to FIG. 11. FIG. 11 is a diagram illustrating an example of a notification screen by which the notification module 21 notifies the user terminal or the like of the weight of an object. Of the three objects described above, the shovel car (small) is described as an example; the notification module 21 notifies the cabbage and the person in the same manner.
As the notification screen, the notification module 21 causes the user terminal or the like to display a screen in which the identifier of the object (here, the name "shovel car (small)") and the estimated weight W1 are superimposed on the acquired image. On this notification screen, the notification module 21 applies processing such as enclosing the identified object, highlighting it, or changing its color, to make clear which object has been identified. Ultimately, the notification module 21 causes the user terminal or the like to display, as the notification screen, the acquired image with an enclosing line around the shovel car (small) 100, the identifier of the object, and the weight superimposed on it.
For the cabbage 110 and the person 120 as well, the notification module 21 likewise causes the user terminal or the like to display, as notification screens, the acquired image with an enclosing line around the cabbage 110, its identifier, and its weight superimposed on it, and the acquired image with an enclosing line around the person 120, their identifier, and their weight superimposed on it.
The above is the description of the first object detection process using actual objects as examples.
[Second Object Detection Process]
The second object detection process executed by the object detection system 1 will be described with reference to FIG. 4. FIG. 4 is a flowchart of the second object detection process executed by the computer 10. The processing executed by each of the modules described above is explained together with this process.
Detailed description of processing identical to the first object detection process described above is omitted.
First, the object designation data acquisition module 22 determines whether object designation data designating the object to be detected has been acquired (step S20). In step S20, the object designation data acquisition module 22 acquires, as object designation data, data designating an object that exists over a predetermined range, such as soil or earth and sand (caused by a disaster, excavated during construction, etc.), water, or a field of crops. At this point, the user terminal or the like accepts input designating the object to be detected and transmits the designated object to the computer 10 as object designation data. The object designation data acquisition module 22 acquires the object designation data by receiving what the user terminal has transmitted.
The source from which the object designation data acquisition module 22 acquires the object designation data can be changed as appropriate. Also, this step does not necessarily have to be executed; in that case, the computer 10 may simply execute the processing from step S21 onward, described below.
If the object designation data acquisition module 22 determines in step S20 that no object designation data has been acquired (step S20: NO), this process ends. In this case, the computer 10 may execute the first object detection process described above.
On the other hand, if the object designation data acquisition module 22 determines in step S20 that object designation data has been acquired (step S20: YES), the image data acquisition module 20 acquires image data (step S21). The processing of step S21 is the same as that of step S10 described above. At this time, the photographing device transmits the image data not only to the computer 10 but also to a user terminal (not shown). The user terminal receives this image data and uses the image it contains in the processing described below.
The area designation data acquisition module 23 acquires area designation data designating a predetermined area of the acquired image (step S22). In step S22, since the user terminal receives the image data, the user terminal and the computer 10 hold the same image data. The user terminal displays the image based on this image data on its own display unit, accepts input from the user such as tap operations, and thereby accepts input designating a predetermined area of the image. The user terminal identifies the coordinates of this predetermined area in the image. For example, if the area is rectangular, the user terminal identifies the coordinates of each vertex and identifies the area enclosed by the rectangle as the predetermined area. If the area is circular, the user terminal identifies the coordinates of the center and the radius from the center to the circumference, and identifies the area enclosed by the circle as the predetermined area. The user terminal transmits the identified predetermined area to the computer 10 as area designation data. By receiving this area designation data, the area designation data acquisition module 23 acquires the area designation data designating the predetermined area of the image.
The feature amount extraction module 40 analyzes the image based on the acquired image data and extracts the feature amounts of the image (step S23). The processing of step S23 is the same as that of step S11 described above.
The object detection module 41 detects the object appearing in the predetermined area based on the extracted feature amounts (step S24). In step S24, the object detection module 41 identifies the area corresponding to the predetermined area based on the acquired area designation data. For example, if the predetermined area is rectangular, the object detection module 41 identifies the coordinates of each vertex of the rectangle from the area designation data and identifies the rectangular area connecting those vertices as the designated predetermined area. If the predetermined area is circular, the object detection module 41 identifies the center coordinates and radius of the circle from the area designation data and identifies the area enclosed by the circle as the designated predetermined area. The object detection module 41 then detects the object appearing within this predetermined area. The method by which the object detection module 41 detects the object is the same as the processing of steps S12 and S13 described above.
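The rectangle and circle handling of steps S22 and S24 amounts to a containment test over image coordinates. The sketch below assumes pixel coordinates with the origin at the top-left, which the disclosure does not specify.

```python
from dataclasses import dataclass

@dataclass
class Rect:
    x1: float
    y1: float
    x2: float
    y2: float   # two opposite vertices of the rectangle

@dataclass
class Circle:
    cx: float
    cy: float
    r: float    # center coordinates and radius

def contains(region, x: float, y: float) -> bool:
    """True when pixel (x, y) lies inside the designated predetermined area."""
    if isinstance(region, Rect):
        return (min(region.x1, region.x2) <= x <= max(region.x1, region.x2)
                and min(region.y1, region.y2) <= y <= max(region.y1, region.y2))
    return (x - region.cx) ** 2 + (y - region.cy) ** 2 <= region.r ** 2

print(contains(Rect(10, 10, 200, 120), 50, 60))  # -> True
print(contains(Circle(100, 100, 30), 150, 150))  # -> False
```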
The distance estimation module 42 estimates the distance from the photographing device to the object based on the position information of the shooting point included in the image data (step S25). The processing of step S25 is the same as that of step S14 described above.
The size estimation module 43 estimates the size (area) in the image of the object appearing within the detected predetermined area (step S26). The processing of step S26 is the same as that of step S15 described above.
The volume estimation module 44 estimates the size (volume) of the object based on the estimated distance and the estimated size (step S27). The processing of step S27 is the same as that of step S16 described above.
The weight estimation module 45 estimates the weight of the object by referring to the weight table, stored in advance by the storage module 30, in which object identifiers (name, model number, product number, etc.), object sizes, and weight densities are associated with one another (step S28). The processing of step S28 is the same as that of step S17 described above.
The notification module 21 notifies the user terminal of the estimated weight (step S29). The processing of step S29 is the same as that of step S18 described above.
The above is the second object detection process.
For the second object detection process executed by the computer 10 described above, the method of estimating the weight of earth and sand and of crops will now be described. The weight of a person can be estimated in the same manner.
The method by which the computer 10 estimates the weight of earth and sand and of crops as objects will be described with reference to FIGS. 12 and 13. FIGS. 12 and 13 are diagrams illustrating examples of a state in which a predetermined area, designated based on the area designation data acquired by the area designation data acquisition module 23, is superimposed on the image acquired by the image data acquisition module 20. In addition to the image, each set of image data includes the position information of its shooting point. FIG. 12 shows an image including earth and sand with the designated predetermined area superimposed on it. FIG. 13 shows an image of crops with the designated predetermined area superimposed on it.
The object designation data acquisition module 22 acquires the object designation data through the processing of step S20 described above. Here, the object designation data acquisition module 22 acquires "earth and sand" as the object designation data for FIG. 12, and "cabbage" as the object designation data for FIG. 13.
The image data acquisition module 20 acquires the image data through the processing of step S21 described above.
The area designation data acquisition module 23 acquires the area designation data through the processing of step S22 described above.
The feature amount extraction module 40 extracts the feature amounts of the image through the processing of step S23 described above.
The object detection module 41 detects the object appearing in the designated predetermined area based on the above feature amounts through the processing of step S24 described above. In FIG. 12, since "earth and sand" is designated as the object and the designated predetermined area 200 exists, the object detection module 41 detects the earth and sand present within the predetermined area 200. Similarly, in FIG. 13, since "cabbage" is designated as the object and the designated predetermined area 210 exists, the object detection module 41 detects the cabbage present within the predetermined area 210.
The distance estimation module 42 estimates the distance from the photographing device to the object through the processing of step S25 described above.
The size estimation module 43 estimates the size (area) in the image of the object appearing within the predetermined area through the processing of step S26 described above. The size estimation module 43 estimates the size of the earth and sand present within the predetermined area 200 and the size of the cabbage present within the predetermined area 210.
At this time, as shown in FIG. 12, when another object appears within the predetermined area 200, the size of the object in the image is estimated on the assumption that the target object exists at the location occupied by that other object. Specifically, the size estimation module 43 adds the size of the heavy machine present in the predetermined area 200 to the size of the earth and sand, and thereby estimates the size of the earth and sand present within the predetermined area 200. This is effective in cases where a large object such as a heavy machine hides the target object.
Conversely, as shown in FIG. 13, when another object appears within the predetermined area 210, the size of the object in the image is estimated on the assumption that the target object does not exist at the location occupied by that other object. Specifically, of the cabbages present in the predetermined area 210, the size estimation module 43 excludes the size of any head of cabbage that is not completely contained within the predetermined area 210, and estimates the size of the cabbage present within the predetermined area 210 based only on the sizes of the heads completely contained within it. This is effective in cases where only part of an object, such as a crop, is present within the predetermined area.
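The two counting rules just described, adding back area hidden by an occluder (FIG. 12) and dropping items not wholly inside the area (FIG. 13), can be summarized as below; the pixel counts are invented for the example.

```python
def size_with_occluders(visible_target_px: int, occluder_px: int) -> int:
    """Earth-and-sand case (FIG. 12): pixels hidden by another object,
    such as a heavy machine, are counted as if the target were there."""
    return visible_target_px + occluder_px

def size_whole_items_only(item_areas_px, fully_inside):
    """Cabbage case (FIG. 13): items only partly inside the designated
    area are excluded; only wholly contained items are summed."""
    return sum(a for a, inside in zip(item_areas_px, fully_inside) if inside)

print(size_with_occluders(50_000, 8_000))                           # -> 58000
print(size_whole_items_only([900, 850, 400], [True, True, False]))  # -> 1750
```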
The volume estimation module 44 estimates the size (volume) of the object through the processing of step S27 described above. That is, the volume estimation module 44 estimates the volume of the earth and sand in the predetermined area 200 and the volume of the cabbage in the predetermined area 210.
The weight estimation module 45 estimates the weight of each object through the processing of step S28 described above. Since the identifier of the first object is "earth and sand", the weight estimation module 45 refers to the weight table and identifies the weight density D5 associated with "earth and sand". Based on the identified weight density D5 and the estimated volume, it estimates the weight W5 of the earth and sand present within the predetermined area 200. Similarly, since the identifier of the second object is "cabbage", the weight estimation module 45 refers to the weight table, identifies the weight density D3 associated with "cabbage", and estimates the weight W6 of the cabbage present within the predetermined area 210 based on the identified weight density D3 and the estimated volume.
The notification module 21 notifies the estimated weights through the processing of step S29 described above. The notification screen that the notification module 21 sends to the user terminal or the like will be described with reference to FIG. 14. FIG. 14 is a diagram illustrating an example of a notification screen by which the notification module 21 notifies the user terminal or the like of the weight of an object appearing within a predetermined area. Of the two objects described above, the earth and sand is described as an example; the notification module 21 notifies the cabbage in the same manner.
As the notification screen, the notification module 21 causes the user terminal or the like to display a screen in which the identifier of the object (here, "earth and sand"), the estimated weight W5, and the predetermined area 200 are superimposed on the acquired image.
For the cabbage as well, the notification module 21 likewise causes the user terminal to display, as the notification screen, the predetermined area 210, the identifier of the object, and the weight superimposed on the acquired image.
Although description is omitted here, when the object is a person, the same processing as for the earth and sand or the cabbage described above may be executed. On the notification screen, the notification module 21 applies processing such as highlighting the predetermined area or changing its color to make clear where in the image the identified object appears. Ultimately, the notification module 21 causes the user terminal to display, as the notification screen, the acquired image with the predetermined area, the identifier of the object, and the weight superimposed on it.
The above is the description of the second object detection process using actual objects as examples.
[Third Object Detection Process]
The third object detection process executed by the object detection system 1 will be described with reference to FIG. 5. FIG. 5 is a flowchart of the third object detection process executed by the computer 10. The processing executed by each of the modules described above is explained together with this process.
Detailed description of processing identical to the first or second object detection process described above is omitted.
First, the image data acquisition module 20 acquires, as image data, an image photographed by a photographing device (not shown) and the position information of that photographing device (step S30). The processing of step S30 is the same as that of step S10 described above.
Based on the acquired image data, the feature amount extraction module 40 analyzes the image included in the image data and extracts the feature amounts of the image (step S31). The processing of step S31 is the same as that of step S11 described above.
The object detection module 41 determines, based on the extracted feature amounts, whether an object appears in the image (step S32). The processing of step S32 is the same as that of step S12 described above.
If the object detection module 41 determines in step S32 that no object appears in the image (step S32: NO), the computer 10 ends this process.
On the other hand, if the object detection module 41 determines in step S32 that an object appears in the image (step S32: YES), the object detection module 41 detects the object appearing in the image (step S33). The processing of step S33 is the same as that of step S13 described above.
The object detection module 41 determines whether the whole of the object has been detected (step S34). In step S34, the object detection module 41 determines whether part of the detected object lies at an edge of the image (for example, when the image is rectangular, whether part of the object touches one of the sides or lies along one of them). The object detection module 41 also determines whether part of the contour or shape of the detected object is cut off partway.
The object detection module 41 may determine whether the whole object has been detected by a method other than those described above.
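One concrete reading of the edge test in step S34 is to check whether the object's mask touches any image border; this sketch assumes a boolean-mask representation, which is not mandated by the disclosure.

```python
import numpy as np

def whole_object_visible(mask: np.ndarray) -> bool:
    """Step S34 sketch: treat the object as fully captured when its mask
    touches none of the four borders of the image."""
    return not (mask[0, :].any() or mask[-1, :].any()
                or mask[:, 0].any() or mask[:, -1].any())

mask = np.zeros((6, 8), dtype=bool)
mask[2:4, 3:6] = True
print(whole_object_visible(mask))  # -> True: clear of every edge
mask[0, 4] = True
print(whole_object_visible(mask))  # -> False: the object is cut off at the top
```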
If the object detection module 41 determines in step S34 that the whole of the object has been detected (step S34: YES), the computer 10 ends this process. In this case, the computer 10 may execute the first object detection process described above.
On the other hand, if the object detection module 41 determines in step S34 that the whole of the object could not be detected (step S34: NO), the object data acquisition module 24 acquires object data, which is data on the image, size, dimensions, and the like of the object corresponding to this object's identifier, stored in advance by an external computer or the storage module 30 (step S35). In step S35, the object data acquisition module 24 acquires the object data corresponding to the identifier of the detected object by looking up that identifier in the various tables stored by the external computer or the storage module 30.
The object complementing module 46 complements the missing part of the detected object based on the acquired object data (step S36). In step S36, the object complementing module 46 compares the image in the acquired object data with the image of the object detected this time and estimates their ratio. Based on the estimated ratio, the object complementing module 46 corrects the image in the acquired object data by reducing or enlarging it. The object complementing module 46 compares this corrected image with the detected object, identifies in the corrected image the part missing from the detected object, and joins the identified part to the image of the detected object, thereby pseudo-correcting the whole object as an image and complementing the missing part.
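A bare-bones version of this complementing step, scaling a stored reference silhouette to the detected object's ratio and taking the union of the two masks, is sketched below. It assumes both masks share one canvas and uses nearest-neighbour resizing; the disclosure describes the actual correction only at the level above.

```python
import numpy as np

def complement(detected: np.ndarray, reference: np.ndarray) -> np.ndarray:
    """Step S36 sketch: resize the reference silhouette to the detected
    mask's canvas (nearest neighbour), then restore the missing part by
    taking the union of the detected and resized reference masks."""
    rows = (np.arange(detected.shape[0]) * reference.shape[0]
            // detected.shape[0]).clip(0, reference.shape[0] - 1)
    cols = (np.arange(detected.shape[1]) * reference.shape[1]
            // detected.shape[1]).clip(0, reference.shape[1] - 1)
    scaled = reference[np.ix_(rows, cols)]   # resized reference mask
    return detected | scaled                 # union fills the missing region

detected = np.zeros((4, 4), dtype=bool)
detected[:, :2] = True                        # right half of the object cut off
reference = np.ones((8, 8), dtype=bool)       # stored full silhouette
print(complement(detected, reference).sum())  # -> 16: the whole 4x4 restored
```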
The distance estimation module 42 estimates the distance from the photographing device to the object based on the position information of the shooting point included in the image data (step S37). The processing of step S37 is the same as that of step S14 described above.
The size estimation module 43 estimates the size (area) of the object shown in the complemented image, based on the complemented image of the object and the acquired object data (step S38). The processing of step S38 is the same as that of step S15 described above.
The volume estimation module 44 estimates the size (volume) of the object based on the estimated distance from the shooting point to the object and the estimated size of the object (step S39). The processing of step S39 is the same as that of step S16 described above.
The weight estimation module 45 estimates the weight of the object by referring to the weight table (step S40). The processing of step S40 is the same as that of step S17 described above.
The notification module 21 notifies the user terminal of the estimated weight (step S41). The processing of step S41 is the same as that of step S18 described above.
The above is the third object detection process.
For the third object detection process executed by the computer 10 described above, the method of estimating the weight of a heavy machine will now be described. The weight of crops and persons can be estimated in the same manner.
 A method by which the computer 10 estimates the weight of heavy equipment as the object will be described with reference to FIG. 15. FIG. 15 is a diagram illustrating an example of image data acquired by the image data acquisition module 20. In addition to the image itself, the image data includes the position information of the shooting location. FIG. 15 is an image of a shovel car as the heavy equipment.
 The image data acquisition module 20 acquires the image data shown in FIG. 15 through the processing in step S30 described above.
 The feature extraction module 40 extracts a feature amount from the image data through the processing in step S31 described above.
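 As an illustration of what the feature extraction in step S31 might involve, the following sketch uses ORB keypoints purely as a stand-in; the patent does not name a specific feature amount or extraction method, so this is an assumption.

import cv2

def extract_features(image_bgr):
    gray = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2GRAY)
    orb = cv2.ORB_create()  # ORB: a fast binary keypoint detector/descriptor
    keypoints, descriptors = orb.detectAndCompute(gray, None)
    return keypoints, descriptors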
 The object detection module 41 detects the object appearing in this image through the processing in steps S32 and S33 described above. Based on the feature amount extracted from the image shown in FIG. 15, the object detection module 41 detects the shovel car (small) 300 as the object.
 The object detection module 41 determines that part of the detected object is missing through the processing in step S34 described above, and the object data acquisition module 24 acquires the object data for the detected object through the processing in step S35 described above.
 The object complementing module 46 complements the missing part of the detected object through the processing in step S36 described above.
 FIG. 16 is a diagram showing the shovel car (small) 310 in which the object complementing module 46 has complemented the part missing from the shovel car (small) 300. In FIG. 16, the object complementing module 46 complements the shovel car (small) 310 by filling in the correction portion 320 based on the object data.
 The distance estimation module 42 estimates the distance between the imaging device and the object through the processing in step S37 described above.
 The size estimation module 43 estimates the size (area) of the complemented object through the processing in step S38 described above. That is, the size estimation module 43 estimates the size of the complemented shovel car (small) 310 in the image.
 The size estimation module 44 estimates the size (volume) of the object through the processing in step S39 described above. That is, the size estimation module 44 estimates the size of the complemented shovel car (small) 310.
 The weight estimation module 45 estimates the weight of the object through the processing in step S40 described above. Since the identifier of the object is "shovel car (small)", the weight estimation module 45 refers to the weight table and identifies the weight density D1 associated with "shovel car (small)". The weight estimation module 45 then estimates the weight W7 of the shovel car (small) 310 based on the identified weight density D1 and the estimated size.
 The notification module 21 notifies the user terminal of the estimated weight through the processing in step S41 described above. A notification screen that the notification module 21 displays on the user terminal or the like will be described with reference to FIG. 17. FIG. 17 is a diagram illustrating an example of a notification screen with which the notification module 21 notifies the user terminal or the like of the weight of the object.
 As the notification screen, the notification module 21 causes the user terminal or the like to display a screen in which the identifier of the object (here, the name "shovel car (small)" serves as the identifier) and the estimated weight are superimposed on the acquired image (the image before correction). On this notification screen, the notification module 21 applies processing such as enclosing the identified object, highlighting it, or changing its color, to make clear which object has been identified. Finally, the notification module 21 causes the user terminal or the like to display, as the notification screen, the acquired image with an enclosing line around the shovel car (small) 300, the identifier of the object, and the weight superimposed on it.
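 The composition of such a notification screen can be sketched in a few lines, assuming OpenCV drawing calls and an already-known bounding box for the object; the coordinates, colors, and label format below are illustrative only.

import cv2

def draw_notification(image, box, identifier: str, weight_kg: float):
    x, y, w, h = box
    # Enclosing line around the identified object.
    cv2.rectangle(image, (x, y), (x + w, y + h), (0, 0, 255), 3)
    # Identifier and estimated weight superimposed above the box.
    label = f"{identifier}: {weight_kg:.0f} kg"
    cv2.putText(image, label, (x, y - 10),
                cv2.FONT_HERSHEY_SIMPLEX, 0.8, (0, 0, 255), 2)
    return image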
 Note that the notification module 21 similarly causes the user terminal to display such notifications as notification screens for other objects such as crops and people.
 The above is the description of the third object detection process using an actual object as an example.
 [Learning process]
 The learning process executed by the object detection system 1 will be described based on FIG. 6. FIG. 6 is a diagram illustrating a flowchart of the learning process executed by the computer 10. The processing executed by each of the modules described above is explained together with this process.
 First, the actual weight data acquisition module 25 acquires actual weight data indicating the actual weight of an object whose weight was estimated by the first, second, or third object detection process described above (step S50). In step S50, a terminal device receives as input, or otherwise obtains, the result of actually measuring the weight of this object, and transmits data on the identifier, image, and actual weight of the object to the computer 10 as the actual weight data. By receiving this actual weight data, the actual weight data acquisition module 25 acquires the actual weight of the object whose weight was estimated.
 The learning module 47 learns the correlation between the acquired actual weight of the object and the detected image of the object (step S51). In step S51, as the correlation between the actual weight and the image, the learning module 47 learns at least one correlation with the density of the object, its name, its size, or the distance to the object.
 The storage module 30 stores the learning result (step S52).
 In the processing of steps S17, S28, and S40 described above, the weight estimation module 45 takes this learning result into account when estimating the weight of the object. That is, when estimating the weight of an object, the weight estimation module 45 refers to the weight table and also applies a correction based on the correlation obtained from the learning result, thereby estimating the weight of the object.
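 One simple way to realize the learning of steps S50 to S52 and the correction applied in steps S17, S28, and S40 is a per-identifier least-squares multiplier fitted to pairs of estimated and actual weights. The sketch below is illustrative only: the patent does not specify the learning algorithm, and the class name WeightCorrector is an assumption.

from collections import defaultdict

class WeightCorrector:
    def __init__(self):
        self.samples = defaultdict(list)  # identifier -> [(estimated, actual)]
        self.factor = {}                  # identifier -> learned multiplier

    def learn(self, identifier: str, estimated: float, actual: float):
        """S51: record one (estimate, actual weight) pair and refit."""
        self.samples[identifier].append((estimated, actual))
        pairs = self.samples[identifier]
        # Least-squares multiplier through the origin: sum(e*a) / sum(e*e).
        num = sum(e * a for e, a in pairs)
        den = sum(e * e for e, _ in pairs)
        if den:
            self.factor[identifier] = num / den  # S52: stored learning result

    def correct(self, identifier: str, estimated: float) -> float:
        """Apply the learned correlation to a table-based estimate."""
        return estimated * self.factor.get(identifier, 1.0)

 With such a corrector, each newly measured weight refines the multiplier for that identifier, and subsequent estimates from the weight table are scaled accordingly.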
 The above is the learning process.
 The means and functions described above are realized by a computer (including a CPU, an information processing device, and various terminals) reading and executing a predetermined program. The program is provided, for example, in a form delivered from a computer via a network (SaaS: Software as a Service). The program may also be provided in a form stored on a computer-readable storage medium such as a flexible disk, a CD (CD-ROM or the like), or a DVD (DVD-ROM, DVD-RAM, or the like). In this case, the computer reads the program from the storage medium, transfers it to an internal or external storage device, and stores and executes it. Alternatively, the program may be stored in advance in a storage device (storage medium) such as a magnetic disk, an optical disk, or a magneto-optical disk, and provided from that storage device to the computer via a communication line.
 Although embodiments of the present invention have been described above, the present invention is not limited to these embodiments. In addition, the effects described in the embodiments of the present invention merely enumerate the most preferable effects arising from the present invention, and the effects of the present invention are not limited to those described in the embodiments.
 1 Object detection system, 10 Computer

Claims (6)

  1.  A computer system comprising:
      acquisition means for acquiring a captured image;
      detection means for extracting a feature amount from the image and detecting an object; and
      estimation means for estimating the weight of the detected object from the size of the object shown in the image.
  2.  The computer system according to claim 1, wherein the estimation means estimates the size of the object from the size of the object shown in the image, and estimates the weight of the object by referring to a weight density of the detected object based on the estimated size.
  3.  The computer system according to claim 1, wherein the estimation means estimates the weight of the object by learning a correlation between the actual weight of the object whose weight was estimated and the detected image of the object.
  4.  The computer system according to claim 3, wherein the estimation means estimates the weight of the object by learning at least one correlation between the actual weight of the object whose weight was estimated and the density of the detected object, the name of the detected object, the size of the detected object, or the distance to the detected object.
  5.  An object detection method executed by a computer system, the method comprising:
      a step of acquiring a captured image;
      a step of extracting a feature amount from the image and detecting an object; and
      a step of estimating the weight of the detected object from the size of the object shown in the image.
  6.  A computer-readable program for causing a computer system to execute:
      a step of acquiring a captured image;
      a step of extracting a feature amount from the image and detecting an object; and
      a step of estimating the weight of the detected object from the size of the object shown in the image.
PCT/JP2018/032207 2018-08-30 2018-08-30 Computer system, object sensing method, and program WO2020044510A1 (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
JP2020539962A JP7068746B2 (en) 2018-08-30 2018-08-30 Computer system, object detection method and program
PCT/JP2018/032207 WO2020044510A1 (en) 2018-08-30 2018-08-30 Computer system, object sensing method, and program

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/JP2018/032207 WO2020044510A1 (en) 2018-08-30 2018-08-30 Computer system, object sensing method, and program

Publications (1)

Publication Number Publication Date
WO2020044510A1 true WO2020044510A1 (en) 2020-03-05

Family

ID=69644014

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/JP2018/032207 WO2020044510A1 (en) 2018-08-30 2018-08-30 Computer system, object sensing method, and program

Country Status (2)

Country Link
JP (1) JP7068746B2 (en)
WO (1) WO2020044510A1 (en)

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2022180864A1 (en) * 2021-02-26 2022-09-01 日本電気株式会社 Weight estimation method, weight estimation device, weight estimation system
WO2023074818A1 (en) * 2021-10-27 2023-05-04 株式会社安川電機 Weighing system, support control system, weighing method, and weighing program
WO2023189216A1 (en) * 2022-03-31 2023-10-05 日立建機株式会社 Work assistance system

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2018027581A (en) * 2016-08-17 2018-02-22 株式会社安川電機 Picking system
JP2018124962A (en) * 2017-01-27 2018-08-09 パナソニックIpマネジメント株式会社 Information processor and information processing method

Family Cites Families (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP6724499B2 (en) * 2016-04-05 2020-07-15 株式会社リコー Object gripping device and grip control program
JP2017220876A (en) * 2016-06-10 2017-12-14 アイシン精機株式会社 Periphery monitoring device
JP2018036770A (en) * 2016-08-30 2018-03-08 富士通株式会社 Position attitude estimation device, position attitude estimation method, and position attitude estimation program


Also Published As

Publication number Publication date
JPWO2020044510A1 (en) 2021-08-26
JP7068746B2 (en) 2022-05-17

Similar Documents

Publication Publication Date Title
JP6942488B2 (en) Image processing equipment, image processing system, image processing method, and program
WO2020044510A1 (en) Computer system, object sensing method, and program
US8872851B2 (en) Augmenting image data based on related 3D point cloud data
US20170068840A1 (en) Predicting accuracy of object recognition in a stitched image
CN112700552A (en) Three-dimensional object detection method, three-dimensional object detection device, electronic apparatus, and medium
KR101510206B1 (en) Urban Change Detection Method Using the Aerial Hyper Spectral images for Digital Map modify Drawing
GB2554111A (en) Image processing apparatus, imaging apparatus, and image processing method
EP3214604A1 (en) Orientation estimation method and orientation estimation device
US11887331B2 (en) Information processing apparatus, control method, and non-transitory storage medium
CN112949375A (en) Computing system, computing method, and storage medium
JP2012226645A (en) Image processing apparatus, image processing method, recording medium, and program
JP2014048131A (en) Image processing device, method, and program
US11423622B2 (en) Apparatus for generating feature positions in a virtual world, information processing method, and storage medium
Shi et al. A method for detecting pedestrian height and distance based on monocular vision technology
WO2013088199A1 (en) System and method for estimating target size
CN109035686B (en) Loss prevention alarm method and device
CN108805004B (en) Functional area detection method and device, electronic equipment and storage medium
JP6831396B2 (en) Video monitoring device
WO2020157879A1 (en) Computer system, crop growth assistance method, and program
KR20120138459A (en) Fire detection device based on image processing with motion detect function
US11967108B2 (en) Computer-readable recording medium storing position identification program, position identification method, and information processing apparatus
US20230137094A1 (en) Measurement device, measurement system, measurement method, and computer program product
CN113658313B (en) Face model rendering method and device and electronic equipment
Diamantatos et al. Android based electronic travel aid system for blind people
US10878581B2 (en) Movement detection for an image information processing apparatus, control method, and program

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 18932047

Country of ref document: EP

Kind code of ref document: A1

ENP Entry into the national phase

Ref document number: 2020539962

Country of ref document: JP

Kind code of ref document: A

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 18932047

Country of ref document: EP

Kind code of ref document: A1