WO2022181753A1 - Loading space recognition device, system, method, and program - Google Patents

Loading space recognition device, system, method, and program Download PDF

Info

Publication number
WO2022181753A1
WO2022181753A1 PCT/JP2022/007817 JP2022007817W
Authority
WO
WIPO (PCT)
Prior art keywords
data
volume
voxel
cargo
loading space
Prior art date
Application number
PCT/JP2022/007817
Other languages
French (fr)
Japanese (ja)
Inventor
悟己 上野
教之 青木
真則 高岡
研二 河野
ゆり 安達
Original Assignee
日本電気通信システム株式会社
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 日本電気通信システム株式会社
Priority to JP2023502529A (publication JPWO2022181753A1/ja)
Publication of WO2022181753A1 publication Critical patent/WO2022181753A1/en

Links

Images

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/50Depth or shape recovery

Definitions

  • The present invention claims priority from Japanese Patent Application No. 2021-030741 (filed on February 26, 2021), the entire disclosure of which is incorporated herein by reference. The present invention relates to a loading space recognition device, system, method, and program.
  • As a technique for suppressing collapse of cargo, Patent Documents 1 and 2 disclose a shape information generation device comprising: a first information acquisition unit that acquires three-dimensional information of a first region on the surfaces of a plurality of stacked articles, obtained by imaging or scanning the articles from a first point; a second information acquisition unit that acquires three-dimensional information of a second region on the surfaces of the articles, obtained by imaging or scanning them from a second point;
  • and a synthesizing unit that generates information indicating at least part of the three-dimensional shape. The positions of the first point and the second point differ from each other, and the synthesizing unit complements one of the three-dimensional information of the first region and the three-dimensional information of the second region with the other to generate information indicating the three-dimensional shape of at least part of the surfaces of the plurality of articles.
  • As a technique for preventing collapse of cargo, Patent Document 3 discloses a stowage method for stacking a plurality of objects to be stowed in a predetermined packing style and with a predetermined weight, comprising: a step of measuring the weight and packing style of each object to be stowed; a step of calculating the density of each object from its weight and packing style; a step of accumulating the density information of each object; and a step of calculating the stowage position of each object.
  • As a technology that allows the driver to check the state of luggage in the luggage compartment at any time (for example, whether or not the luggage has collapsed), Patent Document 4 discloses a cargo room monitor comprising: a surveillance camera installed on the upper surface of the luggage compartment at an intermediate position in the vehicle width direction; an information processing device that inputs image data of the luggage compartment from the surveillance camera; a monitor that displays the image data of the luggage compartment input from the information processing device; and a communication device that inputs the image data from the information processing device and transmits it to a base station, so that the availability of the loading space can be grasped from the image of the surveillance camera.
  • As a technology that enables drivers and a management center to easily know that cargo has collapsed in a transport vehicle and to respond quickly to the collapse, Patent Document 5 discloses a system comprising: an in-vehicle terminal that monitors collapse of cargo using a sensor installed in the loading platform of the transport vehicle and, when collapse of cargo is detected, issues a warning to the driver indicating that collapse has occurred; and a management center that receives a load collapse monitoring signal, transmitted from the in-vehicle terminal, indicating that the collapse has occurred.
  • As a technology capable of detecting collapse of cargo that occurs during and after transfer work, Patent Document 6 discloses an article collapse detection method in which, each time an individual article is transferred, a group of articles stacked on a pallet is imaged from above before and after the transfer to acquire a first image before the transfer and a second image after the transfer; the first image and the second image are compared; and whether or not collapse of cargo has occurred is determined based on the degree of change in the article area other than the area where the transferred article existed.
  • In Patent Document 5, however, collapse of cargo is determined based on the presence or absence of reception of signals such as infrared rays or ultrasonic waves at the sensor; movement of the cargo itself cannot be detected, and the possibility of cargo collapse due to movement of the cargo cannot be determined.
  • In Patent Document 6, which can detect collapse of cargo that occurs during transfer and after transfer work is completed, an edge image is generated for each of a first image taken before transfer and a second image taken after transfer; the two edge images are compared; and the presence or absence of collapse of cargo is determined based on the degree of change of the articles other than the transferred article.
  • the main object of the present invention is to provide a loading space recognition device, system, method, and program that can contribute to determining the possibility of cargo collapse due to movement of cargo.
  • According to a first aspect, a loading space recognition device includes: an estimation unit configured to estimate an overall image of the cargo loaded in a loading space based on three-dimensional data obtained by imaging the loading space from a predetermined direction, and to output it as estimation result data;
  • a voxelization unit configured to voxelize the estimation result data and output it as voxel data; and a determination unit configured to estimate the amount of change in volume or the amount of movement of the cargo by comparing the voxel data at an arbitrary reference time with the voxel data after a predetermined or arbitrary time has elapsed from the reference time, and to determine whether or not there is a possibility of cargo collapse by comparing the estimated amount of change in volume or amount of movement with a threshold value.
  • According to a second aspect, a loading space recognition system includes a sensor that senses the surface of the cargo in the loading space and outputs imaged three-dimensional data, and the loading space recognition device according to the first aspect.
  • According to a third aspect, a loading space recognition method is a method for recognizing a cargo loading space using hardware resources, and includes: a step of estimating an overall image of the cargo loaded in the loading space based on three-dimensional data obtained by imaging the loading space from a predetermined direction, and outputting it as estimation result data; a step of voxelizing the estimation result data and outputting it as voxel data; and a step of estimating the amount of change in volume or the amount of movement of the cargo by comparing the voxel data at an arbitrary reference time with the voxel data after a predetermined or arbitrary time has elapsed from the reference time, and determining whether or not there is a possibility of cargo collapse by comparing the estimated amount of change in volume or amount of movement with a threshold value.
  • A program according to a fourth aspect is a program that causes hardware resources to execute processing for recognizing a cargo loading space, and includes: a process of estimating an overall image of the loaded cargo based on three-dimensional data obtained by imaging the loading space from a predetermined direction, and outputting it as estimation result data; a process of voxelizing the estimation result data and outputting it as voxel data; and a process of estimating the amount of change in volume or the amount of movement of the cargo by comparing the voxel data at an arbitrary reference time with the voxel data after a predetermined or arbitrary time has elapsed from the reference time, and determining whether or not there is a possibility of cargo collapse by comparing the estimated amount of change in volume or amount of movement with a threshold value.
  • the program can be recorded on a computer-readable storage medium.
  • the storage medium can be non-transient such as semiconductor memory, hard disk, magnetic recording medium, optical recording medium, and the like.
  • the present disclosure may also be embodied as a computer program product.
  • A program is input to a computer device via an input device or an external communication interface, stored in a storage device, and drives a processor in accordance with predetermined steps or processes; the results of processing, including intermediate states, can be displayed in stages via a display device as necessary, or communicated externally via a communication interface.
  • a computer device for this purpose typically includes a processor, a storage device, an input device, a communication interface, and optionally a display device, all of which are connectable to each other by a bus.
  • FIG. 1 is an image diagram schematically showing an example of the configuration and usage of a loading space recognition system according to Embodiment 1.
  • FIG. 2 is a block diagram schematically showing the configuration of a loading space recognition device in the loading space recognition system according to Embodiment 1;
  • FIG. 3 is a flowchart schematically showing the operation of the loading space recognition device in the loading space recognition system according to Embodiment 1.
  • FIG. 4 is an image diagram schematically showing several examples in which the volume varies when the difference between the reference voxels and the comparison voxels is extracted.
  • FIG. 5 is an image diagram schematically showing several examples in which there is no volume variation but there is movement when the difference between the reference voxels and the comparison voxels is extracted.
  • FIG. 6 is an image diagram schematically showing the transition from unity extraction and combination estimation to longest movement amount estimation in example 2-1 of FIG. 5.
  • FIG. 7 is an image diagram schematically showing the transition from unity extraction and combination estimation to longest movement amount estimation in example 2-2 of FIG. 5.
  • FIG. 8 is an image diagram schematically showing the transition from unity extraction and combination estimation to longest movement amount estimation in example 2-3 of FIG. 5.
  • FIG. 9 is an image diagram schematically showing the transition from unity extraction and combination estimation to longest movement amount estimation in example 2-4 of FIG. 5.
  • FIG. 10 is an image diagram schematically showing a modification of the configuration and usage of the loading space recognition system according to Embodiment 1.
  • FIG. 11 is a block diagram schematically showing the configuration of a loading space recognition device according to Embodiment 2.
  • FIG. 12 is a block diagram schematically showing an example configuration of hardware resources.
  • connection lines between blocks in drawings and the like referred to in the following description include both bidirectional and unidirectional connections.
  • the unidirectional arrows schematically show the flow of main signals (data) and do not exclude bidirectionality.
  • an input port and an output port exist at the input end and the output end of each connection line, respectively, although not explicitly shown.
  • The same applies to the input/output interfaces.
  • The program is executed by a computer device; the computer device includes, for example, a processor, a storage device, an input device, a communication interface, and optionally a display device, and is configured to be able to communicate with external devices (including computers), whether wired or wireless.
  • FIG. 1 is an image diagram schematically showing an example of the configuration and usage of the loading space recognition system according to the first embodiment.
  • FIG. 2 is a block diagram schematically showing the configuration of the loading space recognition device in the loading space recognition system according to the first embodiment.
  • In the following, the cargo of containers on a truck will be described as an example.
  • The loading space recognition system 1 is a system that uses a sensor 10 to recognize changes in objects in the loading space (volume fluctuations and distance fluctuations) (see FIG. 1).
  • the sensor 10 and the loading space recognition device 200 are connected so as to be communicable (wired communication or wireless communication).
  • the loading space recognition system 1 can be mounted on the truck 2.
  • the loading space recognition device 200 recognizes the change of the object in the loading space 5 based on the photographed data (100 in FIG. 2) photographed by the sensor 10.
  • the photographed data 100 is three-dimensional data photographed by the sensor 10 (data obtained by photographing three-dimensional elements, such as point cloud data and depth data, or data reconstructed into a three-dimensional space from a plurality of images).
  • the sensor 10 is a sensor that senses and photographs (images) the surface of the load 4 in the loading space 5, which is the photographing area (see FIG. 1).
  • the sensor 10 is communicably connected to the loading space recognition device 200.
  • the sensor 10 outputs photographed data (100 in FIG. 2) created from three-dimensional data obtained by photographing the loading space 5 to the loading space recognition device 200.
  • the sensor 10 can be selected according to photographing conditions such as the photographing distance, the angle of view, and the size of the container 3 necessary for detecting the cargo 4, as well as the customer's request. Products sold by various manufacturers can be used as the sensor 10.
  • As the sensor 10, for example, a stereo camera, a ToF (Time of Flight) camera, a 3D-LiDAR (three-dimensional light detection and ranging), a 2D-LiDAR (two-dimensional light detection and ranging), LIDAR (Laser Imaging Detection and Ranging), or the like can be used.
  • the sensor 10 can be installed at a position from which the loading space 5, the imaging area, can be photographed, for example, near the top of the container 3 on the loading entrance side.
  • the sensor 10 can photograph the cargo 4 in the loading space 5 from above the cargo entrance side of the container 3.
  • At least one sensor 10 is provided for the loading space 5; a plurality of sensors may also be provided.
  • a plurality of sensors 10 may be used even when the loading space 5 is so large that it is difficult to photograph with one sensor.
  • When a plurality of sensors 10 are used, their types and manufacturers may differ.
  • When a plurality of sensors 10 are used, the loading space recognition device 200 synthesizes the photographed data captured by each sensor 10.
  • the loading space recognition device 200 is a device that recognizes fluctuations (volume fluctuations, distance fluctuations) of the cargo 4 in the loading space 5 in which the cargo 4 is loaded, based on the photographed data 100 from the sensor 10 (see FIGS. 1 and 2).
  • As the loading space recognition device 200, a device having the functional units constituting a computer (e.g., a processor, a storage device, an input device, a communication interface, and a display device) can be used; personal computers, smartphones, tablet terminals, and the like can be used.
  • the loading space recognition device 200 has a function of determining whether or not there is a possibility that the cargo 4 will collapse.
  • the loading space recognizing device 200 can also be employed when a certain amount of volumetric change is expected in the event of collapse of cargo under conditions such as pallet stacking.
  • The loading space recognition device 200 implements a preprocessing unit 210, a packing style grasping unit 220, a determination unit 230, and a user interface unit 240 by executing a predetermined program.
  • the preprocessing unit 210 is a functional unit that performs preprocessing for carrying out loading space recognition processing on the photographed data 100 (see FIG. 2).
  • the preprocessing unit 210 outputs preprocessed data 101, obtained by preprocessing the photographed data 100, to the packing style grasping unit 220 and the user interface unit 240.
  • the preprocessing unit 210 includes a format conversion unit 211 and a noise removal unit 212.
  • the format conversion unit 211 is a functional unit that converts the format of the photographed data 100 into a common format that can be commonly used in the loading space recognition device 200 as preprocessing (see FIG. 2).
  • the format conversion unit 211 outputs the common-format photographed data 100 to the noise removal unit 212. Note that if the format of the photographed data 100 is originally the common format, the processing of the format conversion unit 211 can be omitted (skipped).
  • the noise removal unit 212 is a functional unit that removes noise (for example, point groups unnecessary for loading space recognition) from the photographed data 100 from the format conversion unit 211 as preprocessing (see FIG. 2).
  • the noise removal unit 212 outputs the preprocessed data 101, from which the noise in the photographed data 100 has been removed, to the package overall image estimation unit 222 of the packing style grasping unit 220 and, if necessary, to the display unit 241 of the user interface unit 240.
  • Noise removal methods include, for example, smoothing processing, filtering (e.g., moving average filter processing, median filter processing), and outlier removal processing (e.g., outlier removal by chi-square test). Note that the processing of the noise removal unit 212 may be omitted (skipped) if there is almost no noise.
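The publication describes these preprocessing options only at this level of generality. As a rough illustrative sketch (the function name, parameters, and the k-nearest-neighbour criterion are assumptions, loosely standing in for the outlier-removal processing mentioned above), statistical outlier removal on a point cloud could look like:

```python
import numpy as np

def remove_outliers(points, k=8, std_ratio=2.0):
    """Remove statistical outliers from an (N, 3) point cloud.

    A point is kept if its mean distance to its k nearest neighbours
    is within `std_ratio` standard deviations of the global mean of
    that quantity.  Brute-force O(N^2); a k-d tree would be used for
    real sensor data.
    """
    points = np.asarray(points, float)
    diffs = points[:, None, :] - points[None, :, :]      # (N, N, 3) pairwise
    dists = np.sqrt((diffs ** 2).sum(axis=2))            # (N, N) distances
    # mean distance to the k nearest neighbours (index 0 is the point itself)
    knn_mean = np.sort(dists, axis=1)[:, 1:k + 1].mean(axis=1)
    thresh = knn_mean.mean() + std_ratio * knn_mean.std()
    return points[knn_mean <= thresh]
```

Smoothing or median filtering of depth images would follow the same pattern of discarding or replacing values that deviate strongly from their neighbourhood.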
  • the preprocessed data 101 is output to the display unit 241 and can be viewed by the user.
  • Since the photographed data 100 and the preprocessed data 101 are the basis of all processing, they can be made storable so that they are reliably saved.
  • the packing style grasping unit 220 is a functional block that grasps the current packing style from the preprocessed data 101 (see FIG. 2).
  • the packing style grasping unit 220 outputs the voxel data 102 related to the grasped current packing style to the determination unit 230.
  • the packing style grasping unit 220 includes an area designation unit 221, a package overall image estimation unit 222, and a voxelization unit 223.
  • the packing style grasping unit 220 measures information about the shape of the loaded cargo or the empty space from the preprocessed data 101 and provides quantitative information.
  • the area designation unit 221 is a functional unit that designates an area (loading area) in which the cargo 4 can be loaded within the container 3 (see FIG. 2). For example, if there is an area in the container 3 in which the cargo 4 cannot be loaded, the area designation unit 221 excludes that area when designating the loading area.
  • the area designation unit 221 allows the user to designate the loading area via the operation unit 242, and the area can also be designated automatically in combination with a system that automatically acquires the edges of the area.
  • the area designation unit 221 may also accept, from the user via the operation unit 242, a determination exclusion area to be excluded from determination by the determination unit 230.
  • the determination exclusion area may be specified not only in units of areas but also in units of voxels, or in units of single/individual packages (in units of objects).
  • the package overall image estimation unit 222 is a functional unit that estimates the overall image of the packages 4 loaded in the loading area (see FIG. 2).
  • the package overall image estimation unit 222 creates estimation result data relating to the overall image of the loaded packages 4.
  • the package overall image estimation unit 222 outputs the created estimation result data to the voxelization unit 223.
  • the package overall image estimation unit 222 estimates not only the packages visible on the surface and at the front, but also the packages hidden behind them (and, if necessary, the gaps between them).
  • For the estimation, in addition to a simple algorithm that assumes that coordinate points indicating a package exist on an extension of the visible surface, preprocessed data 101 stored in chronological order may be used, or machine learning may be used.
  • An example of a method for collecting machine learning teacher data is to associate the preprocessed data 101 with the actual loading status and record them in a database.
  • distance sensors are provided at predetermined intervals on the ceiling surface of the container, and the distance to the loaded cargo is measured by the distance sensors.
  • This makes it possible to grasp the loading status of the cargo, such as whether the cargo directly below the sensor is piled up to the ceiling, piled up to about half the height, or completely absent.
  • By providing multiple sensors on the ceiling surface, it is possible to grasp the loading status of the entire cargo inside the container.
  • machine learning method is not limited to this, and other methods may be used.
  • A function of estimating information/attributes other than shape, such as "weight" and "stacking on top strictly prohibited / placing underneath strictly prohibited", may also be provided.
  • the voxelization unit 223 is a functional unit that voxelizes the estimation result data of the package overall image estimation unit 222 (see FIG. 2).
  • the voxelization unit 223 creates voxel data 102 relating to the overall image of the packages 4 based on the estimation result data of the package overall image estimation unit 222.
  • the voxelization unit 223 creates the voxel data 102 by treating portions where no loaded package 4 exists as empty space.
  • the voxelization unit 223 outputs the created voxel data 102 to the reference determination unit 231 and the difference extraction unit 232 of the determination unit 230.
  • the voxel data 102 is data representing the overall image of the package 4 by combining a plurality of voxels (cubes) of a predetermined size.
  • the voxel data 102 includes information on the dimensions of each voxel and the position of each plane.
  • the voxel data 102 may be stored each time it is created.
  • the voxel data 102 may also include information such as the number of packages in one voxel, information about which multiple voxels one package is located across, and information such as weight and shape.
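As an illustrative sketch of the voxelization step (not from the publication; the names and the fixed-bounds interface are assumptions), an occupancy grid in which absent cargo is treated as empty space might be built like this:

```python
import numpy as np

def voxelize(points, voxel_size, bounds_min, bounds_max):
    """Convert an (N, 3) point cloud into a boolean occupancy grid.

    Cells containing no points remain False, i.e. they are treated
    as empty space; points outside the loading-area bounds are dropped.
    """
    bounds_min = np.asarray(bounds_min, float)
    bounds_max = np.asarray(bounds_max, float)
    shape = np.ceil((bounds_max - bounds_min) / voxel_size).astype(int)
    grid = np.zeros(shape, dtype=bool)
    idx = np.floor((np.asarray(points, float) - bounds_min)
                   / voxel_size).astype(int)
    idx = idx[np.all((idx >= 0) & (idx < shape), axis=1)]  # out-of-range points
    grid[idx[:, 0], idx[:, 1], idx[:, 2]] = True           # occupied voxels
    return grid
```

Per-voxel attributes such as package count or weight, as described above, would be stored alongside the grid rather than in the boolean array itself.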
  • the determination unit 230 is a functional unit that estimates the amount of change in volume or the amount of movement of the cargo by comparing the voxel data 102 at a certain reference time (reference voxel data) with the voxel data 102 at a time after a predetermined or arbitrary period has elapsed (comparison voxel data), and determines the possibility of cargo collapse by comparing the estimated amount of change in volume or movement with a threshold value (see FIG. 2). This makes it possible to prevent the cargo 4 from collapsing.
  • the determination unit 230 may detect the possibility of cargo collapse not only by comparing the reference voxel data and the comparison voxel data, but also by comparing the comparison voxel data with other comparison voxel data created immediately before. As a result, when the amount of change between comparison voxel data per unit time is smaller than the amount of change between the reference voxel data and the comparison voxel data, it is possible not to determine that there is a possibility of collapse of cargo. Furthermore, the truck 2 (a structure having a loading space) may be provided with a detection unit 20 for detecting vibration and sound, and when the detection unit 20 detects vibration or sound exceeding a certain level, the determination unit 230 may obtain comparison voxel data from the voxelization unit 223 to detect the possibility of collapse of cargo.
  • In the determination, information obtained from another system, such as whether there are many heavy packages or many light packages, may also be used.
  • the determination unit 230 includes a reference determination unit 231, a difference extraction unit 232, a fluctuating volume estimation unit 233, a unity extraction unit 234, a combination estimation unit 235, a movement amount estimation unit 236, and a cargo collapse determination unit 237.
  • the reference determination unit 231 is a functional unit that determines the voxel data 102 at a certain reference point in time (reference time) from the voxelization unit 223 as a change reference point (see FIG. 2).
  • the reference determination unit 231 stores the voxel data 102 at the reference time point (the indicated time point) from the voxelization unit 223 according to an instruction from the operation unit 242, and holds it as reference data for positional variation.
  • the reference determination unit 231 outputs the voxel data 102 at the reference time to the difference extraction unit 232.
  • the difference extraction unit 232 is a functional unit that compares the voxel data 102 at an arbitrary reference time (reference voxel data) with the voxel data 102 at a time when a predetermined or arbitrary period has elapsed from the reference time (comparison voxel data), and extracts the difference (for example, the difference in position in the depth direction) between voxels corresponding to the surface of the cargo 4 viewed from a predetermined position (for example, from the rear of the truck 2, or from the position of the sensor 10) (see FIG. 2).
  • the difference extraction unit 232 acquires reference voxel data from the reference determination unit 231 and acquires comparison voxel data from the voxelization unit 223 in the difference extraction process.
  • In the difference extraction process, for example, when the position of the voxel surface, as the load 4 is viewed from behind the truck 2, changes toward the front side of the truck 2 in the depth direction (for example, when the load 4 at the position of the target voxel moves to the left, right, or down, and the load 4 on the far side becomes visible), the difference is extracted so as to indicate that the volume has "decreased"; similarly, when the position of the voxel surface changes toward the rear side of the truck 2 (for example, when the load 4 at the position of the target voxel moves toward the rear, when another load 4 moves in front of the load 4 at the position of the target voxel, or when the occluded portion increases), the difference is extracted so as to indicate that the volume has "increased".
  • The difference can also be extracted so as to express the degree of volume decrease or increase stepwise, according to the magnitude of the distance by which the voxel surface position changes toward the front or rear of the truck 2.
  • the difference extraction unit 232 outputs the difference data extracted by the difference extraction process to the fluctuating volume estimation unit 233 and the unity extraction unit 234.
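A minimal sketch of the depth-direction difference extraction, under the assumption that the occupancy grid's axis 0 runs from the rear of the truck toward the front (all names hypothetical):

```python
import numpy as np

def surface_depth(grid, empty=-1):
    """For each (row, col) viewed from the rear, return the depth index
    of the nearest occupied voxel along axis 0, or `empty` when the
    column contains no cargo."""
    occupied = grid.any(axis=0)
    depth = grid.argmax(axis=0)      # first occupied index from the rear
    return np.where(occupied, depth, empty)

def extract_difference(ref_grid, cmp_grid):
    """Signed per-column change in surface position between the reference
    and comparison grids: positive means the surface moved toward the
    front (volume 'decreased' at that column), negative means it moved
    toward the rear (volume 'increased')."""
    return surface_depth(cmp_grid) - surface_depth(ref_grid)
```

Columns that are empty in only one of the two grids would need the `empty` marker handled explicitly; the sketch leaves that case to the caller.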
  • the fluctuating volume estimation unit 233 is a functional unit that estimates the fluctuating volume of the overall image of the packages 4 based on the difference data from the difference extraction unit 232 (see FIG. 2).
  • the fluctuating volume can be represented, for example, by the sum of the differences in the depth-direction positions of voxels at corresponding positions on the surface of the cargo 4 viewed from the rear of the truck 2.
  • When there is movement with no change in volume (a fluctuating volume of 0), see movement examples 2-1 to 2-4 in FIG. 5.
  • When the volume decreases, the fluctuating volume will be a negative value.
  • the fluctuating volume estimation unit 233 outputs the estimated fluctuating volume data to the cargo collapse determination unit 237.
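A sketch of the fluctuating-volume estimate, assuming a per-column map of signed depth-direction surface differences in voxel units (positive where the cargo surface receded toward the front; names hypothetical):

```python
import numpy as np

def fluctuating_volume(diff_map, voxel_size):
    """Estimate the signed volume change from a per-column map of
    depth-direction surface differences (in voxel units).

    Negating the sum makes the result negative when volume decreased,
    matching the text's convention; pure movement (paired increases
    and decreases) sums to zero.
    """
    return -float(diff_map.sum()) * voxel_size ** 3
```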
  • the unity extraction unit 234 is a functional unit that, based on the difference data from the difference extraction unit 232, extracts as a unity a group of voxels in which voxels with increased volume (increased-volume voxels) and voxels with decreased volume (decreased-volume voxels) exist at the same horizontal position and the volume shows no net increase or decrease (see FIG. 2).
  • The extraction by the unity extraction unit 234 can be based on the premise that the volume of the packages 4 does not change.
  • the unity extraction unit 234 outputs difference data including the extracted unity data to the combination estimation unit 235.
  • When a unity cannot be extracted, the unity extraction unit 234 outputs the difference data from the difference extraction unit 232 to the combination estimation unit 235 as-is.
  • the combination estimation unit 235 is a functional unit that estimates combinations of increased-volume voxels and decreased-volume voxels based on the difference data from the unity extraction unit 234 (including unity data if a unity could be extracted) (see FIG. 2).
  • The estimation by the combination estimation unit 235 can be based on the premise that the volume of the packages 4 does not change. In the combination estimation process, for example, when a plurality of combinations are possible, increased-volume voxels and decreased-volume voxels at the same horizontal position can be combined preferentially. Alternatively, when a plurality of combinations are possible, the increased-volume voxel and the decreased-volume voxel that are farthest apart from each other can be combined preferentially.
  • the combination estimation unit 235 outputs difference data including the estimated combination data to the movement amount estimation unit 236.
  • When a combination cannot be estimated, the combination estimation unit 235 outputs the difference data from the unity extraction unit 234 to the movement amount estimation unit 236 as-is.
  • the movement amount estimation unit 236 is a functional unit that estimates the amount of movement of the load 4 that occurred between the reference time and the time when a predetermined or arbitrary period has elapsed (see FIG. 2).
  • The estimation by the movement amount estimation unit 236 can be based on the premise that the volume of the load 4 does not change.
  • When the difference data includes unity data, the movement amount estimation unit 236 calculates the distance from the decreased-volume voxels to the increased-volume voxels related to the combination data, calculates the length of the extracted unity, and estimates the value obtained by subtracting the calculated length from the calculated distance as the amount of movement of the load 4.
  • When the difference data includes combination data but no unity data, the movement amount estimation unit 236 calculates the distance from the decreased-volume voxel to the increased-volume voxel related to the combination data, and estimates the calculated distance as the amount of movement of the load 4. If the difference data does not include combination data (regardless of the presence or absence of unity data), the movement of the cargo 4 is regarded as being accompanied by a change in volume, and the estimation of the volume fluctuation in the fluctuating volume estimation unit 233 can be performed preferentially. In addition, when estimating the amount of movement, correction may be made so that different weights are assigned to horizontal movement and vertical movement (falling).
  • the movement amount estimator 236 estimates the movement amount for each combination data.
  • the movement amount estimation unit 236 selects the longest movement amount from the estimated movement amounts, and estimates the selected movement amount as the longest movement amount. In addition, when the estimated movement amount is one, the movement amount is estimated as the longest movement amount. Movement amount estimation section 236 outputs the estimated longest movement amount estimation data to cargo collapse determination section 237 .
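A minimal sketch of this estimation, assuming voxel coordinates in grid units and a hypothetical `gap_length` parameter for the total length of the no-change voxels between the paired voxels (0.0 when no grouped data exists):

```python
def estimate_movement(pair, gap_length=0.0):
    """Movement amount for one (increase-voxel, decrease-voxel) pair.

    `pair` holds (x, y, z) voxel coordinates in grid units; `gap_length`
    is the total length of the no-change voxels lying between the two
    (0.0 when the difference data contains no grouped data).
    """
    inc, dec = pair
    # distance from the volume-decrease voxel to the volume-increase voxel
    distance = sum((a - b) ** 2 for a, b in zip(dec, inc)) ** 0.5
    # subtract the length of the intervening no-change voxels
    return max(distance - gap_length, 0.0)


def longest_movement(pairs_with_gaps):
    """Pick the longest movement amount over all combination data."""
    return max(estimate_movement(pair, gap) for pair, gap in pairs_with_gaps)
```

When only one movement amount is estimated, `max` simply returns it, which matches the text's note that a single estimate is used as the longest movement amount.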
  • the cargo collapse determination unit 237 is a functional unit that determines whether or not there is a possibility that the cargo 4 will collapse by comparing the estimated volume fluctuation data from the fluctuating volume estimation unit 233, or the estimated longest movement amount data from the movement amount estimation unit 236, with a preset threshold (a first threshold for the volume fluctuation, or a second threshold for the longest movement amount) (see FIG. 2). The cargo collapse determination unit 237 determines that there is a possibility that the cargo 4 will collapse when the estimated volume fluctuation data is greater than the first threshold (a criterion of equal to or greater than the first threshold is also possible).
  • the cargo collapse determination unit 237 determines that there is no possibility of collapse of the cargo 4 when the estimated volume fluctuation data is equal to or less than the first threshold (or less than the first threshold). It determines that there is a possibility that the cargo 4 will collapse when the longest movement amount is greater than the second threshold (a criterion of equal to or greater than the second threshold is also possible), and that there is no such possibility when the longest movement amount is equal to or less than the second threshold (or less than the second threshold). Whether the cargo collapse determination unit 237 preferentially uses the volume fluctuation or the longest movement amount to determine the possibility of collapse of the cargo 4 is arbitrary.
  • for example, the determination can be made by preferentially using the volume fluctuation, which has a relatively small processing load.
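The two-threshold decision described above can be sketched as follows; the threshold values and the `prefer_volume` switch are illustrative assumptions, not values from the specification:

```python
def judge_collapse(volume_change=None, longest_move=None,
                   vol_threshold=2.0, move_threshold=3.0,
                   prefer_volume=True):
    """Return True when a cargo-collapse risk is detected.

    Either criterion alone is sufficient; `prefer_volume` checks the
    (cheaper) volume-fluctuation criterion first, as the text suggests.
    The threshold values are placeholders, not values from the patent.
    """
    checks = [(volume_change, vol_threshold), (longest_move, move_threshold)]
    if not prefer_volume:
        checks.reverse()
    for value, threshold in checks:
        # a strictly-greater comparison; ">= threshold" is also possible
        if value is not None and value > threshold:
            return True
    return False
```

A `True` result would correspond to issuing warning output instruction information to the warning output unit 243.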
  • the cargo collapse determination unit 237 outputs warning output instruction information to the warning output unit 243 to warn the user of the occurrence of an abnormality.
  • the user interface unit 240 is a functional unit that has a user interface (function for exchanging information with the user) (see FIG. 2).
  • the user interface unit 240 provides an interface between the user and the loading space recognition device 200 so that each process can be operated and the result of each process can be confirmed.
  • the user interface section 240 includes a display section 241 , an operation section 242 and a warning output section 243 .
  • the display unit 241 is a functional unit that displays the preprocessed data 101 and the like from the noise removal unit 212 (see FIG. 2).
  • as the display unit 241, for example, a liquid crystal display, an organic EL (Electroluminescence) display, or AR (Augmented Reality) glasses can be used.
  • the operation unit 242 is a functional unit that performs area designation and confirmation instructions to the area designation unit 221 and the reference determination unit 231 based on user operations (see FIG. 2).
  • the operation unit 242 for example, a touch panel, a mouse, a camera and software for recognizing gestures and eye movements can be used.
  • the warning output unit 243 is a functional unit that outputs a warning to the user based on the warning output instruction information from the cargo collapse determination unit 237 (see FIG. 2).
  • as the warning output unit 243, for example, a display that shows text and images related to the warning, a speaker that outputs an alarm sound, a lamp that lights up as an alarm, or a communication unit that transmits the warning output instruction information to another system can be used.
  • FIG. 3 is a flow chart schematically showing the operation of the loading space recognition device in the loading space recognition system according to the first embodiment. 1 and 2 and their descriptions should be referred to for the configuration and details of the loading space recognition device.
  • the format conversion unit 211 of the preprocessing unit 210 acquires from the sensor 10 reference photographing data 100 (reference photographing data; three-dimensional data) obtained by photographing the cargo 4 in the loading space 5 serving as the photographing area ( Step A1).
  • the format conversion section 211 of the preprocessing section 210 converts the format of the reference photographing data 100 into a common format (step A2).
  • the noise removing unit 212 of the preprocessing unit 210 removes noise from the reference photographing data 100 converted into the common format to create reference preprocessing data 101 (step A3).
  • the overall baggage image estimation section 222 of the packing style grasping section 220 estimates the overall image of the cargo 4 loaded in the loading area based on the reference preprocessed data 101 (step A4).
  • the voxelization unit 223 of the packing style grasping unit 220 creates reference voxel data 102 (reference voxel data) relating to the overall image of the package 4 based on the estimation result data estimated in step A4 ( Step A5).
  • the reference determination unit 231 of the determination unit 230 saves the reference voxel data created by the voxelization unit 223 as a reference time value for the load collapse determination process by the operation of the operation unit 242 by the user ( Step A6).
  • the operation of the operation unit 242 by the user can be performed, for example, after the loading of the cargo 4 into the container 3 is completed and before delivery is started.
  • the format conversion unit 211 of the preprocessing unit 210 acquires from the sensor 10 comparison photographing data 100 (three-dimensional data) obtained by photographing the cargo 4 in the loading space 5 serving as the photographing area when a predetermined or arbitrary time has passed since the reference photographing data 100 was acquired (step A7).
  • the format conversion section 211 of the preprocessing section 210 converts the format of the photographing data 100 for comparison into a common format (step A8).
  • the noise removal unit 212 of the preprocessing unit 210 removes noise from the comparison imaging data 100 converted into the common format to create comparison preprocessing data 101 (step A9).
  • the overall baggage image estimation section 222 of the packing style grasping section 220 estimates the overall image of the cargo 4 loaded in the loading area based on the comparison preprocessed data 101 (step A10).
  • the voxelization unit 223 of the packing style grasping unit 220 creates comparison voxel data 102 (comparison voxel data) relating to the overall image of the package 4 based on the estimation result data estimated in step A10 ( Step A11).
  • the difference extraction unit 232 of the determination unit 230 compares the reference voxel data stored in the reference determination unit 231 with the comparison voxel data created by the voxelization unit 223, and extracts the difference between voxels at corresponding positions on the surface of the cargo 4 viewed from a predetermined position (for example, the rear of the truck 2 or the position of the sensor 10), such as the difference in position in the depth direction (step A12).
  • the fluctuating volume estimation unit 233 of the determination unit 230 estimates the fluctuating volume of the overall image of the package 4 (fluctuation volume) based on the difference data from the difference extraction unit 232 (step A13).
  • the cargo collapse determination unit 237 of the determination unit 230 determines whether or not the fluctuation volume estimation data from the fluctuating volume estimation unit 233 is greater than the preset first threshold for the volume fluctuation (step A14). If the fluctuation volume estimation data is greater than the first threshold (YES in step A14), it is determined that there is a possibility that the cargo 4 may collapse, warning output instruction information is output to the warning output unit 243, and the process proceeds to step A20.
  • the unity extraction unit 234 of the determination unit 230 extracts, based on the difference data from the difference extraction unit 232, a group of voxels with no increase or decrease in volume lying between a volume-increase voxel and a volume-decrease voxel at the same horizontal position (step A15). This step is skipped when no group can be extracted.
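As a hypothetical illustration of step A15, a single column of voxel differences at one horizontal position could be scanned for a run of no-change voxels sandwiched between an increase and a decrease. The data layout (a list of per-voxel level changes along the column) is an assumption:

```python
def extract_unity(column):
    """Scan one column of voxel differences at a horizontal position.

    `column` is a list of per-voxel difference values: +1 for a volume
    increase, -1 for a decrease, 0 for no change. Returns the indices of
    the no-change voxels sandwiched between an increase and a decrease
    (a "grouped" run), or [] when no such run exists.
    """
    marks = [i for i, d in enumerate(column) if d != 0]
    for a, b in zip(marks, marks[1:]):
        # one increase and one decrease with only zeros in between
        if column[a] * column[b] < 0 and b - a > 1:
            return list(range(a + 1, b))
    return []
```

An empty result corresponds to the case above where the step is skipped because no group can be extracted.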
  • the combination estimation unit 235 of the determination unit 230 estimates combinations of the volume-increase voxels and volume-decrease voxels based on the difference data from the unity extraction unit 234 (including the grouped data if a group could be extracted) (step A16).
  • the movement amount estimation unit 236 of the determination unit 230 estimates the amount of movement of the cargo 4 that occurred between the reference time and the time when a predetermined or arbitrary time has elapsed (step A17).
  • if the difference data includes grouped data, the distance from the volume-decrease voxel to the volume-increase voxel related to the combination data is calculated, the length of the no-change voxels related to the combination data is calculated, and the value obtained by subtracting the calculated length from the calculated distance is estimated as the movement amount of the cargo 4 .
  • if the difference data does not include grouped data, the distance from the volume-decrease voxel to the volume-increase voxel related to the combination data is calculated, and the calculated distance is estimated as the movement amount of the cargo 4 .
  • the movement amount is estimated for each combination data.
  • the movement amount estimation unit 236 of the determination unit 230 selects the longest movement amount from the estimated movement amounts, and estimates the selected movement amount as the longest movement amount (step A18).
  • the cargo collapse determination unit 237 of the determination unit 230 determines whether or not the longest movement amount estimation data from the movement amount estimation unit 236 is larger than the preset second threshold for the longest movement amount (step A19). If the longest movement amount estimation data is greater than the second threshold (YES in step A19), it is determined that there is a possibility that the cargo 4 may collapse, warning output instruction information is output to the warning output unit 243, and the process proceeds to step A20. If the longest movement amount estimation data is equal to or less than the second threshold (NO in step A19), it is determined that there is no possibility of collapse of the cargo 4, the cycle ends, and steps A7 to A20 are repeated until the user instructs termination.
  • the warning output unit 243 of the user interface unit 240 outputs a warning to the user based on the warning output instruction information from the cargo collapse determination unit 237 (step A20). After that, one cycle is finished, and steps A7 to A20 are repeated until the user instructs to finish.
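The repeated cycle of steps A12 to A20 can be summarized in a toy sketch. Representing the cargo surface as a height map of voxel counts per horizontal cell, the naive pairing of increase and decrease cells, and the threshold values are all simplifying assumptions, not the patented data format:

```python
def monitoring_cycle(reference, comparison, vol_threshold, move_threshold):
    """Toy version of steps A12 to A19 on height maps {(x, y): voxel count}.

    The data layout, the naive zip-pairing of increase/decrease cells, and
    the thresholds are illustrative assumptions only.
    """
    # A12: per-cell difference between comparison and reference data
    diff = {pos: comparison[pos] - reference[pos] for pos in reference}
    # A13-A14: net volume fluctuation against the first threshold
    if abs(sum(diff.values())) > vol_threshold:
        return "warn: volume changed"
    inc = [p for p, d in diff.items() if d > 0]   # volume-increase cells
    dec = [p for p, d in diff.items() if d < 0]   # volume-decrease cells
    # A16-A17: pair cells and measure the horizontal distance moved
    moves = [((i[0] - d[0]) ** 2 + (i[1] - d[1]) ** 2) ** 0.5
             for i, d in zip(inc, dec)]
    # A18-A19: longest movement against the second threshold
    if moves and max(moves) > move_threshold:
        return "warn: cargo moved"
    return "ok"

reference = {(0, 0): 3, (0, 1): 3, (0, 2): 0,
             (1, 0): 3, (1, 1): 3, (1, 2): 0}
comparison = {**reference, (0, 1): 2, (0, 2): 1}  # one voxel slid sideways
print(monitoring_cycle(reference, comparison, vol_threshold=2,
                       move_threshold=0.5))  # → warn: cargo moved
```

Here a "warn" result stands in for step A20 (outputting a warning to the user) before the next cycle begins.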
  • FIG. 4 is an image diagram schematically showing some examples in which there is a varying volume amount when extracting the difference between the reference voxel and the comparison voxel.
  • assume that the reference voxel data for the whole image of the package (4 in FIG. 1) viewed from the rear side of the truck (2 in FIG. 1) is in the state shown as the reference voxel data in FIG. 4. In movement example 1-1, the difference extraction data extracted by the difference extraction unit (232 in FIG. 2) from the comparison voxel data becomes like the difference extraction data according to movement example 1-1 in FIG. 4.
  • the volume amounts of only the upper left four voxels increase by one step, and the other voxels do not increase or decrease.
  • the reference voxel data and comparison voxel data differ in total volume.
  • the occlusion part (gap or space) has increased on the back side of the voxel whose volume has increased.
  • in this case, the fluctuating volume estimation unit (233 in FIG. 2) estimates the fluctuating volume, and the cargo collapse determination unit (237 in FIG. 2) performs threshold determination of the fluctuating volume.
  • in movement example 1-2, the difference extraction data extracted by the difference extraction unit becomes like the difference extraction data according to movement example 1-2 in FIG. 4.
  • in this case, the volumes of four voxels each decrease by one level, the volumes of another eight voxels each increase by one level, and there is no increase or decrease in the other voxels. In other words, there is both an increase and a decrease in volume, but even after the increase and decrease are offset against each other, four voxels remain increased by one level, so the total volumes of the reference voxel data and the comparison voxel data differ.
  • in this case as well, the fluctuating volume estimation unit (233 in FIG. 2) estimates the fluctuating volume, and the cargo collapse determination unit (237 in FIG. 2) performs threshold determination of the fluctuating volume.
  • in movement example 1-3, the difference extraction data extracted by the difference extraction unit becomes like the difference extraction data according to movement example 1-3 in FIG. 4.
  • in this case, the volumes of eight voxels each decrease by one level, the volumes of another 12 voxels each increase by one level, the volumes of another four voxels each increase by two levels, and there is no increase or decrease in the other voxels.
  • in this case as well, the fluctuating volume estimation unit (233 in FIG. 2) estimates the fluctuating volume, and the cargo collapse determination unit (237 in FIG. 2) performs threshold determination of the fluctuating volume.
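The common point of movement examples 1-1 to 1-3 is that the increases and decreases do not cancel out, so the total volume changes. A minimal sketch of that check, where the per-voxel level-difference map is an assumed representation:

```python
def fluctuating_volume(diff_levels):
    """Net volume fluctuation after offsetting increases against decreases.

    `diff_levels` maps each surface voxel position to its level change
    between reference and comparison data (+n raised, -n lowered).
    A non-zero result means the total volume differs, as in movement
    examples 1-1 to 1-3 of FIG. 4.
    """
    return sum(diff_levels.values())

# movement example 1-2: four voxels down one level, eight voxels up one level
example_1_2 = {**{(x, 0): -1 for x in range(4)},
               **{(x, 1): +1 for x in range(8)}}
print(fluctuating_volume(example_1_2))  # → 4 (net increase of four voxels)
```

A zero result is exactly the situation of the movement examples in FIG. 5, where the device falls back to movement-amount estimation instead.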
  • FIG. 5 is an image diagram schematically showing several examples in which there is no volume variation and there is movement when the difference between the reference voxel and the comparison voxel is extracted.
  • FIG. 6 is an image diagram schematically showing the transition from collective estimation to longest movement amount estimation in example 2-1 of FIG.
  • FIG. 7 is an image diagram schematically showing the transition from coherent estimation to longest movement amount estimation in example 2-2 of FIG.
  • FIG. 8 is an image diagram schematically showing the transition from coherent estimation to longest movement amount estimation in Example 2-3 of FIG.
  • FIG. 9 is an image diagram schematically showing the transition from coherent estimation to longest movement amount estimation in Example 2-4 of FIG.
  • in movement example 2-1, the difference extraction data extracted by the difference extraction unit (232 in FIG. 2) from the comparison voxel data becomes like the difference extraction data according to movement example 2-1 in FIG. 5.
  • in this case, the volumes of six voxels each increase by one step, the volumes of another six voxels each decrease by one step, and there is no increase or decrease in the other voxels.
  • in movement example 2-2, the difference extraction data extracted by the difference extraction unit becomes like the difference extraction data according to movement example 2-2 in FIG. 5.
  • in this case, the volumes of eight voxels each increase by one level, the volumes of another eight voxels each decrease by one level, and there is no increase or decrease in the other voxels.
  • the movement amount estimation unit (236 in FIG. 2) calculates, for each combination, the distance from the volume-decrease voxel to the volume-increase voxel (see the two arrows in FIG. 7(C)), calculates the length of the no-change voxels related to the grouped data (one group; not shown), and estimates the value obtained by subtracting the calculated length from the calculated distance as the movement amount of the cargo 4 (see the two arrows in FIG. 7(D); the length is deducted for one combination and not for the other). The movement amount estimation unit (236 in FIG. 2) then estimates the longest of these movement amounts as the longest movement amount (see the circled arrow in FIG. 7(D); in this case there are two, and either can be used), and the cargo collapse determination unit (237 in FIG. 2) performs threshold determination of the longest movement amount.
  • in movement example 2-3, the difference extraction data extracted by the difference extraction unit becomes like the difference extraction data according to movement example 2-3 in FIG. 5.
  • in this case, the volumes of eight voxels each increase by one step, the volumes of another eight voxels each decrease by one step, and there is no increase or decrease in the other voxels.
  • the increase and decrease in volume offset each other in both the number of voxels and the number of stages, so the total volumes of the reference voxel data and the comparison voxel data are the same.
  • the movement amount estimation unit (236 in FIG. 2) calculates, for each combination, the distance from the volume-decrease voxel to the volume-increase voxel (see the two arrows in FIG. 8(C)), estimates the calculated distance as the movement amount of the cargo 4 (see the two arrows in FIG. 8(D)), and estimates the longest of these movement amounts as the longest movement amount (see the circled arrow in FIG. 8(D)); the cargo collapse determination unit (237 in FIG. 2) then performs threshold determination of the longest movement amount.
  • in movement example 2-4, the difference extraction data extracted by the difference extraction unit becomes like the difference extraction data according to movement example 2-4 in FIG. 5.
  • in this case, the volumes of 12 voxels each increase by one step, the volumes of another 12 voxels each decrease by one step, and there is no increase or decrease in the other voxels.
  • the movement amount estimation unit (236 in FIG. 2) calculates, for each combination, the distance from the volume-decrease voxel to the volume-increase voxel (see the three arrows in FIG. 9(C)), estimates the calculated distance as the movement amount of the cargo 4 (see the three arrows in FIG. 9(D)), and estimates the longest of these movement amounts as the longest movement amount (see the circled arrow in FIG. 9(D)); the cargo collapse determination unit (237 in FIG. 2) then performs threshold determination of the longest movement amount.
  • for the criteria of the unity extraction process of the unity extraction unit 234, the combination estimation process of the combination estimation unit 235, and the movement amount estimation process of the movement amount estimation unit 236, see the detailed description of FIG. 2.
  • the difference between the voxels at corresponding positions in the reference voxel data and the comparison voxel data is extracted, the amount of change in volume or the longest movement amount of the overall image of the package 4 is estimated, and threshold determination is performed; therefore, it is possible to contribute to determining the possibility of collapse of cargo due to movement of cargo.
  • since the occlusion portion is estimated when the packing appearance of the cargo 4 is grasped, changes of the cargo 4 in the occlusion portion can also be taken into account.
  • since the reference voxel data of the packing appearance at a specific time, such as before the truck 2 starts moving, is held and compared with the comparison voxel data of the packing appearance after a predetermined or arbitrary time has passed from the reference time, drivers can be notified of the possibility of cargo collapse.
  • the possibility of collapse of the cargo 4 can be detected at an early stage, the damage of the cargo 4 can be prevented, and the transportation quality can be prevented from being deteriorated.
  • FIG. 11 is a block diagram schematically showing the configuration of the loading space recognition device according to the second embodiment.
  • the loading space recognizing device 200 is a device that recognizes variations (volumetric variations, distance variations) of the cargo in the loading space where the cargo is loaded, based on the photographed data.
  • Loading space recognizing device 200 includes package overall image estimating unit 222 , voxelizing unit 223 , and determining unit 230 .
  • the luggage overall image estimating unit 222 is configured to estimate the overall image of the luggage loaded in the loading space based on the three-dimensional data obtained by imaging the loading space of the luggage from a predetermined direction, and to output the estimated image as estimation result data.
  • the voxelization unit 223 is configured to voxelize the estimation result data and output it as voxel data.
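As a hypothetical sketch of what voxelization can mean here, the estimation result data (treated as a 3-D point cloud) is quantized into occupied grid cells; the grid layout and the voxel size are assumptions made for illustration:

```python
def voxelize(points, voxel_size):
    """Quantize 3-D surface points (x, y, z) into occupied voxel indices.

    A minimal sketch of the role of the voxelization unit 223: the
    estimation result data (a point cloud of the whole cargo image) is
    reduced to a set of voxel grid cells. The grid layout and voxel size
    are assumptions for illustration.
    """
    return {tuple(int(c // voxel_size) for c in p) for p in points}

# two nearby points fall into the same 0.1 m voxel; the third does not
cells = voxelize([(0.02, 0.05, 0.01), (0.04, 0.06, 0.03), (0.25, 0.0, 0.0)],
                 voxel_size=0.1)
print(sorted(cells))  # → [(0, 0, 0), (2, 0, 0)]
```

Reducing the point cloud to a fixed grid is what makes the later per-voxel difference extraction by the determination unit a simple cell-by-cell comparison.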
  • the determination unit 230 is configured to estimate the amount of change in volume or the amount of movement of the load by comparing voxel data at an arbitrary reference time with voxel data after a predetermined or arbitrary time has elapsed from the reference time, and to determine the presence or absence of the possibility of collapse of cargo by comparing the estimated amount of change in volume or amount of movement with a threshold value.
  • since the difference between voxels at corresponding positions in the voxel data at the reference time and the voxel data at a time other than the reference time is extracted, the amount of change in volume or the longest movement amount of the overall image of the package is estimated, and threshold determination is performed, it is possible to contribute to determining the possibility of collapse of cargo due to movement of cargo.
  • the loading space recognition device can be configured by so-called hardware resources (information processing device, computer), and can use one having the configuration illustrated in FIG. 12 .
  • hardware resource 1000 includes processor 1001 , memory 1002 , network interface 1003 , etc., which are interconnected by internal bus 1004 .
  • the configuration shown in FIG. 12 is not intended to limit the hardware configuration of the hardware resource 1000 .
  • the hardware resource 1000 may include hardware not shown (for example, an input/output interface).
  • the number of units such as the processors 1001 included in the device is not limited to the illustration in FIG.
  • as the processor 1001, for example, a CPU (Central Processing Unit), an MPU (Micro Processor Unit), or a GPU (Graphics Processing Unit) can be used.
  • as the memory 1002, for example, a RAM (Random Access Memory), a ROM (Read Only Memory), an HDD (Hard Disk Drive), or an SSD (Solid State Drive) can be used.
  • as the network interface 1003, for example, a LAN (Local Area Network) card, a network adapter, or a network interface card can be used.
  • the functions of the hardware resource 1000 are realized by the processing modules described above.
  • the processing module is implemented by the processor 1001 executing a program stored in the memory 1002, for example.
  • the program can be downloaded via a network or updated using a storage medium storing the program.
  • the processing module may be realized by a semiconductor chip.
  • the functions performed by the above processing modules may be realized by executing software in some kind of hardware.
  • [Appendix 1] A loading space recognition device comprising: an overall luggage image estimation unit configured to estimate an overall image of the luggage loaded in the loading space based on three-dimensional data obtained by imaging the loading space of the luggage from a predetermined direction, and to output the estimated image as estimation result data;
  • a voxelization unit configured to voxelize the estimation result data and output it as voxel data; and
  • a determination unit configured to estimate the amount of change in volume or the amount of movement of the luggage by comparing the voxel data at an arbitrary reference time with the voxel data after a predetermined or arbitrary time has elapsed from the reference time, and to determine the presence or absence of the possibility of cargo collapse by comparing the estimated amount of change in volume or amount of movement with a threshold value.
  • [Appendix 2] The loading space recognition device according to Appendix 1, wherein the determination unit includes: a reference determination unit configured to determine the voxel data at the reference time from the voxelization unit as a change reference point; a difference extraction unit configured to extract, by comparing the voxel data at the reference time with the voxel data at the time when a predetermined or arbitrary time has passed since the reference time, the difference between voxels at corresponding positions on the surface of the package viewed from a predetermined position, and to output the difference as difference data; a fluctuating volume estimation unit configured to estimate a volume fluctuation of the overall image of the package based on the difference data and to output it as volume fluctuation estimation data; and a cargo collapse determination unit configured to determine the possibility of cargo collapse by comparing the volume fluctuation estimation data with a threshold for the volume fluctuation.
  • [Appendix 3] The loading space recognition device according to Appendix 2, wherein the determination unit further includes: a unity extraction unit configured to extract, based on the difference data, a group of voxels without an increase or decrease in volume existing between an increased volume voxel with an increased volume and a decreased volume voxel with a decreased volume at the same horizontal position, and to output the group as grouped data; a combination estimation unit configured to estimate a combination of the increased volume voxel and the decreased volume voxel based on the difference data and the grouped data, and to output combination data; and a movement amount estimation unit configured to estimate the movement amount of the cargo based on the combination data and to select the longest movement amount; and wherein the cargo collapse determination unit is configured to determine the possibility of cargo collapse by comparing the longest movement amount with a threshold for the longest movement amount.
  • [Appendix 4] The loading space recognition device according to Appendix 1, wherein the determination unit includes: a reference determination unit configured to determine the voxel data at the reference time from the voxelization unit as a change reference point; a difference extraction unit configured to extract, by comparing the voxel data at the reference time with the voxel data at the time when a predetermined or arbitrary time has passed since the reference time, the difference between voxels at corresponding positions on the surface of the package viewed from a predetermined position, and to output the difference as difference data; and a unity extraction unit configured to extract, based on the difference data, a group of voxels without an increase or decrease in volume existing between an increased volume voxel with an increased volume and a decreased volume voxel with a decreased volume at the same horizontal position, and to output the group as grouped data.
  • the loading space recognition device according to any one of Appendices 1 to 4.
  • [Appendix 6] The loading space recognition device according to Appendix 5, wherein the region specifying unit is configured to specify, by user operation, a determination exclusion region to be excluded from determination by the determination unit, and the luggage overall image estimation unit is configured to estimate the overall image of the luggage loaded in the loading area excluding the luggage loaded in the determination exclusion region.
  • [Appendix 7] The loading space recognition device further comprising a detection unit attached to the structure having the loading space and configured to detect vibration or sound, wherein the determination unit acquires the voxel data from the voxelization unit when the detection unit detects shaking or sound of a certain level or more, and compares the voxel data at the reference time with the acquired voxel data.
  • [Appendix 8] The loading space recognition device according to Appendix 3 or 4, wherein the movement amount estimation unit corrects the movement amount so that it becomes smaller than the estimated movement amount when the movement direction of the baggage is horizontal, or larger than the estimated movement amount when the movement direction of the baggage is vertical or oblique (falling), and selects the longest movement amount from the corrected movement amounts.
  • [Appendix 9] The loading space recognition device according to any one of Appendices 1 to 8, wherein the determination unit is further configured to determine the presence or absence of the possibility of cargo collapse by comparing, with a threshold, the amount of change between the voxel data at a time other than the reference time and the other voxel data immediately preceding that voxel data.
  • [Appendix 10] The loading space recognition device according to any one of Appendices 1 to 9, wherein the determination unit is configured to output warning output instruction information when it determines that there is a possibility of cargo collapse, and the loading space recognition device further includes a warning output unit configured to output a warning based on the warning output instruction information.
  • [Appendix 11] A loading space recognition system comprising: a sensor that senses the surface of the cargo in the loading space and outputs captured three-dimensional data; and the loading space recognition device according to any one of Appendices 1 to 10.
  • [Appendix 12] A loading space recognition method for recognizing a cargo loading space using hardware resources, comprising: a step of estimating an overall image of the cargo loaded in the loading space based on three-dimensional data obtained by imaging the loading space from a predetermined direction, and outputting the estimated image as estimation result data; a step of voxelizing the estimation result data and outputting it as voxel data; and a step of estimating the amount of change in volume or the amount of movement of the cargo by comparing the voxel data at an arbitrary reference time with the voxel data after a predetermined or arbitrary time has elapsed from the reference time, and determining whether or not there is a possibility of cargo collapse by comparing the estimated amount of change in volume or amount of movement with a threshold value.


Abstract

Provided is a loading space recognition device, etc., that can contribute to determining the possibility of a cargo collapse caused by movement of baggage. The loading space recognition device comprises an entire baggage image estimation unit, a voxelization unit, and a determination unit. The entire baggage image estimation unit is configured to estimate, on the basis of three-dimensional data in which images of a loading space for baggage are captured from a predetermined direction, an entire image of the baggage loaded in the loading space, and to output the entire image as estimation result data. The voxelization unit is configured to voxelize the estimation result data and output it as voxel data. The determination unit is configured to estimate a fluctuated volume amount or movement amount of the baggage by comparing the voxel data of an arbitrary reference time with the voxel data after the passage of a predetermined or arbitrary amount of time from the reference time, and to determine the presence or absence of the possibility of a cargo collapse by comparing the estimated fluctuated volume amount or movement amount with a threshold value.

Description

積載空間認識装置、システム、方法、及びプログラムLoad space recognition device, system, method, and program
[Description of related applications]
This application is based upon and claims the benefit of priority from Japanese Patent Application No. 2021-030741 (filed on February 26, 2021), the entire disclosure of which is incorporated herein by reference.
The present invention relates to a loading space recognition device, system, method, and program.
In the logistics industry, cargo loaded in truck containers can be damaged by cargo collapse, such as falling during transportation, which degrades transportation quality. Techniques have therefore been proposed for generating loading arrangements that prevent cargo collapse in advance, and for promptly detecting a cargo collapse when it actually occurs (see, for example, Patent Documents 1 to 6).
As a technique for suppressing cargo collapse, Patent Documents 1 and 2 disclose a shape information generation device comprising: a first information acquisition unit that acquires three-dimensional information of a first region of the surfaces of a plurality of stacked articles, obtained by imaging or scanning the articles from a first point; a second information acquisition unit that acquires three-dimensional information of a second region of the surfaces of the articles, obtained by imaging or scanning them from a second point; and a synthesizing unit that generates information indicating the three-dimensional shape of at least part of the surfaces of the articles based on the three-dimensional information of the first region acquired by the first information acquisition unit and the three-dimensional information of the second region acquired by the second information acquisition unit. The first point and the second point are located at different positions, and the synthesizing unit complements one of the two sets of three-dimensional information with the other to generate the information indicating the three-dimensional shape.
As a technique for preventing cargo collapse, Patent Document 3 discloses a stowage method for stacking a plurality of items in a predetermined packing style and at a predetermined weight, comprising the steps of: measuring the weight and packing style of each item; calculating the density of each item from its weight and packing style; and accumulating the density information of the items to calculate a stowage position for each item.
Patent Document 4 discloses a vehicle cargo-compartment monitoring device that allows the driver to check the state of the cargo in the compartment (whether it has collapsed) on a monitor at any time. The device comprises: surveillance cameras mounted on the ceiling of the cargo compartment at the widthwise center, at two positions dividing the compartment's length into thirds; an information processing device that receives image data of the compartment from the cameras; a monitor that receives the image data from the information processing device and displays it; and a communication device that receives the image data from the information processing device and transmits it to a base station. The availability of loading space can also be grasped from the camera images.
Patent Document 5 discloses a cargo-collapse monitoring system for transport vehicles that allows both the driver and a management center to easily learn of a cargo collapse and respond to it quickly. The system comprises: an in-vehicle terminal that monitors the cargo for collapse using a sensor mounted in the vehicle's cargo bed and, when a collapse is detected, issues a warning to the driver; and a management center that receives a cargo-collapse monitoring signal transmitted from the in-vehicle terminal when the warning is issued, and collects and manages cargo-collapse information for the transport vehicles.
Patent Document 6 discloses a method for detecting cargo collapse that occurs during and after transfer work. Each time an individual article is transferred, the group of articles stacked on a pallet is imaged from above before and after the transfer to acquire a first image (before transfer) and a second image (after transfer). The two images are compared, and the occurrence of cargo collapse is determined based on the degree of change in the article regions other than the region where the transferred article was located.
Japanese Patent No. 6511681
Japanese Patent No. 6577687
Japanese Unexamined Patent Application Publication No. 2002-29631
Japanese Unexamined Patent Application Publication No. 2018-199489
Japanese Unexamined Patent Application Publication No. 2005-018472
Japanese Unexamined Patent Application Publication No. 2007-179301
The following analysis is given by the inventors of the present application.
However, with the techniques for suppressing and preventing cargo collapse described in Patent Documents 1 to 3, simply loading cargo so as to avoid collapse does not eliminate collapse entirely: vehicle acceleration, braking, vibration, and the like can move cargo in any direction and lead to collapse, so these techniques cannot determine the possibility that cargo collapse will occur due to cargo movement.
With the cargo-checking technique described in Patent Document 4, the driver learns of a cargo collapse by checking the state of the cargo in the compartment on a monitor; for safety reasons, it is difficult for the driver to judge the possibility of cargo collapse due to cargo movement while driving the vehicle.
With the cargo-collapse detection technique described in Patent Document 5, collapse is judged by whether or not the sensor receives signals such as infrared rays or ultrasonic waves. Cargo movement within a range that does not change the signals received by the sensor therefore cannot be detected, and the possibility of cargo collapse due to such movement may not be determinable.
With the technique of Patent Document 6 for detecting cargo collapse during and after transfer work, edge images are generated from the captured first image (before transfer) and second image (after transfer), the two edge images are compared, and the occurrence of collapse is determined based on the degree of change in articles other than the transferred article. A comparison between edge images, however, cannot detect cargo movement that does not shift edge positions (movement toward or away from the imaging position), so the possibility of cargo collapse due to such movement may not be determinable.
The main object of the present invention is to provide a loading space recognition device, system, method, and program that can contribute to determining the possibility of cargo collapse due to cargo movement.
A loading space recognition device according to a first aspect comprises: a cargo overall-image estimation unit configured to estimate, based on three-dimensional data obtained by imaging a cargo loading space from a predetermined direction, an overall image of the cargo loaded in the loading space and output it as estimation result data; a voxelization unit configured to voxelize the estimation result data and output it as voxel data; and a determination unit configured to estimate the amount of volume change or movement of the cargo by comparing the voxel data at an arbitrary reference time with the voxel data after a predetermined or arbitrary time has elapsed from the reference time, and to determine whether there is a possibility of cargo collapse by comparing the estimated amount of volume change or movement with a threshold value.
A loading space recognition system according to a second aspect comprises: a sensor that senses the surface of the cargo in the loading space and outputs imaged three-dimensional data; and the loading space recognition device according to the first aspect.
A loading space recognition method according to a third aspect is a method for recognizing a cargo loading space using hardware resources, comprising: a step of estimating, based on three-dimensional data obtained by imaging the loading space from a predetermined direction, an overall image of the cargo loaded in the loading space and outputting it as estimation result data; a step of voxelizing the estimation result data and outputting it as voxel data; and a step of estimating the amount of volume change or movement of the cargo by comparing the voxel data at an arbitrary reference time with the voxel data after a predetermined or arbitrary time has elapsed from the reference time, and determining whether there is a possibility of cargo collapse by comparing the estimated amount of volume change or movement with a threshold value.
A program according to a fourth aspect causes hardware resources to execute processing for recognizing a cargo loading space, the processing comprising: estimating, based on three-dimensional data obtained by imaging the loading space from a predetermined direction, an overall image of the cargo loaded in the loading space and outputting it as estimation result data; voxelizing the estimation result data and outputting it as voxel data; and estimating the amount of volume change or movement of the cargo by comparing the voxel data at an arbitrary reference time with the voxel data after a predetermined or arbitrary time has elapsed from the reference time, and determining whether there is a possibility of cargo collapse by comparing the estimated amount of volume change or movement with a threshold value.
The program can be recorded on a computer-readable storage medium. The storage medium can be non-transient, such as a semiconductor memory, hard disk, magnetic recording medium, or optical recording medium. The present disclosure can also be embodied as a computer program product. The program is input to a computer device from an input device or from outside via a communication interface, is stored in a storage device, and drives a processor according to predetermined steps or processing; the processing results, including intermediate states as necessary, can be displayed stage by stage via a display device, or exchanged with the outside via the communication interface. A computer device for this purpose typically comprises, as an example, a processor, a storage device, an input device, a communication interface, and, as necessary, a display device, connectable to one another by a bus.
According to the first to fourth aspects, it is possible to contribute to determining the possibility of cargo collapse due to cargo movement.
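The determination described above (comparing the voxel data at a reference time with later voxel data, then checking the estimated volume change or movement against thresholds) can be sketched as follows. This is a minimal illustration only, not the patented implementation: the boolean-array voxel representation, the use of the largest vacated-to-occupied voxel distance as a crude movement bound, and all function and parameter names are assumptions introduced here.

```python
import numpy as np

def judge_collapse(ref_voxels: np.ndarray,
                   cur_voxels: np.ndarray,
                   voxel_size: float,
                   volume_threshold: float,
                   move_threshold: float):
    """Compare a reference-time voxel grid with a later grid and judge
    whether a cargo collapse may have occurred.

    ref_voxels, cur_voxels: 3D boolean arrays (True = occupied voxel).
    voxel_size: edge length of one voxel in metres.
    Returns (possible_collapse, volume_change_m3, max_move_m).
    """
    # Volume change: difference in the number of occupied voxels.
    voxel_volume = voxel_size ** 3
    volume_change = abs(int(cur_voxels.sum()) - int(ref_voxels.sum())) * voxel_volume

    # Movement estimate: among voxels whose occupancy changed, take the
    # largest distance between a vacated voxel and a newly occupied one
    # as a rough upper bound on how far cargo moved.
    vacated = np.argwhere(ref_voxels & ~cur_voxels)
    occupied = np.argwhere(~ref_voxels & cur_voxels)
    if len(vacated) and len(occupied):
        diffs = occupied[:, None, :] - vacated[None, :, :]
        max_move = np.sqrt((diffs ** 2).sum(axis=2)).max() * voxel_size
    else:
        max_move = 0.0

    possible = volume_change > volume_threshold or max_move > move_threshold
    return possible, volume_change, max_move
```

For example, a single occupied voxel that "jumps" from one corner of a 4x4x4 grid to the opposite corner produces no volume change but a large movement estimate, so the movement threshold triggers the collapse warning.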
FIG. 1 is an image diagram schematically showing an example of the configuration and usage of a loading space recognition system according to Embodiment 1.
FIG. 2 is a block diagram schematically showing the configuration of the loading space recognition device in the loading space recognition system according to Embodiment 1.
FIG. 3 is a flowchart schematically showing the operation of the loading space recognition device in the loading space recognition system according to Embodiment 1.
FIG. 4 is an image diagram schematically showing several examples in which there is a volume change when the difference between a reference voxel and a comparison voxel is extracted.
FIG. 5 is an image diagram schematically showing several examples in which there is no volume change but there is movement when the difference between a reference voxel and a comparison voxel is extracted.
FIG. 6 is an image diagram schematically showing the transition from cluster estimation to longest-movement estimation in example 2-1 of FIG. 5.
FIG. 7 is an image diagram schematically showing the transition from cluster estimation to longest-movement estimation in example 2-2 of FIG. 5.
FIG. 8 is an image diagram schematically showing the transition from cluster estimation to longest-movement estimation in example 2-3 of FIG. 5.
FIG. 9 is an image diagram schematically showing the transition from cluster estimation to longest-movement estimation in example 2-4 of FIG. 5.
FIG. 10 is an image diagram schematically showing a modification of the configuration and usage of the loading space recognition system according to Embodiment 1.
FIG. 11 is a block diagram schematically showing the configuration of a loading space recognition device according to Embodiment 2.
FIG. 12 is a block diagram schematically showing the configuration of hardware resources.
Embodiments will now be described with reference to the drawings. Reference signs in the drawings are provided solely to aid understanding and are not intended to limit the invention to the illustrated modes. The embodiments below are merely examples and do not limit the present invention. Connection lines between blocks in the drawings referred to in the following description include both bidirectional and unidirectional connections; unidirectional arrows schematically show the flow of the main signals (data) and do not exclude bidirectionality. Furthermore, although not explicitly shown in the circuit diagrams, block diagrams, internal configuration diagrams, connection diagrams, and the like in this disclosure, an input port and an output port exist at the input end and output end of each connection line; the same applies to input/output interfaces. The program is executed via a computer device, which comprises, for example, a processor, a storage device, an input device, a communication interface, and, as necessary, a display device, and which is configured to communicate, whether wired or wireless, with devices inside or outside the apparatus (including computers) via the communication interface.
[Embodiment 1]
A loading space recognition system according to Embodiment 1 will be described with reference to the drawings. FIG. 1 is an image diagram schematically showing an example of the configuration and usage of the loading space recognition system according to Embodiment 1. FIG. 2 is a block diagram schematically showing the configuration of the loading space recognition device in the loading space recognition system according to Embodiment 1. Here, cargo in a truck container is described as an example.
The loading space recognition system 1 uses a sensor 10 to recognize changes (volume changes, distance changes) of objects in a space in which the objects are loaded (in FIG. 1, the cargo 4 in the loading space 5 inside the container 3 of the truck 2) (see FIG. 1). In the loading space recognition system 1, the sensor 10 and a loading space recognition device 200 are connected so as to be able to communicate (by wire or wirelessly). The loading space recognition system 1 can be mounted on the truck 2. In the system, the loading space recognition device 200 recognizes changes of the objects in the loading space 5 based on imaging data (100 in FIG. 2) captured by the sensor 10. Here, the imaging data 100 is three-dimensional data captured by the sensor 10 (data capturing three-dimensional elements such as point cloud data or depth data, or data reconstructed into a three-dimensional space from a plurality of images, etc.).
The sensor 10 senses and images the surface of the cargo 4 in the loading space 5, which is the imaging area (see FIG. 1). The sensor 10 is communicably connected to the loading space recognition device 200 and outputs the imaging data (100 in FIG. 2), created from the three-dimensional data obtained by imaging the loading space 5, to the loading space recognition device 200. The sensor 10 can be selected according to imaging conditions such as the imaging distance and angle of view required to detect the cargo 4 and the size of the container 3, and according to customer requirements; products sold by various manufacturers can be used. Examples of the sensor 10 include a stereo camera, a ToF (Time of Flight) camera, a 3D LiDAR, a 2D LiDAR, and other LIDAR (Laser Imaging Detection And Ranging) devices. The sensor 10 can be mounted at a position from which it can image the loading space 5, for example near the top of the container 3 on the cargo-entrance side, so that it images the cargo 4 in the loading space 5 from above the entrance. At least one sensor 10 is required for the loading space 5, but a plurality may be used, for example when the loading space 5 is so large that it is difficult to cover with a single sensor. When a plurality of sensors 10 are used in the loading space 5, they may differ in type and manufacturer, and the loading space recognition device 200 synthesizes the imaging data captured by the individual sensors 10.
The loading space recognition device 200 recognizes changes (volume changes, distance changes) of the cargo 4 in the loading space 5 in which the cargo 4 is loaded, based on the imaging data 100 from the sensor 10 (see FIGS. 1 and 2). As the loading space recognition device 200, a device (computer device) having the functional units of a computer (for example, a processor, storage device, input device, communication interface, and display device) can be used, such as a notebook personal computer, smartphone, or tablet terminal. The loading space recognition device 200 has a function of determining whether there is a possibility that the cargo 4 will collapse. It can also be employed in situations such as pallet stacking, where a cargo collapse, if it occurs, is expected to produce a volume change of a certain aggregate amount. By executing a predetermined program, the loading space recognition device 200 implements a preprocessing unit 210, a packing-state grasping unit 220, a determination unit 230, and a user interface unit 240.
The preprocessing unit 210 is a functional unit that preprocesses the imaging data 100 for the loading space recognition processing (see FIG. 2). It outputs preprocessed data 101, obtained by preprocessing the imaging data 100, to the packing-state grasping unit 220 and the user interface unit 240. The preprocessing unit 210 comprises a format conversion unit 211 and a noise removal unit 212.
The format conversion unit 211 is a functional unit that, as preprocessing, converts the format of the imaging data 100 into a common format usable throughout the loading space recognition device 200 (see FIG. 2). It outputs the imaging data 100 in the common format to the noise removal unit 212. If the imaging data 100 is already in the common format, the processing of the format conversion unit 211 can be omitted (skipped).
The noise removal unit 212 is a functional unit that, as preprocessing, removes noise (for example, point clouds unnecessary for loading space recognition) from the imaging data 100 received from the format conversion unit 211 (see FIG. 2). It outputs the preprocessed data 101, from which the noise has been removed, to the cargo overall-image estimation unit 222 of the packing-state grasping unit 220 and, as necessary, to the display unit 241 of the user interface unit 240. Noise removal methods include, for example, smoothing, filtering (e.g., moving-average filtering, median filtering), and outlier removal (e.g., outlier removal by a chi-square test). If there is almost no noise, the processing of the noise removal unit 212 may be omitted (skipped).
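Two of the noise removal methods named above can be sketched as follows. This is an illustrative sketch, not the device's actual implementation: the function names and parameters are assumptions, and the distance-from-centroid outlier test is a simple stand-in for the statistical outlier removal mentioned in the text (the chi-square variant would replace the standard-deviation test).

```python
import numpy as np

def remove_outliers(points: np.ndarray, k: float = 2.0) -> np.ndarray:
    """Drop points whose distance from the centroid deviates from the
    mean distance by more than k standard deviations.

    points: (N, 3) array of x, y, z coordinates.
    """
    d = np.linalg.norm(points - points.mean(axis=0), axis=1)
    keep = np.abs(d - d.mean()) <= k * d.std()
    return points[keep]

def median_filter_depth(depth: np.ndarray, size: int = 3) -> np.ndarray:
    """Median filter for a 2D depth image: suppresses salt-and-pepper
    noise while preserving box edges better than a moving average."""
    h, w = depth.shape
    pad = size // 2
    padded = np.pad(depth, pad, mode='edge')
    out = np.empty_like(depth)
    for i in range(h):
        for j in range(w):
            out[i, j] = np.median(padded[i:i + size, j:j + size])
    return out
```

A single spurious depth spike in an otherwise flat region is replaced by the median of its neighbourhood, while a stray point far from the cargo cluster is discarded by the outlier test.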
The preprocessed data 101 is output to the display unit 241, where the user can view it. Since the imaging data 100 and the preprocessed data 101 form the basis of all subsequent processing, making them storable also allows them to serve as corroborating evidence.
The packing-state grasping unit 220 is a functional block that grasps the current packing state from the preprocessed data 101 (see FIG. 2). It outputs voxel data 102 representing the grasped current packing state to the determination unit 230. The packing-state grasping unit 220 comprises an area designation unit 221, a cargo overall-image estimation unit 222, and a voxelization unit 223. From the preprocessed data 101, it measures information on the shape of the loaded cargo or on the empty space and provides quantitative information.
The area designation unit 221 is a functional unit that designates the area within the container 3 in which the cargo 4 can be loaded (the loading area) (see FIG. 2). If, for example, there is an area in the container 3 where cargo 4 cannot be loaded, the area designation unit 221 excludes that area when designating the loading area. The loading area can be designated by the user via the operation unit 242, or designated automatically in combination with a system that automatically detects the area boundaries.
The area designation unit 221 may also allow the user, via the operation unit 242, to designate determination-exclusion areas to be excluded from the determination by the determination unit 230. This prevents packing-state changes from being judged for cargo 4 that cannot maintain a fixed shape, thereby avoiding excessive change determinations. The exclusion areas may be designated not only per area but also per voxel or per single/individual package (per object).
The cargo overall-image estimation unit 222 is a functional unit that estimates the overall image of the cargo 4 loaded in the loading area, based on the preprocessed data 101 from the noise removal unit 212 and the loading area designated by the area designation unit 221 (see FIG. 2). It creates estimation result data representing the overall image of the loaded cargo 4 and outputs the created estimation result data to the voxelization unit 223. The estimation covers not only the cargo surface and the cargo loaded at the front, but also cargo (and, as necessary, gaps) loaded in rear portions that are shielded by the front cargo and cannot be seen from the sensor 10 (occlusion portions). To estimate the cargo loaded in the occlusion portions, interpolation from past data accumulated in time series or prior knowledge such as the size and number of the loaded packages may be used. In addition to a simple algorithm that assumes, for example, that coordinate points representing cargo exist on the extension of the straight line connecting the sensor to a frontmost coordinate point, the preprocessed data 101 stored in time series may be used, and machine learning may also be used.
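For an overhead sensor, the simple "cargo continues along the extension of the sensor ray" assumption reduces to filling every voxel layer below the observed cargo surface. The sketch below illustrates this special case only; the function and parameter names are assumptions, and a real implementation would trace the actual sensor rays rather than vertical columns.

```python
import numpy as np

def fill_occlusion(height_map: np.ndarray, voxel_size: float,
                   max_height: float) -> np.ndarray:
    """Naive occlusion completion for an overhead sensor: assume every
    voxel below the observed cargo-top height is also occupied.

    height_map: (X, Y) array of cargo-top heights in metres (0 = floor).
    Returns a boolean voxel grid of shape (X, Y, Z).
    """
    nz = int(round(max_height / voxel_size))
    levels = np.arange(nz) * voxel_size  # bottom height of each voxel layer
    # a layer is occupied where it lies below the observed surface
    return height_map[:, :, None] > levels[None, None, :]
```

A column whose surface is measured at 0.3 m, with 0.1 m voxels, is filled with three occupied voxels, even though the sensor only saw the topmost face.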
An example of a method for collecting machine learning training data is to record the preprocessed data 101 in a database in association with the actual loading status. For example, distance sensors are installed at predetermined intervals on the ceiling surface of the container, and the distance to the loaded cargo is measured by each distance sensor. This makes it possible to grasp the loading status of the cargo, such as whether the cargo directly below a sensor is stacked up close to the ceiling, stacked up to about half the height, or absent altogether. By installing multiple sensors on the ceiling surface, the loading status of the entire interior of the container can be grasped. Machine learning can then be performed by associating the preprocessed data 101 with the actual loading status obtained in this way. The machine learning method is not limited to this, and other methods may be used. When machine learning is used, a function for estimating information and attributes other than shape, such as "weight" or "do not stack on top / do not place underneath," may also be provided.
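Purely as an illustration of the training-data collection just described (the function names and values are hypothetical and not part of the embodiment), each preprocessed frame can be stored together with fill-level labels derived from the ceiling distance sensors:

```python
def fill_level(ceiling_height, distance_to_cargo):
    """Label from one ceiling sensor: fraction of the height that is filled.

    distance_to_cargo == ceiling_height means no cargo below the sensor;
    distance_to_cargo == 0 means cargo is stacked up to the ceiling.
    """
    return 1.0 - distance_to_cargo / ceiling_height

def make_training_record(preprocessed_frame, sensor_distances, ceiling_height):
    """Pair a preprocessed frame with per-sensor fill-level labels,
    producing a record that could be appended to the training database."""
    labels = [fill_level(ceiling_height, d) for d in sensor_distances]
    return {"input": preprocessed_frame, "labels": labels}

rec = make_training_record(
    preprocessed_frame=[[0.2, 0.4], [0.1, 0.0]],   # stand-in for data 101
    sensor_distances=[0.0, 1.2, 2.4],              # metres to cargo below each sensor
    ceiling_height=2.4,
)
print(rec["labels"])  # [1.0, 0.5, 0.0]
```

The labels here are per-sensor fill fractions; a real collection pipeline would choose whatever label granularity matches the estimator being trained.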
The voxelization unit 223 is a functional unit that voxelizes the estimation result data from the package overall image estimation unit 222 (see FIG. 2). Based on the estimation result data from the package overall image estimation unit 222, the voxelization unit 223 creates voxel data 102 on the overall image of the packages 4, treating the portions where no loaded package 4 exists as empty space. The voxelization unit 223 outputs the created voxel data 102 to the reference determination unit 231 and the difference extraction unit 232 of the determination unit 230. In the voxelization process, shielded areas invisible to the 3D sensor may be inferred, completion based on data accumulated in time series may be performed, and prior knowledge about the loaded packages, such as their size and number, may be used.
Here, the voxel data 102 is data that represents the overall image of the packages 4 as a combination of voxels (cubes) of a predetermined size. The voxel data 102 includes information on the dimensions of each voxel and the position of each face. The voxel data 102 may be stored each time it is created. The voxel data 102 may also include information such as the number of packages within one voxel, which voxels a single package spans, and the weight and shape of the packages.
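As an orienting sketch only (the class and field names are hypothetical, not part of the embodiment), voxel data of the kind described above can be pictured as a grid of fixed-size cells, each recording occupancy and optional per-package attributes:

```python
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class Voxel:
    occupied: bool = False                             # True if part of a loaded package
    package_ids: list = field(default_factory=list)    # packages overlapping this voxel
    weight: Optional[float] = None                     # optional attribute, if known

@dataclass
class VoxelData:
    size: float                                        # edge length of one cubic voxel (metres)
    dims: tuple                                        # (nx, ny, nz) voxel counts
    grid: dict = field(default_factory=dict)           # (x, y, z) index -> Voxel

    def volume(self) -> float:
        """Total occupied volume = occupied voxel count x single-voxel volume."""
        return sum(v.occupied for v in self.grid.values()) * self.size ** 3

# One package spanning two 0.5 m voxels.
vd = VoxelData(size=0.5, dims=(4, 4, 8))
vd.grid[(0, 0, 0)] = Voxel(occupied=True, package_ids=[1])
vd.grid[(1, 0, 0)] = Voxel(occupied=True, package_ids=[1])
print(vd.volume())  # 2 voxels x 0.125 m^3 = 0.25
```

A sparse dictionary is used here for brevity; a dense array indexed by the voxel coordinates would serve equally well.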
The determination unit 230 is a functional unit that estimates the changed volume or movement amount of the packages by comparing the voxel data 102 at a certain reference time (reference voxel data) with the voxel data 102 at a time after a predetermined or arbitrary period has elapsed (comparison voxel data), and determines whether there is a possibility of cargo collapse by comparing the estimated changed volume or movement amount with a threshold (see FIG. 2). This makes it possible to prevent the packages 4 from falling. The determination unit 230 may also detect the possibility of cargo collapse not only by comparing the reference voxel data with the comparison voxel data, but also by comparing the comparison voxel data with the comparison voxel data created immediately before it. This makes it possible to avoid determining that cargo collapse is likely when the amount of change per unit time between successive sets of comparison voxel data is smaller than the amount of change between the reference voxel data and the comparison voxel data. Furthermore, the truck 2 (a structure having a loading space) may be equipped with a detection unit 20 that detects vibration and sound (see FIG. 10), and the determination unit 230 may acquire comparison voxel data from the voxelization unit 223 when the detection unit 20 detects vibration or sound above a certain level, in order to detect the possibility of cargo collapse. Furthermore, to supplement and verify the reliability of the movement estimation, information obtained from another system may be used, for example that many of the packages are heavy or, conversely, that many are light. The determination unit 230 includes a reference determination unit 231, a difference extraction unit 232, a changed volume estimation unit 233, a grouping extraction unit 234, a combination estimation unit 235, a movement amount estimation unit 236, and a cargo collapse determination unit 237.
The reference determination unit 231 is a functional unit that fixes the voxel data 102 at a certain reference time from the voxelization unit 223 as the change reference point (see FIG. 2). In response to an instruction from the operation unit 242, the reference determination unit 231 stores the voxel data 102 at the reference time (the instructed time) from the voxelization unit 223 and holds it as reference data for positional change. The reference determination unit 231 outputs the voxel data 102 at the reference time to the difference extraction unit 232.
The difference extraction unit 232 is a functional unit that compares the voxel data 102 at an arbitrary reference time (reference voxel data) with the voxel data 102 at a time when a predetermined or arbitrary period has elapsed from the reference time (comparison voxel data), thereby extracting the differences between voxels at corresponding positions (for example, differences in position in the depth direction) on the face of the packages 4 viewed from a predetermined position (for example, the rear of the truck 2, the position of the sensor 10) (see FIG. 2). In the difference extraction process, the difference extraction unit 232 acquires the reference voxel data from the reference determination unit 231 and the comparison voxel data from the voxelization unit 223. In the difference extraction process, for example, when the surface position of a voxel seen from behind the truck 2 moves toward the front of the truck 2 in the depth direction (for example, when the package 4 at the target voxel position moves left, right, or down and the package 4 one step behind appears, when the package 4 at the target voxel position moves toward the back, or when the occlusion portion decreases), a difference can be extracted indicating that the volume has "decreased"; conversely, when the surface position of a voxel moves toward the rear of the truck 2 (for example, when the package 4 at the target voxel position moves forward, when another package 4 moves in front of the package 4 at the target voxel position, or when the occlusion portion increases), a difference can be extracted indicating that the volume has "increased". In the difference extraction process, differences can also be extracted so as to represent the degree of volume decrease or increase in steps according to the distance by which the surface position of the voxel has moved toward the front or rear of the truck 2. The difference extraction unit 232 outputs the difference data extracted by the difference extraction process to the changed volume estimation unit 233 and the grouping extraction unit 234.
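Treating the view from the sensor as a two-dimensional map of depth values (one per voxel column), the difference extraction described above can be sketched as follows. This is a simplified illustration under assumed conventions, not the embodiment's implementation: depth is measured from the sensor, so a surface receding from the sensor (toward the front of the truck) reads as a volume decrease.

```python
import numpy as np

def extract_differences(reference_depth, comparison_depth):
    """Per-cell signed difference between two depth maps (sensor at depth 0).

    Positive result -> surface moved toward the sensor -> volume "increased".
    Negative result -> surface moved away from sensor  -> volume "decreased".
    Values are in voxel units.
    """
    return reference_depth - comparison_depth

# Reference: a 2x3 cargo face, every surface 5 voxels from the sensor.
ref = np.full((2, 3), 5)
# Comparison: one surface receded to 7 (package moved away or dropped out),
# another advanced to 4 (a package moved toward the sensor).
cmp_ = ref.copy()
cmp_[0, 0] = 7   # decrease
cmp_[1, 2] = 4   # increase
diff = extract_differences(ref, cmp_)
print(diff)
# cell (0,0) = -2 (decrease), cell (1,2) = +1 (increase), all others 0
```

Stepped grading of the decrease or increase, as mentioned above, would amount to binning the magnitude of each cell of `diff`.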
The changed volume estimation unit 233 is a functional unit that estimates the changed volume of the overall image of the packages 4 (changed volume amount) based on the difference data from the difference extraction unit 232 (see FIG. 2). The changed volume can be represented, for example, by the sum of the differences in the depth-direction positions of voxels at corresponding positions on the face of the packages 4 viewed from the rear of the truck 2. When a certain volume increases at one position and the same volume decreases at another position, the changed volume is zero (see movement examples 2-1 to 2-4 in FIG. 5). When a certain volume increases at one position and a larger volume decreases at another position, the changed volume is negative. When a certain volume increases at one position and a smaller volume decreases at another position, the changed volume is positive (see movement examples 1-1 to 1-3 in FIG. 4). When a certain volume increases at one position with no decrease at another position, the changed volume is positive. Furthermore, when a certain volume decreases at one position with no increase at another position, the changed volume is negative. The changed volume estimation unit 233 outputs the estimated changed volume estimation data to the cargo collapse determination unit 237.
The grouping extraction unit 234 is a functional unit that, based on the difference data from the difference extraction unit 232, extracts a group of voxels with no change in volume that lies between a voxel with increased volume (volume-increase voxel) and a voxel with decreased volume (volume-decrease voxel) at the same horizontal position (see FIG. 2). The grouping extraction unit 234 can operate on the premise that the volume of the packages 4 does not change. When a group can be extracted, the grouping extraction unit 234 outputs difference data including the extracted grouping data to the combination estimation unit 235. When no group can be extracted, the grouping extraction unit 234 outputs the difference data from the difference extraction unit 232 to the combination estimation unit 235.
The combination estimation unit 235 is a functional unit that estimates combinations of volume-increase voxels and volume-decrease voxels based on the difference data from the grouping extraction unit 234 (including grouping data when a group could be extracted) (see FIG. 2). The combination estimation unit 235 can operate on the premise that the volume of the packages 4 does not change. In the combination estimation process, for example, when multiple combinations are possible, a volume-increase voxel and a volume-decrease voxel at the same horizontal position can be combined preferentially. When multiple combinations are possible, the volume-increase voxel and the volume-decrease voxel that are farthest apart can also be combined preferentially. Alternatively, when multiple combinations are possible, volume-increase voxels and volume-decrease voxels can be combined preferentially so that the total distance is maximized. Also, since a package 4 cannot move upward when it falls, a volume-decrease voxel need not be combined with the volume-increase voxel one level directly above it, and likewise a volume-increase voxel need not be combined with the volume-decrease voxel one level directly below it. When a combination can be estimated, the combination estimation unit 235 outputs difference data including the extracted combination data to the movement amount estimation unit 236. When no combination can be estimated, the combination estimation unit 235 outputs the difference data from the grouping extraction unit 234 to the movement amount estimation unit 236.
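Under the constant-volume premise, this pairing is an assignment problem. A minimal sketch of two of the rules named above — prefer pairs at the same horizontal position, and forbid a pair in which the increase lies directly above the decrease (a falling package cannot move upward) — might look like this; the greedy strategy and function names are illustrative only, not the embodiment's algorithm.

```python
def estimate_combinations(increases, decreases):
    """Greedily pair volume-decrease cells with volume-increase cells.

    Each cell is (row, col): row 0 is the bottom level, col is the horizontal
    position. Same-column pairs are tried first; a decrease is never paired
    with the increase one level directly above it, since a falling package
    cannot move upward.
    """
    pairs, unused_inc = [], list(increases)
    for dec in decreases:
        # Sort candidates so same-column increases come first (False < True).
        candidates = sorted(unused_inc, key=lambda inc: inc[1] != dec[1])
        for inc in candidates:
            if inc == (dec[0] + 1, dec[1]):   # directly above -> forbidden
                continue
            pairs.append((dec, inc))
            unused_inc.remove(inc)
            break
    return pairs

# A package that was at (1, 0) fell to (0, 2): the volume decreased at (1, 0)
# and increased at (0, 2), so the two cells are paired.
print(estimate_combinations(increases=[(0, 2)], decreases=[(1, 0)]))
# [((1, 0), (0, 2))]
```

The distance-maximizing variants mentioned above would replace the greedy loop with a global assignment over all candidate pairs.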
The movement amount estimation unit 236 is a functional unit that estimates the amount of movement of the packages 4 that has occurred between the reference time and the time after a predetermined or arbitrary period has elapsed, based on the difference data from the combination estimation unit 235 (which may include grouping data and combination data) (see FIG. 2). The movement amount estimation unit 236 can operate on the premise that the volume of the packages 4 does not change. When the difference data includes both grouping data and combination data, the movement amount estimation unit 236 calculates the distance from the volume-decrease voxel to the volume-increase voxel of the combination data, calculates the length of the unchanged-volume voxel group of the grouping data, and estimates the value obtained by subtracting the calculated length from the calculated distance as the movement amount of the package 4. When the difference data includes combination data but no grouping data, the movement amount estimation unit 236 calculates the distance from the volume-decrease voxel to the volume-increase voxel of the combination data and estimates that distance as the movement amount of the package 4. When the difference data includes no combination data (regardless of whether grouping data is present), the movement of the package 4 involves a change in volume, so the movement amount estimation unit 236 does not estimate a movement amount, and the estimation of the changed volume by the changed volume estimation unit 233 can be given priority. In estimating the movement amount, corrections may be applied so that horizontal movement and vertical movement (falling) are weighted differently; for example, horizontal movement may be corrected to be smaller than the estimated movement amount so that even a large movement does not trigger a warning, while vertical or diagonal movement may be corrected to be larger than the estimated movement amount so that even a small movement triggers a warning. When the difference data includes multiple sets of combination data, the movement amount estimation unit 236 estimates a movement amount for each set of combination data.
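The subtraction rule just described — the movement amount is the distance between the paired decrease and increase voxels, minus the length of any unchanged-volume group lying between them — can be sketched as follows. The names are hypothetical, positions are in voxel units, and the straight-line distance is used purely for illustration.

```python
import math

def estimate_movement(decrease_pos, increase_pos, unchanged_run_length=0):
    """Movement amount for one decrease/increase pair.

    distance(decrease voxel -> increase voxel) minus the length of the
    unchanged-volume voxel group between them (0 when there is none).
    """
    distance = math.dist(decrease_pos, increase_pos)
    return distance - unchanged_run_length

# No unchanged group in between: the movement is the full distance.
print(estimate_movement((0, 0, 3), (0, 0, 0)))                          # 3.0
# A 2-voxel unchanged group sits between the pair: that span is not movement.
print(estimate_movement((0, 0, 3), (0, 0, 0), unchanged_run_length=2))  # 1.0

def longest_movement(movements):
    """The longest of the per-pair movement estimates."""
    return max(movements)

print(longest_movement([1.0, 3.0, 2.5]))                                # 3.0
```

The direction-dependent weighting mentioned above would scale the returned value down for horizontal pairs and up for vertical or diagonal ones before the threshold comparison.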
The movement amount estimation unit 236 also selects the longest of the estimated movement amounts and takes the selected movement amount as the longest movement amount. When only one movement amount has been estimated, that movement amount is taken as the longest movement amount. The movement amount estimation unit 236 outputs the estimated longest movement amount estimation data to the cargo collapse determination unit 237.
Examples of the operations of the grouping extraction unit 234, the combination estimation unit 235, and the movement amount estimation unit 236 will be described later.
The cargo collapse determination unit 237 is a functional unit that determines whether there is a possibility of the packages 4 collapsing by comparing the changed volume estimation data from the changed volume estimation unit 233, or the longest movement amount estimation data from the movement amount estimation unit 236, with a preset threshold (a first threshold for the changed volume, or a second threshold for the longest movement amount) (see FIG. 2). The cargo collapse determination unit 237 determines that the packages 4 may collapse when the changed volume estimation data is greater than the first threshold (greater than or equal to it is also acceptable), and determines that there is no possibility of collapse when the changed volume estimation data is less than or equal to the first threshold (less than it is also acceptable). Likewise, the cargo collapse determination unit 237 determines that the packages 4 may collapse when the longest movement amount is greater than the second threshold (greater than or equal to it is also acceptable), and determines that there is no possibility of collapse when the longest movement amount is less than or equal to the second threshold (less than it is also acceptable). Which of the changed volume and the longest movement amount the cargo collapse determination unit 237 uses preferentially in determining the possibility of collapse is arbitrary, but considering the data processing load, the determination can preferentially use the changed volume, which imposes a relatively small load. When it is determined that the packages 4 may collapse, the cargo collapse determination unit 237 outputs warning output instruction information to the warning output unit 243 to warn the user that an abnormality has occurred.
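The threshold comparison above reduces to two checks. The sketch below uses strict "greater than" comparisons, though as noted "greater than or equal to" is equally acceptable; comparing the magnitude of the changed volume (which can be negative) is an assumption of this sketch, and all names and threshold values are illustrative.

```python
def judge_collapse(changed_volume=None, longest_movement=None,
                   volume_threshold=0.5, movement_threshold=2.0):
    """Return True when a cargo-collapse warning should be issued.

    The changed volume (first threshold) is checked first, since it is
    relatively cheap to compute; the longest movement (second threshold)
    is checked only when a value is available. The magnitude of the
    changed volume is compared, so a large decrease also triggers.
    """
    if changed_volume is not None and abs(changed_volume) > volume_threshold:
        return True
    if longest_movement is not None and longest_movement > movement_threshold:
        return True
    return False

print(judge_collapse(changed_volume=0.1, longest_movement=1.0))  # False
print(judge_collapse(changed_volume=0.8))                        # True (volume)
print(judge_collapse(changed_volume=0.0, longest_movement=3.5))  # True (movement)
```

A True result would correspond to emitting warning output instruction information toward the warning output unit.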
The user interface unit 240 is a functional unit that provides a user interface (a function for exchanging information with the user) (see FIG. 2). The user interface unit 240 serves as the interface between the user and the loading space recognition device 200, allowing the user to operate each process and confirm the results of each process. The user interface unit 240 includes a display unit 241, an operation unit 242, and a warning output unit 243.
The display unit 241 is a functional unit that displays the preprocessed data 101 from the noise removal unit 212 and other data (see FIG. 2). As the display unit 241, for example, a liquid crystal display, an organic EL (Electroluminescence) display, or AR (Augmented Reality) glasses can be used.
The operation unit 242 is a functional unit that issues area designation and confirmation instructions to the area designation unit 221 and the reference determination unit 231 based on user operations (see FIG. 2). As the operation unit 242, for example, a touch panel, a mouse, or a camera and software that recognize gestures and eye movements can be used.
The warning output unit 243 is a functional unit that outputs a warning to the user based on the warning output instruction information from the cargo collapse determination unit 237 (see FIG. 2). As the warning output unit 243, for example, a display that shows warning text and images, a speaker that emits an alarm sound, a lamp that lights up as an alarm, or a communication unit that transmits warning output instruction information to another system can be used.
Next, the operation of the loading space recognition device in the loading space recognition system according to Embodiment 1 will be described with reference to the drawings. FIG. 3 is a flowchart schematically showing the operation of the loading space recognition device in the loading space recognition system according to Embodiment 1. For the components of the loading space recognition device and their details, refer to FIGS. 1 and 2 and their descriptions.
First, the format conversion unit 211 of the preprocessing unit 210 acquires, from the sensor 10, reference imaging data 100 (reference imaging data; three-dimensional data) obtained by imaging the packages 4 in the loading space 5, which serves as the imaging area (step A1).
Next, the format conversion unit 211 of the preprocessing unit 210 converts the format of the reference imaging data 100 into the common format (step A2).
Next, the noise removal unit 212 of the preprocessing unit 210 removes noise from the reference imaging data 100 converted into the common format to create reference preprocessed data 101 (step A3).
Next, the package overall image estimation unit 222 of the packing state grasping unit 220 estimates the overall image of the packages 4 loaded in the loading area, based on the reference preprocessed data 101 and the loading area specified by the area designation unit 221 (step A4).
Next, the voxelization unit 223 of the packing state grasping unit 220 creates reference voxel data 102 (reference voxel data) on the overall image of the packages 4, based on the estimation result data from step A4 (step A5).
Next, in response to the user's operation of the operation unit 242, the reference determination unit 231 of the determination unit 230 saves the reference voxel data created by the voxelization unit 223 as the reference-time value for the cargo collapse determination process (step A6). The user can perform this operation on the operation unit 242, for example, after loading of the packages 4 into the container 3 is completed and before delivery starts.
Next, when a predetermined or arbitrary period has elapsed since the reference imaging data 100 (reference imaging data) was acquired, the format conversion unit 211 of the preprocessing unit 210 acquires, from the sensor 10, comparison imaging data 100 (comparison imaging data; three-dimensional data) obtained by imaging the packages 4 in the loading space 5, which serves as the imaging area (step A7).
Next, the format conversion unit 211 of the preprocessing unit 210 converts the format of the comparison imaging data 100 into the common format (step A8).
Next, the noise removal unit 212 of the preprocessing unit 210 removes noise from the comparison imaging data 100 converted into the common format to create comparison preprocessed data 101 (step A9).
Next, the package overall image estimation unit 222 of the packing state grasping unit 220 estimates the overall image of the packages 4 loaded in the loading area, based on the comparison preprocessed data 101 and the loading area specified by the area designation unit 221 (step A10).
Next, the voxelization unit 223 of the packing state grasping unit 220 creates comparison voxel data 102 (comparison voxel data) on the overall image of the packages 4, based on the estimation result data from step A10 (step A11).
Next, the difference extraction unit 232 of the determination unit 230 compares the reference voxel data stored in the reference determination unit 231 with the comparison voxel data created by the voxelization unit 223, thereby extracting the differences between voxels at corresponding positions (for example, differences in position in the depth direction) on the face of the packages 4 viewed from a predetermined position (for example, the rear of the truck 2, the position of the sensor 10) (step A12).
Next, the changed volume estimation unit 233 of the determination unit 230 estimates the changed volume of the overall image of the packages 4 based on the difference data from the difference extraction unit 232 (step A13).
Next, the cargo collapse determination unit 237 of the determination unit 230 determines whether the changed volume estimation data from the changed volume estimation unit 233 is greater than the preset first threshold for the changed volume (step A14). If the changed volume estimation data is greater than the first threshold (YES in step A14), it determines that the packages 4 may collapse, outputs warning output instruction information to the warning output unit 243, and proceeds to step A20.
If the changed volume estimation data is less than or equal to the first threshold (NO in step A14), the grouping extraction unit 234 of the determination unit 230 extracts, based on the difference data from the difference extraction unit 232, a group of voxels with no change in volume that lies between a volume-increase voxel and a volume-decrease voxel at the same horizontal position (step A15). This step is skipped when no group can be extracted.
Next, the combination estimation unit 235 of the determination unit 230 estimates combinations of volume-increase voxels and volume-decrease voxels, based on the difference data from the grouping extraction unit 234 (including grouping data when a group could be extracted) (step A16).
Next, the movement amount estimation unit 236 of the determination unit 230 estimates the amount of movement of the packages 4 that has occurred between the reference time and the time after a predetermined or arbitrary period has elapsed, based on the difference data from the combination estimation unit 235 (which may include grouping data and combination data) (step A17).
 Here, in estimating the movement amount, when the difference data includes both grouping data and combination data, the distance from the volume-decreased voxel to the volume-increased voxel in a combination is calculated, the length of the run of voxels with no volume change in the grouping is calculated, and the value obtained by subtracting the calculated length from the calculated distance is estimated as the movement amount of the cargo 4. When the difference data includes combination data but no grouping data, the distance from the volume-decreased voxel to the volume-increased voxel in the combination is calculated, and that distance is estimated as the movement amount of the cargo 4. When the difference data includes a plurality of combinations, a movement amount is estimated for each combination.
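The estimation rules above can be sketched as follows. This is a non-authoritative sketch under the assumption that positions are one-dimensional voxel indices and that one voxel edge has length `cell_size`; the function names are hypothetical and not part of the disclosure.

```python
def estimate_movement(dec_pos, inc_pos, grouping_len=0, cell_size=1.0):
    """Sketch of step A17: the distance from a volume-decreased voxel
    to its paired volume-increased voxel, reduced by the length of any
    grouping of unchanged voxels lying between them.  With no grouping
    (grouping_len == 0) the distance itself is the movement amount."""
    distance = abs(inc_pos - dec_pos) * cell_size
    return distance - grouping_len * cell_size


def longest_movement(combinations):
    """Sketch of step A18: estimate a movement amount for each
    (dec_pos, inc_pos, grouping_len) combination and pick the longest."""
    return max(estimate_movement(d, i, g) for d, i, g in combinations)
```

For a combination spanning three voxel lengths with a two-voxel unchanged grouping between the pair, the estimated movement is one voxel length; with several combinations, the longest of the per-combination estimates is carried forward to step A18.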
 Next, the movement amount estimation unit 236 of the determination unit 230 selects the longest of the estimated movement amounts and takes the selected value as the estimated longest movement amount (step A18).
 Next, the cargo collapse determination unit 237 of the determination unit 230 determines whether the longest movement amount estimation data from the movement amount estimation unit 236 is greater than a preset second threshold for the longest movement amount (step A19). If the longest movement amount estimation data is greater than the second threshold (YES in step A19), it determines that the cargo 4 may collapse, outputs warning output instruction information to the warning output unit 243, and proceeds to step A20. If the longest movement amount estimation data is equal to or less than the second threshold (NO in step A19), it determines that there is no possibility of collapse of the cargo 4 and ends the cycle; steps A7 to A20 are repeated until the user instructs termination.
 If the estimated fluctuation volume is greater than the first threshold (YES in step A14), or if the longest movement amount estimation data is greater than the second threshold (YES in step A19), the warning output unit 243 of the user interface unit 240 outputs a warning to the user based on the warning output instruction information from the cargo collapse determination unit 237 (step A20). The cycle then ends, and steps A7 to A20 are repeated until the user instructs termination.
 Next, the operation of the difference extraction unit, the fluctuation volume estimation unit, and the cargo collapse determination unit of the loading space recognition device in the loading space recognition system according to Embodiment 1 will be described using several examples and drawings. FIG. 4 is a conceptual diagram schematically showing several examples in which there is a fluctuation volume when the difference between the reference voxels and the comparison voxels is extracted.
 When the reference voxel data obtained by viewing the entire cargo (4 in FIG. 1) from the rear of the truck (2 in FIG. 1) is in the state shown as the reference voxel data in FIG. 4, and it changes to the comparison voxel data of movement example 1-1, the difference extraction data extracted by the difference extraction unit (232 in FIG. 2) is as shown for movement example 1-1 in FIG. 4. In this difference extraction data, only the four upper-left voxels each increase in volume by one level, and the other voxels show no change. That is, since there are only four one-level increases in volume, the reference voxel data and the comparison voxel data differ in total volume. In this case, an occluded portion (a gap or space) is considered to have increased behind the voxels whose volume increased, so the fluctuation volume estimation unit (233 in FIG. 2) estimates the fluctuation volume on the assumption that the surface moved from its position in the reference voxel data to the raised position in the comparison voxel data, and the cargo collapse determination unit (237 in FIG. 2) performs threshold determination on the fluctuation volume.
 When the reference voxel data of FIG. 4 changes to the comparison voxel data of movement example 1-2, the difference extraction data extracted by the difference extraction unit (232 in FIG. 2) is as shown for movement example 1-2 in FIG. 4. In this difference extraction data, four voxels each decrease in volume by one level, another eight voxels each increase by one level, and the other voxels show no change. That is, both increases and decreases in volume exist, but even after they cancel, four one-level increases remain, so the reference voxel data and the comparison voxel data differ in total volume. In this case, because the cargo has moved as a whole, an accurate movement amount cannot be calculated. Instead, the cargo 4 is assumed to have moved between the four increased and four decreased voxels, and for the remaining four increased voxels the surface is assumed to have moved from its position in the reference voxel data to the raised position in the comparison voxel data; the fluctuation volume estimation unit (233 in FIG. 2) estimates the fluctuation volume accordingly, and the cargo collapse determination unit (237 in FIG. 2) performs threshold determination on the fluctuation volume.
 When the reference voxel data of FIG. 4 changes to the comparison voxel data of movement example 1-3, the difference extraction data extracted by the difference extraction unit (232 in FIG. 2) is as shown for movement example 1-3 in FIG. 4. In this difference extraction data, eight voxels each decrease in volume by one level, another twelve voxels each increase by one level, a further four voxels each increase by two levels, and the other voxels show no change. That is, both increases and decreases exist, but even after they cancel, four one-level increases and four two-level increases remain, so the reference voxel data and the comparison voxel data differ in total volume. In this case, the cargo 4 is assumed to have moved between the eight one-level increases and the eight one-level decreases, and for the remaining four one-level increases and four two-level increases the surface is assumed to have moved from its position in the reference voxel data to the raised position in the comparison voxel data; the fluctuation volume estimation unit (233 in FIG. 2) estimates the fluctuation volume, and the cargo collapse determination unit (237 in FIG. 2) performs threshold determination on the fluctuation volume.
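Movement examples 1-1 to 1-3 all reduce to the same computation: per-position height differences whose increases and decreases do not fully cancel. A minimal sketch follows, assuming the voxel data is represented as a 2-D grid of stack heights (an assumption for illustration; the names `extract_diff` and `net_fluctuation_volume` are hypothetical and not part of the disclosure).

```python
def extract_diff(ref, comp):
    """Per-position height difference between comparison and reference
    voxel data (difference extraction unit 232): positive entries are
    volume-increased voxels, negative entries volume-decreased voxels,
    zero entries unchanged voxels."""
    return [[c - r for r, c in zip(ref_row, comp_row)]
            for ref_row, comp_row in zip(ref, comp)]


def net_fluctuation_volume(diff):
    """Net change in total volume, in voxels (fluctuation volume
    estimation unit 233): increases and decreases cancel, and any
    remainder is the fluctuation volume to be threshold-checked."""
    return sum(sum(row) for row in diff)
```

As in movement example 1-1, when four positions each gain one level and nothing cancels them, the net fluctuation volume is four voxels; as in example 1-2, paired increases and decreases cancel and only the surplus remains.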
 Next, the operation of the difference extraction unit, the grouping extraction unit, the combination estimation unit, the movement amount estimation unit, and the cargo collapse determination unit of the loading space recognition device in the loading space recognition system according to Embodiment 1 will be described using several examples and drawings. FIG. 5 is a conceptual diagram schematically showing several examples in which there is no fluctuation volume but there is movement when the difference between the reference voxels and the comparison voxels is extracted. FIG. 6 is a conceptual diagram schematically showing the transition from grouping estimation to longest movement amount estimation for example 2-1 of FIG. 5. FIG. 7 is a conceptual diagram schematically showing the same transition for example 2-2 of FIG. 5. FIG. 8 is a conceptual diagram schematically showing the same transition for example 2-3 of FIG. 5. FIG. 9 is a conceptual diagram schematically showing the same transition for example 2-4 of FIG. 5.
 When the reference voxel data obtained by viewing the entire cargo (4 in FIG. 1) from the rear of the truck (2 in FIG. 1) is in the state shown as the reference voxel data in FIG. 5, and it changes to the comparison voxel data of movement example 2-1, the difference extraction data extracted by the difference extraction unit (232 in FIG. 2) is as shown for movement example 2-1 in FIG. 5. In this difference extraction data, six voxels each increase in volume by one level, another six voxels each decrease by one level, and the other voxels show no change. That is, increases and decreases both exist and cancel in equal number and level, so the total volumes of the reference voxel data and the comparison voxel data are the same. In this case, the grouping extraction unit (234 in FIG. 2) extracts two groupings of unchanged voxels lying at the same horizontal position between volume-increased and volume-decreased voxels, as in FIG. 6(A); the combination estimation unit (235 in FIG. 2) estimates two combinations each pairing a grouping of volume-increased voxels with a grouping of volume-decreased voxels, as in FIG. 6(B); the movement amount estimation unit (236 in FIG. 2) calculates the distance from the volume-decreased voxels to the volume-increased voxels for each combination (see the two arrows in FIG. 6(C)), calculates the length of the run of unchanged voxels in each grouping (not shown), and estimates the value obtained by subtracting the calculated length from the calculated distance as the movement amount of the cargo 4 (see the two arrows in FIG. 6(D)); the movement amount estimation unit then takes the longest of these movement amounts as the longest movement amount (see the circled arrow in FIG. 6(D)); and the cargo collapse determination unit (237 in FIG. 2) performs threshold determination on the longest movement amount.
 When the reference voxel data of FIG. 5 changes to the comparison voxel data of movement example 2-2, the difference extraction data extracted by the difference extraction unit (232 in FIG. 2) is as shown for movement example 2-2 in FIG. 5. In this difference extraction data, eight voxels each increase in volume by one level, another eight voxels each decrease by one level, and the other voxels show no change. That is, increases and decreases both exist and cancel in equal number and level, so the total volumes of the reference voxel data and the comparison voxel data are the same. In this case, the grouping extraction unit (234 in FIG. 2) extracts one grouping of unchanged voxels lying at the same horizontal position between volume-increased and volume-decreased voxels, as in FIG. 7(A); the combination estimation unit (235 in FIG. 2) estimates two combinations each pairing a grouping of volume-increased voxels with a grouping of volume-decreased voxels, as in FIG. 7(B); the movement amount estimation unit (236 in FIG. 2) calculates the distance from the volume-decreased voxels to the volume-increased voxels for each combination (see the two arrows in FIG. 7(C)), calculates the length of the single run of unchanged voxels in the grouping (not shown), and estimates the value obtained by subtracting the calculated length from the calculated distance as the movement amount of the cargo 4 (see the two arrows in FIG. 7(D); the subtraction applies to one combination but not the other); the movement amount estimation unit then takes the longest of these movement amounts as the longest movement amount (see the circled arrow in FIG. 7(D); in this case there are two equal candidates, and either may be chosen); and the cargo collapse determination unit (237 in FIG. 2) performs threshold determination on the longest movement amount.
 When the reference voxel data of FIG. 5 changes to the comparison voxel data of movement example 2-3, the difference extraction data extracted by the difference extraction unit (232 in FIG. 2) is as shown for movement example 2-3 in FIG. 5. In this difference extraction data, eight voxels each increase in volume by one level, another eight voxels each decrease by one level, and the other voxels show no change. That is, increases and decreases both exist and cancel in equal number and level, so the total volumes of the reference voxel data and the comparison voxel data are the same. In this case, the grouping extraction unit (234 in FIG. 2) cannot extract any grouping of unchanged voxels lying at the same horizontal position between volume-increased and volume-decreased voxels, as in FIG. 8(A), so that step is skipped; the combination estimation unit (235 in FIG. 2) estimates two combinations each pairing a grouping of volume-increased voxels with a grouping of volume-decreased voxels, as in FIG. 8(B); the movement amount estimation unit (236 in FIG. 2) calculates the distance from the volume-decreased voxels to the volume-increased voxels for each combination (see the two arrows in FIG. 8(C)) and estimates the calculated distance as the movement amount of the cargo 4 (see the two arrows in FIG. 8(D)); the movement amount estimation unit then takes the longest of these movement amounts as the longest movement amount (see the circled arrow in FIG. 8(D)); and the cargo collapse determination unit (237 in FIG. 2) performs threshold determination on the longest movement amount.
 When the reference voxel data of FIG. 5 changes to the comparison voxel data of movement example 2-4, the difference extraction data extracted by the difference extraction unit (232 in FIG. 2) is as shown for movement example 2-4 in FIG. 5. In this difference extraction data, twelve voxels each increase in volume by one level, another twelve voxels each decrease by one level, and the other voxels show no change. That is, increases and decreases both exist and cancel in equal number and level, so the total volumes of the reference voxel data and the comparison voxel data are the same. In this case, the grouping extraction unit (234 in FIG. 2) cannot extract any grouping of unchanged voxels lying at the same horizontal position between volume-increased and volume-decreased voxels, as in FIG. 9(A), so that step is skipped; the combination estimation unit (235 in FIG. 2) estimates three combinations each pairing a grouping of volume-increased voxels with a grouping of volume-decreased voxels, as in FIG. 9(B); the movement amount estimation unit (236 in FIG. 2) calculates the distance from the volume-decreased voxels to the volume-increased voxels for each combination (see the three arrows in FIG. 9(C)) and estimates the calculated distance as the movement amount of the cargo 4 (see the three arrows in FIG. 9(D)); the movement amount estimation unit then takes the longest of these movement amounts as the longest movement amount (see the circled arrow in FIG. 9(D)); and the cargo collapse determination unit (237 in FIG. 2) performs threshold determination on the longest movement amount.
 Note that, for the criteria used in the grouping extraction processing of the grouping extraction unit 234, the combination estimation processing of the combination estimation unit 235, and the movement amount estimation processing of the movement amount estimation unit 236 in movement examples 1-1 to 1-3 and 2-1 to 2-4 above, refer to the detailed description of FIG. 2.
 According to Embodiment 1, the differences between voxels at corresponding positions in the reference voxel data and the comparison voxel data are extracted to estimate the fluctuation volume or longest movement amount of the entire cargo 4, and threshold determination is performed on the result; this contributes to determining the possibility of cargo collapse caused by movement of the cargo.
 Further, according to Embodiment 1, even when an occluded portion hidden by the cargo 4 arises, that occluded portion is estimated and the packing state of the cargo 4 is grasped, so changes of the cargo 4 within the occluded portion can be taken into account.
 Furthermore, according to Embodiment 1, reference voxel data of the packing state at a specific time, such as before the truck 2 starts moving, is retained and compared with comparison voxel data of the packing state after a predetermined or arbitrary time has elapsed from the reference time, so the possibility of cargo collapse can be detected. Even when loading or unloading work leaves cargo in an unstable arrangement at risk of falling, this can be detected and the driver notified. The possibility of collapse of the cargo 4 can thus be discovered early, damage to the cargo 4 can be prevented, and degradation of transport quality can be avoided.
 [Embodiment 2]
 A loading space recognition device according to Embodiment 2 will be described with reference to the drawings. FIG. 11 is a block diagram schematically showing the configuration of the loading space recognition device according to Embodiment 2.
 The loading space recognition device 200 is a device that recognizes changes (volume changes, distance changes) of cargo in a loading space where cargo is loaded, based on captured data. The loading space recognition device 200 includes a cargo overall image estimation unit 222, a voxelization unit 223, and a determination unit 230.
 The cargo overall image estimation unit 222 is configured to estimate the overall image of the cargo loaded in the loading space based on three-dimensional data obtained by imaging the loading space from a predetermined direction, and to output the result as estimation result data.
 The voxelization unit 223 is configured to voxelize the estimation result data and output it as voxel data.
 The determination unit 230 is configured to estimate the fluctuation volume or movement amount of the cargo by comparing the voxel data at an arbitrary reference time with the voxel data after a predetermined or arbitrary time has elapsed from the reference time, and to determine whether there is a possibility of cargo collapse by comparing the estimated fluctuation volume or movement amount with a threshold.
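A minimal end-to-end sketch of this determination follows, under the simplifying assumptions that the voxel data is reduced to a single row of stack heights and that every volume-decreased position may pair with every volume-increased position (a crude stand-in for the combination estimation); the function name `detect_collapse` is hypothetical and not part of the disclosure.

```python
def detect_collapse(ref_heights, comp_heights, volume_threshold, movement_threshold):
    """Sketch of the determination unit 230: compare reference-time
    voxel heights with later ones, threshold-check the net fluctuation
    volume first, then the longest distance from a volume-decreased
    position to a volume-increased position."""
    diff = [c - r for r, c in zip(ref_heights, comp_heights)]
    if abs(sum(diff)) > volume_threshold:  # fluctuation-volume check
        return True
    decreased = [i for i, d in enumerate(diff) if d < 0]
    increased = [i for i, d in enumerate(diff) if d > 0]
    if not decreased or not increased:
        return False
    longest = max(abs(i - d) for d in decreased for i in increased)
    return longest > movement_threshold    # longest-movement check
```

In this sketch a net gain or loss of stacked voxels trips the first threshold, while a pure lateral shift (increases and decreases cancelling) falls through to the movement-distance check.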
 According to Embodiment 2, the differences between voxels at corresponding positions in the reference-time voxel data and the voxel data at another time are extracted to estimate the fluctuation volume or longest movement amount of the entire cargo, and threshold determination is performed on the result; this contributes to determining the possibility of cargo collapse caused by movement of the cargo.
 The loading space recognition devices according to Embodiments 1 and 2 can be configured from so-called hardware resources (an information processing device, a computer), for example with the configuration illustrated in FIG. 12. For example, the hardware resource 1000 includes a processor 1001, a memory 1002, a network interface 1003, and the like, interconnected by an internal bus 1004.
 The configuration shown in FIG. 12 is not intended to limit the hardware configuration of the hardware resource 1000. The hardware resource 1000 may include hardware not shown (for example, an input/output interface). The number of units such as the processor 1001 included in the device is likewise not limited to the illustration in FIG. 12; for example, a plurality of processors 1001 may be included in the hardware resource 1000. For the processor 1001, for example, a CPU (Central Processing Unit), an MPU (Micro Processor Unit), or a GPU (Graphics Processing Unit) can be used.
 For the memory 1002, for example, a RAM (Random Access Memory), a ROM (Read Only Memory), an HDD (Hard Disk Drive), or an SSD (Solid State Drive) can be used.
 For the network interface 1003, for example, a LAN (Local Area Network) card, a network adapter, or a network interface card can be used.
 The functions of the hardware resource 1000 are realized by the processing modules described above. Each processing module is realized, for example, by the processor 1001 executing a program stored in the memory 1002. The program can be updated by downloading it via a network or by using a storage medium that stores the program. The processing modules may also be realized by a semiconductor chip. In other words, the functions performed by the processing modules need only be realized by software executed on some form of hardware.
 Part or all of the above embodiments can also be described as in the following supplementary notes, though they are not limited to the following.
 [Supplementary note 1]
A loading space recognition device comprising:
a cargo overall image estimation unit configured to estimate an overall image of cargo loaded in a loading space based on three-dimensional data obtained by imaging the loading space from a predetermined direction, and to output the result as estimation result data;
a voxelization unit configured to voxelize the estimation result data and output it as voxel data; and
a determination unit configured to estimate a fluctuation volume or movement amount of the cargo by comparing the voxel data at an arbitrary reference time with the voxel data after a predetermined or arbitrary time has elapsed from the reference time, and to determine whether there is a possibility of cargo collapse by comparing the estimated fluctuation volume or movement amount with a threshold.
[Supplementary note 2]
The loading space recognition device according to supplementary note 1, wherein the determination unit comprises:
a reference determination unit configured to fix the voxel data at the reference time from the voxelization unit as a change reference point;
a difference extraction unit configured to extract, by comparing the voxel data at the reference time with the voxel data at a time when a predetermined or arbitrary time has elapsed from the reference time, the differences between voxels at corresponding positions on a surface of the cargo viewed from a predetermined position, and to output them as difference data;
a fluctuation volume estimation unit configured to estimate the fluctuation volume of the overall image of the cargo based on the difference data and to output it as fluctuation volume estimation data; and
a cargo collapse determination unit configured to determine whether there is a possibility of cargo collapse by comparing the fluctuation volume estimation data with a threshold for the fluctuation volume as the threshold.
[Supplementary note 3]
The loading space recognition device according to supplementary note 2, wherein the determination unit further comprises:
a grouping extraction unit configured to extract, based on the difference data, a grouping of voxels with no increase or decrease in volume lying at the same horizontal position between a volume-increased voxel, whose volume has increased, and a volume-decreased voxel, whose volume has decreased, and to output it as grouping data;
a combination estimation unit configured to estimate, based on the difference data, a combination of the volume-increased voxel and the volume-decreased voxel and to output it as combination data; and
a movement amount estimation unit configured to estimate, based on the difference data, the grouping data, and the combination data, at least one movement amount of the cargo that occurred between the reference time and the time when a predetermined or arbitrary time has elapsed, to select the longest of the estimated movement amounts, and to estimate the selected movement amount as the longest movement amount and output it as longest movement amount estimation data,
wherein the cargo collapse determination unit is configured to determine whether there is a possibility of cargo collapse by comparing the fluctuation volume estimation data with the threshold for the fluctuation volume as the threshold and, when it is determined that there is a possibility of cargo collapse, to determine whether there is a possibility of cargo collapse by comparing the longest movement amount estimation data with a threshold for the longest movement amount as the threshold.
[Supplementary note 4]
The loading space recognition device according to supplementary note 1, wherein the determination unit comprises:
a reference determination unit configured to fix the voxel data at the reference time from the voxelization unit as a change reference point;
a difference extraction unit configured to extract, by comparing the voxel data at the reference time with the voxel data at a time when a predetermined or arbitrary time has elapsed from the reference time, the differences between voxels at corresponding positions on a surface of the cargo viewed from a predetermined position, and to output them as difference data;
a grouping extraction unit configured to extract, based on the difference data, a grouping of voxels with no increase or decrease in volume lying at the same horizontal position between a volume-increased voxel, whose volume has increased, and a volume-decreased voxel, whose volume has decreased, and to output it as grouping data;
a combination estimation unit configured to estimate, based on the difference data, a combination of the volume-increased voxel and the volume-decreased voxel and to output it as combination data;
a movement amount estimation unit configured to estimate, based on the difference data, the grouping data, and the combination data, at least one movement amount of the cargo that occurred between the reference time and the time when a predetermined or arbitrary time has elapsed, to select the longest of the estimated movement amounts, and to estimate the selected movement amount as the longest movement amount and output it as longest movement amount estimation data; and
a cargo collapse determination unit configured to determine whether there is a possibility of cargo collapse by comparing the longest movement amount estimation data with a threshold for the longest movement amount as the threshold.
[Supplementary note 5]
The loading space recognition device according to any one of supplementary notes 1 to 4, further comprising an area designation unit configured to designate, by a user operation, a loading area for cargo in the loading space,
wherein the cargo overall image estimation unit is configured to estimate the overall image of the cargo loaded in the loading area.
[Supplementary note 6]
The loading space recognition device according to supplementary note 5, wherein the area designation unit is configured to designate, by a user operation, a determination exclusion area to be excluded from the determination by the determination unit, and
the cargo overall image estimation unit is configured to estimate the overall image of the cargo loaded in the loading area while excluding cargo loaded in the determination exclusion area.
[Supplementary note 7]
The loading space recognition device according to any one of supplementary notes 1 to 6, further comprising a detection unit attached to a structure having the loading space and configured to detect shaking or sound,
wherein the determination unit is configured to acquire the voxel data from the voxelization unit when the detection unit detects shaking or sound at or above a certain level, and to estimate the fluctuation volume or movement amount of the cargo by comparing the voxel data at the reference time with the acquired voxel data.
[Supplementary note 8]
The loading space recognition device according to supplementary note 3 or 4, wherein the movement amount estimation unit is configured to correct an estimated movement amount to a smaller value when the direction of movement of the cargo is horizontal, or to a larger value when the direction of movement of the cargo is vertical or diagonal, and to select the longest movement amount from among the corrected movement amounts.
[Supplementary note 9]
The loading space recognition device according to any one of supplementary notes 1 to 8, wherein the determination unit is further configured to determine whether there is a possibility of cargo collapse by comparing the amount of change between the voxel data at a time other than the reference time and the other voxel data immediately preceding it with the amount of change between the voxel data at the reference time and the voxel data at the time other than the reference time.
[Supplementary note 10]
The loading space recognition device according to any one of supplementary notes 1 to 9, wherein the determination unit is configured to output warning output instruction information when it determines that there is a possibility of collapse of the cargo, and
the loading space recognition device further comprises a warning output unit configured to output a warning based on the warning output instruction information.
[Supplementary note 11]
A loading space recognition system comprising:
a sensor that senses a surface of cargo in a loading space and outputs captured three-dimensional data; and
the loading space recognition device according to any one of supplementary notes 1 to 10.
[Supplementary note 12]
A loading space recognition method for recognizing a loading space for cargo using hardware resources, the method comprising:
a step of estimating an overall image of the cargo loaded in the loading space based on three-dimensional data obtained by imaging the loading space from a predetermined direction and outputting the result as estimation result data;
a step of voxelizing the estimation result data and outputting it as voxel data; and
a step of estimating a fluctuation volume or movement amount of the cargo by comparing the voxel data at an arbitrary reference time with the voxel data after a predetermined or arbitrary time has elapsed from the reference time, and determining whether there is a possibility of cargo collapse by comparing the estimated fluctuation volume or movement amount with a threshold.
[Supplementary note 13]
A program that causes hardware resources to execute processing for recognizing a loading space for cargo, the program causing the hardware resources to execute:
processing of estimating an overall image of the cargo loaded in the loading space based on three-dimensional data obtained by imaging the loading space from a predetermined direction and outputting the result as estimation result data;
processing of voxelizing the estimation result data and outputting it as voxel data; and
processing of estimating a fluctuation volume or movement amount of the cargo by comparing the voxel data at an arbitrary reference time with the voxel data after a predetermined or arbitrary time has elapsed from the reference time, and determining whether there is a possibility of cargo collapse by comparing the estimated fluctuation volume or movement amount with a threshold.
[Appendix 1]
A loading space recognition device comprising:
an overall cargo image estimation unit configured to estimate an overall image of the cargo loaded in a loading space based on three-dimensional data obtained by imaging the cargo loading space from a predetermined direction, and to output the result as estimation result data;
a voxelization unit configured to voxelize the estimation result data and output it as voxel data; and
a determination unit configured to estimate a volume variation or a movement amount of the cargo by comparing the voxel data at an arbitrary reference time with the voxel data after a predetermined or arbitrary time has elapsed from the reference time, and to determine whether there is a possibility of cargo collapse by comparing the estimated volume variation or movement amount with a threshold.
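The voxelize-compare-threshold pipeline of Appendix 1 can be illustrated with a minimal sketch. This is not the patented implementation: the voxel pitch, the set-of-indices voxel representation, and the threshold values are illustrative assumptions.

```python
# Sketch: voxelize two point clouds of the cargo surface and flag a
# possible cargo collapse when the changed occupied-voxel volume exceeds
# a threshold. Pitch and threshold values are assumed, not from the claims.
VOXEL_PITCH = 0.1  # metres per voxel edge (assumed)

def voxelize(points, pitch=VOXEL_PITCH):
    """Map (x, y, z) points to the set of occupied voxel indices."""
    return {(int(x // pitch), int(y // pitch), int(z // pitch))
            for (x, y, z) in points}

def volume_change(ref_voxels, cur_voxels, pitch=VOXEL_PITCH):
    """Volume (m^3) of voxels occupied in one snapshot but not the other."""
    changed = ref_voxels ^ cur_voxels  # symmetric difference
    return len(changed) * pitch ** 3

def possible_collapse(ref_voxels, cur_voxels, threshold_m3):
    """True when the volume variation exceeds the collapse threshold."""
    return volume_change(ref_voxels, cur_voxels) > threshold_m3
```

A reference scan taken at loading time plays the role of the "reference time" voxel data; each later scan is voxelized the same way and passed to `possible_collapse`.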
[Appendix 2]
The loading space recognition device according to Appendix 1, wherein the determination unit comprises:
a reference determination unit configured to fix the voxel data at the reference time from the voxelization unit as a change reference point;
a difference extraction unit configured to extract, by comparing the voxel data at the reference time with the voxel data at a time when a predetermined or arbitrary time has elapsed from the reference time, differences between voxels at corresponding positions on the surface of the cargo as viewed from a predetermined position, and to output them as difference data;
a volume variation estimation unit configured to estimate a volume variation of the overall image of the cargo based on the difference data and to output it as volume variation estimation data; and
a cargo collapse determination unit configured to determine whether there is a possibility of cargo collapse by comparing the volume variation estimation data with a volume variation threshold serving as the threshold.
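The difference-extraction and volume-variation steps of Appendix 2 can be sketched under one simplifying assumption: the cargo is viewed from a single fixed position, so each horizontal cell can be summarised by its surface height in voxels. The height-map representation and all names are illustrative, not the claimed data structures.

```python
# Sketch of Appendix 2: per-cell difference data and volume variation.
def extract_difference(ref_heights, cur_heights):
    """Per-cell voxel-count difference (current minus reference)."""
    cells = set(ref_heights) | set(cur_heights)
    return {c: cur_heights.get(c, 0) - ref_heights.get(c, 0) for c in cells}

def estimate_volume_variation(diff, voxel_volume=0.001):
    """Total changed volume (m^3): sum of |per-cell change| * voxel volume."""
    return sum(abs(d) for d in diff.values()) * voxel_volume

def judge_collapse(diff, threshold_m3, voxel_volume=0.001):
    """Compare the estimated volume variation with its threshold."""
    return estimate_volume_variation(diff, voxel_volume) > threshold_m3
```

Cells whose difference is positive correspond to the "volume-increased voxels" of the later appendices, and negative cells to the "volume-decreased voxels".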
[Appendix 3]
The loading space recognition device according to Appendix 2, wherein the determination unit further comprises:
a group extraction unit configured to extract, based on the difference data, a group of voxels with no change in volume that lies between a volume-increased voxel and a volume-decreased voxel at the same horizontal position, and to output it as group data;
a combination estimation unit configured to estimate, based on the difference data, combinations of the volume-increased voxels and the volume-decreased voxels, and to output them as combination data; and
a movement amount estimation unit configured to estimate, based on the difference data, the group data, and the combination data, at least one movement amount of cargo that occurred between the reference time and the time when the predetermined or arbitrary time elapsed, to select the longest of the estimated movement amounts, and to output the selected movement amount as longest movement amount estimation data,
wherein the cargo collapse determination unit is configured to determine whether there is a possibility of cargo collapse by comparing the volume variation estimation data with the volume variation threshold serving as the threshold and, when it determines that there is such a possibility, to further determine whether there is a possibility of cargo collapse by comparing the longest movement amount estimation data with a longest movement amount threshold serving as the threshold.
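The combination-estimation and longest-movement steps can be sketched as follows. The nearest-pair greedy pairing used here is an assumed heuristic standing in for the claimed combination estimation, and cells are 2-D horizontal positions; none of this is taken from the application.

```python
# Sketch of Appendix 3: pair each volume-decreased cell with a
# volume-increased cell and keep the longest pair distance as the
# estimated longest movement amount.
from math import hypot

def estimate_combinations(diff):
    """Greedily pair decreased cells with their nearest increased cell."""
    decreased = [c for c, d in diff.items() if d < 0]
    increased = [c for c, d in diff.items() if d > 0]
    pairs = []
    for src in decreased:
        if not increased:
            break
        dst = min(increased, key=lambda c: hypot(c[0] - src[0], c[1] - src[1]))
        increased.remove(dst)
        pairs.append((src, dst))
    return pairs

def longest_movement(pairs):
    """Longest estimated movement distance (in cells) among all pairs."""
    return max((hypot(d[0] - s[0], d[1] - s[1]) for s, d in pairs), default=0.0)
```

The result of `longest_movement` would then be compared against the longest-movement threshold only after the volume-variation test of Appendix 2 has already flagged a possible collapse.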
[Appendix 4]
The loading space recognition device according to Appendix 1, wherein the determination unit comprises:
a reference determination unit configured to fix the voxel data at the reference time from the voxelization unit as a change reference point;
a difference extraction unit configured to extract, by comparing the voxel data at the reference time with the voxel data at a time when a predetermined or arbitrary time has elapsed from the reference time, differences between voxels at corresponding positions on the surface of the cargo as viewed from a predetermined position, and to output them as difference data;
a group extraction unit configured to extract, based on the difference data, a group of voxels with no change in volume that lies between a volume-increased voxel and a volume-decreased voxel at the same horizontal position, and to output it as group data;
a combination estimation unit configured to estimate, based on the difference data, combinations of the volume-increased voxels and the volume-decreased voxels, and to output them as combination data;
a movement amount estimation unit configured to estimate, based on the difference data, the group data, and the combination data, at least one movement amount of cargo that occurred between the reference time and the time when the predetermined or arbitrary time elapsed, to select the longest of the estimated movement amounts, and to output the selected movement amount as longest movement amount estimation data; and
a cargo collapse determination unit configured to determine whether there is a possibility of cargo collapse by comparing the longest movement amount estimation data with a longest movement amount threshold serving as the threshold.
[Appendix 5]
The loading space recognition device according to any one of Appendices 1 to 4, further comprising an area designation unit configured to designate, by a user's operation, a cargo loading area within the loading space,
wherein the overall cargo image estimation unit is configured to estimate the overall image of the cargo loaded in the loading area.
[Appendix 6]
The loading space recognition device according to Appendix 5, wherein the area designation unit is configured to designate, by a user's operation, a determination exclusion area to be excluded from the determination by the determination unit, and
the overall cargo image estimation unit is configured to estimate the overall image of the cargo loaded in the loading area while excluding cargo loaded in the determination exclusion area.
[Appendix 7]
The loading space recognition device according to any one of Appendices 1 to 6, further comprising a detection unit attached to a structure having the loading space and configured to detect shaking or sound,
wherein the determination unit is configured to acquire the voxel data from the voxelization unit when the detection unit detects shaking or sound at or above a certain level, and to estimate the volume variation or movement amount of the cargo by comparing the voxel data at the reference time with the acquired voxel data.
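The event-triggered check of Appendix 7 amounts to gating the voxel comparison on a detector reading. The callback shape, the scalar `level`, and the sensitivity comparison below are illustrative assumptions.

```python
# Sketch of Appendix 7: run a fresh scan and collapse judgement only when
# the shake/sound level from the detector reaches the sensitivity level.
def on_sensor_event(level, sensitivity, acquire_voxels, ref_voxels, judge):
    """Run the collapse judgement only for events at or above `sensitivity`."""
    if level < sensitivity:
        return None  # minor vibration: no scan, no judgement
    current = acquire_voxels()  # pull fresh voxel data from the voxelizer
    return judge(ref_voxels, current)
```

Gating on the detector keeps the device from voxelizing and comparing scans continuously during normal driving.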
[Appendix 8]
The loading space recognition device according to Appendix 3 or 4, wherein the movement amount estimation unit is configured to correct each estimated movement amount to a smaller value when the movement direction of the cargo is horizontal, or to a larger value when the movement direction of the cargo is vertical or diagonal, and to select the longest movement amount from among the corrected movement amounts.
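The direction-dependent correction of Appendix 8 can be sketched as a pair of scale factors: sideways slides are discounted, while movements with a vertical component (vertical or diagonal) are amplified before the longest movement is selected. The factor values are illustrative assumptions; the appendix only states the direction of each correction.

```python
# Sketch of Appendix 8: weight movement amounts by direction, then pick
# the longest corrected movement.
def correct_movement(distance, vertical_component,
                     lateral_factor=0.8, vertical_factor=1.2):
    """Down-weight purely horizontal moves, up-weight vertical/diagonal ones."""
    if vertical_component == 0:   # purely lateral movement
        return distance * lateral_factor
    return distance * vertical_factor

def longest_corrected(movements):
    """movements: iterable of (distance, vertical_component) pairs."""
    return max((correct_movement(d, vz) for d, vz in movements), default=0.0)
```

With this weighting, a short drop can outrank a longer sideways slide, matching the intuition that falling cargo is the stronger collapse signal.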
[Appendix 9]
The loading space recognition device according to any one of Appendices 1 to 8, wherein the determination unit is further configured to determine whether there is a possibility of cargo collapse by comparing the amount of change between the voxel data at a time other than the reference time and the immediately preceding voxel data with the amount of change between the voxel data at the reference time and the voxel data at that other time.
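Appendix 9's two-rate comparison can be sketched by contrasting the change since the previous snapshot with the cumulative change since the reference snapshot, so that a sudden jump stands out against slow settling. The ratio test below is an illustrative assumption about how the two change amounts are compared.

```python
# Sketch of Appendix 9: compare the step change (vs the previous snapshot)
# against the total change (vs the reference snapshot).
def sudden_change(ref_voxels, prev_voxels, cur_voxels, ratio=0.5):
    """Flag when the step change is a large fraction of the total change."""
    step_change = len(prev_voxels ^ cur_voxels)   # changed since last scan
    total_change = len(ref_voxels ^ cur_voxels)   # changed since reference
    if total_change == 0:
        return False
    return step_change / total_change >= ratio
```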
[Appendix 10]
The loading space recognition device according to any one of Appendices 1 to 9, wherein the determination unit is configured to output warning output instruction information when it determines that there is a possibility of cargo collapse, and
the loading space recognition device further comprises a warning output unit configured to output a warning based on the warning output instruction information.
[Appendix 11]
A loading space recognition system comprising:
a sensor configured to sense the surface of cargo in a loading space and to output imaged three-dimensional data; and
the loading space recognition device according to any one of Appendices 1 to 10.
[Appendix 12]
A loading space recognition method for recognizing a cargo loading space using hardware resources, the method comprising:
estimating an overall image of the cargo loaded in the loading space based on three-dimensional data obtained by imaging the loading space from a predetermined direction, and outputting the result as estimation result data;
voxelizing the estimation result data and outputting it as voxel data; and
estimating a volume variation or a movement amount of the cargo by comparing the voxel data at an arbitrary reference time with the voxel data after a predetermined or arbitrary time has elapsed from the reference time, and determining whether there is a possibility of cargo collapse by comparing the estimated volume variation or movement amount with a threshold.
[Appendix 13]
A program for causing hardware resources to execute processing for recognizing a cargo loading space, the program causing the hardware resources to execute:
processing of estimating an overall image of the cargo loaded in the loading space based on three-dimensional data obtained by imaging the loading space from a predetermined direction, and outputting the result as estimation result data;
processing of voxelizing the estimation result data and outputting it as voxel data; and
processing of estimating a volume variation or a movement amount of the cargo by comparing the voxel data at an arbitrary reference time with the voxel data after a predetermined or arbitrary time has elapsed from the reference time, and determining whether there is a possibility of cargo collapse by comparing the estimated volume variation or movement amount with a threshold.
The disclosures of the above patent documents are incorporated herein by reference and may be used as a basis for, or a part of, the present invention as necessary. Within the framework of the entire disclosure of the present invention (including the claims and drawings), the embodiments and examples may be changed and adjusted based on its basic technical concept. Various combinations and selections (including, where necessary, deselections) of the disclosed elements (including the elements of each claim, each embodiment or example, and each drawing) are possible within the framework of the entire disclosure. That is, the present invention naturally includes the various variations and modifications that a person skilled in the art could make in accordance with the entire disclosure, including the claims and drawings, and the technical concept. For the numerical values and numerical ranges described herein, any intermediate values, sub-values, and sub-ranges are deemed to be described even if not explicitly stated. Further, where necessary and in keeping with the gist of the present invention, the disclosures of the cited documents may be used, in part or in whole, in combination with the descriptions herein as part of the disclosure of the present invention, and such use is deemed to fall within the disclosure of the present application.
1 Loading space recognition system
2 Truck
3 Container
4 Cargo
5 Loading space
10 Sensor
20 Detection unit
100 Imaging data
101 Preprocessed data
102 Voxel data
200 Loading space recognition device
210 Preprocessing unit
211 Format conversion unit
212 Noise removal unit
220 Packing state grasping unit
221 Area designation unit
222 Overall cargo image estimation unit
223 Voxelization unit
230 Determination unit
231 Reference determination unit
232 Difference extraction unit
233 Volume variation estimation unit
234 Group extraction unit
235 Combination estimation unit
236 Movement amount estimation unit
237 Cargo collapse determination unit
240 User interface unit
241 Display unit
242 Operation unit
243 Warning output unit
1000 Hardware resources
1001 Processor
1002 Memory
1003 Network interface
1004 Internal bus

Claims (13)

1. A loading space recognition device comprising:
an overall cargo image estimation unit configured to estimate an overall image of the cargo loaded in a loading space based on three-dimensional data obtained by imaging the cargo loading space from a predetermined direction, and to output the result as estimation result data;
a voxelization unit configured to voxelize the estimation result data and output it as voxel data; and
a determination unit configured to estimate a volume variation or a movement amount of the cargo by comparing the voxel data at an arbitrary reference time with the voxel data after a predetermined or arbitrary time has elapsed from the reference time, and to determine whether there is a possibility of cargo collapse by comparing the estimated volume variation or movement amount with a threshold.
2. The loading space recognition device according to claim 1, wherein the determination unit comprises:
a reference determination unit configured to fix the voxel data at the reference time from the voxelization unit as a change reference point;
a difference extraction unit configured to extract, by comparing the voxel data at the reference time with the voxel data at a time when a predetermined or arbitrary time has elapsed from the reference time, differences between voxels at corresponding positions on the surface of the cargo as viewed from a predetermined position, and to output them as difference data;
a volume variation estimation unit configured to estimate a volume variation of the overall image of the cargo based on the difference data and to output it as volume variation estimation data; and
a cargo collapse determination unit configured to determine whether there is a possibility of cargo collapse by comparing the volume variation estimation data with a volume variation threshold serving as the threshold.
3. The loading space recognition device according to claim 2, wherein the determination unit further comprises:
a group extraction unit configured to extract, based on the difference data, a group of voxels with no change in volume that lies between a volume-increased voxel and a volume-decreased voxel at the same horizontal position, and to output it as group data;
a combination estimation unit configured to estimate, based on the difference data, combinations of the volume-increased voxels and the volume-decreased voxels, and to output them as combination data; and
a movement amount estimation unit configured to estimate, based on the difference data, the group data, and the combination data, at least one movement amount of cargo that occurred between the reference time and the time when the predetermined or arbitrary time elapsed, to select the longest of the estimated movement amounts, and to output the selected movement amount as longest movement amount estimation data,
wherein the cargo collapse determination unit is configured to determine whether there is a possibility of cargo collapse by comparing the volume variation estimation data with the volume variation threshold serving as the threshold and, when it determines that there is such a possibility, to further determine whether there is a possibility of cargo collapse by comparing the longest movement amount estimation data with a longest movement amount threshold serving as the threshold.
4. The loading space recognition device according to claim 1, wherein the determination unit comprises:
a reference determination unit configured to fix the voxel data at the reference time from the voxelization unit as a change reference point;
a difference extraction unit configured to extract, by comparing the voxel data at the reference time with the voxel data at a time when a predetermined or arbitrary time has elapsed from the reference time, differences between voxels at corresponding positions on the surface of the cargo as viewed from a predetermined position, and to output them as difference data;
a group extraction unit configured to extract, based on the difference data, a group of voxels with no change in volume that lies between a volume-increased voxel and a volume-decreased voxel at the same horizontal position, and to output it as group data;
a combination estimation unit configured to estimate, based on the difference data, combinations of the volume-increased voxels and the volume-decreased voxels, and to output them as combination data;
a movement amount estimation unit configured to estimate, based on the difference data, the group data, and the combination data, at least one movement amount of cargo that occurred between the reference time and the time when the predetermined or arbitrary time elapsed, to select the longest of the estimated movement amounts, and to output the selected movement amount as longest movement amount estimation data; and
a cargo collapse determination unit configured to determine whether there is a possibility of cargo collapse by comparing the longest movement amount estimation data with a longest movement amount threshold serving as the threshold.
5. The loading space recognition device according to any one of claims 1 to 4, further comprising an area designation unit configured to designate, by a user's operation, a cargo loading area within the loading space,
wherein the overall cargo image estimation unit is configured to estimate the overall image of the cargo loaded in the loading area.
6. The loading space recognition device according to claim 5, wherein the area designation unit is configured to designate, by a user's operation, a determination exclusion area to be excluded from the determination by the determination unit, and
the overall cargo image estimation unit is configured to estimate the overall image of the cargo loaded in the loading area while excluding cargo loaded in the determination exclusion area.
7. The loading space recognition device according to any one of claims 1 to 6, further comprising a detection unit attached to a structure having the loading space and configured to detect shaking or sound,
wherein the determination unit is configured to acquire the voxel data from the voxelization unit when the detection unit detects shaking or sound at or above a certain level, and to estimate the volume variation or movement amount of the cargo by comparing the voxel data at the reference time with the acquired voxel data.
8. The loading space recognition device according to claim 3 or 4, wherein the movement amount estimation unit is configured to correct each estimated movement amount to a smaller value when the movement direction of the cargo is horizontal, or to a larger value when the movement direction of the cargo is vertical or diagonal, and to select the longest movement amount from among the corrected movement amounts.
9. The loading space recognition device according to any one of claims 1 to 8, wherein the determination unit is further configured to determine whether there is a possibility of cargo collapse by comparing the amount of change between the voxel data at a time other than the reference time and the immediately preceding voxel data with the amount of change between the voxel data at the reference time and the voxel data at that other time.
10. The loading space recognition device according to any one of claims 1 to 9, wherein the determination unit is configured to output warning output instruction information when it determines that there is a possibility of cargo collapse, and
the loading space recognition device further comprises a warning output unit configured to output a warning based on the warning output instruction information.
11. A loading space recognition system comprising:
a sensor configured to sense the surface of cargo in a loading space and to output imaged three-dimensional data; and
the loading space recognition device according to any one of claims 1 to 10.
12. A loading space recognition method for recognizing a cargo loading space using hardware resources, the method comprising:
estimating an overall image of the cargo loaded in the loading space based on three-dimensional data obtained by imaging the loading space from a predetermined direction, and outputting the result as estimation result data;
voxelizing the estimation result data and outputting it as voxel data; and
estimating a volume variation or a movement amount of the cargo by comparing the voxel data at an arbitrary reference time with the voxel data after a predetermined or arbitrary time has elapsed from the reference time, and determining whether there is a possibility of cargo collapse by comparing the estimated volume variation or movement amount with a threshold.
13. A program for causing hardware resources to execute processing for recognizing a cargo loading space, the program causing the hardware resources to execute:
processing of estimating an overall image of the cargo loaded in the loading space based on three-dimensional data obtained by imaging the loading space from a predetermined direction, and outputting the result as estimation result data;
processing of voxelizing the estimation result data and outputting it as voxel data; and
processing of estimating a volume variation or a movement amount of the cargo by comparing the voxel data at an arbitrary reference time with the voxel data after a predetermined or arbitrary time has elapsed from the reference time, and determining whether there is a possibility of cargo collapse by comparing the estimated volume variation or movement amount with a threshold.
PCT/JP2022/007817 2021-02-26 2022-02-25 Loading space recognition device, system, method, and program WO2022181753A1 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
JP2023502529A JPWO2022181753A1 (en) 2021-02-26 2022-02-25

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
JP2021030741 2021-02-26
JP2021-030741 2021-02-26

Publications (1)

Publication Number Publication Date
WO2022181753A1 true WO2022181753A1 (en) 2022-09-01

Family

ID=83049149

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/JP2022/007817 WO2022181753A1 (en) 2021-02-26 2022-02-25 Loading space recognition device, system, method, and program

Country Status (2)

Country Link
JP (1) JPWO2022181753A1 (en)
WO (1) WO2022181753A1 (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
DE102021123581A1 (en) 2021-09-13 2023-03-16 Zf Cv Systems Global Gmbh Procedures for cargo monitoring

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP6577687B1 (en) * 2019-03-18 2019-09-18 株式会社Mujin Shape information generating device, control device, loading / unloading device, physical distribution system, program, and control method
JP2019219907A (en) * 2018-06-20 2019-12-26 三菱電機株式会社 Cargo-collapse prediction system, cargo-collapse prediction apparatus and method for them
JP2020060451A (en) * 2018-10-10 2020-04-16 日野自動車株式会社 Luggage space monitoring system and luggage space monitoring method


Also Published As

Publication number Publication date
JPWO2022181753A1 (en) 2022-09-01


Legal Events

121 Ep: the EPO has been informed by WIPO that EP was designated in this application (ref document number: 22759787; country of ref document: EP; kind code of ref document: A1)
WWE WIPO information: entry into national phase (ref document number: 2023502529; country of ref document: JP)
WWE WIPO information: entry into national phase (ref document number: 11202305459U; country of ref document: SG)
NENP Non-entry into the national phase (ref country code: DE)
122 Ep: PCT application non-entry in European phase (ref document number: 22759787; country of ref document: EP; kind code of ref document: A1)