WO2022181753A1 - Loading space recognition device, system, method, and program - Google Patents
Loading space recognition device, system, method, and program
- Publication number
- WO2022181753A1 (application PCT/JP2022/007817)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- data
- volume
- voxel
- cargo
- loading space
- Prior art date
Classifications
- G: PHYSICS
- G06: COMPUTING; CALCULATING OR COUNTING
- G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00: Image analysis
- G06T7/50: Depth or shape recovery
Definitions
- The present invention claims priority from Japanese Patent Application No. 2021-030741 (filed on February 26, 2021), the entire disclosure of which is incorporated herein by reference. The present invention relates to a loading space recognition device, system, method, and program.
- As a technique for suppressing collapse of cargo, Patent Documents 1 and 2 disclose a shape information generation device comprising: a first information acquisition unit for acquiring three-dimensional information of a first region on the surfaces of a plurality of stacked articles, obtained by imaging or scanning the articles from a first point; a second information acquisition unit for acquiring three-dimensional information of a second region of the surfaces of the articles, obtained by imaging or scanning them from a second point; and a synthesizing unit that generates information indicating at least part of the three-dimensional shape of the surfaces of the articles. The positions of the first point and the second point differ from each other, and the synthesizing unit complements one of the three-dimensional information of the first region and the three-dimensional information of the second region with the other to generate the information indicating the three-dimensional shape of at least part of the surfaces of the plurality of articles.
- As a technique for preventing collapse of cargo, Patent Document 3 discloses a stowage method for stacking a plurality of objects to be stowed in a predetermined packing style and with a predetermined weight, comprising: a step of measuring the weight and packing style of each object to be stowed; a step of calculating the density of each object from its weight and packing style; a step of accumulating the density information of each object; and a step of calculating the stowage position of each object.
- Patent Document 4 discloses a cargo room monitor that allows the driver to check the state of luggage in the luggage compartment at any time (including whether or not the luggage has collapsed). A surveillance camera is mounted on the upper surface of the luggage compartment at an intermediate position in the vehicle width direction; an information processing device inputs image data of the luggage compartment from the surveillance camera; a monitor displays the image data input from the information processing device; and a communication device inputs the image data from the information processing device and transmits it to a base station. The vehicle is thereby constructed so that the availability of the loading space can be grasped from the surveillance camera image.
- As a technique that enables drivers and a management center to easily learn that cargo has collapsed in a transportation vehicle and to respond quickly to the collapse, Patent Document 5 discloses a system comprising: an in-vehicle terminal that monitors collapse of cargo using a sensor installed on the loading platform of the transportation vehicle and, when collapse of cargo is detected, issues a warning to the driver indicating that collapse of cargo has occurred; and a management center that receives a load collapse monitoring signal, transmitted from the in-vehicle terminal, indicating that the cargo collapse has occurred.
- As a technology capable of detecting collapse of cargo that occurs during and after transfer work, Patent Document 6 discloses an article collapse detection method in which, each time an individual article is transferred, a group of articles stacked on a pallet is imaged from above before and after the transfer to acquire a first image (before the transfer) and a second image (after the transfer); the first image and the second image are compared, and whether or not collapse of cargo has occurred is determined based on the degree of change in the article area other than the area where the transferred article existed.
- Where collapse of cargo is determined based on the presence or absence of reception of signals such as infrared rays and ultrasonic waves at a sensor, movement of cargo cannot be detected, and the possibility of cargo collapse due to movement of cargo cannot be determined.
- In Patent Document 6, which can detect collapse of cargo occurring during transfer and after transfer work is completed, an edge image is generated for each of the first image taken before the transfer and the second image taken after the transfer; the two images are compared, and the presence or absence of collapse of cargo is determined based on the degree of change of the goods other than the transferred goods.
- The main object of the present invention is to provide a loading space recognition device, system, method, and program that can contribute to determining the possibility of cargo collapse due to movement of cargo.
- According to a first aspect, a loading space recognition device comprises: an estimation unit configured to estimate an overall image of the cargo loaded in a loading space based on three-dimensional data obtained by imaging the loading space from a predetermined direction, and to output it as estimation result data; a voxelization unit configured to voxelize the estimation result data and output it as voxel data; and a determination unit configured to estimate the amount of change in volume or the amount of movement of the cargo by comparing the voxel data at an arbitrary reference time with the voxel data after a predetermined or arbitrary time has passed from the reference time, and to determine whether or not there is a possibility of cargo collapse by comparing the estimated amount of change in volume or amount of movement with a threshold value.
- According to a second aspect, a loading space recognition system includes a sensor that senses the surface of the cargo in the loading space and outputs imaged three-dimensional data, and the loading space recognition device according to the first aspect.
- According to a third aspect, a loading space recognition method for recognizing a cargo loading space using hardware resources comprises: a step of estimating an overall image of the cargo loaded in the loading space based on three-dimensional data obtained by imaging the loading space from a predetermined direction, and outputting it as estimation result data; a step of voxelizing the estimation result data and outputting it as voxel data; and a step of estimating the amount of change in volume or the amount of movement of the cargo by comparing the voxel data at an arbitrary reference time with the voxel data after a predetermined or arbitrary time has elapsed from the reference time, and determining whether or not there is a possibility of cargo collapse by comparing the estimated amount of change in volume or amount of movement with a threshold value.
- According to a fourth aspect, a program causes a hardware resource to execute a process of recognizing a cargo loading space, comprising: a process of estimating an overall image of the loaded cargo based on three-dimensional data obtained by imaging the loading space from a predetermined direction and outputting it as estimation result data; a process of voxelizing the estimation result data and outputting it as voxel data; and a process of estimating the amount of change in volume or the amount of movement of the cargo by comparing the voxel data at an arbitrary reference time with the voxel data after a predetermined or arbitrary time has elapsed from the reference time, and determining whether or not there is a possibility of cargo collapse by comparing the estimated amount of change in volume or amount of movement with a threshold value.
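The flow shared by these aspects (voxelize the estimated cargo image, compare against a reference, threshold the change) can be sketched as follows. This is a minimal illustrative sketch in Python, not the claimed implementation; the function names, the boolean occupancy-grid representation, and the use of total occupied volume as the change measure are assumptions for illustration only:

```python
import numpy as np

def voxelize(points, voxel_size, grid_shape):
    """Convert an (N, 3) point cloud into a boolean occupancy grid."""
    grid = np.zeros(grid_shape, dtype=bool)
    idx = np.floor(points / voxel_size).astype(int)
    # keep only indices that fall inside the grid
    ok = np.all((idx >= 0) & (idx < np.array(grid_shape)), axis=1)
    grid[tuple(idx[ok].T)] = True
    return grid

def volume_change(reference, comparison, voxel_size):
    """Signed change in occupied volume between two occupancy grids."""
    return (int(comparison.sum()) - int(reference.sum())) * voxel_size ** 3

def may_have_collapsed(reference, comparison, voxel_size, threshold):
    """Flag possible cargo collapse when |volume change| exceeds the threshold."""
    return abs(volume_change(reference, comparison, voxel_size)) > threshold
```

In this sketch the reference grid plays the role of the voxel data at the reference time, and the comparison grid that of the voxel data after some time has elapsed.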
- the program can be recorded on a computer-readable storage medium.
- The storage medium can be non-transitory, such as a semiconductor memory, a hard disk, a magnetic recording medium, or an optical recording medium.
- the present disclosure may also be embodied as a computer program product.
- A program is input to a computer device via an input device or an external communication interface and stored in a storage device, and it drives a processor in accordance with predetermined steps or processes. The results of processing, including intermediate states, can be displayed stage by stage via a display device as necessary, or communicated to the outside via a communication interface.
- a computer device for this purpose typically includes a processor, a storage device, an input device, a communication interface, and optionally a display device, all of which are connectable to each other by a bus.
- FIG. 1 is an image diagram schematically showing an example of the configuration and usage of a loading space recognition system according to Embodiment 1.
- FIG. 2 is a block diagram schematically showing the configuration of a loading space recognition device in the loading space recognition system according to Embodiment 1;
- FIG. 3 is a flowchart schematically showing the operation of the loading space recognition device in the loading space recognition system according to Embodiment 1;
- FIG. 4 is an image diagram schematically showing some examples in which there is a varying volume amount when the difference between the reference voxels and the comparison voxels is extracted;
- FIG. 5 is an image diagram schematically showing several examples in which there is no volume variation but there is movement when the difference between the reference voxels and the comparison voxels is extracted;
- FIG. 6 is an image diagram schematically showing a transition from collective estimation to longest movement amount estimation in Example 2-1 of FIG. 5;
- FIG. 7 is an image diagram schematically showing a transition from collective estimation to longest movement amount estimation in Example 2-2 of FIG. 5;
- FIG. 8 is an image diagram schematically showing a transition from collective estimation to longest movement amount estimation in Example 2-3 of FIG. 5;
- FIG. 9 is an image diagram schematically showing a transition from collective estimation to longest movement amount estimation in Example 2-4 of FIG. 5;
- FIG. 10 is an image diagram schematically showing a modification of the configuration and usage of the loading space recognition system according to Embodiment 1;
- FIG. 11 is a block diagram schematically showing the configuration of a loading space recognition device according to Embodiment 2;
- FIG. 12 is a block diagram schematically showing the configuration of hardware resources.
- connection lines between blocks in drawings and the like referred to in the following description include both bidirectional and unidirectional connections.
- the unidirectional arrows schematically show the flow of main signals (data) and do not exclude bidirectionality.
- an input port and an output port exist at the input end and the output end of each connection line, respectively, although not explicitly shown.
- The same applies to the input/output interfaces.
- The program is executed via a computer device, which includes, for example, a processor, a storage device, an input device, a communication interface, and optionally a display device, and which is configured to be able to communicate with external devices (including computers), whether wired or wireless.
- FIG. 1 is an image diagram schematically showing an example of the configuration and usage of the loading space recognition system according to the first embodiment.
- FIG. 2 is a block diagram schematically showing the configuration of the loading space recognition device in the loading space recognition system according to the first embodiment.
- Cargo loaded in a container on a truck will be described as an example.
- The loading space recognition system 1 is a system that uses a sensor 10 to recognize changes (volume fluctuations, distance fluctuations) of objects in the loading space 5 (see FIG. 1).
- the sensor 10 and the loading space recognition device 200 are connected so as to be communicable (wired communication or wireless communication).
- the loading space recognition system 1 can be mounted on the truck 2 .
- the loading space recognition device 200 recognizes the change of the object in the loading space 5 based on the photographed data (100 in FIG. 2) photographed by the sensor 10.
- The photographed data 100 is three-dimensional data captured by the sensor 10 (data obtained by capturing three-dimensional elements, such as point cloud data and depth data, or data reconstructed into a three-dimensional space from a plurality of images).
- the sensor 10 is a sensor that senses and photographs (images) the surface of the load 4 in the loading space 5, which is the photographing area (see FIG. 1).
- the sensor 10 is communicably connected to the loading space recognition device 200 .
- the sensor 10 outputs photographed data ( 100 in FIG. 2 ) created from three-dimensional data obtained by photographing the loading space 5 to the loading space recognition device 200 .
- The sensor 10 can be selected according to the imaging conditions, such as the imaging distance, the angle of view, and the size of the container 3 necessary for detecting the cargo 4, and according to the customer's requirements. Products sold by various manufacturers can be used as the sensor 10.
- As the sensor 10, for example, a stereo camera, a ToF (Time of Flight) camera, 3D LiDAR (three-dimensional light detection and ranging), 2D LiDAR (two-dimensional light detection and ranging), LIDAR (Laser Imaging Detection and Ranging), or the like can be used.
- the sensor 10 can be installed at a position where the loading space 5, which is an imaging area, can be photographed, for example, near the upper part of the container 3 on the loading entrance side.
- the sensor 10 can photograph the cargo 4 in the loading space 5 from above the cargo entrance side of the container 3 .
- At least one sensor 10 may be provided in the loading space 5, but a plurality of sensors may be provided.
- a plurality of sensors 10 may be used even when the loading space 5 is so large that it is difficult to photograph with one sensor.
- the types and manufacturers may differ.
- In that case, the loading space recognition device 200 synthesizes the photographed data captured by the respective sensors 10.
- The loading space recognition device 200 is a device that recognizes fluctuations (volume fluctuations, distance fluctuations) of the cargo 4 in the loading space 5 in which the cargo 4 is loaded, based on the photographed data 100 from the sensor 10 (see FIGS. 1 and 2).
- As the loading space recognition device 200, a device having the functional units that constitute a computer (e.g., a processor, a storage device, an input device, a communication interface, and a display device) can be used; personal computers, smartphones, tablet terminals, and the like are usable.
- the loading space recognition device 200 has a function of determining whether or not there is a possibility that the cargo 4 will collapse.
- The loading space recognition device 200 can also be employed in cases where a certain amount of volume change is expected in the event of collapse of cargo, such as under pallet-stacking conditions.
- The loading space recognition device 200 implements a preprocessing unit 210, a packing style grasping unit 220, a determination unit 230, and a user interface unit 240 by executing a predetermined program.
- the preprocessing unit 210 is a functional unit that performs preprocessing for carrying out loading space recognition processing on the photographed data 100 (see FIG. 2).
- the preprocessing unit 210 outputs preprocessed data 101 obtained by preprocessing the photographed data 100 to the packing style grasping unit 220 and the user interface unit 240 .
- the preprocessing section 210 includes a format conversion section 211 and a noise removal section 212 .
- the format conversion unit 211 is a functional unit that converts the format of the photographed data 100 into a common format that can be commonly used in the loading space recognition device 200 as preprocessing (see FIG. 2).
- The format conversion unit 211 outputs the common-format photographed data 100 to the noise removal unit 212. Note that if the format of the photographed data 100 is already the common format, the processing of the format conversion unit 211 can be omitted (skipped).
- the noise removal unit 212 is a functional unit that removes noise (for example, point groups unnecessary for loading space recognition) from the photographed data 100 from the format conversion unit 211 as preprocessing (see FIG. 2).
- The noise removal unit 212 outputs the preprocessed data 101, from which the noise in the photographed data 100 has been removed, to the package overall image estimation unit 222 of the packing style grasping unit 220 and, if necessary, to the display unit 241 of the user interface unit 240.
- Noise removal methods include, for example, smoothing processing, filtering (e.g., moving-average filter processing, median filter processing), and outlier removal processing (e.g., outlier removal by chi-square test). Note that the processing of the noise removal unit 212 may be omitted (skipped) if there is almost no noise.
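As one concrete possibility for the outlier-removal step, a simple statistical filter over a point cloud can be sketched as follows. This is an illustrative example only, not the method used by the device; the brute-force neighbor search and the parameters `k` and `n_std` are assumptions:

```python
import numpy as np

def remove_outliers(points, k=8, n_std=2.0):
    """Statistical outlier removal: drop points whose mean distance to their
    k nearest neighbours is more than n_std standard deviations above the
    average of that statistic over the whole cloud (brute force, O(N^2))."""
    d = np.linalg.norm(points[:, None, :] - points[None, :, :], axis=-1)
    d.sort(axis=1)
    mean_knn = d[:, 1:k + 1].mean(axis=1)  # column 0 is the self-distance
    keep = mean_knn <= mean_knn.mean() + n_std * mean_knn.std()
    return points[keep]
```

For large point clouds, a spatial index (e.g., a k-d tree) would replace the brute-force distance matrix, but the acceptance criterion stays the same.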
- the preprocessed data 101 is output to the display unit 241 and can be viewed by the user.
- Since the photographed data 100 and the preprocessed data 101 form the basis of all subsequent processing, they can be made storable so that they are reliably preserved.
- the packing style grasping unit 220 is a functional block that grasps the current packing style from the preprocessed data 101 (see FIG. 2).
- the packing style grasping unit 220 outputs the voxel data 102 related to the grasped current packing style to the determining unit 230 .
- the packing style grasping unit 220 includes an area designating unit 221 , a whole package image estimating unit 222 , and a voxelizing unit 223 .
- The packing style grasping unit 220 measures information about the shape of the loaded cargo or the empty space from the preprocessed data 101 and provides quantitative information.
- The area designation unit 221 is a functional unit that designates an area (loading area) in which the cargo 4 can be loaded within the container 3 (see FIG. 2). For example, if there is an area in the container 3 in which the cargo 4 cannot be loaded, the area designation unit 221 designates the loading area so as to exclude that area.
- The area designation unit 221 allows the user to specify the loading area via the operation unit 242, and can also specify the area automatically in combination with a system that automatically acquires the edges of the area.
- The area designation unit 221 may accept, from the user via the operation unit 242, a determination exclusion area to be excluded from the determination by the determination unit 230.
- The determination exclusion area may be specified not only in units of areas, but also in units of voxels or in units of single/individual packages (in units of objects).
- The package overall image estimation unit 222 is a functional unit that estimates the overall image of the packages 4 loaded in the loading area (see FIG. 2).
- the package overall image estimation unit 222 creates estimation result data relating to the overall image of the loaded package 4 .
- the package overall image estimation unit 222 outputs the generated estimation result data to the voxelization unit 223 .
- The package overall image estimation unit 222 estimates not only the packages visible on the surface and at the frontmost positions, but also the packages that cannot be seen behind them (and, if necessary, the gaps).
- For this estimation, in addition to a simple algorithm that assumes a coordinate point indicating a package exists on an extension of the visible surface, preprocessed data 101 stored in chronological order may be used, or machine learning may be used.
- An example of a method for collecting machine-learning teacher data is to associate the preprocessed data 101 with the actual loading status and record the pair in a database.
- For example, distance sensors are provided at predetermined intervals on the ceiling surface of the container, and the distance to the loaded cargo is measured by each distance sensor.
- This makes it possible to grasp the loading status of the cargo, such as whether the cargo directly below the sensor is piled up to the ceiling, piled up to about half the height, or completely absent.
- By providing multiple sensors on the ceiling surface, it is possible to grasp the loading status of the entire cargo inside the container.
- The machine learning method is not limited to this, and other methods may be used.
- A function of estimating information/attributes other than shape, such as "weight" and "stacking strictly prohibited / placing underneath strictly prohibited", may also be provided.
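The ceiling-sensor readings described above could be turned into training labels, for example, as a simple fill ratio per sensor column. This is purely illustrative; the function name and the idea of a continuous ratio (rather than discrete classes such as "full", "half", "empty") are assumptions:

```python
def fill_ratio(distance_to_cargo, ceiling_height):
    """Fraction of the column under a ceiling distance sensor that is
    filled with cargo: 0.0 means empty, 1.0 means stacked to the ceiling.
    The result is clamped to [0.0, 1.0]."""
    ratio = 1.0 - distance_to_cargo / ceiling_height
    return min(max(ratio, 0.0), 1.0)
```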
- The voxelization unit 223 is a functional unit that voxelizes the estimation result data of the package overall image estimation unit 222 (see FIG. 2).
- the voxelization unit 223 creates voxel data 102 relating to the overall image of the package 4 based on the estimation result data of the package overall image estimation unit 222 .
- The voxelization unit 223 creates the voxel data 102 by treating portions where no loaded package 4 exists as empty space.
- the voxelization unit 223 outputs the created voxel data 102 to the reference determination unit 231 and the difference extraction unit 232 of the determination unit 230 .
- the voxel data 102 is data representing the overall image of the package 4 by combining a plurality of voxels (cubes) of a predetermined size.
- the voxel data 102 includes information on the dimensions of each voxel and the position of each plane.
- the voxel data 102 may be stored each time it is created.
- The voxel data 102 may also include information such as the number of packages in one voxel, information about which voxels a single package spans, and attributes such as weight and shape.
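A data structure along the lines of the voxel data 102 might look as follows. This is a hypothetical sketch, not the format defined by the patent; the class name, field names, and the free-form `meta` dictionary are assumptions:

```python
from dataclasses import dataclass, field
import numpy as np

@dataclass
class VoxelData:
    """Voxel representation of the estimated overall cargo image.

    occupancy[x, y, z] is True where cargo is present; empty space is
    False. Optional attributes (packages per voxel, weight, shape, ...)
    go into the meta dictionary."""
    voxel_size: float       # edge length of one cubic voxel
    origin: tuple           # world-space position of voxel (0, 0, 0)
    occupancy: np.ndarray   # 3-D boolean occupancy grid
    meta: dict = field(default_factory=dict)

    def occupied_volume(self):
        """Total volume of the occupied voxels."""
        return float(self.occupancy.sum()) * self.voxel_size ** 3
```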
- The determination unit 230 is a functional unit that estimates the amount of change in volume or the amount of movement of the cargo by comparing the voxel data 102 at a certain reference time (reference voxel data) with the voxel data 102 at a time after a predetermined or arbitrary period has passed (comparison voxel data), and determines the possibility of collapse of cargo by comparing the estimated amount of change in volume or amount of movement with a threshold value (see FIG. 2). Thereby, the cargo 4 can be prevented from falling.
- The determination unit 230 may detect the possibility of cargo collapse not only by comparing the reference voxel data with the comparison voxel data, but also by comparing the comparison voxel data with other comparison voxel data created immediately before.
- As a result, when the amount of change between successive sets of comparison voxel data per unit time is smaller than the amount of change between the reference voxel data and the comparison voxel data, it is possible not to determine that there is a possibility of collapse of cargo.
- Furthermore, the truck 2 (a structure having a loading space) may be provided with a detection unit 20 for detecting vibration and sound (see FIG. 1). When the detection unit 20 detects vibration or sound exceeding a certain level, the determination unit 230 may acquire comparison voxel data from the voxelization unit 223 and detect the possibility of collapse of cargo.
- Information obtained from another system, such as the fact that there are many heavy packages or many light packages, may also be used.
- The determination unit 230 includes a reference determination unit 231, a difference extraction unit 232, a volume fluctuation estimation unit 233, a unity extraction unit 234, a combination estimation unit 235, a movement amount estimation unit 236, and a cargo collapse determination unit 237.
- the reference determination unit 231 is a functional unit that determines the voxel data 102 at a certain reference point in time (reference time) from the voxelization unit 223 as a change reference point (see FIG. 2).
- the reference determination unit 231 stores the voxel data 102 at the reference time point (the indicated time point) from the voxelization unit 223 according to an instruction from the operation unit 242, and holds it as reference data for positional variation.
- the reference determination unit 231 outputs the voxel data 102 at the reference time to the difference extraction unit 232 .
- The difference extraction unit 232 is a functional unit that compares the voxel data 102 at an arbitrary reference time (reference voxel data) with the voxel data 102 at a time when a predetermined or arbitrary period has passed from the reference time (comparison voxel data), and extracts the difference (for example, the difference in position in the depth direction) between voxels corresponding to the surface of the cargo 4 as viewed from a predetermined position (for example, the rear of the truck 2, the position of the sensor 10) (see FIG. 2).
- the difference extraction unit 232 acquires reference voxel data from the reference determination unit 231 and acquires comparison voxel data from the voxelization unit 223 in the difference extraction process.
- In the difference extraction process, for example, when the position of the voxel surface, as the cargo 4 is viewed from behind the truck 2, changes toward the front of the truck 2 in the depth direction (for example, when the cargo 4 at the position of the target voxel moves left, right, or down and the cargo 4 on the far side becomes visible), the difference is extracted so as to indicate that the volume has "decreased". Similarly, when the position of the voxel surface changes toward the rear of the truck 2 (for example, when the cargo 4 at the target voxel position moves toward the near side, when another cargo 4 moves in front of the cargo 4 at the position of the target voxel, or when the occluded portion increases), the difference is extracted so as to indicate that the volume has "increased".
- The difference can also be extracted so as to express the degree of volume decrease or increase stepwise, according to the distance by which the voxel surface position changes toward the front or rear of the truck 2.
- the difference extraction unit 232 outputs the difference data extracted by the difference extraction process to the variation volume estimation unit 233 and the unity extraction unit 234 .
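The per-column surface comparison described above can be sketched as follows. This is an illustrative reading of the difference extraction process, not the patented implementation; the axis convention (depth along axis 0, sensor at depth index 0) and the function names are assumptions:

```python
import numpy as np

def surface_depth(grid):
    """Depth index (along axis 0) of the first occupied voxel in each
    (width, height) column, as seen from the sensor at depth 0; columns
    with no cargo get the full grid depth (treated as empty)."""
    depth = grid.shape[0]
    hit = grid.argmax(axis=0)        # index of the first True along depth
    hit[~grid.any(axis=0)] = depth   # column contains no cargo at all
    return hit

def extract_difference(reference, comparison):
    """Per-column signed surface shift between reference and comparison
    grids: positive where the surface moved away from the sensor (volume
    'decreased'), negative where it moved toward the sensor ('increased')."""
    return surface_depth(comparison) - surface_depth(reference)
```

Multiplying a shift by the voxel edge length would recover the stepwise distance mentioned above.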
- the fluctuating volume estimation unit 233 is a functional unit that estimates the fluctuating volume (fluctuation volume) of the overall image of the package 4 based on the difference data from the difference extraction unit 232 (see FIG. 2).
- the fluctuating volume can be represented, for example, by the sum of the differences in the depth direction positions of voxels at corresponding positions on the surface of the cargo 4 viewed from the rear of the truck 2 .
- when there is no change in volume, the fluctuating volume amount is zero (see movement examples 2-1 to 2-4 in FIG. 5).
- when the total volume decreases, the fluctuating volume amount takes a negative value.
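The estimate described above (the sum of per-position depth differences) can be sketched as below; the function name `estimate_fluctuating_volume` and the grid-of-steps input format are illustrative assumptions, not the patent's implementation.

```python
def estimate_fluctuating_volume(diff):
    """Fluctuating volume = sum of signed step differences over the
    cargo surface: zero when increases and decreases cancel out,
    negative when the total volume decreased."""
    return sum(step for row in diff for step in row)
```

For a difference grid in which one cell decreased by one step and another increased by one step, the estimate is zero (no net volume change), matching movement examples 2-1 to 2-4.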
- the fluctuating volume estimation unit 233 outputs the estimated fluctuating volume estimation data to the cargo collapse determination unit 237 .
- based on the difference data from the difference extraction unit 232, the unity extraction unit 234 is a functional unit that extracts a group (unity) of voxels with no increase or decrease in volume existing between voxels with increased volume (increased volume voxels) and voxels with decreased volume (decreased volume voxels) at the same horizontal position (see FIG. 2).
- the unity extraction unit 234 can be based on the premise that the volume of the packages 4 does not change.
- the grouping extracting section 234 outputs difference data including the extracted grouping data to the combination estimating section 235 .
- the grouping extracting section 234 outputs the difference data from the difference extracting section 232 to the combination estimating section 235 when the grouping cannot be extracted.
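One way to picture the unity extraction on a single horizontal row of the difference grid is sketched below; the row representation and the function name `extract_unity` are illustrative assumptions.

```python
def extract_unity(diff_row):
    """Find groups ("unities") of zero-change cells lying between an
    increased cell (positive value) and a decreased cell (negative
    value) in one horizontal row of the difference grid.  Returns a
    list of (start_index, end_index) pairs (end exclusive)."""
    unities = []
    i = 0
    n = len(diff_row)
    while i < n:
        if diff_row[i] == 0:
            j = i
            while j < n and diff_row[j] == 0:
                j += 1
            left = diff_row[i - 1] if i > 0 else 0
            right = diff_row[j] if j < n else 0
            # keep the run only if one neighbour increased and the
            # other decreased (the run lies between the two)
            if left * right < 0:
                unities.append((i, j))
            i = j
        else:
            i += 1
    return unities
```

A zero run at the edge of the row, or bounded on only one side, is not extracted, consistent with the idea that a unity lies between an increase and a decrease.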
- the combination estimating unit 235 is a functional unit that estimates combinations of increased volume voxels and decreased volume voxels based on the difference data from the unity extraction unit 234 (including the unity data if a unity could be extracted) (see FIG. 2).
- the combination estimating unit 235 can assume that the volume of the package 4 does not change. In the combination estimation process, for example, when a plurality of combinations are possible, it is possible to preferentially combine increased volume voxels and decreased volume voxels at the same horizontal position. Also, when a plurality of combinations are possible, it is possible to preferentially combine the increased volume voxel and the decreased volume voxel that are farthest apart from each other.
- the combination estimating section 235 outputs difference data including the extracted combination data to the movement amount estimating section 236 .
- the combination estimation unit 235 outputs the difference data from the unity extraction unit 234 to the movement amount estimation unit 236 when the combination cannot be estimated.
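A possible reading of the two preferences above (same horizontal position first, then farthest apart) is sketched below. The `(row, col)` position encoding and the greedy pairing order are assumptions for illustration, not the patent's algorithm.

```python
def estimate_combinations(increased, decreased):
    """Greedily pair increased-volume voxels with decreased-volume
    voxels.  Pairs at the same horizontal position (row) are preferred;
    among the remaining candidates, the pair farthest apart is chosen
    first.  `increased` / `decreased` are lists of (row, col) tuples."""
    pairs = []
    inc = list(increased)
    dec = list(decreased)
    # preference 1: same horizontal position (same row)
    for i in list(inc):
        for d in list(dec):
            if i[0] == d[0]:
                pairs.append((i, d))
                inc.remove(i)
                dec.remove(d)
                break
    # preference 2: among the rest, farthest-apart pairs first
    while inc and dec:
        i, d = max(((i, d) for i in inc for d in dec),
                   key=lambda p: abs(p[0][0] - p[1][0]) + abs(p[0][1] - p[1][1]))
        pairs.append((i, d))
        inc.remove(i)
        dec.remove(d)
    return pairs
```

The greedy order reflects the stated preferences but is only one plausible realization; the patent leaves the tie-breaking strategy open.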
- the movement amount estimation unit 236 is a functional unit that estimates the amount of movement of the cargo 4 that occurred between the reference time and the time when a predetermined or arbitrary time has passed (see FIG. 2).
- the moving amount estimator 236 can assume that the volume of the load 4 does not change.
- when the difference data includes unity data and combination data, the movement amount estimation unit 236 calculates the distance from the volume decrease voxel to the volume increase voxel related to the combination data, calculates the length of the voxels with no increase or decrease in volume related to the unity data, and estimates the value obtained by subtracting the calculated length from the calculated distance as the amount of movement of the cargo 4.
- when the difference data includes combination data but no unity data, the movement amount estimation unit 236 calculates the distance from the volume decrease voxel to the volume increase voxel related to the combination data and estimates the calculated distance as the amount of movement of the cargo 4. If the difference data does not include combination data (regardless of the presence or absence of unity data), the movement of the cargo 4 is considered to be accompanied by a change in volume, so the estimation of the fluctuating volume by the fluctuating volume estimation unit 233 can be performed preferentially. In addition, when estimating the amount of movement, a correction may be made so that different weights are assigned to horizontal movement and vertical movement (falling).
- when there are a plurality of combination data, the movement amount estimator 236 estimates the movement amount for each combination data.
- the movement amount estimation unit 236 selects the longest movement amount from the estimated movement amounts and takes it as the longest movement amount; when there is only one estimated movement amount, that movement amount is taken as the longest movement amount. The movement amount estimation unit 236 outputs the estimated longest movement amount data to the cargo collapse determination unit 237.
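The subtraction rule and the longest-movement selection above amount to simple arithmetic; the sketch below is illustrative (the function names and the scalar distance representation are assumptions).

```python
def estimate_movement(distance, unity_length=0):
    """Movement amount for one increased/decreased voxel pair: the
    distance from the decrease voxel to the increase voxel, minus the
    length of any unity (unchanged group) lying between them."""
    return distance - unity_length

def longest_movement(movement_amounts):
    """Select the longest of the per-combination movement amounts
    (the quantity compared against the second threshold)."""
    return max(movement_amounts)
```

With a single combination the lone movement amount is trivially the longest; with several, the maximum is reported.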
- the cargo collapse determination unit 237 is a functional unit that determines whether or not there is a possibility that the cargo 4 will collapse by comparing the estimated volume fluctuation data from the fluctuating volume estimation unit 233 or the estimated longest movement amount data from the movement amount estimation unit 236 with a preset threshold (a first threshold for the volume fluctuation or a second threshold for the longest movement amount) (see FIG. 2). The cargo collapse determination unit 237 determines that there is a possibility that the cargo 4 will collapse when the estimated volume fluctuation data is greater than the first threshold (a determination of "equal to or greater than the first threshold" is also possible).
- the cargo collapse determination unit 237 determines that there is no possibility of collapse of the cargo 4 when the estimated volume fluctuation data is equal to or less than the first threshold (or less than the first threshold). The cargo collapse determination unit 237 determines that there is a possibility that the cargo 4 will collapse when the longest movement amount is greater than the second threshold (a determination of "equal to or greater than the second threshold" is also possible), and that there is no possibility of collapse when the longest movement amount is equal to or less than the second threshold (or less than the second threshold). Which of the fluctuating volume amount and the longest movement amount is used preferentially to determine the possibility of collapse of the cargo 4 is arbitrary.
- for example, the determination can be made by preferentially using the fluctuating volume amount, which entails a relatively small processing load.
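The two-threshold decision described above might look like the following sketch. The threshold values, the use of the absolute value of the volume fluctuation, and the check order (volume first, as suggested by its smaller processing load) are assumptions for illustration.

```python
def judge_collapse(fluct_volume=None, longest_move=None,
                   first_threshold=4, second_threshold=2):
    """Return True when cargo collapse is judged possible: the volume
    fluctuation is compared with the first threshold, and the longest
    movement amount with the second threshold."""
    # check the volume fluctuation first (smaller processing load);
    # taking the absolute value is an assumption, since the
    # fluctuation can be negative when the total volume decreases
    if fluct_volume is not None and abs(fluct_volume) > first_threshold:
        return True
    if longest_move is not None and longest_move > second_threshold:
        return True
    return False
```

When the judgment is True, the warning output instruction information would be sent to the warning output unit 243.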
- when it is determined that there is a possibility of cargo collapse, the cargo collapse determination unit 237 outputs warning output instruction information to the warning output unit 243 to warn the user of the occurrence of an abnormality.
- the user interface unit 240 is a functional unit that has a user interface (function for exchanging information with the user) (see FIG. 2).
- the user interface unit 240 provides an interface between the user and the loading space recognition device 200 so that each process can be operated and the result of each process can be confirmed.
- the user interface section 240 includes a display section 241 , an operation section 242 and a warning output section 243 .
- the display unit 241 is a functional unit that displays the preprocessed data 101 and the like from the noise removal unit 212 (see FIG. 2).
- as the display unit 241, for example, a liquid crystal display, an organic EL (Electroluminescence) display, AR (Augmented Reality) glasses, or the like can be used.
- the operation unit 242 is a functional unit that performs area designation and confirmation instructions to the area designation unit 221 and the reference determination unit 231 based on user operations (see FIG. 2).
- as the operation unit 242, for example, a touch panel, a mouse, or a camera with software for recognizing gestures and eye movements can be used.
- the warning output unit 243 is a functional unit that outputs a warning to the user based on the warning output instruction information from the cargo collapse determination unit 237 (see FIG. 2).
- as the warning output unit 243, a display that displays characters and images related to the warning, a speaker that outputs an alarm sound, a lamp that lights up as an alarm, a communication unit that transmits the warning output instruction information to another system, or the like can be used.
- FIG. 3 is a flow chart schematically showing the operation of the loading space recognition device in the loading space recognition system according to the first embodiment. Refer to FIGS. 1 and 2 and their descriptions for the configuration and details of the loading space recognition device.
- first, the format conversion unit 211 of the preprocessing unit 210 acquires, from the sensor 10, photographing data 100 (reference photographing data; three-dimensional data) obtained by photographing the cargo 4 in the loading space 5 serving as the photographing area (step A1).
- the format conversion section 211 of the preprocessing section 210 converts the format of the reference photographing data 100 into a common format (step A2).
- the noise removing unit 212 of the preprocessing unit 210 removes noise from the reference photographing data 100 converted into the common format to create reference preprocessing data 101 (step A3).
- next, the overall baggage image estimation section 222 of the packing style grasping section 220 estimates the overall image of the cargo 4 loaded in the loading space based on the reference preprocessed data 101 from the noise removal unit 212 (step A4).
- next, the voxelization unit 223 of the packing style grasping unit 220 creates voxel data 102 (reference voxel data) relating to the overall image of the cargo 4 based on the estimation result data estimated in step A4 (step A5).
- next, the reference determination unit 231 of the determination unit 230 determines and saves the voxel data created by the voxelization unit 223 as the reference (change reference point) for the load collapse determination process, in response to the operation of the operation unit 242 by the user (step A6).
- the operation of the operation unit 242 by the user can be performed, for example, after the loading of the cargo 4 into the container 3 is completed and before delivery is started.
- next, when a predetermined or arbitrary time has passed since the reference photographing data 100 was acquired, the format conversion unit 211 of the preprocessing unit 210 acquires, from the sensor 10, photographing data 100 (comparative photographing data; three-dimensional data) obtained by photographing the cargo 4 in the loading space 5 serving as the photographing area (step A7).
- the format conversion section 211 of the preprocessing section 210 converts the format of the photographing data 100 for comparison into a common format (step A8).
- the noise removal unit 212 of the preprocessing unit 210 removes noise from the comparison imaging data 100 converted into the common format to create comparison preprocessing data 101 (step A9).
- next, the overall baggage image estimation section 222 of the packing style grasping section 220 estimates the overall image of the cargo 4 loaded in the loading space based on the comparison preprocessed data 101 from the noise removal unit 212 (step A10).
- next, the voxelization unit 223 of the packing style grasping unit 220 creates voxel data 102 (comparison voxel data) relating to the overall image of the cargo 4 based on the estimation result data estimated in step A10 (step A11).
- next, the difference extraction unit 232 of the determination unit 230 compares the reference voxel data stored in the reference determination unit 231 with the comparison voxel data created by the voxelization unit 223, and extracts the difference (for example, the difference in position in the depth direction) between voxels at corresponding positions on the surface of the cargo 4 viewed from a predetermined position (for example, the rear of the truck 2, the position of the sensor 10) (step A12).
- the fluctuating volume estimation unit 233 of the determination unit 230 estimates the fluctuating volume of the overall image of the package 4 (fluctuation volume) based on the difference data from the difference extraction unit 232 (step A13).
- next, the cargo collapse determination unit 237 of the determination unit 230 determines whether or not the fluctuation volume estimation data from the fluctuation volume estimation unit 233 is greater than a preset first threshold for the fluctuation volume (step A14). If the estimated volume change is greater than the first threshold (YES in step A14), it is determined that there is a possibility that the cargo 4 may collapse, warning output instruction information is output to the warning output unit 243, and the process proceeds to step A20.
- next, based on the difference data from the difference extraction unit 232, the unity extraction unit 234 of the determination unit 230 extracts a group of voxels with no increase or decrease in volume existing between the increased volume voxels and the decreased volume voxels at the same horizontal position (step A15). This step is skipped when no grouping can be extracted.
- next, the combination estimation unit 235 of the determination unit 230 estimates combinations of the increased volume voxels and decreased volume voxels based on the difference data from the unity extraction unit 234 (including the unity data if a unity could be extracted) (step A16).
- next, the movement amount estimator 236 of the determination unit 230 estimates the amount of movement of the cargo 4 that occurred between the reference time and the time when a predetermined or arbitrary time has elapsed (step A17).
- when the difference data includes unity data and combination data, the distance from the volume decrease voxel to the volume increase voxel related to the combination data is calculated, the length of the voxels with no increase or decrease in volume related to the unity data is calculated, and a value obtained by subtracting the calculated length from the calculated distance is estimated as the movement amount of the cargo 4.
- when the difference data includes combination data but no unity data, the distance from the volume decrease voxel to the volume increase voxel related to the combination data is calculated, and the calculated distance is estimated as the amount of movement of the cargo 4.
- when there are a plurality of combination data, the movement amount is estimated for each combination data.
- the movement amount estimation unit 236 of the determination unit 230 selects the longest movement amount from the estimated movement amounts, and estimates the selected movement amount as the longest movement amount (step A18).
- next, the load collapse determination unit 237 of the determination unit 230 determines whether or not the longest movement amount estimation data from the movement amount estimation unit 236 is greater than a preset second threshold for the longest movement amount (step A19). If the estimated longest movement amount data is greater than the second threshold (YES in step A19), it is determined that there is a possibility that the cargo 4 may collapse, warning output instruction information is output to the warning output unit 243, and the process proceeds to step A20. If the estimated longest movement amount data is equal to or less than the second threshold (NO in step A19), it is determined that there is no possibility of collapse of the cargo 4, one cycle is terminated, and steps A7 to A20 are repeated until the user instructs termination.
- the warning output unit 243 of the user interface unit 240 outputs a warning to the user based on the warning output instruction information from the cargo collapse determination unit 237 (step A20). After that, one cycle is finished, and steps A7 to A20 are repeated until the user instructs to finish.
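Steps A7 to A20 above can be condensed into a single hypothetical cycle function. Sensor acquisition, preprocessing, and voxelization are stubbed out; the depth-map grids, the simple movement proxy, and the thresholds are illustrative assumptions, not the patent's implementation.

```python
# Hypothetical sketch of one monitoring cycle (steps A12 to A19),
# operating on depth-map grids for the reference and comparison data.

def one_cycle(reference, comparison, first_threshold, second_threshold):
    """Return ("warning", reason) or ("ok", None) for one cycle."""
    # step A12: difference extraction (signed step differences)
    diff = [[r - c for r, c in zip(ref_row, cmp_row)]
            for ref_row, cmp_row in zip(reference, comparison)]
    # step A13: fluctuating volume estimate
    fluct = sum(step for row in diff for step in row)
    # step A14: first threshold determination
    if abs(fluct) > first_threshold:
        return ("warning", "volume change")
    # steps A15-A18 (unity/combination/movement estimation) collapsed
    # into a simple proxy: the largest column gap between a decrease
    # and an increase within the same row
    longest = 0
    for row in diff:
        decreases = [i for i, d in enumerate(row) if d < 0]
        increases = [i for i, d in enumerate(row) if d > 0]
        for a in decreases:
            for b in increases:
                longest = max(longest, abs(b - a))
    # step A19: second threshold determination
    if longest > second_threshold:
        return ("warning", "movement")
    return ("ok", None)
```

In the real device the cycle would repeat (steps A7 to A20) until the user instructs termination, with a warning emitted by the warning output unit 243 whenever a cycle returns a warning.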
- FIG. 4 is an image diagram schematically showing some examples in which there is a varying volume amount when extracting the difference between the reference voxel and the comparison voxel.
- assume that the reference voxel data when the overall image of the cargo (4 in FIG. 1) is viewed from the rear of the truck (2 in FIG. 1) is in a state like the reference voxel data in FIG. 4. In the case of the comparison voxel data according to movement example 1-1 in FIG. 4, the difference extraction data extracted by the difference extraction unit (232 in FIG. 2) becomes like the difference extraction data according to movement example 1-1 in FIG. 4.
- in movement example 1-1, the volumes of only the upper left four voxels each increase by one step, and the other voxels show no increase or decrease.
- the reference voxel data and comparison voxel data differ in total volume.
- the occlusion part (gap or space) has increased on the back side of the voxel whose volume has increased.
- the volume estimator (233 in FIG. 2) estimates the volume fluctuation, and the collapse determination unit (237 in FIG. 2) performs threshold determination of the volume fluctuation.
- in the case of the comparison voxel data according to movement example 1-2 in FIG. 4, the difference extraction data extracted by the difference extraction unit becomes like the difference extraction data according to movement example 1-2 in FIG. 4.
- in movement example 1-2, the volumes of four voxels each decrease by one step, the volumes of eight other voxels each increase by one step, and the other voxels show no increase or decrease. In other words, there are both increases and decreases in volume, but even when the increases and decreases are offset against each other, four one-step increases in volume remain, so the reference voxel data and the comparison voxel data differ in total volume.
- therefore, in this case as well, the fluctuating volume estimation unit estimates the fluctuating volume, and the cargo collapse determination unit (237 in FIG. 2) performs threshold determination of the fluctuating volume.
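The net volume change in movement examples 1-1 and 1-2 can be checked with quick arithmetic (step counts as stated in the description above; the Python form is just for illustration):

```python
# movement example 1-1: four voxels each increase by one step
net_1_1 = 4 * (+1)
# movement example 1-2: four voxels each decrease by one step,
# eight voxels each increase by one step
net_1_2 = 4 * (-1) + 8 * (+1)
# both leave a nonzero net change, so the total volume differs
# between the reference and comparison voxel data
print(net_1_1, net_1_2)  # -> 4 4
```

A nonzero net change is exactly the condition under which the fluctuating volume estimation and its threshold determination are used.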
- in the case of the comparison voxel data according to movement example 1-3 in FIG. 4, the difference extraction data extracted by the difference extraction unit becomes like the difference extraction data according to movement example 1-3 in FIG. 4.
- in movement example 1-3, the volumes of 8 voxels each decrease by one step, the volumes of another 12 voxels each increase by one step, the volumes of another 4 voxels each increase by two steps, and the other voxels show no increase or decrease.
- therefore, also in this case, the fluctuating volume estimation unit (233 in FIG. 2) estimates the fluctuating volume, and the cargo collapse determination unit (237 in FIG. 2) performs threshold determination of the fluctuating volume.
- FIG. 5 is an image diagram schematically showing several examples in which there is no volume variation and there is movement when the difference between the reference voxel and the comparison voxel is extracted.
- FIG. 6 is an image diagram schematically showing the transition from collective estimation to longest movement amount estimation in example 2-1 of FIG.
- FIG. 7 is an image diagram schematically showing the transition from coherent estimation to longest movement amount estimation in example 2-2 of FIG.
- FIG. 8 is an image diagram schematically showing the transition from coherent estimation to longest movement amount estimation in Example 2-3 of FIG.
- FIG. 9 is an image diagram schematically showing the transition from coherent estimation to longest movement amount estimation in Example 2-4 of FIG.
- in the case of the comparison voxel data according to movement example 2-1 in FIG. 5, the difference extraction data extracted by the difference extraction unit (232 in FIG. 2) becomes like the difference extraction data according to movement example 2-1 in FIG. 5.
- in movement example 2-1, the volumes of six voxels each increase by one step, the volumes of another six voxels each decrease by one step, and the other voxels show no increase or decrease.
- in the case of the comparison voxel data according to movement example 2-2 in FIG. 5, the difference extraction data extracted by the difference extraction unit becomes like the difference extraction data according to movement example 2-2 in FIG. 5.
- in movement example 2-2, the volumes of eight voxels each increase by one step, the volumes of another eight voxels each decrease by one step, and the other voxels show no increase or decrease.
- in this case, as shown in FIG. 7(C), the movement amount estimation unit (236 in FIG. 2) calculates the distances from the volume decrease voxels related to the combinations to the volume increase voxels (see the two arrows in FIG. 7(C)) and the length of the voxels with no increase or decrease in volume related to the unity data (one voxel; not shown), and, as shown in FIG. 7(D), estimates the values obtained by subtracting the calculated length from the calculated distances as the movement amounts of the cargo 4 (see the two arrows in FIG. 7(D); the length is deducted from one of them and not from the other). The movement amount estimation unit (236 in FIG. 2) then estimates the longest of these movement amounts as the longest movement amount (see the circled arrow in FIG. 7(D); in this case there are two, and either can be used), and the cargo collapse determination unit (237 in FIG. 2) performs threshold determination of the longest movement amount.
- in the case of the comparison voxel data according to movement example 2-3 in FIG. 5, the difference extraction data extracted by the difference extraction unit becomes like the difference extraction data according to movement example 2-3 in FIG. 5.
- in movement example 2-3, the volumes of eight voxels each increase by one step, the volumes of another eight voxels each decrease by one step, and the other voxels show no increase or decrease.
- in this case, the increases and decreases in volume offset each other in both the number of voxels and the number of steps, so the total volume of the reference voxel data and that of the comparison voxel data are the same.
- in this case, as shown in FIG. 8(C), the movement amount estimation unit (236 in FIG. 2) calculates the distances from the volume decrease voxels to the volume increase voxels related to the combinations (see the two arrows in FIG. 8(C)), and, as shown in FIG. 8(D), estimates the calculated distances as the movement amounts of the cargo 4 (see the two arrows in FIG. 8(D)). The movement amount estimation unit then estimates the longest of these movement amounts as the longest movement amount (see the circled arrow in FIG. 8(D)), and the cargo collapse determination unit (237 in FIG. 2) performs threshold determination of the longest movement amount.
- in the case of the comparison voxel data according to movement example 2-4 in FIG. 5, the difference extraction data extracted by the difference extraction unit becomes like the difference extraction data according to movement example 2-4 in FIG. 5.
- in movement example 2-4, the volumes of 12 voxels each increase by one step, the volumes of another 12 voxels each decrease by one step, and the other voxels show no increase or decrease.
- in this case, as shown in FIG. 9(C), the movement amount estimation unit (236 in FIG. 2) calculates the distances from the volume decrease voxels to the volume increase voxels related to the combinations (see the three arrows in FIG. 9(C)), and, as shown in FIG. 9(D), estimates the calculated distances as the movement amounts of the cargo 4. The movement amount estimation unit then estimates the longest of these movement amounts as the longest movement amount (see the circled arrow in FIG. 9(D)), and the cargo collapse determination unit (237 in FIG. 2) performs threshold determination of the longest movement amount.
- for the criteria of the unity extraction process of the unity extraction unit 234, the combination estimation process of the combination estimation unit 235, and the movement amount estimation process of the movement amount estimation unit 236, see the detailed description of FIG. 2.
- according to the first embodiment, the difference between voxels at corresponding positions in the reference voxel data and the comparison voxel data is extracted, the amount of change in volume or the longest movement amount of the overall image of the cargo 4 is estimated, and threshold determination is performed; this makes it possible to contribute to determining the possibility of cargo collapse due to movement of the cargo.
- in addition, since the occlusion portion is estimated and the packing appearance of the cargo 4 is grasped, it is possible to take changes of the cargo 4 in the occlusion portion into consideration.
- furthermore, since the reference voxel data of the packing appearance at a specific time, such as before the truck 2 starts moving, is held and compared with the comparison voxel data of the packing appearance after a predetermined or arbitrary time has passed from the reference time, the driver can be notified of the possibility of cargo collapse.
- as a result, the possibility of collapse of the cargo 4 can be detected at an early stage, damage to the cargo 4 can be prevented, and deterioration of transportation quality can be avoided.
- FIG. 11 is a block diagram schematically showing the configuration of the loading space recognition device according to the second embodiment.
- the loading space recognizing device 200 is a device that recognizes variations (volumetric variations, distance variations) of the cargo in the loading space where the cargo is loaded, based on the photographed data.
- Loading space recognizing device 200 includes package overall image estimating unit 222 , voxelizing unit 223 , and determining unit 230 .
- the luggage overall image estimating unit 222 is configured to estimate the overall image of the luggage loaded in the loading space based on three-dimensional data obtained by imaging the loading space of the luggage from a predetermined direction, and to output it as estimation result data.
- the voxelization unit 223 is configured to voxelize the estimation result data and output it as voxel data.
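A minimal sketch of what such voxelization might do is shown below, assuming point-cloud input; the function name `voxelize` and the 0.1 m voxel size are illustrative assumptions, not details from the patent.

```python
def voxelize(points, voxel_size=0.1):
    """Quantize 3-D surface points (x, y, z) into the set of occupied
    voxel grid indices; the resulting set plays the role of the voxel
    data produced by the voxelization unit."""
    return {(int(x // voxel_size),
             int(y // voxel_size),
             int(z // voxel_size))
            for x, y, z in points}
```

Two surface points that fall inside the same 0.1 m cell map to a single occupied voxel, which is what makes the later voxel-to-voxel comparison between reference and comparison data possible.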
- the determination unit 230 compares voxel data at an arbitrary reference time with voxel data after a predetermined or arbitrary time has elapsed from the reference time, thereby estimating the amount of change in volume or the amount of movement of the load. It is configured to determine the presence or absence of the possibility of collapse of cargo by comparing the amount of change in volume or the amount of movement with a threshold value.
- according to the second embodiment, the difference between voxels at corresponding positions in the voxel data at the reference time and the voxel data at a time other than the reference time is extracted, the amount of change in volume or the longest movement amount of the overall image of the cargo is estimated, and threshold determination is performed; this makes it possible to contribute to determining the possibility of cargo collapse due to movement of the cargo.
- the loading space recognition device can be configured by so-called hardware resources (information processing device, computer), and can use one having the configuration illustrated in FIG. 12 .
- hardware resource 1000 includes processor 1001 , memory 1002 , network interface 1003 , etc., which are interconnected by internal bus 1004 .
- the configuration shown in FIG. 12 is not intended to limit the hardware configuration of the hardware resource 1000 .
- the hardware resource 1000 may include hardware not shown (for example, an input/output interface).
- the number of units such as the processors 1001 included in the device is not limited to the illustration in FIG.
- as the processor 1001, for example, a CPU (Central Processing Unit), MPU (Micro Processor Unit), or GPU (Graphics Processing Unit) can be used.
- as the memory 1002, for example, a RAM (Random Access Memory), ROM (Read Only Memory), HDD (Hard Disk Drive), or SSD (Solid State Drive) can be used.
- as the network interface 1003, for example, a LAN (Local Area Network) card, network adapter, or network interface card can be used.
- the functions of the hardware resource 1000 are realized by the processing modules described above.
- the processing module is implemented by the processor 1001 executing a program stored in the memory 1002, for example.
- the program can be downloaded via a network or updated using a storage medium storing the program.
- the processing module may be realized by a semiconductor chip.
- the functions performed by the above processing modules may be realized by executing software in some kind of hardware.
- An overall luggage image estimation unit configured to estimate an overall image of the luggage loaded in the loading space based on three-dimensional data obtained by imaging the loading space of the luggage from a predetermined direction, and to output estimation result data.
- a voxelization unit configured to voxelize the estimation result data and output as voxel data;
- a determination unit configured to estimate the amount of change in volume or the amount of movement of the cargo by comparing the voxel data at an arbitrary reference time with the voxel data after a predetermined or arbitrary time has elapsed from the reference time, and to determine the presence or absence of the possibility of cargo collapse by comparing the estimated amount of change in volume or the estimated amount of movement with a threshold value;
- a loading space recognition device ([Appendix 1]) comprising the above overall luggage image estimation unit, voxelization unit, and determination unit.
- [Appendix 2] the determination unit includes: a reference determination unit configured to determine the voxel data at the reference time from the voxelization unit as a change reference point; a difference extraction unit configured to extract, by comparing the voxel data at the reference time with the voxel data at the time when a predetermined or arbitrary time has passed from the reference time, the difference between voxels at corresponding positions on the surface of the package viewed from a predetermined position, and to output it as difference data; a volume variation estimating unit configured to estimate the volume variation of the overall image of the package based on the difference data and to output it as volume variation estimation data; and a cargo collapse determination unit configured to determine the possibility of cargo collapse by comparing the volume variation estimation data with a threshold for the volume variation as the threshold.
- the load space recognition device comprising: [Appendix 3] the determination unit includes a unity extraction unit configured to extract, based on the difference data, a group of voxels without an increase or decrease in volume existing between an increased volume voxel with an increased volume and a decreased volume voxel with a decreased volume at the same horizontal position, and to output it as unity data.
- [Appendix 4] the determination unit includes: a reference determination unit configured to determine the voxel data at the reference time from the voxelization unit as a change reference point; a difference extraction unit configured to extract, by comparing the voxel data at the reference time with the voxel data at the time when a predetermined or arbitrary time has passed from the reference time, the difference between voxels at corresponding positions on the surface of the package viewed from a predetermined position, and to output it as difference data; and a unity extraction unit configured to extract, based on the difference data, a group of voxels without an increase or decrease in volume existing between an increased volume voxel with an increased volume and a decreased volume voxel with a decreased volume at the same horizontal position, and to output it as unity data.
- the loading space recognition device according to any one of Appendices 1 to 4.
- the region specifying unit is configured to specify a determination exclusion region to be excluded from determination by the determination unit by user operation
- the luggage overall image estimating unit is configured to exclude the luggage loaded in the determination exclusion area and estimate the overall image of the luggage loaded in the loading area.
- the loading space recognition device according to appendix 5.
- [Appendix 7] the loading space recognition device further comprising a detection unit attached to the structure having the loading space and configured to detect vibration or sound, wherein the determination unit acquires the voxel data from the voxelization unit when the detection unit detects shaking or sound of a certain level or more, and compares the voxel data at the reference time with the acquired voxel data.
- [Appendix 8] the movement amount estimator corrects the movement amount so as to be smaller than the estimated movement amount when the movement direction of the baggage is the horizontal direction, or so as to be larger than the estimated movement amount when the movement direction of the baggage is the vertical direction or an oblique direction, and selects the longest movement amount from the corrected movement amounts.
- the loading space recognition device according to appendix 3 or 4.
- [Appendix 9] the determination unit is further configured to determine the presence or absence of the possibility of cargo collapse by comparing, with a threshold, the amount of change between voxel data at a time other than the reference time and the other voxel data immediately before that voxel data. The loading space recognition device according to any one of Appendices 1 to 8.
- the determination unit is configured to output warning output instruction information when it is determined that there is a possibility of cargo collapse,
- the loading space recognition device further includes a warning output unit configured to output a warning based on the warning output instruction information.
- the loading space recognition device according to any one of Appendices 1 to 9.
- [Appendix 11] a sensor that senses the surface of the cargo in the loading space and outputs imaged three-dimensional data; a loading space recognition device according to any one of appendices 1 to 10; A loading space recognition system.
- [Appendix 12] A loading space recognition method for recognizing a cargo loading space using hardware resources, a step of estimating an overall image of the cargo loaded in the loading space based on three-dimensional data obtained by imaging the loading space from a predetermined direction and outputting the estimated image as estimation result data; a step of voxelizing the estimation result data and outputting it as voxel data; By comparing the voxel data at an arbitrary reference time with the voxel data after a predetermined or arbitrary time has elapsed from the reference time, the amount of change in volume or the amount of movement of the cargo is estimated, and the estimated a step of determining whether or not there is a possibility of cargo collapse by comparing the amount of change in volume or the amount of movement with a threshold value;
Landscapes
- Engineering & Computer Science (AREA)
- Computer Vision & Pattern Recognition (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Theoretical Computer Science (AREA)
- Length Measuring Devices By Optical Means (AREA)
Abstract
Description
[Description of related applications]
The present invention is based on and claims the priority of Japanese Patent Application No. 2021-030741 (filed on February 26, 2021), the entire disclosure of which is incorporated herein by reference.
The present invention relates to a loading space recognition device, system, method, and program.
[Embodiment 1]
A loading space recognition system according to Embodiment 1 will be described with reference to the drawings. FIG. 1 is a conceptual diagram schematically showing an example of the configuration and usage of the loading space recognition system according to Embodiment 1. FIG. 2 is a block diagram schematically showing the configuration of the loading space recognition device in this loading space recognition system. Here, cargo in a truck container is described as an example.
[Embodiment 2]
A loading space recognition device according to Embodiment 2 will be described with reference to the drawings. FIG. 11 is a block diagram schematically showing the configuration of the loading space recognition device according to Embodiment 2.
[Appendix 1]
An overall cargo image estimation unit configured to estimate an overall image of the cargo loaded in a loading space based on three-dimensional data obtained by imaging the loading space from a predetermined direction, and to output the result as estimation result data;
a voxelization unit configured to voxelize the estimation result data and output it as voxel data; and
a determination unit configured to estimate an amount of change in volume or an amount of movement of the cargo by comparing the voxel data at an arbitrary reference time with the voxel data after a predetermined or arbitrary time has elapsed from the reference time, and to determine whether there is a possibility of cargo collapse by comparing the estimated amount of change in volume or amount of movement with a threshold value.
A loading space recognition device comprising the above units.
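As a rough illustration of the flow in Appendix 1, the sketch below voxelizes a point cloud into an occupancy grid and flags a possible collapse when the occupied volume changes beyond a threshold. All function names, grid dimensions, and threshold values are our own illustrative assumptions, not details from the application:

```python
import numpy as np

def voxelize(points, grid_shape=(20, 20, 20), voxel_size=0.1):
    """Convert an (N, 3) point cloud into a boolean occupancy grid.
    Illustrative only; the application does not specify the voxelization scheme."""
    grid = np.zeros(grid_shape, dtype=bool)
    idx = np.floor(points / voxel_size).astype(int)
    # keep only indices that fall inside the grid
    mask = np.all((idx >= 0) & (idx < np.array(grid_shape)), axis=1)
    grid[tuple(idx[mask].T)] = True
    return grid

def possible_collapse(ref_grid, cur_grid, voxel_size=0.1, threshold=0.05):
    """Compare the occupied volume of the current grid against the reference grid
    and report whether the absolute change exceeds the threshold (in cubic units)."""
    voxel_volume = voxel_size ** 3
    change = abs(int(cur_grid.sum()) - int(ref_grid.sum())) * voxel_volume
    return change > threshold
```

In this reading, `ref_grid` corresponds to the voxel data fixed at the reference time and `cur_grid` to the voxel data acquired after the predetermined or arbitrary time has elapsed.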
[Appendix 2]
The determination unit comprises:
a reference determination unit configured to fix the voxel data at the reference time from the voxelization unit as a change reference point;
a difference extraction unit configured to extract differences between voxels at corresponding positions on the surface of the cargo viewed from a predetermined position, by comparing the voxel data at the reference time with the voxel data at a time when a predetermined or arbitrary time has elapsed from the reference time, and to output the differences as difference data;
a volume change estimation unit configured to estimate the amount of change in volume of the overall image of the cargo based on the difference data, and to output it as volume change estimation data; and
a cargo collapse determination unit configured to determine whether there is a possibility of cargo collapse by comparing the volume change estimation data with a volume change threshold serving as the threshold value.
The loading space recognition device according to appendix 1.
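One way to read the difference extraction of Appendix 2 is as a per-column comparison of the cargo surface seen from a predetermined position (here, from above). The sketch below, with hypothetical names, reduces each voxel grid to a height map and reports the signed per-column change:

```python
import numpy as np

def height_map(grid):
    """Number of occupied voxels in each vertical column (surface viewed from above)."""
    return grid.sum(axis=2)

def surface_difference(ref_grid, cur_grid):
    """Signed per-column change between the reference-time grid and the current grid.
    Positive entries mark volume-increased positions, negative entries volume-decreased
    ones, and zeros positions with no change."""
    return height_map(cur_grid).astype(int) - height_map(ref_grid).astype(int)
```

The resulting array plays the role of the "difference data" passed to the downstream estimation units.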
[Appendix 3]
The determination unit further comprises:
a group extraction unit configured to extract, based on the difference data, a group of voxels with no change in volume existing between a volume-increased voxel and a volume-decreased voxel at the same horizontal position, and to output the group as group data;
a combination estimation unit configured to estimate a combination of the volume-increased voxel and the volume-decreased voxel based on the difference data, and to output it as combination data; and
a movement amount estimation unit configured to estimate, based on the difference data, the group data, and the combination data, at least one amount of movement of the cargo that occurred between the reference time and the time when the predetermined or arbitrary time elapsed, to select the longest of the estimated amounts of movement, and to output it as longest movement amount estimation data;
wherein the cargo collapse determination unit determines whether there is a possibility of cargo collapse by comparing the volume change estimation data with the volume change threshold serving as the threshold value, and, when it determines that there is such a possibility, further determines the possibility of cargo collapse by comparing the longest movement amount estimation data with a longest movement amount threshold serving as the threshold value.
The loading space recognition device according to appendix 2.
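The movement estimation of Appendix 3 can be pictured as pairing volume-increased positions with volume-decreased positions and keeping the longest resulting displacement. The sketch below uses a greedy nearest-neighbor pairing, which is our own assumption; the application does not specify how the combinations are formed:

```python
import numpy as np

def longest_movement(diff):
    """diff: signed per-column change (2-D int array, as produced by a
    difference-extraction step). Greedily pair each volume-increased column with
    its nearest volume-decreased column and return the longest pair distance
    in grid units (0.0 when no pair exists)."""
    inc = [tuple(p) for p in np.argwhere(diff > 0)]
    dec = [tuple(p) for p in np.argwhere(diff < 0)]
    longest = 0.0
    for p in inc:
        if not dec:
            break
        # nearest decreased column by squared distance
        q = min(dec, key=lambda d: (d[0] - p[0]) ** 2 + (d[1] - p[1]) ** 2)
        dec.remove(q)
        longest = max(longest, float(np.hypot(q[0] - p[0], q[1] - p[1])))
    return longest
```

The returned value corresponds to the "longest movement amount" that is then compared against the movement threshold.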
[Appendix 4]
The determination unit comprises:
a reference determination unit configured to fix the voxel data at the reference time from the voxelization unit as a change reference point;
a difference extraction unit configured to extract differences between voxels at corresponding positions on the surface of the cargo viewed from a predetermined position, by comparing the voxel data at the reference time with the voxel data at a time when a predetermined or arbitrary time has elapsed from the reference time, and to output the differences as difference data;
a group extraction unit configured to extract, based on the difference data, a group of voxels with no change in volume existing between a volume-increased voxel and a volume-decreased voxel at the same horizontal position, and to output the group as group data;
a combination estimation unit configured to estimate a combination of the volume-increased voxel and the volume-decreased voxel based on the difference data, and to output it as combination data;
a movement amount estimation unit configured to estimate, based on the difference data, the group data, and the combination data, at least one amount of movement of the cargo that occurred between the reference time and the time when the predetermined or arbitrary time elapsed, to select the longest of the estimated amounts of movement, and to output it as longest movement amount estimation data; and
a cargo collapse determination unit configured to determine whether there is a possibility of cargo collapse by comparing the longest movement amount estimation data with a longest movement amount threshold serving as the threshold value.
The loading space recognition device according to appendix 1.
[Appendix 5]
Further comprising a region specifying unit configured to specify, by user operation, a cargo loading region in the loading space,
wherein the overall cargo image estimation unit is configured to estimate the overall image of the cargo loaded in the loading region.
The loading space recognition device according to any one of appendices 1 to 4.
[Appendix 6]
The region specifying unit is configured to specify, by user operation, a determination exclusion region to be excluded from determination by the determination unit,
and the overall cargo image estimation unit is configured to estimate the overall image of the cargo loaded in the loading region while excluding cargo loaded in the determination exclusion region.
The loading space recognition device according to appendix 5.
[Appendix 7]
Further comprising a detection unit attached to a structure having the loading space and configured to detect vibration or sound,
wherein the determination unit is configured to acquire the voxel data from the voxelization unit when the detection unit detects vibration or sound at or above a certain level, and to estimate the amount of change in volume or the amount of movement of the cargo by comparing the voxel data at the reference time with the acquired voxel data.
The loading space recognition device according to any one of appendices 1 to 6.
[Appendix 8]
The movement amount estimation unit is configured to correct each estimated amount of movement to be smaller than the estimated value when the movement direction of the cargo is horizontal, or larger than the estimated value when the movement direction is vertical or oblique, and to select the longest amount of movement from among the corrected amounts.
The loading space recognition device according to appendix 3 or 4.
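Appendix 8 applies a direction-dependent correction before selecting the longest movement: horizontal moves are discounted and vertical or oblique moves are amplified, presumably because a drop is a stronger sign of collapse than a slide. The weights below are illustrative assumptions, not values from the application:

```python
def corrected_movements(movements, horizontal_weight=0.5, vertical_weight=1.5):
    """movements: list of (distance, direction) tuples with direction in
    {'horizontal', 'vertical', 'oblique'}. Returns the corrected distances.
    The weight values are hypothetical."""
    out = []
    for distance, direction in movements:
        if direction == 'horizontal':
            out.append(distance * horizontal_weight)   # shrink horizontal moves
        else:
            out.append(distance * vertical_weight)     # grow vertical/oblique moves
    return out

def longest_corrected(movements):
    """Select the longest movement after correction (0.0 for an empty list)."""
    return max(corrected_movements(movements), default=0.0)
```

Note that with such weights a short vertical drop can outrank a longer horizontal slide when the longest movement is selected.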
[Appendix 9]
The determination unit is further configured to determine whether there is a possibility of cargo collapse by comparing the amount of change between the voxel data at a time other than the reference time and the immediately preceding voxel data with the amount of change between the voxel data at the reference time and the voxel data at that other time.
The loading space recognition device according to any one of appendices 1 to 8.
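Appendix 9 compares the most recent step-to-step change against the cumulative change from the reference time. One possible reading is that a shift concentrated in the latest interval (a large step relative to the total) signals a sudden movement; the ratio test below is our own interpretation, not logic stated in the application:

```python
def sudden_shift(ref_volume, prev_volume, cur_volume, ratio_threshold=0.5):
    """Flag a possible collapse when the latest frame-to-frame change accounts
    for more than ratio_threshold of the cumulative change since the reference.
    Both the comparison form and the threshold value are hypothetical."""
    step_change = abs(cur_volume - prev_volume)
    total_change = abs(cur_volume - ref_volume)
    return total_change > 0 and step_change / total_change > ratio_threshold
```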
[Appendix 10]
The determination unit is configured to output warning output instruction information when it determines that there is a possibility of cargo collapse,
and the loading space recognition device further comprises a warning output unit configured to output a warning based on the warning output instruction information.
The loading space recognition device according to any one of appendices 1 to 9.
[Appendix 11]
A loading space recognition system comprising:
a sensor configured to sense the surface of the cargo in the loading space and output imaged three-dimensional data; and
the loading space recognition device according to any one of appendices 1 to 10.
[Appendix 12]
A loading space recognition method for recognizing a cargo loading space using hardware resources, comprising:
a step of estimating an overall image of the cargo loaded in the loading space based on three-dimensional data obtained by imaging the loading space from a predetermined direction, and outputting the result as estimation result data;
a step of voxelizing the estimation result data and outputting it as voxel data; and
a step of estimating an amount of change in volume or an amount of movement of the cargo by comparing the voxel data at an arbitrary reference time with the voxel data after a predetermined or arbitrary time has elapsed from the reference time, and determining whether there is a possibility of cargo collapse by comparing the estimated amount of change in volume or amount of movement with a threshold value.
[Appendix 13]
A program causing a hardware resource to execute:
a process of estimating an overall image of the cargo loaded in the loading space based on three-dimensional data obtained by imaging the loading space from a predetermined direction, and outputting the result as estimation result data;
a process of voxelizing the estimation result data and outputting it as voxel data; and
a process of estimating an amount of change in volume or an amount of movement of the cargo by comparing the voxel data at an arbitrary reference time with the voxel data after a predetermined or arbitrary time has elapsed from the reference time, and determining whether there is a possibility of cargo collapse by comparing the estimated amount of change in volume or amount of movement with a threshold value.
1 Loading space recognition system
2 Truck
3 Container
4 Cargo
5 Loading space
10 Sensor
20 Detection unit
100 Imaging data
101 Preprocessed data
102 Voxel data
200 Loading space recognition device
210 Preprocessing unit
211 Format conversion unit
212 Noise removal unit
220 Packing state grasping unit
221 Region specifying unit
222 Overall cargo image estimation unit
223 Voxelization unit
230 Determination unit
231 Reference determination unit
232 Difference extraction unit
233 Volume change estimation unit
234 Group extraction unit
235 Combination estimation unit
236 Movement amount estimation unit
237 Cargo collapse determination unit
240 User interface unit
241 Display unit
242 Operation unit
243 Warning output unit
1000 Hardware resources
1001 Processor
1002 Memory
1003 Network interface
1004 Internal bus
Claims (13)
- A loading space recognition device comprising:
an overall cargo image estimation unit configured to estimate an overall image of the cargo loaded in a loading space based on three-dimensional data obtained by imaging the loading space from a predetermined direction, and to output the result as estimation result data;
a voxelization unit configured to voxelize the estimation result data and output it as voxel data; and
a determination unit configured to estimate an amount of change in volume or an amount of movement of the cargo by comparing the voxel data at an arbitrary reference time with the voxel data after a predetermined or arbitrary time has elapsed from the reference time, and to determine whether there is a possibility of cargo collapse by comparing the estimated amount of change in volume or amount of movement with a threshold value.
- The loading space recognition device according to claim 1, wherein the determination unit comprises:
a reference determination unit configured to fix the voxel data at the reference time from the voxelization unit as a change reference point;
a difference extraction unit configured to extract differences between voxels at corresponding positions on the surface of the cargo viewed from a predetermined position, by comparing the voxel data at the reference time with the voxel data at a time when a predetermined or arbitrary time has elapsed from the reference time, and to output the differences as difference data;
a volume change estimation unit configured to estimate the amount of change in volume of the overall image of the cargo based on the difference data, and to output it as volume change estimation data; and
a cargo collapse determination unit configured to determine whether there is a possibility of cargo collapse by comparing the volume change estimation data with a volume change threshold serving as the threshold value.
- The loading space recognition device according to claim 2, wherein the determination unit further comprises:
a group extraction unit configured to extract, based on the difference data, a group of voxels with no change in volume existing between a volume-increased voxel and a volume-decreased voxel at the same horizontal position, and to output the group as group data;
a combination estimation unit configured to estimate a combination of the volume-increased voxel and the volume-decreased voxel based on the difference data, and to output it as combination data; and
a movement amount estimation unit configured to estimate, based on the difference data, the group data, and the combination data, at least one amount of movement of the cargo that occurred between the reference time and the time when the predetermined or arbitrary time elapsed, to select the longest of the estimated amounts of movement, and to output it as longest movement amount estimation data;
wherein the cargo collapse determination unit determines whether there is a possibility of cargo collapse by comparing the volume change estimation data with the volume change threshold serving as the threshold value, and, when it determines that there is such a possibility, further determines the possibility of cargo collapse by comparing the longest movement amount estimation data with a longest movement amount threshold serving as the threshold value.
- The loading space recognition device according to claim 1, wherein the determination unit comprises:
a reference determination unit configured to fix the voxel data at the reference time from the voxelization unit as a change reference point;
a difference extraction unit configured to extract differences between voxels at corresponding positions on the surface of the cargo viewed from a predetermined position, by comparing the voxel data at the reference time with the voxel data at a time when a predetermined or arbitrary time has elapsed from the reference time, and to output the differences as difference data;
a group extraction unit configured to extract, based on the difference data, a group of voxels with no change in volume existing between a volume-increased voxel and a volume-decreased voxel at the same horizontal position, and to output the group as group data;
a combination estimation unit configured to estimate a combination of the volume-increased voxel and the volume-decreased voxel based on the difference data, and to output it as combination data;
a movement amount estimation unit configured to estimate, based on the difference data, the group data, and the combination data, at least one amount of movement of the cargo that occurred between the reference time and the time when the predetermined or arbitrary time elapsed, to select the longest of the estimated amounts of movement, and to output it as longest movement amount estimation data; and
a cargo collapse determination unit configured to determine whether there is a possibility of cargo collapse by comparing the longest movement amount estimation data with a longest movement amount threshold serving as the threshold value.
- The loading space recognition device according to any one of claims 1 to 4, further comprising a region specifying unit configured to specify, by user operation, a cargo loading region in the loading space, wherein the overall cargo image estimation unit is configured to estimate the overall image of the cargo loaded in the loading region.
- The loading space recognition device according to claim 5, wherein the region specifying unit is configured to specify, by user operation, a determination exclusion region to be excluded from determination by the determination unit, and the overall cargo image estimation unit is configured to estimate the overall image of the cargo loaded in the loading region while excluding cargo loaded in the determination exclusion region.
- The loading space recognition device according to any one of claims 1 to 6, further comprising a detection unit attached to a structure having the loading space and configured to detect vibration or sound, wherein the determination unit is configured to acquire the voxel data from the voxelization unit when the detection unit detects vibration or sound at or above a certain level, and to estimate the amount of change in volume or the amount of movement of the cargo by comparing the voxel data at the reference time with the acquired voxel data.
- The loading space recognition device according to claim 3 or 4, wherein the movement amount estimation unit is configured to correct each estimated amount of movement to be smaller than the estimated value when the movement direction of the cargo is horizontal, or larger than the estimated value when the movement direction is vertical or oblique, and to select the longest amount of movement from among the corrected amounts.
- The loading space recognition device according to any one of claims 1 to 8, wherein the determination unit is further configured to determine whether there is a possibility of cargo collapse by comparing the amount of change between the voxel data at a time other than the reference time and the immediately preceding voxel data with the amount of change between the voxel data at the reference time and the voxel data at that other time.
- The loading space recognition device according to any one of claims 1 to 9, wherein the determination unit is configured to output warning output instruction information when it determines that there is a possibility of cargo collapse, and the loading space recognition device further comprises a warning output unit configured to output a warning based on the warning output instruction information.
- A loading space recognition system comprising: a sensor configured to sense the surface of the cargo in the loading space and output imaged three-dimensional data; and the loading space recognition device according to any one of claims 1 to 10.
- A loading space recognition method for recognizing a cargo loading space using hardware resources, the method comprising: a step of estimating an overall image of the cargo loaded in the loading space based on three-dimensional data obtained by imaging the loading space from a predetermined direction, and outputting the result as estimation result data; a step of voxelizing the estimation result data and outputting it as voxel data; and a step of estimating an amount of change in volume or an amount of movement of the cargo by comparing the voxel data at an arbitrary reference time with the voxel data after a predetermined or arbitrary time has elapsed from the reference time, and determining whether there is a possibility of cargo collapse by comparing the estimated amount of change in volume or amount of movement with a threshold value.
- A program causing a hardware resource to execute: a process of estimating an overall image of the cargo loaded in the loading space based on three-dimensional data obtained by imaging the loading space from a predetermined direction, and outputting the result as estimation result data; a process of voxelizing the estimation result data and outputting it as voxel data; and a process of estimating an amount of change in volume or an amount of movement of the cargo by comparing the voxel data at an arbitrary reference time with the voxel data after a predetermined or arbitrary time has elapsed from the reference time, and determining whether there is a possibility of cargo collapse by comparing the estimated amount of change in volume or amount of movement with a threshold value.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
JP2023502529A JPWO2022181753A1 (en) | 2021-02-26 | 2022-02-25 |
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
JP2021030741 | 2021-02-26 | ||
JP2021-030741 | 2021-02-26 |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2022181753A1 true WO2022181753A1 (en) | 2022-09-01 |
Family
ID=83049149
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/JP2022/007817 WO2022181753A1 (en) | 2021-02-26 | 2022-02-25 | Loading space recognition device, system, method, and program |
Country Status (2)
Country | Link |
---|---|
JP (1) | JPWO2022181753A1 (en) |
WO (1) | WO2022181753A1 (en) |
Cited By (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
DE102021123581A1 (en) | 2021-09-13 | 2023-03-16 | Zf Cv Systems Global Gmbh | Procedures for cargo monitoring |
Citations (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP6577687B1 (en) * | 2019-03-18 | 2019-09-18 | 株式会社Mujin | Shape information generating device, control device, loading / unloading device, physical distribution system, program, and control method |
JP2019219907A (en) * | 2018-06-20 | 2019-12-26 | 三菱電機株式会社 | Cargo-collapse prediction system, cargo-collapse prediction apparatus and method for them |
JP2020060451A (en) * | 2018-10-10 | 2020-04-16 | 日野自動車株式会社 | Luggage space monitoring system and luggage space monitoring method |
2022
- 2022-02-25 JP JP2023502529A patent/JPWO2022181753A1/ja active Pending
- 2022-02-25 WO PCT/JP2022/007817 patent/WO2022181753A1/en active Application Filing
Also Published As
Publication number | Publication date |
---|---|
JPWO2022181753A1 (en) | 2022-09-01 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
US10909667B1 (en) | Image rectification using transformation data | |
US11373320B1 (en) | Detecting inventory changes by comparing image data | |
US11315262B1 (en) | Tracking objects in three-dimensional space using calibrated visual cameras and depth cameras | |
JP7228671B2 (en) | Store Realog Based on Deep Learning | |
JP7228670B2 (en) | Real-time inventory tracking using deep learning | |
JP6511681B1 (en) | Shape information generation device, control device, unloading device, distribution system, program, and control method | |
US10692236B2 (en) | Container use estimation | |
US20110215915A1 (en) | Detection system and detecting method for car | |
US10805556B1 (en) | Storage units with shifted-lens cameras | |
US11087273B1 (en) | Item recognition system using reference images | |
JP6577687B1 (en) | Shape information generating device, control device, loading / unloading device, physical distribution system, program, and control method | |
JP2015041164A (en) | Image processor, image processing method and program | |
WO2022181753A1 (en) | Loading space recognition device, system, method, and program | |
JP5780083B2 (en) | Inspection device, inspection system, inspection method and program | |
US11922728B1 (en) | Associating events with actors using digital imagery and machine learning | |
WO2023236825A1 (en) | Method and apparatus for monitoring capacity utilization rate, and computer-readable storage medium | |
JP7191630B2 (en) | Luggage room monitoring system and luggage room monitoring method | |
WO2022132239A1 (en) | Method, system and apparatus for managing warehouse by detecting damaged cargo | |
KR20230094948A (en) | Method for forklift pickup, computer device, and non-volatile storage medium | |
US11195140B1 (en) | Determination of untidy item return to an inventory location using weight | |
US11117744B1 (en) | Determination of untidy item return to an inventory location | |
US11398094B1 (en) | Locally and globally locating actors by digital cameras and machine learning | |
US11468698B1 (en) | Associating events with actors using digital imagery and machine learning | |
US11200677B2 (en) | Method, system and apparatus for shelf edge detection | |
JP7323170B2 (en) | Loading volume ratio measuring device, system, method and program |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
| 121 | Ep: the epo has been informed by wipo that ep was designated in this application | Ref document number: 22759787; Country of ref document: EP; Kind code of ref document: A1 |
| WWE | Wipo information: entry into national phase | Ref document number: 2023502529; Country of ref document: JP |
| WWE | Wipo information: entry into national phase | Ref document number: 11202305459U; Country of ref document: SG |
| NENP | Non-entry into the national phase | Ref country code: DE |
| 122 | Ep: pct application non-entry in european phase | Ref document number: 22759787; Country of ref document: EP; Kind code of ref document: A1 |