WO2022157882A1 - Container Damage Detection System - Google Patents
- Publication number
- WO2022157882A1 (PCT/JP2021/002035)
- Authority
- WO
- WIPO (PCT)
Classifications
-
- B—PERFORMING OPERATIONS; TRANSPORTING
- B65—CONVEYING; PACKING; STORING; HANDLING THIN OR FILAMENTARY MATERIAL
- B65G—TRANSPORT OR STORAGE DEVICES, e.g. CONVEYORS FOR LOADING OR TIPPING, SHOP CONVEYOR SYSTEMS OR PNEUMATIC TUBE CONVEYORS
- B65G63/00—Transferring or trans-shipping at storage areas, railway yards or harbours or in opening mining cuts; Marshalling yard installations
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N7/00—Television systems
- H04N7/18—Closed-circuit television [CCTV] systems, i.e. systems in which the video signal is not broadcast
Definitions
- the present invention relates to a container damage detection system.
- Patent Document 1 (Japanese Patent Application Laid-Open No. 2016-006616) describes a learning device that includes a generation unit for restoring the three-dimensional shape of a subject from an image in which the subject is depicted and generating images of the subject based on the restored three-dimensional shape, and a learning unit for learning to extract features of the subject using the images generated by the generation unit.
- one object of the present invention is to provide a container damage detection system that can learn by sharing images taken at each facility that handles containers.
- one aspect of the present invention comprises: container photographing means for photographing images of a container at a plurality of locations or facilities where the container is handled; communication means for communicably connecting the plurality of locations or facilities so that the container images can be shared among them; parameter holding means that holds, for each container photographing means, data representing the photographing angle with respect to the container surface to be photographed, or parameters for deriving that angle; and machine learning means for executing learning and inference from learning results, wherein the container images are shared among the plurality of locations or facilities without depending on attributes including the type of the container photographing means and its installation information.
- FIG. 1 is an overall configuration diagram showing an example of the overall configuration of a container damage detection system according to an embodiment of the present invention
- FIG. 2 is a configuration diagram showing a configuration example of the container damage detection unit according to the embodiment
- FIG. 3 is a processing flow diagram showing an overview of an example of damage detection processing according to the embodiment
- FIG. 4 is a processing flow diagram showing an overview of another example of damage detection processing according to the embodiment
- FIG. 5 is a configuration diagram showing a configuration example of the container damage detection unit according to the second embodiment of the present invention
- FIG. 6 is a processing flow diagram showing an overview of an example of damage detection processing in the second embodiment
- FIG. 7 is a configuration diagram showing a configuration example of the container damage detection unit according to a third embodiment of the present invention
- FIG. 8 is a processing flow diagram showing an overview of an example of learning result update processing in the third embodiment
- FIG. 9 is a configuration diagram showing a configuration example of the container damage detection unit according to a fourth embodiment of the present invention
- FIG. 10 is a processing flow diagram showing an overview of an example of damage detection processing in the fourth embodiment
- FIG. 11 is a diagram showing a configuration example of a container chart in the fourth embodiment
- An embodiment of the present invention will be described with reference to FIGS. 1 to 4. FIG. 1 is an overall configuration diagram showing an example of the overall configuration of a container damage detection system according to this embodiment, FIG. 2 is a configuration diagram showing an example of the configuration of the container damage detection unit, and FIGS. 3 and 4 are processing flow diagrams illustrating an overview of the damage detection processing. The numbers in or outside parentheses are the reference numerals of the components shown in the drawings.
- the container damage detection system 1000 of this embodiment is composed of a facility A (100), a facility B (110), …, and a facility N (120) connected to each other via a network 2, and is designed so that the images taken at each facility are shared with each other.
- at the facility A (100), a plurality of photographing means 1 attached to a photographing means mounting gate 105 photograph the side and ceiling surfaces of a container 106 loaded on a trailer 104, and the images photographed by the photographing means 1 are collected in the damage detection unit 103.
- the damage detection unit 103 detects damaged portions of the container 106 (holes, rust, dents, etc. on the container surface), and the monitor 107 displays the damage status.
- the positional relationship between the photographing means mounting gate 105 and the trailer 104 is as shown in the facility A front image 101 when viewed from the front of the trailer 104, and as shown in the facility A side image 102 when viewed from the side.
- Facility A (100) is also connected to the network 2 via the damage detection unit 103.
- a facility N (120) is a facility for loading containers 106 from a cargo ship 121 onto a trailer 104 using a gantry crane 122 attached to a quay 123. Each surface of the container 106 loaded on the trailer 104 is photographed by the photographing means 1, the images photographed by the photographing means 1 are collected in the damage detection unit 103, and the damage status is displayed on the monitor 107.
- the facility N (120) is also connected to the network 2 via the damage detection unit 103.
- Facility B (110) represents another facility similar to facility A (100) or facility N (120), and is connected to the network 2 in the same manner as facility A (100) or facility N (120).
- the damage detection unit 103 is the part that detects the damaged portions of the container 106 from the images photographed by the photographing means 1.
- It comprises camera installation information parameter holding units 3a, 3b, and 3c for the photographing means a, b, and c, a video recording unit 4, a learning result recording unit 17, an arithmetic unit 30, a network interface 18, and a display unit 19; the arithmetic unit 30 includes a first viewpoint conversion unit 5, a learning execution unit 10, and a damage detection inference execution unit 6. Since a plurality of photographing means 1 are provided, they are individually labeled photographing means a, photographing means b, and photographing means c.
- Images from the photographing means a (1a), b (1b), and c (1c) are recorded in the video recording unit 4 via the video capture units 20, 21, and 22, respectively.
- The photographing angles with respect to the container surfaces at the time the cameras were installed are recorded in advance as camera installation information parameters for the photographing means a (1a), b (1b), and c (1c) in the camera installation information parameter holding units 3a, 3b, and 3c, respectively.
- the video of the portions to be learned is extracted from the video recorded in the video recording unit 4, the learning result produced by the learning execution unit 10 is recorded in the learning result recording unit 17, and, using the learning result recorded in the learning result recording unit 17, container damage detection is performed by the damage detection inference execution unit 6 on the video from the video capture units 20, 21, and 22.
- the video of the portions to be learned is the video shot at facility A (100), that is, the video shot by the photographing means a (1a), b (1b), and c (1c) of facility A (100).
- when video from another facility is used, the camera installation information recorded in, for example, the camera installation information parameter holding unit 3c of that facility's photographing means c is imported into facility A (100) via the network 2, viewpoint conversion processing is applied so that the video matches the angle of the video taken at facility A (100), and the converted video is recorded in the video recording unit 4 of facility A (100).
- the damage detection unit 103 causes the monitor 107, through the display unit 19, to display the various videos recorded in the video recording unit 4 and the container damage status detected by the damage detection inference execution unit 6.
- The parameter information of the camera installation information parameter holding units 3a, 3b, and 3c of the photographing means a, b, and c, and the various videos recorded in the video recording unit 4, are configured to be exchanged with the network 2 via the network interface 18.
- FIG. 3 illustrates the flow in which damage detection is performed at facility A (100) based on the result of learning that additionally uses video shot at other facilities.
- although the number of photographing means is limited here for simplicity, in practice it is also possible to configure the system so that learning uses video from a plurality of other facilities and from a plurality of photographing means at each facility.
- in the learning phase, the video from the photographing means a on the facility A (100) side is processed at facility A (100) according to the following flow.
- based on the camera installation information obtained via the network 2, the arithmetic unit 30 performs viewpoint conversion so as to obtain a container image equivalent to that of the photographing means a.
- The viewpoint-converted container damage partial video obtained from the other facility is recorded in the video recording unit 4. Processing (viii) is performed at facility A (100) on the container damage partial videos collected from a plurality of facilities.
- (viii) Learning is performed by the arithmetic unit 30 using the container damage partial videos recorded in the video recording unit 4, and the result is recorded in the learning result recording unit 17 as the learning result for the photographing means a.
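The learning-phase pooling just described — local clips are used as-is, remote clips are first viewpoint-converted with their facility's camera installation parameters, and everything is combined into one training set — can be sketched as follows. The clip representation and the `convert` callback are illustrative assumptions; the patent does not specify data formats.

```python
def pool_training_clips(local_clips, remote_clips, convert):
    """Pool learning data across facilities: clips from the local
    photographing means a are used unchanged, while each clip from
    another facility is first viewpoint-converted using that facility's
    camera installation parameters.  `convert(clip, params)` stands in
    for the arithmetic unit's viewpoint conversion step."""
    pooled = list(local_clips)                       # local clips, unchanged
    for clip, camera_params in remote_clips:         # remote clip + its parameters
        pooled.append(convert(clip, camera_params))  # warp to the local viewpoint
    return pooled
```

The pooled list would then be handed to the learning execution unit as a single training set.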
- in the inference phase, container damage is detected at facility A (100) according to the following flow for the video from the photographing means a on the facility A (100) side.
- by sharing video among facilities, container damage videos can be collected quickly, and the performance of the container damage detection system 1000 can be improved early. Sharing also reduces annotation work during learning. Furthermore, if damage that went undetected at one facility is recorded as a learning video, it can be used for learning at all facilities, so detection performance across the whole system can be improved.
- although the relationship between the container and the shooting angle of the photographing means differs for each facility, differences in the container's appearance are absorbed by viewpoint conversion on the learning side. Since learning is performed to match the video of each photographing means, image processing such as viewpoint conversion is not required at inference time, which lightens the processing load.
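For a planar container face, the viewpoint conversion discussed above is a projective (homography) transform. The sketch below builds a rotation-only homography from a camera tilt angle under an assumed pinhole model; the focal length, principal point, and single-axis-rotation simplification are illustrative assumptions, not parameters taken from the patent.

```python
import math

def homography_for_tilt(theta_deg, f=1000.0, cx=640.0, cy=360.0):
    """Build a 3x3 homography K * R * K^-1 that re-renders a planar
    container face seen under a camera tilt of theta_deg (rotation
    about the horizontal image axis) as if viewed head-on.
    f, cx, cy describe an assumed pinhole camera, in pixels."""
    t = math.radians(theta_deg)
    K = [[f, 0.0, cx], [0.0, f, cy], [0.0, 0.0, 1.0]]
    K_inv = [[1.0 / f, 0.0, -cx / f],
             [0.0, 1.0 / f, -cy / f],
             [0.0, 0.0, 1.0]]
    R = [[1.0, 0.0, 0.0],
         [0.0, math.cos(t), -math.sin(t)],
         [0.0, math.sin(t), math.cos(t)]]

    def matmul(A, B):
        return [[sum(A[i][k] * B[k][j] for k in range(3)) for j in range(3)]
                for i in range(3)]

    return matmul(matmul(K, R), K_inv)

def warp_point(H, x, y):
    """Apply homography H to pixel (x, y), with perspective division."""
    w = H[2][0] * x + H[2][1] * y + H[2][2]
    return ((H[0][0] * x + H[0][1] * y + H[0][2]) / w,
            (H[1][0] * x + H[1][1] * y + H[1][2]) / w)
```

In a real system the same 3x3 matrix would be passed to an image-warping routine (for example OpenCV's `warpPerspective`) rather than applied point by point.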
- A second embodiment of the present invention will be described with reference to FIGS. 5 and 6. FIG. 5 is a diagram showing a configuration example of the damage detection unit 103 that detects container damage, and FIG. 6 is a processing flow diagram illustrating an overview of the damage detection processing.
- This embodiment has a configuration in which a standardized video recording unit 8 and a second viewpoint conversion unit 9 are added to the damage detection unit 103 in the first embodiment.
- the shooting environment differs for each facility, and it is difficult to determine the conditions for shooting the container from the shooting means in advance.
- processing is performed at facility A (100) according to the following flow for the video from the imaging means a on the facility A (100) side.
- For the container damage partial video obtained from another facility and recorded in the video recording unit 4, the arithmetic unit 30 performs viewpoint conversion based on the information in the camera installation information parameter holding unit 3a of the photographing means a so that it becomes a container image equivalent to that of the photographing means a.
- container damage is detected at the facility A (100) in accordance with the following flow for the image from the imaging means a on the facility A (100) side.
- in this embodiment, the appearance of the container in learning videos acquired at each facility is standardized before they are recorded as learning videos. Since the videos are converted so that the orientation with respect to each container surface is the same, they can be used for learning without considering the camera installation information parameters of the source facility. When using videos taken at another facility, viewpoint conversion is performed to match the orientation of the photographing means that will be used with respect to the container surface, so substantially the same effects as in the first embodiment are obtained.
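Because this embodiment applies two conversions in sequence (source camera view to the standard line of sight, then standard line of sight to the target camera's view), the overall warp is the composition of two 3x3 homographies. A minimal sketch, assuming homographies are represented as nested lists:

```python
def compose(h_second, h_first):
    """Chain two 3x3 viewpoint conversions: h_first maps the source
    camera view to the standard line of sight, h_second maps the
    standard line of sight to the target camera's view.  Applying the
    composed matrix once is equivalent to applying the two warps in
    sequence, so each frame only needs to be resampled once."""
    return [[sum(h_second[i][k] * h_first[k][j] for k in range(3))
             for j in range(3)]
            for i in range(3)]
```

Composing before warping is a common optimization: it avoids resampling the image twice and the associated loss of detail.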
- A third embodiment of the present invention will be described with reference to FIGS. 7 and 8. FIG. 7 is a diagram showing a configuration example of the damage detection unit 103 for detecting container damage in this embodiment, and FIG. 8 is a processing flow illustrating an overview of the learning result update processing.
- This embodiment has a configuration in which a learning result candidate recording unit 16 and a learning result comparing unit 15 are added to the damage detection unit 103 in the second embodiment.
- this embodiment provides processing for updating the learning results when additional learning is performed using images of container damage that could not be detected or that was erroneously detected.
- new learning results are recorded in the learning result candidate recording unit 16 when additional learning is performed using images of damage that could not be detected or damage that was erroneously detected at each facility.
- the result of inference executed using the learning result candidate recording unit 16 and the result of inference executed using the learning result recording unit 17 are compared (judgment 1). If all damage detected by inference with the learning result recording unit 17 is also included in the results of inference with the learning result candidate recording unit 16 (judgment 1: Yes), the learning result recorded in the learning result candidate recording unit 16 is moved to the learning result recording unit 17, and the new learning result is used to detect damage.
- if judgment 1 is No, it is determined whether the number of damage detections from inference based on the results recorded in the learning result candidate recording unit 16 exceeds the number of damage detections from inference with the learning result recording unit 17 (judgment 2). If it is determined to exceed it (judgment 2: Yes), a human confirms the damage detection results of inference based on the results recorded in the learning result candidate recording unit 16 and decides whether to update the learning result.
- This embodiment judges the damage detection status of the container before and after additional learning, and decides whether to adopt the new learning result.
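The two judgments above can be sketched as a small decision function. Modeling detection results as sets of hashable damage identifiers is an assumption for illustration; the patent compares detection results without specifying their representation.

```python
def should_update(old_detections, new_detections):
    """Decision logic sketched from the update flow:
    Judgment 1 - if every damage found with the current learning result
    is also found with the candidate, adopt the candidate automatically.
    Judgment 2 - otherwise, if the candidate detects more damage overall,
    flag the candidate for human review before adoption.
    Detections are modeled as sets of damage identifiers."""
    old_set, new_set = set(old_detections), set(new_detections)
    if old_set <= new_set:           # judgment 1: nothing previously found is lost
        return "adopt"
    if len(new_set) > len(old_set):  # judgment 2: more detections, but some lost
        return "human_review"
    return "keep_current"
```

The "adopt" path corresponds to moving the candidate from the learning result candidate recording unit 16 into the learning result recording unit 17.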
- A fourth embodiment of the present invention will be described with reference to FIGS. 9, 10, and 11. FIG. 9 is a diagram showing a configuration example of the damage detection unit 103 for detecting container damage, FIG. 10 is a processing flow diagram illustrating an overview of the damage detection processing, and FIG. 11 is a diagram showing a configuration example of a container chart.
- This embodiment has a configuration in which a container number detection unit 11, a container position detection unit 12, a damage position normalization unit 13, and a chart generation unit 14 are provided in the arithmetic unit 30 of the damage detection unit 103 of the third embodiment.
- a captured image is subjected to viewpoint conversion processing so that it appears the same as if it had been captured from a direction perpendicular to the container surface.
- the position of the container, the size of the container obtained from the aspect ratio of the container, and the container number written on the container are detected and acquired.
- the damaged portion of the container is detected by inference execution on the image captured by the photographing means, the damage image and the type of damage are acquired, and a chart is created for each container number in which the acquired information and images are described. The flow of processing will be described below with reference to FIG. 10.
- Inference execution of damage detection is performed by the arithmetic unit 30 using the learning result for the photographing means a recorded in the learning result recording unit 17 for the image of the photographing means a.
- the damage position on the container is normalized by the arithmetic unit 30 based on the damage position information obtained by inference execution and the container size information. The damage video, damage type, normalized position, container number, and information on the photographed container surface obtained in the above processing are then used in the following processing.
- for each container number, the damage detection results obtained by inference execution, that is, the container surface information acquired from the information in the camera installation information parameter holding unit 3a of the photographing means a, the damage image, the normalized container damage position information, and the damage type, are put together by the arithmetic unit 30 in the form of one set of charts.
- the processing then returns to (i) and repeats.
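The normalization step above — expressing a detected damage position relative to the detected container position and size — might look like the following. The bounding-box convention (x, y, width, height in pixels) is an illustrative assumption.

```python
def normalize_damage_position(damage_bbox_px, container_bbox_px):
    """Express a detected damage bounding box in container-relative
    coordinates in [0, 1], so that positions are comparable across
    image resolutions and container sizes.  Both boxes are
    (x, y, width, height) tuples in pixels."""
    cx, cy, cw, ch = container_bbox_px
    dx, dy, dw, dh = damage_bbox_px
    return ((dx - cx) / cw,   # horizontal offset as a fraction of container width
            (dy - cy) / ch,   # vertical offset as a fraction of container height
            dw / cw,          # damage width relative to container width
            dh / ch)          # damage height relative to container height
```

Because the result is resolution-independent, the same normalized position can be compared across photographs taken at different facilities, which is what lets the chart track a single damage spot over time.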
- A configuration example of a container chart will be described with reference to FIG. 11.
- the types and positions of damage such as holes, rust, dents, etc., and the magnitude of the damage can be described in actual size for each surface of the container.
- the latest video showing the status of each recorded damage and the history of repairs, changes in damage size, etc. are also recorded.
- a chart is created for each container and the damage situation is recorded, so containers can be managed appropriately at each facility.
- it operates so as to create a chart with images attached from a predetermined direction, regardless of the facility.
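A container chart keyed by the container's unique number, accumulating damage records per surface along with their history, could be modeled like this. All field names are illustrative assumptions; the patent specifies only that the chart holds the surface information, damage image, normalized position, and damage type per container number.

```python
from dataclasses import dataclass, field
from typing import List, Optional, Tuple

@dataclass
class DamageRecord:
    surface: str                          # which container face was photographed
    damage_type: str                      # e.g. "hole", "rust", "dent"
    position: Tuple[float, float, float, float]  # normalized (x, y, w, h)
    image_ref: str                        # reference to the recorded damage image

@dataclass
class ContainerChart:
    container_number: str
    records: List[DamageRecord] = field(default_factory=list)

    def add(self, record: DamageRecord) -> None:
        """Append a detection; keeping every entry preserves the repair
        and damage-growth history the chart is meant to record."""
        self.records.append(record)

    def history(self, surface: Optional[str] = None) -> List[DamageRecord]:
        """All recorded damage, optionally filtered to one container face."""
        return [r for r in self.records
                if surface is None or r.surface == surface]
```

One chart instance per detected container number is enough for each facility to look up a container's damage history when it arrives.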
- the present invention is not limited to the above-described embodiments, and includes various modifications.
- the video recording unit 4, the learning result recording unit 17, the learning result candidate recording unit 16, the standardized video recording unit 8, and the like can be partially or wholly implemented as a recording device such as a cloud built on a network.
- the arithmetic unit 30 may be realized by a processor such as a CPU or by a dedicated circuit.
- the damage detection unit 103 may be implemented by a personal computer or workstation, or may be implemented by a dedicated circuit.
- images may be shared between different photographing means within the same facility.
Description
An embodiment of the present invention will be described with reference to FIGS. 1, 2, 3, and 4. FIG. 1 is an overall configuration diagram showing an example of the overall configuration of the container damage detection system according to this embodiment, FIG. 2 is a configuration diagram showing a configuration example of the container damage detection unit, and FIGS. 3 and 4 are processing flow diagrams illustrating an overview of the damage detection processing. The numbers in or outside parentheses are the reference numerals of the components shown in the drawings.
(ii) Manually extract the damaged portions of the container from the video recorded in the video recording unit 4.
(iii) Record the extracted container damage partial video in the video recording unit 4.
Video obtained from another facility, for example facility N (120), is processed at facility A (100) according to the following flow.
(iv) Obtain, via the network 2, the container damage partial video recorded at the other facility together with the camera installation information parameters in effect when the video was shot.
(v) Based on the camera installation information obtained via the network 2, the arithmetic unit 30 performs viewpoint conversion so that the video becomes a container image equivalent to that of the photographing means a.
(vi) Record the viewpoint-converted container damage partial video obtained from the other facility in the video recording unit 4.
Processing (vii) is performed at facility A (100) on the container damage partial videos collected from a plurality of facilities.
(vii) Learning is performed by the arithmetic unit 30 using the container damage partial videos recorded in the video recording unit 4, and the result is recorded in the learning result recording unit 17 as the learning result for the photographing means a.
In the inference phase, container damage is detected at facility A (100) according to the following flow for the video from the photographing means a on the facility A (100) side.
(viii) Inference execution for damage detection is performed by the arithmetic unit 30 on the video from the photographing means a, using the learning result for the photographing means a recorded in the learning result recording unit 17.
(ix) Display the damage detection result on the monitor 107 and create a history. Return to processing (viii) of the inference phase.
(ii) Manually extract the damaged portions of the container from the video recorded in the video recording unit 4.
(iii) For the extracted container damage partial video, the arithmetic unit 30 performs viewpoint conversion to a predetermined line of sight based on the information in the camera installation information parameter holding unit 3a of the photographing means a.
(iv) Record the viewpoint-converted container damage partial video in the video recording unit 4.
Video obtained from another facility, for example facility N (120), is processed at facility A (100) according to the following flow.
(v) Obtain, via the network 2, the container damage partial video recorded at the other facility together with the camera installation information parameters in effect when the video was shot.
(vi) Based on the obtained camera installation information, the arithmetic unit 30 performs viewpoint conversion so that the video becomes a container image equivalent to that of the photographing means a.
(vii) Record the viewpoint-converted container damage partial video obtained from the other facility in the video recording unit 4.
Processing (viii) is performed at facility A (100) on the container damage partial videos collected from a plurality of facilities.
(viii) Learning is performed by the arithmetic unit 30 using the container damage partial videos recorded in the video recording unit 4, and the result is recorded in the learning result recording unit 17 as the learning result for the photographing means a.
In the inference phase, container damage is detected at facility A (100) according to the following flow for the video from the photographing means a on the facility A (100) side.
(ix) Inference execution for damage detection is performed by the arithmetic unit 30 on the video from the photographing means a, using the learning result for the photographing means a recorded in the learning result recording unit 17.
(x) Display the damage detection result on the monitor 107 and create a history. Return to processing (ix) of the inference phase.
A second embodiment of the present invention will be described with reference to FIGS. 5 and 6. FIG. 5 is a diagram showing a configuration example of the damage detection unit 103 that detects container damage, and FIG. 6 is a processing flow diagram illustrating an overview of the damage detection processing.
(ii) Manually extract the damaged portions of the container from the video of the photographing means a recorded in the video recording unit 4, and record them in the video recording unit 4.
(iii) For the extracted container damage partial video, the arithmetic unit 30 performs viewpoint conversion to the predefined standard line of sight based on the information in the camera installation information parameter holding unit 3a of the photographing means a, and records the result in the standardized video recording unit 8.
(iv) Obtain, via the network 2, container damage partial videos that were shot at other facilities and aligned to the standard line of sight.
(v) Record the container damage partial videos obtained from the other facilities in the video recording unit 4.
(vi) For the container damage partial videos obtained from other facilities and recorded in the video recording unit 4, the arithmetic unit 30 performs viewpoint conversion based on the information in the camera installation information parameter holding unit 3a so that they become container images equivalent to those of the photographing means a.
(vii) Learning is performed by the arithmetic unit 30 using the manually extracted container damage partial videos recorded in the video recording unit 4 and the container damage partial videos obtained from other facilities and viewpoint-converted to be equivalent to the photographing means a, and the result is recorded in the learning result recording unit 17 as the learning result for the photographing means a.
In the inference phase, container damage is detected at facility A (100) according to the following flow for the video from the photographing means a on the facility A (100) side.
(viii) Inference execution for damage detection is performed by the arithmetic unit 30 on the video of the photographing means a, using the learning result for the photographing means a recorded in the learning result recording unit 17.
(ix) Display the damage detection result on the monitor 107 and create a history. Return to (viii) of the inference phase.
A third embodiment of the present invention will be described with reference to FIGS. 7 and 8. FIG. 7 is a diagram showing a configuration example of the damage detection unit 103 for detecting container damage in this embodiment, and FIG. 8 is a processing flow illustrating an overview of the learning result update processing.
A fourth embodiment of the present invention will be described with reference to FIGS. 9, 10, and 11. FIG. 9 is a diagram showing a configuration example of the damage detection unit 103 for detecting container damage, FIG. 10 is a processing flow diagram illustrating an overview of the damage detection processing, and FIG. 11 is a diagram showing a configuration example of a container chart.
(ii) For the video recorded in the video recording unit 4, the arithmetic unit 30 performs viewpoint conversion to a predetermined line of sight based on the information in the camera installation information parameter holding unit 3a of the photographing means a, and acquires information on the photographed container surface from the information in the parameter holding unit 3a.
(iii) Using the viewpoint-converted video, the arithmetic unit 30 detects the container position from the video and determines the container size from the container position information.
(iv) Using the viewpoint-converted video, the arithmetic unit 30 detects the container's unique number from the video.
(v) For the video of the photographing means a, inference execution for damage detection is performed by the arithmetic unit 30 using the learning result for the photographing means a recorded in the learning result recording unit 17.
(vi) The arithmetic unit 30 normalizes the damage position on the container from the damage position information obtained by inference execution and the container size information.
The damage video, damage type, normalized position, container number, and information on the photographed container surface obtained in the above processing are acquired, and the following processing is performed.
(vii) For each container unique number, the arithmetic unit 30 compiles the damage detection results obtained by inference execution, that is, the container surface information acquired from the camera installation information parameter holding unit 3a of the photographing means a, the damage image, the normalized container damage position information, and the damage type, into the form of one set of charts. The processing then returns to (i) and repeats.
101・・・Facility A front image
102・・・Facility A side image
103・・・Damage detection unit
104・・・Trailer
105・・・Photographing means mounting gate
106・・・Container
107・・・Monitor
110・・・Facility B
120・・・Facility N
121・・・Cargo ship
122・・・Gantry crane
123・・・Quay
1・・・Photographing means
1a・・・Photographing means a
1b・・・Photographing means b
1c・・・Photographing means c
2・・・Network
3a・・・Camera installation information parameter holding unit for photographing means a
3b・・・Camera installation information parameter holding unit for photographing means b
3c・・・Camera installation information parameter holding unit for photographing means c
4・・・Video recording unit
5・・・First viewpoint conversion unit
6・・・Damage detection inference execution unit
8・・・Standardized video recording unit
9・・・Second viewpoint conversion unit
10・・・Learning execution unit
11・・・Container number detection unit
12・・・Container position detection unit
13・・・Damage position normalization unit
14・・・Chart generation unit
15・・・Learning result comparison unit
16・・・Learning result candidate recording unit
17・・・Learning result recording unit
18・・・Network interface
19・・・Display unit
20, 21, 22・・・Video capture units
30・・・Arithmetic unit
Claims (8)
- Container photographing means for photographing images of a container at a plurality of locations or facilities where the container is handled;
communication means for communicably connecting the plurality of locations or facilities in order to share the container images among the plurality of locations or facilities;
parameter holding means for holding, for each container photographing means, data representing the photographing angle with respect to the container surface to be photographed, or parameters for deriving that photographing angle;
video recording means for recording the container images acquired by the container photographing means as container video;
arithmetic means for performing viewpoint conversion operations on the container video based on the information held in the parameter holding means; and
machine learning means for executing learning and inference from learning results,
wherein the container video is shared among the plurality of locations or facilities without depending on attributes including the type of the container photographing means and its installation information.
A container damage detection system.
- The container damage detection system according to claim 1, wherein the machine learning means comprises means for executing learning using the container video from the arithmetic means.
- The container damage detection system according to claim 1, wherein the machine learning means comprises means for executing inference using the container video from the arithmetic means.
- Container photographing means for photographing images of a container at a plurality of locations or facilities where the container is handled;
communication means for communicably connecting the plurality of locations or facilities in order to share the container images among the plurality of locations or facilities;
first arithmetic means for performing viewpoint conversion of the video acquired from the container photographing means into video of a predetermined photographing angle;
standardized video recording means for recording the video generated by the first arithmetic means;
second arithmetic means for performing a predetermined viewpoint conversion on the video from the standardized video recording means; and
machine learning means for executing learning and inference from learning results,
wherein learning is executed with video collected at a plurality of locations or facilities, without being affected by the shooting location or facility, by learning from video obtained by applying viewpoint conversion to the standardized video with the second arithmetic means in accordance with the installation conditions of the container photographing means.
A container damage detection system.
- The container damage detection system according to claim 1 or 2, comprising means for holding the learning results before and after relearning in said learning, and means for comparing the inference results before and after the relearning, wherein the learning result is updated when the result of the comparing means is determined to be improved over that before the relearning.
- Container photographing means for photographing video of a container;
inference execution means for detecting damage to the container;
means for detecting the container's unique number;
arithmetic means for performing viewpoint conversion of the video from the container photographing means into video of a predetermined photographing angle;
means for detecting the position of the container from the video from the arithmetic means;
means for normalizing damage position information based on the container position information obtained from the means for detecting the position of the container; and
means for generating a chart describing at least information representing the damage position based on the detected unique number of the container, the output video of the viewpoint-converting arithmetic means or a part thereof, and the position information obtained from the means for normalizing the container damage position,
wherein a container damage history is generated. A container damage detection system.
- The container damage detection system according to any one of claims 1 to 3, comprising means for detecting the container's unique number; means for detecting the position of the container from the video from the arithmetic means; means for normalizing damage position information based on the container position information obtained from the means for detecting the position of the container; and means for generating a chart describing at least information representing the damage position based on the detected unique number, the result of inference execution, the output video of the viewpoint-converting arithmetic means or a part thereof, and the position information obtained from the means for normalizing the container damage position, wherein a container damage history is generated.
- The container damage detection system according to claim 4, comprising means for detecting the container's unique number; means for detecting the position of the container from the video from the first or second arithmetic means; means for normalizing damage position information based on the container position information obtained from the means for detecting the position of the container; and means for generating a chart describing at least information representing the damage position based on the unique number obtained by the detecting means, the result of inference execution, the output video of the viewpoint-converting arithmetic means or a part thereof, and the position information obtained from the means for normalizing the container damage position, wherein a container damage history is generated.
Priority Applications (3)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
JP2022576295A JPWO2022157882A1 (ja) | 2021-01-21 | 2021-01-21 | |
PCT/JP2021/002035 WO2022157882A1 (ja) | 2021-01-21 | 2021-01-21 | Container damage detection system
CN202180090667.3A CN116711299A (zh) | 2021-01-21 | 2021-01-21 | 集装箱损伤检测系统 |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
PCT/JP2021/002035 WO2022157882A1 (ja) | 2021-01-21 | 2021-01-21 | Container damage detection system
Publications (1)
Publication Number | Publication Date |
---|---|
WO2022157882A1 true WO2022157882A1 (ja) | 2022-07-28 |
Family
ID=82548559
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/JP2021/002035 WO2022157882A1 (ja) | 2021-01-21 | 2021-01-21 | コンテナダメージ検出システム |
Country Status (3)
Country | Link |
---|---|
JP (1) | JPWO2022157882A1 (ja) |
CN (1) | CN116711299A (ja) |
WO (1) | WO2022157882A1 (ja) |
Citations (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP2002279033A (ja) * | 2001-03-19 | 2002-09-27 | Information Services International Dentsu Ltd | 港湾ゲートシステム、港湾ゲート制御方法 |
JP2007322173A (ja) * | 2006-05-30 | 2007-12-13 | Sumitomo Heavy Ind Ltd | ダメージチェックシステム及びダメージチェック方法 |
JP2019104578A (ja) * | 2017-12-11 | 2019-06-27 | 国土交通省港湾局長 | 人工知能を活用した包括的コンテナターミナルシステム及びオペレーション方法 |
-
2021
- 2021-01-21 CN CN202180090667.3A patent/CN116711299A/zh active Pending
- 2021-01-21 WO PCT/JP2021/002035 patent/WO2022157882A1/ja active Application Filing
- 2021-01-21 JP JP2022576295A patent/JPWO2022157882A1/ja active Pending
Also Published As
Publication number | Publication date |
---|---|
JPWO2022157882A1 (ja) | 2022-07-28 |
CN116711299A (zh) | 2023-09-05 |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
121 | Ep: the epo has been informed by wipo that ep was designated in this application |
Ref document number: 21920994 Country of ref document: EP Kind code of ref document: A1 |
|
ENP | Entry into the national phase |
Ref document number: 2022576295 Country of ref document: JP Kind code of ref document: A |
|
WWE | Wipo information: entry into national phase |
Ref document number: 202180090667.3 Country of ref document: CN |
|
NENP | Non-entry into the national phase |
Ref country code: DE |
|
122 | Ep: pct application non-entry in european phase |
Ref document number: 21920994 Country of ref document: EP Kind code of ref document: A1 |