WO2023162561A1 - Landmark monitoring device, ship steering system, landmark monitoring method, and program - Google Patents

Landmark monitoring device, ship steering system, landmark monitoring method, and program

Info

Publication number
WO2023162561A1
Authority
WO
WIPO (PCT)
Prior art keywords
image
target
captured
daytime
nighttime
Prior art date
Application number
PCT/JP2023/002267
Other languages
French (fr)
Japanese (ja)
Inventor
正也 能瀬
トロン ミン トラン
博紀 村上
Original Assignee
古野電気株式会社
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 古野電気株式会社 filed Critical 古野電気株式会社
Publication of WO2023162561A1 publication Critical patent/WO2023162561A1/en

Links

Images

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G PHYSICS
    • G08 SIGNALLING
    • G08G TRAFFIC CONTROL SYSTEMS
    • G08G3/00 Traffic control systems for marine craft

Definitions

  • the present invention relates to a target monitoring device, a ship maneuvering system, a target monitoring method, and a program.
  • Patent Document 1 discloses a technique in which a camera device is installed to sequentially acquire image data of the water area surrounding a construction vessel, and the image data is sequentially input to a computing device. The computing device stores in advance a prediction model trained by machine learning, using image data of multiple types of ships (including other construction ships) and image data of non-ship objects as training data, and, based on the prediction model and the image data of the surrounding waters, sequentially determines whether a vessel of a preset monitored type is present in that image data.
  • when building an AI image recognition system that monitors targets on the sea around the clock, a single trained model covering images captured in scenes as different as daytime and nighttime incurs a high learning cost and is difficult to make accurate. The present invention has been made in view of this problem, and its main purpose is to provide a target monitoring device, a ship maneuvering system, a target monitoring method, and a program capable of image recognition suited to the scene.
  • a target monitoring apparatus according to one aspect of the present invention includes: an image acquisition unit that acquires an image containing a target on the sea captured by a camera installed on a ship; a scene determination unit that determines whether the image was captured in the daytime or at night; a daytime image recognition unit that detects the target included in the image using a trained model for daytime when the image is determined to have been captured in the daytime; and a nighttime image recognition unit that detects the target included in the image using a trained model for nighttime when the image is determined to have been captured at night. This makes image recognition suited to the scene possible.
  • the scene determination unit may use a learned model for scene determination to determine whether the image was captured in the daytime or at night. According to this, by using a trained model for scene determination, it is possible to improve the accuracy of scene determination.
  • the nighttime image recognition unit may detect, as the target, a target candidate that has been detected by the trained model for nighttime and that has a brightness of a predetermined level or higher. A trained model for nighttime alone may fail to detect a single, isolated light as a target, but combining it with the rule that a candidate must have a predetermined brightness or higher makes it possible to improve the accuracy of target detection even in images captured at night.
  • the daytime image recognition unit may detect the target included in the image and determine the type of the target using the learned model for daytime. According to this, it is possible not only to detect the target but also to distinguish the type of the target.
  • the nighttime image recognition unit may use the trained model for nighttime to detect the target included in the image and to determine whether an object is a target or not. Since it is difficult to distinguish target types in images captured at night, determining only whether an object is a target, rather than its type, makes it possible to use a trained model well suited to nighttime images.
  • the scene determination unit may further determine whether the image was captured during a sunrise or sunset time period, and a sunrise/sunset image recognition unit may further be provided that detects the target included in the image using a trained model for sunrise/sunset when the image is determined to have been captured during such a period. Because reflection from the sea surface is strong around sunrise and sunset, the trained models for daytime or nighttime may not detect targets accurately enough; using a dedicated trained model for sunrise/sunset makes it possible to improve the accuracy of target detection even for images captured during these time periods.
  • the sunrise/sunset image recognition unit may detect the target included in the image and determine the type of the target using the trained model for sunrise/sunset. According to this, it is possible not only to detect the target but also to distinguish the type of the target.
  • the scene determination unit may further determine whether the image was captured against backlight, and a preprocessing unit may further be provided that performs gamma correction or contrast adjustment on the image before it is input to the trained model when the image is determined to have been captured against backlight. This makes it possible to improve the accuracy of target detection even for an image captured against backlight.
  • the scene determination unit may further determine whether the image was captured in fog, and a preprocessing unit may further be provided that performs image sharpening processing on the image before it is input to the trained model when the image is determined to have been captured in fog. This makes it possible to improve the accuracy of target detection even for an image captured in fog.
  • the trained model for daytime may be a trained model generated by machine learning using learning images that include images captured in the daytime as input data, and the in-image positions and types of the targets included in the learning images as teacher data. This makes it possible to use a trained model suited to images captured in the daytime.
  • the trained model for nighttime may be a trained model generated by machine learning using learning images that include images captured at night as input data, and the in-image positions of the targets included in the learning images as teacher data. This makes it possible to use a trained model suited to images captured at night.
  • the trained model for sunrise/sunset may be a trained model generated by machine learning using learning images that include images captured during sunrise or sunset time periods as input data, and the in-image positions and types of the targets included in the learning images as teacher data. This makes it possible to use a trained model suited to images captured during sunrise or sunset time periods.
  • a marine vessel maneuvering system according to another aspect of the present invention includes the above target monitoring device, a maneuvering determination unit that makes maneuvering decisions based on the targets detected by the target monitoring device, and a maneuvering control unit that controls the maneuvering of the ship based on those decisions. This makes it possible to decide and control ship maneuvering based on targets detected by image recognition suited to the scene.
  • a target monitoring method according to another aspect of the present invention acquires an image containing a target on the sea captured by a camera installed on a ship, determines whether the image was captured in the daytime or at night, detects the target included in the image using a trained model for daytime when the image is determined to have been captured in the daytime, and detects the target included in the image using a trained model for nighttime when the image is determined to have been captured at night. This makes image recognition suited to the scene possible.
  • a program according to another aspect of the present invention causes a computer to: acquire an image containing a target on the sea captured by a camera installed on a ship; determine whether the image was captured in the daytime or at night; detect the target included in the image using a trained model for daytime when the image is determined to have been captured in the daytime; and detect the target included in the image using a trained model for nighttime when the image is determined to have been captured at night. This makes image recognition suited to the scene possible.
  • FIG. 5 is a diagram showing an example of recognition by the trained model for scene determination.
  • FIG. 6 is a diagram showing an example of recognition by the trained model for daytime.
  • FIG. 7 is a diagram showing an example of recognition by the trained model for nighttime.
  • FIG. 8 is a diagram showing an example of recognition by the trained model for sunrise/sunset.
  • FIG. 9 is a diagram showing a procedure example of the target monitoring method. FIG. 10 is a diagram showing a procedure example of preprocessing. FIG. 11 is a diagram showing a procedure example of image recognition processing.
  • FIG. 1 is a block diagram showing a configuration example of the target monitoring system 100.
  • the target object monitoring system 100 is a system mounted on a ship.
  • the ship equipped with the target monitoring system 100 is called “own ship”, and the other ships are called “other ships”.
  • the target monitoring system 100 includes a target monitoring device 1, a display unit 2, a radar 3, an AIS 4, a camera 5, a GNSS receiver 6, a gyrocompass 7, an ECDIS 8, a wireless communication unit 9, and a ship maneuvering control unit 10. These devices are connected to a network N such as a LAN and are capable of network communication with each other.
  • the target monitoring device 1 is a computer including a CPU, RAM, ROM, non-volatile memory, input/output interface, and the like.
  • the CPU of the target monitoring device 1 executes information processing according to a program loaded from the ROM or nonvolatile memory to the RAM.
  • the program may be supplied via an information storage medium such as an optical disc or memory card, or may be supplied via a communication network such as the Internet or LAN.
  • the display unit 2 displays the display image generated by the target monitoring device 1.
  • the display unit 2 also displays radar images, camera images, electronic charts, and the like.
  • the display unit 2 is, for example, a display device with a touch sensor, a so-called touch panel.
  • the touch sensor detects a position within the screen indicated by a user's finger or the like.
  • the designated position is not limited to this, and may be input by a trackball or the like.
  • the radar 3 emits radio waves around its own ship, receives the reflected waves, and generates echo data based on the received signals. Also, the radar 3 identifies a target from the echo data and generates TT data (Target Tracking Data) representing the position and speed of the target.
  • the AIS (Automatic Identification System) 4 receives AIS data from other ships around the own ship or from shore-based control. A VDES (VHF Data Exchange System) may be used instead of AIS.
  • the AIS data includes identification codes of other ships, ship names, positions, courses, ship speeds, ship types, hull lengths, destinations, and the like.
  • the camera 5 is a digital camera that captures images of the outside from the own ship and generates image data.
  • the camera 5 is installed, for example, on the bridge of the own ship facing the heading.
  • the camera 5 may be a camera having a pan/tilt function and an optical zoom function, a so-called PTZ camera.
  • the GNSS receiver 6 detects the position of the own ship based on radio waves received from the GNSS (Global Navigation Satellite System).
  • the gyrocompass 7 detects the heading of the own ship.
  • a GPS compass may be used instead of the gyro compass.
  • the ECDIS (Electronic Chart Display and Information System) 8 acquires the ship's position from the GNSS receiver 6 and displays the ship's position on the electronic chart.
  • the ECDIS 8 also displays the planned route of the own ship on the electronic chart.
  • a GNSS plotter may be used instead of the ECDIS.
  • the wireless communication unit 9 includes various radio equipment for communicating with other ships or shore-based control, for example radio equipment for the ultra high frequency, very high frequency, medium/short wave, or short wave bands.
  • the ship steering control unit 10 is a control device for realizing automatic ship steering, and controls the steering gear of the own ship. Further, the ship maneuvering control unit 10 may control the engine of the own ship.
  • in this embodiment, the target monitoring device 1 is an independent device, but it is not limited to this and may be integrated with another device such as the ECDIS 8. That is, the functions of the target monitoring device 1 may be realized by another device.
  • the target monitoring device 1 is mounted on the own ship and used to monitor targets of other ships, etc. existing around the own ship, but the application is not limited to this.
  • for example, the target monitoring device 1 may be installed at a shore-based control center and used to monitor vessels in the controlled sea area.
  • FIG. 2 is a block diagram showing a configuration example of the target monitoring device 1.
  • the control unit 20 of the target monitoring device 1 includes an image acquisition unit 11, an image processing unit 12, a display control unit 13, and a ship maneuvering determination unit 14. These functional units are implemented by the control unit 20 executing information processing according to programs. Note that the ship maneuvering determination unit 14 may be located outside the target monitoring device 1.
  • the control unit 20 of the target monitoring device 1 further includes a target management DB (database) 19.
  • a target management DB 19 is provided in the memory of the target monitoring device 1 .
  • the image acquisition unit 11 acquires images including marine targets such as other ships captured by the camera 5 installed on the own ship.
  • the image acquisition unit 11 sequentially acquires time-series images from the camera 5 and sequentially provides the images to the image processing unit 12 .
  • Time-series images are, for example, still images (frames) included in moving image data.
  • the image processing unit 12 performs predetermined image processing, such as image recognition, on the images acquired by the image acquisition unit 11, generates target data for the targets recognized from the images, and registers the data in the target management DB 19. Details of the image processing unit 12 will be described later.
  • the target management DB 19 is a database that manages the target data generated by the image processing unit 12.
  • in the target management DB 19, not only the target data generated by the image processing unit 12 but also other target data, such as TT data generated by the radar 3 or AIS data received by the AIS 4, may be integrated.
  • the display control unit 13 generates a display image including an object representing the target based on the target data registered in the target management DB 19 and outputs the display image to the display unit 2 .
  • the display image is, for example, a radar image, an electronic chart, or a composite image thereof, and the object representing the target is placed at a position in the image corresponding to the actual position of the target.
  • the ship maneuvering determination unit 14 makes maneuvering decisions based on the target data registered in the target management DB 19 and, when it determines that a target must be avoided, causes the ship maneuvering control unit 10 to perform an avoidance maneuver. Specifically, the ship maneuvering control unit 10 calculates an avoidance route for avoiding the target using an avoidance maneuvering algorithm, and controls the steering gear or the engine so that the own ship follows the avoidance route.
  • FIG. 3 is a diagram showing an example of the contents of the target management DB 19.
  • the target management DB 19 includes fields such as "target ID", "type", "position in image", "actual position", "speed", and "course".
  • the target management DB 19 may further include, for example, the size of the target and the elapsed time since detection.
  • Type represents the type of target determined from the image captured by the camera 5.
  • the type of target is, for example, a ship type such as a tanker, a pleasure boat, or a fishing boat.
  • Target types may further include offshore installations, such as buoys.
  • Position in image represents the position where the target exists in the image.
  • Actual position represents the position of the target in the physical space calculated based on the position of the target in the image.
  • the actual position is calculated by first computing the relative position of the target with respect to the own ship from the position in the image, and then converting it into an absolute position using the own ship's position.
  • the "position” may be calculated by integrating the relative position of the target detected by the radar 3 or the actual position of the target received by the AIS 4, or alternatively.
  • “Velocity” and “Course” represent the velocity and course of the target calculated based on the change in the actual position of the target over time.
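  • The patent gives no formulas for these fields, so the following is only a hedged sketch of how the "actual position", "speed", and "course" values could be derived; the flat-earth degree conversion, the function names, and the camera-derived range/bearing inputs are illustrative assumptions.

```python
# Illustrative geometry for the target management DB fields; not the
# patent's implementation. Range/bearing estimation from the image is
# assumed to happen elsewhere.
import math
from dataclasses import dataclass

@dataclass
class Target:
    target_id: int
    kind: str        # "type" field: tanker, pleasure boat, fishing boat, buoy...
    bbox: tuple      # "position in image": (x, y, w, h)
    lat: float = 0.0 # "actual position"
    lon: float = 0.0

def to_absolute(own_lat, own_lon, rel_range_m, rel_bearing_deg, own_heading_deg):
    """Convert a target's relative range/bearing (estimated from the image)
    into an absolute position using the own ship's position and heading."""
    brg = math.radians(own_heading_deg + rel_bearing_deg)
    dlat = rel_range_m * math.cos(brg) / 111_320.0  # metres -> degrees latitude
    dlon = rel_range_m * math.sin(brg) / (111_320.0 * math.cos(math.radians(own_lat)))
    return own_lat + dlat, own_lon + dlon

def speed_and_course(p0, p1, dt_s):
    """Speed (m/s) and course (deg) from two successive actual positions,
    i.e. from the change in actual position over time."""
    dy = (p1[0] - p0[0]) * 111_320.0
    dx = (p1[1] - p0[1]) * 111_320.0 * math.cos(math.radians(p0[0]))
    return math.hypot(dx, dy) / dt_s, math.degrees(math.atan2(dx, dy)) % 360.0
```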
  • in the target management DB 19, in addition to the target data of targets recognized from images captured by the camera 5, target data of targets recognized from images captured by a separately installed PTZ camera, fixed-point camera, 360-degree camera, or infrared camera may be further registered.
  • FIG. 4 is a diagram showing a configuration example of the image processing unit 12.
  • the image processing unit 12 includes a scene determination unit 21 , a preprocessing unit 22 , a daytime image recognition unit 23 , a nighttime image recognition unit 24 , and a sunrise/sunset image recognition unit 25 . These functional units are implemented by the control unit 20 executing information processing according to programs.
  • the image processing unit 12 further includes a determination model holding unit 31, a daytime model holding unit 33, a nighttime model holding unit 34, and a sunrise/sunset model holding unit 35. These storage units are provided in the memory of the target monitoring device 1 .
  • the scene determination unit 21 uses the trained model for scene determination held in the determination model holding unit 31 to determine whether the image acquired by the image acquisition unit 11 was captured in the daytime, at night, or during a sunrise or sunset time period. The scene determination unit 21 further determines whether the image was captured against backlight or in fog.
  • the trained model DM for scene determination is, for example, an image discrimination model such as a convolutional neural network (CNN).
  • the learned model DM for scene determination is a trained model generated by machine learning using a learning image as input data and a class associated with the learning image as teacher data.
  • the learning images include images of the sea captured in the daytime, images of the sea captured at night, images of the sea captured during sunrise or sunset time periods (hereinafter also referred to as "sunrise/sunset"), images of the sea captured against backlight, images of the sea captured in fog, and the like.
  • the training images may include images of the sea generated by Generative Adversarial Networks (GAN) or 3 Dimensional Computer Graphics (3DCG).
  • the classes associated with the learning images include "daytime", "nighttime", "sunrise/sunset", "backlight", and "fog".
  • the output layer of the trained model DM for scene determination has elements corresponding to the classes. The elements corresponding to "daytime", "nighttime", and "sunrise/sunset" are set so that their probabilities sum to 1, for example by a softmax function.
  • the scene determination unit 21 adopts whichever of "daytime", "nighttime", and "sunrise/sunset" has the highest probability.
  • the scene determination unit 21 determines that the image P was captured in the daytime when the probability of "daytime" is highest, at night when the probability of "nighttime" is highest, and during sunrise/sunset when the probability of "sunrise/sunset" is highest.
  • the elements corresponding to "backlight" and "fog" are set so as to output probabilities between 0 and 1, for example using a sigmoid function.
  • the scene determination unit 21 determines that the image P was captured against backlight when the probability of "backlight" is equal to or higher than a threshold, and that the image P was captured in fog when the probability of "fog" is equal to or higher than a threshold.
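  • A minimal sketch of an output head matching this description, assuming a PyTorch implementation and a CNN backbone that yields a 512-dimensional feature vector; the 0.5 thresholds for "backlight" and "fog" are assumed values, not from the patent.

```python
# Softmax over {daytime, nighttime, sunrise/sunset} so the three
# probabilities sum to 1, plus independent sigmoid outputs for
# {backlight, fog}, as the description states.
import torch
import torch.nn as nn

class SceneHead(nn.Module):
    def __init__(self, in_features=512):
        super().__init__()
        self.time_of_day = nn.Linear(in_features, 3)  # daytime / nighttime / sunrise-sunset
        self.conditions = nn.Linear(in_features, 2)   # backlight, fog

    def forward(self, feats):
        tod = torch.softmax(self.time_of_day(feats), dim=-1)
        cond = torch.sigmoid(self.conditions(feats))
        return tod, cond

head = SceneHead()
feats = torch.randn(1, 512)                 # features from a CNN backbone (assumed)
tod, cond = head(feats)
scene = ["daytime", "nighttime", "sunrise/sunset"][tod.argmax(dim=-1).item()]
backlight, fog = (cond[0] >= 0.5).tolist()  # thresholds are assumed values
```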
  • alternatively, the scene determination unit 21 may determine whether the image P was captured in the daytime, at night, or during sunrise/sunset according to the sunrise and sunset times calculated based on the capture time of the image P and the current position of the own ship.
  • the sunrise time period is a period of a predetermined length that includes the sunrise time
  • the sunset time period is a period of a predetermined length that includes the sunset time.
  • the daytime is the period from the sunrise time to the sunset time, excluding the sunrise and sunset time periods.
  • the nighttime is the period from the sunset time to the sunrise time, excluding the sunrise and sunset time periods.
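  • The clock-based alternative follows directly from these definitions, as in the sketch below; computing the sunrise and sunset times from the capture time and own-ship position is left to an almanac routine, and the 30-minute window length is an assumed value, since the patent says only "a period of a predetermined length".

```python
# Classify the capture time against sunrise/sunset windows of a
# predetermined (here: assumed 30-minute) half-width.
from datetime import datetime, timedelta

def classify_by_time(t: datetime, sunrise: datetime, sunset: datetime,
                     window: timedelta = timedelta(minutes=30)) -> str:
    if abs(t - sunrise) <= window or abs(t - sunset) <= window:
        return "sunrise/sunset"   # within a sunrise or sunset time period
    if sunrise + window < t < sunset - window:
        return "daytime"          # between sunrise and sunset, outside the windows
    return "nighttime"            # everything else
```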
  • the scene determination unit 21 may also determine whether the image P was captured in the daytime, at night, or during sunrise/sunset according to the ambient brightness detected by an illuminance sensor provided on the ship.
  • when the scene determination unit 21 determines that the image P was captured against backlight, the preprocessing unit 22 performs gamma correction or contrast adjustment on the image P, processing it into an image suitable for input to the subsequent trained model for daytime, nighttime, or sunrise/sunset.
  • likewise, when the scene determination unit 21 determines that the image P was captured in fog, the preprocessing unit 22 performs image sharpening processing, such as defog processing, on the image P, processing it into an image suitable for input to the subsequent trained model for daytime, nighttime, or sunrise/sunset.
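  • As an illustration of this preprocessing, a sketch using OpenCV follows; the library choice, gamma value, CLAHE clip limit, and unsharp-mask weights are all assumptions, since the patent names only the operations (gamma correction, contrast adjustment, sharpening/defog).

```python
# Backlight: gamma correction via a lookup table, then contrast
# adjustment (CLAHE on the L channel). Fog: a simple unsharp-mask
# sharpening pass standing in for defog processing.
import cv2
import numpy as np

def correct_backlight(img, gamma=1.8):
    lut = ((np.arange(256) / 255.0) ** (1.0 / gamma) * 255).astype(np.uint8)
    img = cv2.LUT(img, lut)                      # gamma correction
    lab = cv2.cvtColor(img, cv2.COLOR_BGR2LAB)
    l, a, b = cv2.split(lab)
    l = cv2.createCLAHE(clipLimit=2.0).apply(l)  # contrast adjustment
    return cv2.cvtColor(cv2.merge((l, a, b)), cv2.COLOR_LAB2BGR)

def sharpen_fog(img):
    blur = cv2.GaussianBlur(img, (0, 0), sigmaX=3)
    return cv2.addWeighted(img, 1.5, blur, -0.5, 0)  # unsharp masking
```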
  • when it is determined that the image P was captured in the daytime, the daytime image recognition unit 23 detects the targets included in the image P using the trained model for daytime held in the daytime model holding unit 33.
  • the daytime image recognition unit 23 determines the type of target detected from the image P.
  • the type of target is, for example, a ship type such as a tanker, a pleasure boat, or a fishing boat.
  • the type of target may be, for example, an offshore installation such as a buoy.
  • the trained model for daytime is, for example, an object detection model such as SSD (Single Shot MultiBox Detector) or YOLO (You Only Look Once), and outputs a bounding box surrounding the target included in the image.
  • the trained model for daytime may be a segmentation model such as Semantic Segmentation or Instance Segmentation.
  • the trained model for daytime is a trained model generated by machine learning using learning images that include images of the sea captured in the daytime as input data, and the in-image positions and types of the targets included in the learning images as teacher data.
  • Training images may include daytime maritime images generated by a generative adversarial network (GAN) or 3DCG.
  • the position of the target within the image is specified by the coordinates of a rectangular area containing the target within the image P.
  • the in-image position of a target is associated with a class representing the type of the target, such as "tanker", "pleasure boat", "fishing boat", or "buoy", and with an estimation confidence.
  • FIG. 6 is a diagram showing an example of recognition of an image DP captured during the daytime by a trained model for daytime use.
  • a target SH such as another ship included in the image DP captured in the daytime is surrounded by a rectangular bounding box BB.
  • a label CF describing the type of target and the accuracy of estimation is added to the bounding box BB.
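  • One way to represent this recognizer output, matching the bounding box BB and label CF of FIG. 6, is sketched below; the `Detection` structure and the confidence threshold are hypothetical, since the patent does not fix an output format for the SSD/YOLO-style detector.

```python
# Each detection pairs a bounding box BB with a label CF giving the
# estimated target type and the estimation confidence.
from dataclasses import dataclass

@dataclass
class Detection:
    box: tuple      # bounding box BB: (x1, y1, x2, y2)
    label: str      # target type: "tanker", "pleasure boat", "fishing boat", "buoy"...
    score: float    # estimation confidence

def keep_confident(detections, score_threshold=0.5):
    """Discard low-confidence detections; the threshold is an assumed value."""
    return [d for d in detections if d.score >= score_threshold]
```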
  • when it is determined that the image P was captured at night, the nighttime image recognition unit 24 detects the targets included in the image P using the trained model for nighttime held in the nighttime model holding unit 34.
  • the nighttime image recognition unit 24 determines not the type of a target detected from the image P but whether it is a target at all. That is, the nighttime image recognition unit 24 determines that an object is a target when the estimation confidence output from the trained model for nighttime is equal to or higher than a threshold.
  • like the trained model for daytime, the trained model for nighttime may be an object detection model such as SSD or YOLO, or a segmentation model such as semantic segmentation or instance segmentation.
  • the trained model for nighttime is a trained model generated by machine learning using learning images that include images of the sea captured at night as input data, and the in-image positions of the targets included in the learning images as teacher data.
  • the trained model for nighttime also learns the arrangement pattern of lights as a parameter.
  • Training images may include nighttime ocean images generated by a generative adversarial network (GAN) or 3DCG.
  • a class representing the target is associated with the position in the image of the target.
  • FIG. 7 is a diagram showing an example of recognition of an image NP captured at night by the trained model for nighttime. As shown in the figure, in the image NP captured at night, only the lights L emitted by targets such as other ships are visible. When the trained model for nighttime is applied to such an image NP, each target light L is surrounded by a rectangular bounding box BB, and a label CF describing the target and the estimation confidence is added to the bounding box BB.
  • the nighttime image recognition unit 24 detects, as targets, target candidates that have been detected by the trained model for nighttime and that have a brightness equal to or higher than a predetermined level. Simply applying the trained model for nighttime may fail to detect a single, isolated light as a target, but combining it with the brightness rule makes it possible to improve the accuracy of target detection even for the image NP captured at night.
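  • A sketch of this brightness rule applied to the nighttime model's candidates is given below; measuring the peak gray level inside each candidate box and the threshold of 180 are assumptions, as the patent requires only "a predetermined level or higher".

```python
# Keep a nighttime candidate only if its image region is bright enough,
# which suppresses false boxes while still admitting isolated lights
# detected by the model.
import cv2

def filter_night_candidates(image_bgr, candidates, min_brightness=180):
    gray = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2GRAY)
    targets = []
    for (x1, y1, x2, y2) in candidates:          # boxes from the nighttime model
        patch = gray[y1:y2, x1:x2]
        if patch.size and patch.max() >= min_brightness:
            targets.append((x1, y1, x2, y2))     # brightness rule satisfied
    return targets
```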
  • when it is determined that the image P was captured during sunrise/sunset, the sunrise/sunset image recognition unit 25 detects the targets included in the image P using the trained model for sunrise/sunset held in the sunrise/sunset model holding unit 35. The sunrise/sunset image recognition unit 25 also determines the type of each target detected from the image P, in the same manner as the daytime image recognition unit 23.
  • like the trained models for daytime and nighttime, the trained model for sunrise/sunset may be an object detection model such as SSD or YOLO, or a segmentation model such as semantic segmentation or instance segmentation.
  • the trained model for sunrise/sunset is a trained model generated by machine learning using learning images that include images of the sea captured at sunrise or sunset as input data, and the in-image positions and types of the targets included in the learning images as teacher data.
  • the training images may include sunrise and sunset images of the ocean generated by a generative adversarial network (GAN) or 3DCG.
  • FIG. 8 is a diagram showing an example of recognition of an image SP captured at sunrise/sunset by a trained model for sunrise/sunset.
  • targets SH such as other ships included in the image SP captured at sunrise and sunset are surrounded by a rectangular bounding box BB.
  • a label CF describing the type of target and the accuracy of estimation is added to the bounding box BB.
  • because reflection from the sea surface is strong at sunrise and sunset, the trained models for daytime or nighttime may not detect targets accurately enough in such conditions; preparing a separate trained model for sunrise/sunset makes it possible to improve the accuracy of target detection even in the image SP captured at sunrise or sunset.
  • FIG. 9 is a diagram showing a procedure example of the target monitoring method implemented in the target monitoring system 100. FIG. 10 is a diagram showing a procedure example of the preprocessing routine. FIG. 11 is a diagram showing a procedure example of the image recognition processing routine.
  • the control unit 20 of the target monitoring device 1 executes the information processing shown in these figures according to the program.
  • first, the control unit 20 acquires the image P generated by the camera 5 (S11, processing as the image acquisition unit 11).
  • next, the control unit 20 uses the trained model for scene determination to determine whether the acquired image P was captured in the daytime, at night, or during sunrise or sunset, and further determines whether the image was captured against backlight or in fog (S12, processing as the scene determination unit 21).
  • control unit 20 executes a preprocessing routine (S13, processing as the preprocessing unit 22).
  • when it is determined that the image P was captured against backlight, the control unit 20 performs gamma correction or contrast adjustment on the image P (S22).
  • when it is determined that the image P was captured in fog, the control unit 20 performs image sharpening processing, such as defog processing, on the image P (S24).
  • control unit 20 executes an image recognition processing routine (S14).
  • when it is determined that the image P was captured in the daytime, the control unit 20 detects the targets included in the image P using the trained model for daytime and determines the type of each target (S32, processing as the daytime image recognition unit 23).
  • when it is determined that the image P was captured at night, the control unit 20 detects target candidates included in the image P using the trained model for nighttime, and extracts as targets the candidates having a brightness equal to or higher than the predetermined level (S34, S35, processing as the nighttime image recognition unit 24).
  • when it is determined that the image P was captured during sunrise/sunset, the control unit 20 detects the targets included in the image P using the trained model for sunrise/sunset and determines the type of each target (S37, processing as the sunrise/sunset image recognition unit 25).
  • when the image recognition processing routine ends, the main routine shown in FIG. 9 also ends.
  • the control unit 20 generates target data of the target detected from the image P and registers it in the target management DB 19 .
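  • Putting the routine of FIGS. 9 through 11 together, an end-to-end sketch might look like the following; the function names reuse the sketches above, the model callables are placeholders, and the composition is an illustration rather than the patent's literal implementation.

```python
# Scene determination (S12), conditional preprocessing (S22/S24), then
# dispatch to the model matched to the time-of-day scene (S32/S34-S35/S37).
def monitor_frame(image, scene_model, day_model, night_model, sun_model):
    scene, backlight, fog = scene_model(image)             # S12
    if backlight:
        image = correct_backlight(image)                   # S22
    if fog:
        image = sharpen_fog(image)                         # S24
    if scene == "daytime":
        return day_model(image)                            # S32: detect + classify type
    if scene == "nighttime":
        candidates = night_model(image)                    # S34
        return filter_night_candidates(image, candidates)  # S35: brightness rule
    return sun_model(image)                                # S37: detect + classify type
```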
  • 1 target monitoring device, 2 display unit, 3 radar, 4 AIS, 5 camera, 6 GNSS receiver, 7 gyrocompass, 8 ECDIS, 9 wireless communication unit, 10 ship maneuvering control unit, 11 image acquisition unit, 12 image processing unit, 13 display control unit, 14 ship maneuvering determination unit, 19 target management DB, 20 control unit, 21 scene determination unit, 22 preprocessing unit, 23 daytime image recognition unit, 24 nighttime image recognition unit, 25 sunrise/sunset image recognition unit, 31 determination model holding unit, 33 daytime model holding unit, 34 nighttime model holding unit, 35 sunrise/sunset model holding unit, 100 target monitoring system

Abstract

[Problem] To provide a target monitoring device that can perform image recognition suited to the scene. [Solution] This target monitoring device comprises: an image acquisition unit that acquires an image, including a target at sea, that was captured by a camera installed on a ship; a scene determination unit that determines whether the image was captured during the day or at night; a daytime image recognition unit that detects the target included in the image using a daytime trained model if the image is determined to have been captured during the day; and a nighttime image recognition unit that detects the target included in the image using a nighttime trained model if the image is determined to have been captured at night.

Description

Target monitoring device, ship maneuvering system, target monitoring method, and program
 The present invention relates to a target monitoring device, a ship maneuvering system, a target monitoring method, and a program.
 Patent Document 1 discloses a technique in which a camera device is installed to sequentially acquire image data of the water area surrounding a construction vessel, and the image data is sequentially input to a computing device. The computing device stores in advance a prediction model trained by machine learning, using image data of multiple types of ships (including other construction ships) and image data of non-ship objects as training data, and, based on the prediction model and the image data of the surrounding waters, sequentially determines whether a vessel of a preset monitored type is present in that image data.
Japanese Unexamined Patent Application Publication No. 2021-187282
 When building an AI image recognition system that monitors targets on the sea around the clock, creating a single trained model capable of recognizing the various images captured in various scenes, such as daytime and nighttime, incurs a high learning cost, and improving its accuracy is difficult.
 The present invention has been made in view of the above problem, and its main purpose is to provide a target monitoring device, a ship maneuvering system, a target monitoring method, and a program capable of image recognition suited to the scene.
 To solve the above problem, a target monitoring apparatus according to one aspect of the present invention includes: an image acquisition unit that acquires an image containing a target on the sea captured by a camera installed on a ship; a scene determination unit that determines whether the image was captured in the daytime or at night; a daytime image recognition unit that detects the target included in the image using a trained model for daytime when the image is determined to have been captured in the daytime; and a nighttime image recognition unit that detects the target included in the image using a trained model for nighttime when the image is determined to have been captured at night. This makes image recognition suited to the scene possible.
 In the above aspect, the scene determination unit may use a trained model for scene determination to determine whether the image was captured in the daytime or at night. Using a trained model for scene determination makes it possible to improve the accuracy of scene determination.
 In the above aspect, the nighttime image recognition unit may detect, as the target, a target candidate that has been detected by the trained model for nighttime and that has a brightness of a predetermined level or higher. A trained model for nighttime alone may fail to detect a single, isolated light as a target, but combining it with the rule that a candidate must have a predetermined brightness or higher makes it possible to improve the accuracy of target detection even in images captured at night.
 In the above aspect, the daytime image recognition unit may detect the target included in the image and determine the type of the target using the trained model for daytime. This makes it possible not only to detect the target but also to determine its type.
 In the above aspect, the nighttime image recognition unit may use the trained model for nighttime to detect the target included in the image and to determine whether an object is a target or not. Since it is difficult to distinguish target types in images captured at night, determining only whether an object is a target, rather than its type, makes it possible to use a trained model well suited to nighttime images.
 In the above aspect, the scene determination unit may further determine whether the image was captured during a sunrise or sunset time period, and a sunrise/sunset image recognition unit may further be provided that detects the target included in the image using a trained model for sunrise/sunset when the image is determined to have been captured during such a period. Because reflection from the sea surface is strong around sunrise and sunset, the trained models for daytime or nighttime may not detect targets accurately enough; using a dedicated trained model for sunrise/sunset makes it possible to improve the accuracy of target detection even for images captured during these time periods.
 In the above aspect, the sunrise/sunset image recognition unit may detect the target included in the image and determine the type of the target using the trained model for sunrise/sunset. This makes it possible not only to detect the target but also to determine its type.
 In the above aspect, the scene determination unit may further determine whether the image was captured against backlight, and a preprocessing unit may further be provided that performs gamma correction or contrast adjustment on the image before it is input to the trained model when the image is determined to have been captured against backlight. This makes it possible to improve the accuracy of target detection even for an image captured against backlight.
 In the above aspect, the scene determination unit may further determine whether the image was captured in fog, and a preprocessing unit may further be provided that performs image sharpening processing on the image before it is input to the trained model when the image is determined to have been captured in fog. This makes it possible to improve the accuracy of target detection even for an image captured in fog.
 In the above aspect, the trained model for daytime may be a trained model generated by machine learning using learning images that include images captured in the daytime as input data, and the in-image positions and types of the targets included in the learning images as teacher data. This makes it possible to use a trained model suited to images captured in the daytime.
 In the above aspect, the trained model for nighttime may be a trained model generated by machine learning using learning images that include images captured at night as input data, and the in-image positions of the targets included in the learning images as teacher data. This makes it possible to use a trained model suited to images captured at night.
 In the above aspect, the trained model for sunrise/sunset may be a trained model generated by machine learning using learning images that include images captured during sunrise or sunset time periods as input data, and the in-image positions and types of the targets included in the learning images as teacher data. This makes it possible to use a trained model suited to images captured during sunrise or sunset time periods.
 A marine vessel maneuvering system according to another aspect of the present invention includes the above target monitoring device, a maneuvering determination unit that makes maneuvering decisions based on the targets detected by the target monitoring device, and a maneuvering control unit that controls the maneuvering of the ship based on those decisions. This makes it possible to decide and control ship maneuvering based on targets detected by image recognition suited to the scene.
 A target monitoring method according to another aspect of the present invention acquires an image containing a target on the sea captured by a camera installed on a ship, determines whether the image was captured in the daytime or at night, detects the target included in the image using a trained model for daytime when the image is determined to have been captured in the daytime, and detects the target included in the image using a trained model for nighttime when the image is determined to have been captured at night. This makes image recognition suited to the scene possible.
 A program according to another aspect of the present invention causes a computer to: acquire an image containing a target on the sea captured by a camera installed on a ship; determine whether the image was captured in the daytime or at night; detect the target included in the image using a trained model for daytime when the image is determined to have been captured in the daytime; and detect the target included in the image using a trained model for nighttime when the image is determined to have been captured at night. This makes image recognition suited to the scene possible.
FIG. 1 is a diagram showing a configuration example of a target monitoring system. FIG. 2 is a diagram showing a configuration example of a target monitoring device. FIG. 3 is a diagram showing an example of the contents of a target management DB. FIG. 4 is a diagram showing a configuration example of an image processing unit. FIG. 5 is a diagram showing an example of recognition by a trained model for scene determination. FIG. 6 is a diagram showing an example of recognition by a trained model for daytime. FIG. 7 is a diagram showing an example of recognition by a trained model for nighttime. FIG. 8 is a diagram showing an example of recognition by a trained model for sunrise/sunset. FIG. 9 is a diagram showing a procedure example of a target monitoring method. FIG. 10 is a diagram showing a procedure example of preprocessing. FIG. 11 is a diagram showing a procedure example of image recognition processing.
 Hereinafter, embodiments of the present invention will be described with reference to the drawings.
 FIG. 1 is a block diagram showing a configuration example of the target monitoring system 100. The target monitoring system 100 is a system mounted on a ship. In the following description, the ship equipped with the target monitoring system 100 is called the "own ship", and other ships are called "other ships".
 The target monitoring system 100 includes a target monitoring device 1, a display unit 2, a radar 3, an AIS 4, a camera 5, a GNSS receiver 6, a gyrocompass 7, an ECDIS 8, a wireless communication unit 9, and a ship maneuvering control unit 10. These devices are connected to a network N such as a LAN and are capable of network communication with each other.
 The target monitoring device 1 is a computer including a CPU, RAM, ROM, non-volatile memory, an input/output interface, and the like. The CPU of the target monitoring device 1 executes information processing according to a program loaded from the ROM or non-volatile memory into the RAM.
 The program may be supplied via an information storage medium such as an optical disc or a memory card, or via a communication network such as the Internet or a LAN.
 The display unit 2 displays the display image generated by the target monitoring device 1. The display unit 2 also displays radar images, camera images, electronic charts, and the like.
 The display unit 2 is, for example, a display device with a touch sensor, a so-called touch panel. The touch sensor detects a position within the screen indicated by a user's finger or the like. The designated position may also be input with a trackball or the like.
 The radar 3 emits radio waves around the own ship, receives the reflected waves, and generates echo data based on the received signals. The radar 3 also identifies targets from the echo data and generates TT data (target tracking data) representing the position and speed of each target.
 The AIS (Automatic Identification System) 4 receives AIS data from other ships around the own ship or from shore-based control. A VDES (VHF Data Exchange System) may be used instead of AIS. The AIS data includes the identification codes, ship names, positions, courses, speeds, ship types, hull lengths, destinations, and the like of other ships.
 The camera 5 is a digital camera that captures images of the outside from the own ship and generates image data. The camera 5 is installed, for example, on the bridge of the own ship facing the heading. The camera 5 may be a camera having pan/tilt and optical zoom functions, a so-called PTZ camera.
 The GNSS receiver 6 detects the position of the own ship based on radio waves received from the GNSS (Global Navigation Satellite System). The gyrocompass 7 detects the heading of the own ship. A GPS compass may be used instead of the gyrocompass.
 The ECDIS (Electronic Chart Display and Information System) 8 acquires the own ship's position from the GNSS receiver 6 and displays it on the electronic chart. The ECDIS 8 also displays the planned route of the own ship on the electronic chart. A GNSS plotter may be used instead of the ECDIS.
 The wireless communication unit 9 includes various radio equipment for communicating with other ships or shore-based control, for example radio equipment for the ultra high frequency, very high frequency, medium/short wave, or short wave bands.
 The ship maneuvering control unit 10 is a control device for realizing automatic ship maneuvering and controls the steering gear of the own ship. The ship maneuvering control unit 10 may also control the engine of the own ship.
 In this embodiment, the target monitoring device 1 is an independent device, but it is not limited to this and may be integrated with another device such as the ECDIS 8. That is, the functions of the target monitoring device 1 may be realized by another device.
 In this embodiment, the target monitoring device 1 is mounted on the own ship and used to monitor targets such as other ships around the own ship, but the application is not limited to this. For example, the target monitoring device 1 may be installed at a shore-based control center and used to monitor vessels in the controlled sea area.
 図2は、物標監視装置1の構成例を示すブロック図である。物標監視装置1の制御部20は、画像取得部11、画像処理部12、表示制御部13、及び操船判断部14を備えている。これらの機能部は、制御部20がプログラムに従って情報処理を実行することによって実現される。なお、操船判断部14は、物標監視装置1の外部にあってもよい。 FIG. 2 is a block diagram showing a configuration example of the target monitoring device 1. As shown in FIG. The control unit 20 of the target monitoring device 1 includes an image acquisition unit 11 , an image processing unit 12 , a display control unit 13 and a ship maneuvering determination unit 14 . These functional units are implemented by the control unit 20 executing information processing according to programs. Note that the ship maneuvering determination unit 14 may be located outside the target monitoring device 1 .
 物標監視装置1の制御部20は、物標管理DB(データベース)19をさらに備えている。物標管理DB19は、物標監視装置1のメモリに設けられる。 The control unit 20 of the target monitoring device 1 further includes a target management DB (database) 19. A target management DB 19 is provided in the memory of the target monitoring device 1 .
 画像取得部11は、自船に設置されたカメラ5により撮像された他船等の海上の物標を含む画像を取得する。画像取得部11は、カメラ5から時系列の画像を逐次取得し、画像処理部12に逐次提供する。時系列の画像は、例えば動画像データに含まれる静止画像(フレーム)である。 The image acquisition unit 11 acquires images including marine targets such as other ships captured by the camera 5 installed on the own ship. The image acquisition unit 11 sequentially acquires time-series images from the camera 5 and sequentially provides the images to the image processing unit 12 . Time-series images are, for example, still images (frames) included in moving image data.
 画像処理部12は、画像取得部11により取得された画像に対して画像認識等の所定の画像処理を行い、画像から認識された物標の物標データを生成し、物標管理DB19に登録する。画像処理部12の詳細については後述する。 The image processing unit 12 performs predetermined image processing such as image recognition on the image acquired by the image acquisition unit 11, generates target data of the target recognized from the image, and registers the data in the target management DB 19. do. Details of the image processing unit 12 will be described later.
 物標管理DB19は、画像処理部12により生成された物標データを管理するデータベースである。物標管理DB19には、画像処理部12により生成された物標データだけでなく、レーダー3により生成されたTTデータ又はAIS4により受信されたAISデータ等の他の物標データが統合されてもよい。 The target management DB 19 is a database that manages target data generated by the image processing unit 12 . In the target management DB 19, not only target data generated by the image processing unit 12 but also other target data such as TT data generated by the radar 3 or AIS data received by the AIS 4 may be integrated. good.
 表示制御部13は、物標管理DB19に登録された物標データに基づいて、物標を表すオブジェクトを含む表示用画像を生成し、表示部2に出力する。表示用画像は、例えばレーダー画像、電子海図、又はそれらを合成した画像であり、物標を表すオブジェクトは、物標の実際の位置に対応する画像内の位置に配置される。 The display control unit 13 generates a display image including an object representing the target based on the target data registered in the target management DB 19 and outputs the display image to the display unit 2 . The display image is, for example, a radar image, an electronic chart, or a composite image thereof, and the object representing the target is placed at a position in the image corresponding to the actual position of the target.
 操船判断部14は、物標管理DB19に登録された物標データに基づいて操船判断を行い、物標を避ける必要があると判断した場合に、操船制御部10に避航操船を行わせる。具体的には、操船制御部10は、避航操船アルゴリズムにより物標を避けるための避航航路を算出し、自船が避航航路に従うように操舵機又はエンジン等を制御する。 The ship maneuvering determination unit 14 makes ship maneuvering decisions based on the target data registered in the target management DB 19, and causes the ship maneuvering control unit 10 to perform avoidance maneuvers when it is determined that it is necessary to avoid the target. Specifically, the ship maneuvering control unit 10 calculates a avoidance route for avoiding the target using a avoidance maneuvering algorithm, and controls the steering gear or the engine so that the own ship follows the avoidance route.
 図3は、物標管理DB19の内容例を示す図である。物標管理DB19は、例えば「物標ID」「種類」、「画像内位置」、「実位置」、「速度」、及び「針路」等のフィールドを含んでいる。物標管理DB19は、その他に、例えば物標の大きさ及び検出からの経過時間などをさらに含んでもよい。 FIG. 3 is a diagram showing an example of the contents of the target management DB 19. FIG. The target management DB 19 includes fields such as "target ID", "type", "position in image", "actual position", "speed", and "course". The target management DB 19 may further include, for example, the size of the target and the elapsed time since detection.
"Type" represents the type of target determined from the image captured by the camera 5. The target type is, for example, a ship type such as tanker, pleasure boat, or fishing boat, and may further include offshore installations such as buoys.
"Position in image" represents the position at which the target appears in the image. "Actual position" represents the position of the target in real space, calculated from its position in the image: the relative position of the target with respect to the own ship is first derived from the position in the image and then converted into an absolute position using the own ship's position. The position may also be calculated by integrating, or may be replaced by, the relative position of the target detected by the radar 3 or the actual position of the target received by the AIS 4. "Speed" and "course" represent the target's speed and course, calculated from the change in the target's actual position over time.
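A minimal sketch of this two-step conversion (relative position from the image, then absolute position from the own ship's position) under simplifying assumptions: a known horizontal field of view, a known camera height above the waterline, the horizon at mid-image, and a flat sea. None of these parameters or names come from the disclosure.

```python
import math

def image_to_absolute(px, py, img_w, img_h, hfov_deg, cam_height_m,
                      cam_heading_deg, own_lat, own_lon):
    """Convert a target's pixel position (px, py) to an absolute (lat, lon)."""
    # Relative bearing from the pixel column (0 at image center).
    rel_bearing = (px / img_w - 0.5) * hfov_deg
    bearing = math.radians((cam_heading_deg + rel_bearing) % 360.0)

    # Depression angle from the pixel row below the horizon (assumed at img_h / 2),
    # scaled by the angle subtended per pixel (square pixels assumed).
    deg_per_px = hfov_deg / img_w
    depression = math.radians(max(py - img_h / 2, 1) * deg_per_px)
    range_m = cam_height_m / math.tan(depression)  # flat-sea range estimate

    # Offset the own ship position (small-distance approximation).
    dlat = (range_m * math.cos(bearing)) / 111_320.0
    dlon = (range_m * math.sin(bearing)) / (111_320.0 * math.cos(math.radians(own_lat)))
    return own_lat + dlat, own_lon + dlon
```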
In addition to target data for targets recognized from images captured by the camera 5, the target management DB 19 may also register target data for targets recognized from images captured by a separately installed PTZ (pan-tilt-zoom) camera, fixed-point camera, 360-degree camera, or infrared camera.
FIG. 4 is a diagram showing a configuration example of the image processing unit 12. The image processing unit 12 includes a scene determination unit 21, a preprocessing unit 22, a daytime image recognition unit 23, a nighttime image recognition unit 24, and a sunrise/sunset image recognition unit 25. These functional units are implemented by the control unit 20 executing information processing according to a program.
The image processing unit 12 further includes a determination model holding unit 31, a daytime model holding unit 33, a nighttime model holding unit 34, and a sunrise/sunset model holding unit 35. These storage units are provided in the memory of the target monitoring device 1.
The scene determination unit 21 uses the trained model for scene determination held in the determination model holding unit 31 to determine whether the image acquired by the image acquisition unit 11 was captured in the daytime, at night, or during the sunrise or sunset time zone. The scene determination unit 21 further determines whether the image was captured against backlight and whether it was captured in fog.
As shown in FIG. 5, when an image P captured by the camera 5 mounted on the own ship is input to the trained model DM for scene determination, a determination result representing the scene in which the image P was captured is output from the model.
The trained model DM for scene determination is, for example, an image classification model such as a convolutional neural network (CNN). It is generated by machine learning using training images as input data and the class associated with each training image as teacher data.
The training images include images of the sea captured in the daytime, at night, and during the sunrise or sunset time zone (hereinafter also referred to as "sunrise/sunset"), as well as images of the sea captured against backlight and in fog. The training images may also include images of the sea generated by a generative adversarial network (GAN) or by 3-dimensional computer graphics (3DCG).
The classes associated with the training images include "daytime", "nighttime", "sunrise/sunset", "backlight", and "fog".
The output layer of the trained model DM for scene determination has one element per class. The elements corresponding to "daytime", "nighttime", and "sunrise/sunset" are configured, for example with a softmax function, so that their probabilities sum to 1, and the scene determination unit 21 applies whichever of these three classes has the highest probability.
That is, the scene determination unit 21 determines that the image P was captured in the daytime when "daytime" has the highest probability, at night when "nighttime" has the highest probability, and at sunrise/sunset when "sunrise/sunset" has the highest probability.
The elements corresponding to "backlight" and "fog" are configured, for example with a sigmoid function, to each output a probability between 0 and 1. The scene determination unit 21 determines that the image P was captured against backlight when the "backlight" probability is at or above a threshold, and that it was captured in fog when the "fog" probability is at or above a threshold.
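As an illustrative sketch (not part of the disclosure), the decision logic above can be written as follows; the function and variable names are assumptions, and the 0.5 thresholds are arbitrary example values.

```python
import numpy as np

TIME_CLASSES = ["daytime", "nighttime", "sunrise/sunset"]

def decide_scene(time_probs, backlight_prob, fog_prob,
                 backlight_thresh=0.5, fog_thresh=0.5):
    """Combine the two kinds of output elements of the scene-determination model.

    time_probs: softmax probabilities for daytime/nighttime/sunrise-sunset (sum to 1).
    backlight_prob, fog_prob: independent sigmoid outputs in [0, 1].
    """
    scene = TIME_CLASSES[int(np.argmax(time_probs))]  # highest-probability time class
    flags = {
        "backlight": backlight_prob >= backlight_thresh,
        "fog": fog_prob >= fog_thresh,
    }
    return scene, flags

# Example: a dusk image that is also judged backlit.
scene, flags = decide_scene(np.array([0.2, 0.1, 0.7]), 0.62, 0.08)
# -> ("sunrise/sunset", {"backlight": True, "fog": False})
```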
The determination is not limited to this; the scene determination unit 21 may instead determine whether the image P was captured in the daytime, at night, or at sunrise/sunset according to the sunrise and sunset times calculated from the capture time of the image P and the current position of the own ship.
The sunrise time zone is a period of predetermined length that includes the sunrise time, and the sunset time zone is a period of predetermined length that includes the sunset time. Daytime is the period from sunrise to sunset excluding these two time zones, and nighttime is the period from sunset to sunrise excluding them.
Further, the scene determination unit 21 may determine whether the image P was captured in the daytime, at night, or at sunrise/sunset according to the ambient brightness detected by an illuminance sensor provided on the own ship.
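A minimal sketch of the clock-based alternative described above, assuming the sunrise and sunset times for the own ship's position have already been computed (for example with an astronomical library, which is outside this sketch); the 30-minute window length is an assumed value for the "predetermined length".

```python
from datetime import datetime, timedelta

def scene_from_clock(capture_time: datetime, sunrise: datetime, sunset: datetime,
                     window: timedelta = timedelta(minutes=30)) -> str:
    """Classify the scene from the capture time and the computed sunrise/sunset times."""
    # Sunrise/sunset time zones: windows of predetermined length around each event.
    if abs(capture_time - sunrise) <= window or abs(capture_time - sunset) <= window:
        return "sunrise/sunset"
    # Daytime: from sunrise to sunset, excluding the two windows above.
    if sunrise + window < capture_time < sunset - window:
        return "daytime"
    # Nighttime: everything else between sunset and sunrise.
    return "nighttime"
```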
When the scene determination unit 21 determines that the image P was captured against backlight, the preprocessing unit 22 performs gamma correction or contrast adjustment on the image P, processing it into an image suitable for input to the subsequent trained model for daytime, nighttime, or sunrise/sunset.
Similarly, when the scene determination unit 21 determines that the image P was captured in fog, the preprocessing unit 22 performs image sharpening processing such as defogging on the image P, processing it into an image suitable for input to the subsequent trained model for daytime, nighttime, or sunrise/sunset.
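A rough sketch of both preprocessing paths using OpenCV; the gamma value is an assumed example, and CLAHE is used here only as a simple stand-in for a dedicated defogging algorithm, which the patent does not specify.

```python
import cv2
import numpy as np

def correct_backlight(img_bgr, gamma=1.8):
    """Gamma correction to lift shadows in a backlit image (gamma value assumed)."""
    lut = ((np.arange(256) / 255.0) ** (1.0 / gamma) * 255).astype(np.uint8)
    return cv2.LUT(img_bgr, lut)

def sharpen_fog(img_bgr):
    """Simple stand-in for defogging: contrast-limited histogram equalization
    on the luminance channel (a dedicated dehazing method could replace this)."""
    lab = cv2.cvtColor(img_bgr, cv2.COLOR_BGR2LAB)
    l, a, b = cv2.split(lab)
    clahe = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(8, 8))
    return cv2.cvtColor(cv2.merge((clahe.apply(l), a, b)), cv2.COLOR_LAB2BGR)
```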
When the scene determination unit 21 determines that the image P was captured in the daytime, the daytime image recognition unit 23 detects the targets included in the image P using the trained model for daytime held in the daytime model holding unit 33.
The daytime image recognition unit 23 also determines the type of each target detected from the image P. The target type is, for example, a ship type such as tanker, pleasure boat, or fishing boat, and may also be an offshore installation such as a buoy.
The trained model for daytime is, for example, an object detection model such as SSD (Single Shot MultiBox Detector) or YOLO (You Only Look Once), which outputs bounding boxes surrounding the targets included in the image. It is not limited to this and may instead be a region segmentation model such as semantic segmentation or instance segmentation.
The trained model for daytime is generated by machine learning using training images, including images of the sea captured in the daytime, as input data and the in-image positions and types of the targets included in those images as teacher data. The training images may include daytime images of the sea generated by a generative adversarial network (GAN) or by 3DCG.
The in-image position of a target is specified by the coordinates of a rectangular region containing the target in the image P. Each in-image position is associated with a class representing the target type, such as "tanker", "pleasure boat", "fishing boat", or "buoy", together with an estimation confidence.
FIG. 6 is a diagram showing an example of recognition of an image DP captured in the daytime by the trained model for daytime. As shown in the figure, a target SH such as another ship included in the image DP is surrounded by a rectangular bounding box BB, to which a label CF describing the target type and the estimation confidence is attached.
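SSD and YOLO are named only as examples of object detection models, so purely as an illustration the sketch below reads out bounding boxes BB and labels CF in the style of FIG. 6 using the off-the-shelf ultralytics YOLO package; the weight file and its generic class names are placeholders, not the trained daytime model (tanker, pleasure boat, etc.) described here.

```python
from ultralytics import YOLO  # one possible off-the-shelf detector

model = YOLO("yolov8n.pt")  # hypothetical stand-in for the trained daytime model
results = model("daytime_frame.jpg")[0]

for box, conf, cls in zip(results.boxes.xyxy, results.boxes.conf, results.boxes.cls):
    x1, y1, x2, y2 = box.tolist()                   # bounding box BB
    label = f"{model.names[int(cls)]} {conf:.2f}"   # label CF: type + confidence
    print(label, (x1, y1, x2, y2))
```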
When the scene determination unit 21 determines that the image P was captured at night, the nighttime image recognition unit 24 detects the targets included in the image P using the trained model for nighttime held in the nighttime model holding unit 34.
Unlike the daytime case, the nighttime image recognition unit 24 determines not the type of each target detected from the image P but simply whether it is a target. That is, the nighttime image recognition unit 24 determines that an object is a target when the estimation confidence output from the trained model for nighttime is at or above a threshold.
Like the trained model for daytime, the trained model for nighttime may be an object detection model such as SSD or YOLO, or a region segmentation model such as semantic segmentation or instance segmentation.
The trained model for nighttime is generated by machine learning using training images, including images of the sea captured at night, as input data and the in-image positions of the targets included in those images as teacher data. The trained model for nighttime also learns the arrangement patterns of lights as a parameter. The training images may include nighttime images of the sea generated by a GAN or by 3DCG. Each in-image position is associated with a class indicating a target.
FIG. 7 is a diagram showing an example of recognition of an image NP captured at night by the trained model for nighttime. As shown in the figure, little is visible in the image NP other than the lights L emitted by targets such as other ships. When the trained model for nighttime is applied to such an image NP, the light L of a target is surrounded by a rectangular bounding box BB, to which a label CF indicating that it is a target, together with the estimation confidence, is attached.
Furthermore, the nighttime image recognition unit 24 detects as targets those target candidates that are detected by the trained model for nighttime and that have a luminance at or above a predetermined level. Applying the trained model for nighttime alone might fail to detect a single isolated light as a target, but combining it with the rule that a candidate must have a luminance at or above a predetermined level improves target detection accuracy even for images NP captured at night.
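A sketch of the luminance rule combined with the model output; the box format, the use of peak brightness as the luminance measure, and the threshold value are all assumptions.

```python
import cv2
import numpy as np

def filter_night_candidates(img_bgr, candidates, min_luminance=180):
    """Keep only candidates whose region is bright enough to be a light.

    candidates: iterable of (x, y, w, h, confidence) boxes from the nighttime
    model. min_luminance (0-255) is an assumed threshold.
    """
    gray = cv2.cvtColor(img_bgr, cv2.COLOR_BGR2GRAY)
    targets = []
    for (x, y, w, h, conf) in candidates:
        patch = gray[y:y + h, x:x + w]
        if patch.size and patch.max() >= min_luminance:  # peak-brightness test
            targets.append((x, y, w, h, conf))
    return targets
```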
When the scene determination unit 21 determines that the image was captured at sunrise/sunset, the sunrise/sunset image recognition unit 25 detects the targets included in the image P using the trained model for sunrise/sunset held in the sunrise/sunset model holding unit 35. Like the daytime image recognition unit 23, the sunrise/sunset image recognition unit 25 also determines the type of each target detected from the image P.
Like the trained models for daytime and nighttime, the trained model for sunrise/sunset may be an object detection model such as SSD or YOLO, or a region segmentation model such as semantic segmentation or instance segmentation.
The trained model for sunrise/sunset is generated by machine learning using training images, including images of the sea captured at sunrise/sunset, as input data and the in-image positions and types of the targets included in those images as teacher data. The training images may include sunrise/sunset images of the sea generated by a GAN or by 3DCG.
FIG. 8 is a diagram showing an example of recognition of an image SP captured at sunrise/sunset by the trained model for sunrise/sunset. As shown in the figure, a target SH such as another ship included in the image SP is surrounded by a rectangular bounding box BB, to which a label CF describing the target type and the estimation confidence is attached.
Because an image SP captured at sunrise/sunset contains reflections from the sea surface, the trained models for daytime or nighttime might not detect targets with sufficient accuracy; preparing a separate trained model for sunrise/sunset improves target detection accuracy even for such images.
FIG. 9 is a diagram showing a procedure example of the target monitoring method implemented in the target monitoring system 100, FIG. 10 is a diagram showing a procedure example of the preprocessing routine, and FIG. 11 is a diagram showing a procedure example of the image recognition routine. The control unit 20 of the target monitoring device 1 executes the information processing shown in these figures according to a program.
First, the control unit 20 acquires the image P generated by the camera 5 (S11, processing as the image acquisition unit 11).
Next, the control unit 20 uses the trained model for scene determination to determine whether the acquired image P was captured in the daytime, at night, or during the sunrise or sunset time zone, and further whether it was captured against backlight or in fog (S12, processing as the scene determination unit 21).
Next, the control unit 20 executes the preprocessing routine (S13, processing as the preprocessing unit 22).
As shown in FIG. 10, in the preprocessing routine, when it is determined that the image P was captured against backlight (S21: YES), the control unit 20 performs gamma correction or contrast adjustment on the image P (S22).
Likewise, when it is determined that the image P was captured in fog (S23: YES), the control unit 20 performs image sharpening processing such as defogging on the image P (S24).
The preprocessing routine then ends, and the process returns to the main routine shown in FIG. 9.
Next, the control unit 20 executes the image recognition routine (S14).
As shown in FIG. 11, in the image recognition routine, when it is determined that the image P was captured in the daytime (S31: YES), the control unit 20 detects the targets included in the image P and determines their types using the trained model for daytime (S32, processing as the daytime image recognition unit 23).
When it is determined that the image P was captured at night (S33: YES), the control unit 20 detects target candidates included in the image P using the trained model for nighttime and extracts, as targets, those candidates having a luminance at or above a predetermined level (S34, S35, processing as the nighttime image recognition unit 24).
When it is determined that the image P was captured at sunrise/sunset (S36: YES), the control unit 20 detects the targets included in the image P and determines their types using the trained model for sunrise/sunset (S37, processing as the sunrise/sunset image recognition unit 25).
The image recognition routine then ends, and the main routine shown in FIG. 9 also ends. Thereafter, the control unit 20 generates target data for the targets detected from the image P and registers it in the target management DB 19.
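Putting the routines of FIGS. 9 to 11 together, one frame might flow through the steps as sketched below; the model wrapper objects and their methods are hypothetical, and the helper functions reuse the sketches shown earlier.

```python
def process_frame(img, scene_model, day_model, night_model, sun_model):
    """End-to-end sketch of S11-S37 for one frame (hypothetical wrappers)."""
    scene, flags = scene_model.judge(img)             # S12: scene determination

    if flags["backlight"]:                            # S21-S22: backlight preprocessing
        img = correct_backlight(img)
    if flags["fog"]:                                  # S23-S24: fog preprocessing
        img = sharpen_fog(img)

    if scene == "daytime":                            # S31-S32: detect + classify type
        return day_model.detect_and_classify(img)
    if scene == "nighttime":                          # S33-S35: detect + luminance rule
        candidates = night_model.detect(img)
        return filter_night_candidates(img, candidates)
    return sun_model.detect_and_classify(img)         # S36-S37: sunrise/sunset model
```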
Although embodiments of the present invention have been described above, the present invention is not limited to the embodiments described above, and it goes without saying that various modifications can be made by those skilled in the art.
1 target monitoring device, 2 display unit, 3 radar, 4 AIS, 5 camera, 6 GNSS receiver, 7 gyrocompass, 8 ECDIS, 9 wireless communication unit, 10 ship maneuvering control unit, 11 image acquisition unit, 12 image processing unit, 13 display control unit, 14 ship maneuvering determination unit, 19 target management DB, 20 control unit, 21 scene determination unit, 22 preprocessing unit, 23 daytime image recognition unit, 24 nighttime image recognition unit, 25 sunrise/sunset image recognition unit, 31 determination model holding unit, 33 daytime model holding unit, 34 nighttime model holding unit, 35 sunrise/sunset model holding unit, 100 target monitoring system

Claims (15)

  1.  A target monitoring device comprising:
      an image acquisition unit that acquires an image including a target at sea captured by a camera installed on a ship;
      a scene determination unit that determines whether the image was captured in the daytime or at night;
      a daytime image recognition unit that, when the image is determined to have been captured in the daytime, detects the target included in the image using a trained model for daytime; and
      a nighttime image recognition unit that, when the image is determined to have been captured at night, detects the target included in the image using a trained model for nighttime.
  2.  The target monitoring device according to claim 1, wherein the scene determination unit uses a trained model for scene determination to determine whether the image was captured in the daytime or at night.
  3.  The target monitoring device according to claim 1 or 2, wherein the nighttime image recognition unit detects, as the target, a target candidate that is detected by the trained model for nighttime and has a luminance at or above a predetermined level.
  4.  The target monitoring device according to any one of claims 1 to 3, wherein the daytime image recognition unit uses the trained model for daytime to detect the target included in the image and to determine the type of the target.
  5.  The target monitoring device according to any one of claims 1 to 4, wherein the nighttime image recognition unit uses the trained model for nighttime to detect the target included in the image and to determine whether it is a target.
  6.  The target monitoring device according to any one of claims 1 to 5, wherein the scene determination unit further determines whether the image was captured during a sunrise or sunset time zone, the device further comprising a sunrise/sunset image recognition unit that, when the image is determined to have been captured during a sunrise or sunset time zone, detects the target included in the image using a trained model for sunrise/sunset.
  7.  The target monitoring device according to claim 6, wherein the sunrise/sunset image recognition unit uses the trained model for sunrise/sunset to detect the target included in the image and to determine the type of the target.
  8.  The target monitoring device according to any one of claims 1 to 7, wherein the scene determination unit further determines whether the image was captured against backlight, the device further comprising a preprocessing unit that performs gamma correction or contrast adjustment on the image before it is input to the trained model when the image is determined to have been captured against backlight.
  9.  The target monitoring device according to any one of claims 1 to 8, wherein the scene determination unit further determines whether the image was captured in fog, the device further comprising a preprocessing unit that performs image sharpening processing on the image before it is input to the trained model when the image is determined to have been captured in fog.
  10.  The target monitoring device according to any one of claims 1 to 9, wherein the trained model for daytime is a trained model generated by machine learning using training images, including images captured in the daytime, as input data and the in-image positions and types of the targets included in the training images as teacher data.
  11.  The target monitoring device according to any one of claims 1 to 10, wherein the trained model for nighttime is a trained model generated by machine learning using training images, including images captured at night, as input data and the in-image positions of the targets included in the training images as teacher data.
  12.  The target monitoring device according to claim 6 or 7, wherein the trained model for sunrise/sunset is a trained model generated by machine learning using training images, including images captured during a sunrise or sunset time zone, as input data and the in-image positions and types of the targets included in the training images as teacher data.
  13.  A ship maneuvering system comprising:
      the target monitoring device according to any one of claims 1 to 12;
      a ship maneuvering determination unit that makes maneuvering decisions based on the target detected by the target monitoring device; and
      a ship maneuvering control unit that controls the maneuvering of the ship based on the maneuvering decisions.
  14.  A target monitoring method comprising:
      acquiring an image including a target at sea captured by a camera installed on a ship;
      determining whether the image was captured in the daytime or at night;
      detecting, when the image is determined to have been captured in the daytime, the target included in the image using a trained model for daytime; and
      detecting, when the image is determined to have been captured at night, the target included in the image using a trained model for nighttime.
  15.  A program causing a computer to execute:
      acquiring an image including a target at sea captured by a camera installed on a ship;
      determining whether the image was captured in the daytime or at night;
      detecting, when the image is determined to have been captured in the daytime, the target included in the image using a trained model for daytime; and
      detecting, when the image is determined to have been captured at night, the target included in the image using a trained model for nighttime.
Applications Claiming Priority (2)

JP2022027914A (published as JP2023124259A) — priority date: 2022-02-25; filing date: 2022-02-25 — Target monitoring device, ship steering system, target monitoring method, and program
JP2022-027914 — priority date: 2022-02-25



