WO2023100389A1 - Information processing method, information processing device, and computer program - Google Patents

Information processing method, information processing device, and computer program

Info

Publication number
WO2023100389A1
Authority
WO
WIPO (PCT)
Prior art keywords
water
information processing
image
camera
ship
Prior art date
Application number
PCT/JP2022/015560
Other languages
French (fr)
Japanese (ja)
Inventor
奈緒美 藤澤
勝幸 柳
正也 能瀬
ミン トラン
悠太 高橋
博紀 村上
Original Assignee
古野電気株式会社
Priority date
Filing date
Publication date
Application filed by 古野電気株式会社
Publication of WO2023100389A1

Classifications

    • GPHYSICS
    • G08SIGNALLING
    • G08GTRAFFIC CONTROL SYSTEMS
    • G08G3/00Traffic control systems for marine craft
    • G08G3/02Anti-collision systems

Definitions

  • the present invention relates to an information processing method, an information processing device, and a computer program for identifying water objects around a ship.
  • for a ship to navigate, it is necessary to identify water objects, such as other ships, existing in its surroundings, and techniques for identifying water objects have conventionally been developed. Patent Literature 1 (U.S. Patent Application Publication No. 2020/0018848) discloses a technique that facilitates identification of water objects by displaying an image of the exterior of the ship in association with a radar determination result.
  • conventional techniques for identifying water objects are still inadequate compared with visual identification by crew members. For example, radar-based techniques suffer from problems such as false echoes and water objects that cannot be identified. A technique for identifying water objects more effectively is therefore desired.
  • the present invention has been made in view of such circumstances, and its object is to provide an information processing method, an information processing device, and a computer program that enable effective identification of water objects.
  • the information processing method according to the present invention acquires captured images of the surroundings of a ship taken by a plurality of photographing devices of different types installed on the ship; inputs the captured image acquired from each of the plurality of photographing devices into a trained model that, when a captured image is input, outputs a determination result for the water objects included in the image; acquires the water object determination results output by the trained model; and outputs the determination result corresponding to each of the plurality of photographing devices.
  • the information processing device according to the present invention includes: an image acquisition unit that acquires captured images of the surroundings of a ship taken by a plurality of photographing devices of different types; a determination result acquisition unit that inputs the captured image acquired from each of the plurality of photographing devices into a trained model that outputs a determination result for the water objects included in an input captured image, and acquires the water object determination results output by the trained model; and an output unit that outputs the determination result corresponding to each of the plurality of photographing devices.
  • the computer program according to the present invention causes a computer to execute processing that: acquires captured images of the surroundings of a ship taken by a plurality of photographing devices of different types; inputs the captured image acquired from each of the plurality of photographing devices into a trained model that, when a captured image is input, outputs a determination result for the water objects included in the image; acquires the water object determination results output by the trained model; and outputs the determination result corresponding to each of the plurality of photographing devices.
  • in one aspect of the present invention, captured images taken by a plurality of imaging devices installed on a ship are input to a trained model, and the trained model outputs water object determination results.
  • by using the trained model, water objects can be identified without relying on visual observation by the crew.
  • by using a plurality of imaging devices installed at a plurality of positions on the vessel, water objects present in a wide area around the vessel can be effectively identified.
  • by using a plurality of imaging devices of different types, more water objects can be effectively identified than can be identified using a single imaging device.
  • FIG. 1 is a schematic diagram showing a ship;
  • FIG. 2 is a block diagram showing a configuration example of an information processing system for identifying water objects around a ship;
  • FIG. 3 is a block diagram showing an example of the internal functional configuration of the information processing device;
  • FIG. 4 is a conceptual diagram showing the functions of the trained model;
  • FIG. 5 is a flowchart showing an example of processing for outputting water object determination results;
  • FIG. 6 is a schematic diagram showing an example of a captured image;
  • FIG. 7 is a schematic diagram showing an example of outputting a captured image and a water object determination result;
  • FIG. 8 is a schematic diagram showing an example of outputting a radar image, a captured image, and a water object determination result;
  • FIG. 9 is a schematic diagram showing an example of a radar image with a designated portion and a captured image with the corresponding portion emphasized;
  • FIG. 10 is a flowchart showing an example of processing for re-learning the trained model.
  • FIG. 1 is a schematic diagram showing a ship.
  • a ship 1 is equipped with a plurality of photographing devices.
  • a plurality of photographing devices photograph the surroundings of the ship 1 and identify objects on the water included in the photographed images.
  • a water object is an object that is on the water around the ship 1, such as another ship or a buoy.
  • the multiple imaging devices include a front camera 11, an infrared camera 12, a PTZ camera 13 capable of pan, tilt and zoom, a starboard camera 14, and a port camera 15.
  • the front camera 11 captures an area in front of the ship 1 using visible light.
  • the infrared camera 12 photographs an area in front of the ship 1 using infrared light. The infrared camera 12 makes it easy to photograph objects that are difficult to capture with visible light, such as water objects at night.
  • the PTZ camera 13 can photograph an area in any direction at any magnification.
  • the front camera 11, the infrared camera 12, and the PTZ camera 13 are installed at the front of the ship 1.
  • the starboard camera 14 is installed on the starboard side of the ship 1, and the port camera 15 is installed on the port side of the ship 1.
  • the starboard camera 14 uses visible light to photograph an area facing the starboard side of the ship 1.
  • the port camera 15 uses visible light to photograph an area facing the port side of the ship 1.
  • the positions, relative to the ship 1, of the areas photographed by the front camera 11, the infrared camera 12, the starboard camera 14, and the port camera 15 are fixed.
  • the plurality of photographing devices include photographing devices installed at a plurality of positions within the ship 1, and include photographing devices of different types.
  • the angles of view of the front camera 11, the starboard camera 14, and the port camera 15 are preferably wide.
  • for example, with bearings in the horizontal plane expressed as angles where the bow of the ship 1 is 0 degrees, the front camera 11 photographs the region of 300 degrees to 360 degrees and 0 degrees to 60 degrees around the ship 1, the starboard camera 14 photographs the region of 30 degrees to 150 degrees, and the port camera 15 photographs the region of 210 degrees to 330 degrees. Because these three cameras have wide angles of view, a single camera can photograph a wide area, so water objects can be identified efficiently.
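  • as an illustration only (not part of the patent text), a short Python sketch of the fixed coverage scheme described above; the camera names and angular sectors simply restate the example bearings:

```python
# Coverage sectors for the fixed cameras, as degrees clockwise from the bow.
CAMERA_COVERAGE = {
    "front_camera_11": [(300.0, 360.0), (0.0, 60.0)],
    "starboard_camera_14": [(30.0, 150.0)],
    "port_camera_15": [(210.0, 330.0)],
}

def cameras_covering(bearing_deg: float) -> list[str]:
    """Return the fixed cameras whose angle of view contains the bearing."""
    bearing = bearing_deg % 360.0
    return [
        camera
        for camera, sectors in CAMERA_COVERAGE.items()
        if any(lo <= bearing <= hi for lo, hi in sectors)
    ]

print(cameras_covering(45.0))   # ['front_camera_11', 'starboard_camera_14']
print(cameras_covering(180.0))  # [] - the astern sector is not covered here
```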
  • the front camera 11, the infrared camera 12, the starboard camera 14, or the port camera 15 may be configured so that the position of the area to be photographed can be changed.
  • the plurality of imaging devices provided on the ship 1 may include cameras other than the front camera 11, the infrared camera 12, the PTZ camera 13, the starboard camera 14, and the port camera 15. Conversely, any one of these cameras may be omitted from the plurality of imaging devices.
  • FIG. 2 is a block diagram showing a configuration example of an information processing system 200 for identifying water objects around the ship 1.
  • the information processing system 200 includes an information processing device 2.
  • the information processing device 2 is a computer.
  • a plurality of imaging devices, including the front camera 11, the infrared camera 12, the PTZ camera 13, the starboard camera 14, and the port camera 15, are connected to the information processing device 2.
  • a radar 16 and an AIS (Automatic Identification System) 17 provided on the ship 1 are also connected to the information processing device 2 .
  • the radar 16 scans the surroundings of the vessel 1 with radio waves, discriminates objects on the water based on the reflected waves of the radio waves, and generates a radar image including the discrimination result.
  • the AIS 17 is a device for exchanging ship information between vessels; by wireless communication, it receives from other ships their ship information, including information identifying them.
  • the information processing system 200 also includes a display device 31 and an operation device 32.
  • the display device 31 displays images.
  • the display device 31 is, for example, a liquid crystal display or an EL display (Electroluminescent Display).
  • the operation device 32 accepts input of information by receiving operations from a user such as a crew member of the ship 1.
  • the operating device 32 is, for example, a touch panel, keyboard, or pointing device.
  • the display device 31 and the operation device 32 may be integrated.
  • FIG. 3 is a block diagram showing an example of the internal functional configuration of the information processing device 2.
  • the information processing device 2 executes an information processing method for identifying water objects around the ship 1.
  • the information processing device 2 includes a calculation unit 21, a memory 22, a drive unit 23, a storage unit 24, and an interface unit 25.
  • the calculation unit 21 is configured using, for example, a CPU (Central Processing Unit), a GPU (Graphics Processing Unit), or a multi-core CPU.
  • a display device 31 and an operation device 32 are connected to the calculation unit 21 .
  • the memory 22 stores temporary data generated along with computation.
  • the memory 22 is, for example, a RAM (Random Access Memory).
  • the drive unit 23 reads information from a recording medium 20 such as an optical disc or a portable memory.
  • the storage unit 24 is nonvolatile storage, such as a hard disk or a nonvolatile semiconductor memory.
  • the calculation unit 21 causes the drive unit 23 to read the computer program 241 recorded on the recording medium 20 and stores the read computer program 241 in the storage unit 24.
  • the calculation unit 21 executes processing necessary for the information processing device 2 according to the computer program 241 .
  • Computer program 241 may be a computer program product.
  • the computer program 241 may be downloaded from outside the information processing device 2 .
  • the computer program 241 may be pre-stored in the information processing device 2. In these cases, the information processing device 2 need not include the drive unit 23.
  • the front camera 11, the infrared camera 12, the PTZ camera 13, the starboard camera 14, and the port camera 15 are connected to the interface unit 25.
  • the front camera 11, the infrared camera 12, the PTZ camera 13, the starboard camera 14, and the port camera 15 each create captured images of the area around the ship 1 that they are responsible for, and input the captured images to the information processing device 2.
  • the interface unit 25 accepts the input captured images.
  • the information processing device 2 acquires a captured image by receiving it at the interface unit 25.
  • the front camera 11, the infrared camera 12, the PTZ camera 13, the starboard camera 14, and the port camera 15 continuously create captured images, or periodically and repeatedly create captured images.
  • the information processing device 2 acquires captured images continuously or acquires captured images repeatedly on a regular basis.
  • the interface unit 25 is connected to the radar 16 and the AIS 17.
  • the radar 16 generates a radar image and inputs it to the information processing device 2.
  • the information processing device 2 acquires the radar image by receiving the input radar image at the interface unit 25.
  • the radar 16 repeatedly creates radar images, and the information processing device 2 repeatedly acquires radar images.
  • the AIS 17 receives vessel information from other vessels and inputs AIS data based on the received vessel information to the information processing device 2 .
  • AIS data includes information indicating the type of other vessels.
  • the information processing device 2 acquires the AIS data by receiving it at the interface unit 25. Each time the AIS 17 receives ship information, it inputs the corresponding AIS data to the information processing device 2, and the information processing device 2 acquires the AIS data.
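  • the patent treats AIS data only as received ship information that includes the type of other vessels; as a hypothetical sketch (the field names are illustrative, not from the patent), the data retained per report might look like this:

```python
from dataclasses import dataclass

@dataclass
class AisData:
    """Hypothetical subset of the ship information carried by one AIS report."""
    mmsi: int           # identifier of the transmitting vessel
    ship_type: str      # e.g. "cargo", "fishing", "tug"
    latitude: float     # position in decimal degrees
    longitude: float
    speed_knots: float
    course_deg: float

def on_ais_received(report: AisData, store: list[AisData]) -> None:
    # Mirrors the text: each time the AIS 17 receives ship information,
    # the information processing device 2 acquires the corresponding data.
    store.append(report)
```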
  • the information processing device 2 may be configured by a plurality of computers, and data may be distributed and stored by the plurality of computers, and processing may be performed by the plurality of computers in a distributed manner.
  • the information processing device 2 may be realized by a plurality of virtual machines provided within one computer.
  • the information processing device 2 is equipped with a trained model 242.
  • the trained model 242 is realized by the calculation unit 21 executing information processing according to the computer program 241.
  • the storage unit 24 stores data recording the parameters of the trained model 242, which has been trained in advance; the calculation unit 21 executes information processing according to the computer program 241 using these parameters, thereby realizing the trained model 242.
  • the trained model 242 may instead be configured by hardware. Alternatively, the trained model 242 may be provided outside the information processing device 2, and the information processing device 2 may execute processing using the external trained model 242.
  • FIG. 4 is a conceptual diagram showing the functions of the trained model 242.
  • captured images created by the front camera 11, the infrared camera 12, the PTZ camera 13, the starboard camera 14, or the port camera 15 are input to the trained model 242.
  • the trained model 242 is trained in advance by machine learning so that, when a captured image is input, it outputs a determination result for the water objects included in the image.
  • the determination result output by the trained model 242 includes the position of each water object in the captured image, the type of the water object, and the degree of certainty of that type.
  • the trained model 242 is configured using YOLO (You Only Look Once).
  • the trained model 242 may be a model using a method other than YOLO, such as R-CNN (Region Based Convolutional Neural Networks) or segmentation.
  • the trained model 242 may comprise a plurality of trained models in one-to-one correspondence with the front camera 11, the infrared camera 12, the PTZ camera 13, the starboard camera 14, and the port camera 15. Each of these trained models is trained individually in advance so that it outputs a water object determination result when a captured image created by the corresponding imaging device is input.
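  • as a minimal sketch of this per-camera arrangement, assuming an off-the-shelf YOLO-style detector via the third-party `ultralytics` package (the per-camera weight file names are hypothetical):

```python
from ultralytics import YOLO  # assumed dependency; any detector would do

# One trained model per imaging device, as in the one-to-one variant above.
DETECTORS = {
    "front_camera_11": YOLO("front_visible.pt"),
    "infrared_camera_12": YOLO("front_infrared.pt"),
    "ptz_camera_13": YOLO("ptz.pt"),
    "starboard_camera_14": YOLO("starboard_visible.pt"),
    "port_camera_15": YOLO("port_visible.pt"),
}

def discriminate(camera_id: str, image):
    """S102-S103 in miniature: run the model matching the capturing device.

    Returns (box, type, certainty) triples: the position of each water
    object in the image, its type, and the degree of certainty of the type.
    """
    result = DETECTORS[camera_id](image)[0]
    return [
        (box.xyxy[0].tolist(), result.names[int(box.cls)], float(box.conf))
        for box in result.boxes
    ]
```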
  • the information processing device 2 performs a process of identifying water objects existing around the ship 1 and outputting the identification result of the water objects.
  • FIG. 5 is a flow chart showing an example of a process for outputting a determination result of an object on the water. A step is abbreviated as S below.
  • the computing unit 21 of the information processing device 2 executes processing according to the computer program 241 .
  • the front camera 11, the infrared camera 12, the PTZ camera 13, the starboard camera 14, and the port camera 15 each create a captured image by photographing and input the captured image to the information processing device 2.
  • the information processing device 2 acquires a captured image by receiving the captured image input from each imaging device at the interface unit 25 (S101).
  • the information processing device 2 may acquire only the captured images created by some of the imaging devices.
  • the calculation unit 21 stores the captured images in the storage unit 24.
  • FIG. 6 is a schematic diagram showing an example of a captured image.
  • FIG. 6 shows an example of a captured image 41 created by the front camera 11.
  • the captured image 41 includes a portion of the hull 51 of the ship 1.
  • the captured image 41 also includes a plurality of water objects 52.
  • the processing of S101 corresponds to the image acquisition unit.
  • the information processing device 2 inputs the captured images to the trained model 242 (S102).
  • the calculation unit 21 inputs the captured images to the trained model 242 and causes the trained model 242 to execute processing.
  • when the trained model 242 comprises a plurality of trained models in one-to-one correspondence with the imaging devices, the calculation unit 21 inputs the captured image from each imaging device to the trained model corresponding to that device.
  • in response to the input of a captured image, the trained model 242 outputs a determination result identifying the water objects included in the image.
  • the information processing device 2 acquires the determination result output by the trained model 242 (S103).
  • the calculation unit 21 stores the determination result in the storage unit 24.
  • the determination result includes the position of each water object in the captured image, the type of the water object, and the degree of certainty of that type.
  • the information processing device 2 determines whether or not the degree of certainty of the type of object on the water included in the determination result is within a predetermined range (S104).
  • the predetermined range is a range in which the degree of certainty is somewhat low.
  • for example, the predetermined range is the range at or below a predetermined threshold.
  • alternatively, a second threshold smaller than the threshold is set, and the predetermined range is the range from the second threshold to the threshold, inclusive.
  • the predetermined range is stored in advance in the storage unit 24 or is included in the computer program 241.
  • in S104, the calculation unit 21 compares the degree of certainty with the predetermined range.
  • the calculation unit 21 determines, for each water object, whether its degree of certainty is within the predetermined range. When the same water object is included in a plurality of captured images, the calculation unit 21 determines whether the maximum of the degrees of certainty obtained from those captured images is within the predetermined range; alternatively, it may determine whether the average or the variance of the degrees of certainty is within the predetermined range.
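  • a minimal sketch of the S104 test, assuming the two-threshold variant described above and the maximum-certainty rule for objects seen by several cameras (the numeric values are illustrative):

```python
SECOND_THRESHOLD = 0.30  # below this band, treat the detection as noise
THRESHOLD = 0.60         # above it, the type is already certain enough

def needs_recheck(certainties: list[float]) -> bool:
    """True when the degree of certainty falls within the predetermined range
    (S104: YES), i.e. the object is worth magnifying with the PTZ camera."""
    return SECOND_THRESHOLD <= max(certainties) <= THRESHOLD
```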
  • when the degree of certainty of a type is within the predetermined range, the information processing device 2 controls the PTZ camera 13 so as to magnify and photograph the water object concerned (S105).
  • the calculation unit 21 identifies, based on the captured image, the position of the water object whose type has a degree of certainty within the predetermined range, and causes the PTZ camera 13 to photograph the water object at the identified position.
  • a control signal is transmitted from the interface unit 25 to the PTZ camera 13.
  • the PTZ camera 13 adjusts its orientation in accordance with the control signal and magnifies and photographs the water object whose type has a degree of certainty within the predetermined range.
  • the PTZ camera 13 creates a captured image by photographing and inputs it to the information processing device 2.
  • the information processing device 2 acquires the captured image by receiving the input from the PTZ camera 13 at the interface unit 25 (S106).
  • the calculation unit 21 stores the captured image in the storage unit 24.
  • this captured image includes, enlarged, the water object whose type has a degree of certainty within the predetermined range.
  • the captured image acquired in S101 corresponds to the first captured image, and the imaging device that created the first captured image corresponds to the first camera.
  • the captured image acquired in S106 corresponds to the second captured image.
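  • the patent does not define a control protocol for S105-S106, so the `PtzCamera` interface below is hypothetical; the sketch only shows the shape of the interaction:

```python
class PtzCamera:
    """Stand-in for the PTZ camera 13; method names are assumptions."""
    def point_at(self, pan_deg: float, tilt_deg: float, zoom: float) -> None: ...
    def capture(self): ...  # returns a second captured image

def recheck_with_ptz(ptz: PtzCamera, bearing_deg: float, zoom: float = 8.0):
    # S105: a control signal aims the PTZ camera at the low-certainty object;
    # the bearing would be derived from the object's position in the first
    # captured image and the fixed mounting of the camera that created it.
    ptz.point_at(pan_deg=bearing_deg, tilt_deg=0.0, zoom=zoom)
    # S106: the magnified second captured image is fed back into the model.
    return ptz.capture()
```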
  • the information processing device 2 next inputs the captured image acquired in S106 to the trained model 242 (S107).
  • when the trained model 242 comprises a plurality of trained models in one-to-one correspondence with the imaging devices, the calculation unit 21 inputs the captured image to the trained model for the PTZ camera 13.
  • the trained model 242 outputs a determination result identifying the water objects included in the input captured image.
  • the information processing device 2 acquires the determination result output by the trained model 242 (S108).
  • in S108, the calculation unit 21 stores the determination result in the storage unit 24. Since the water object is magnified, its features appear more prominently in the captured image.
  • consequently, a determination result with a higher degree of certainty can be obtained from the magnified captured image of the water object.
  • alternatively, a determination result indicating that the captured image does not include any water object may be obtained.
  • the processing of S107-S108 corresponds to the determination result acquisition unit.
  • when the degree of certainty is not within the predetermined range in S104 (S104: NO), or after S108 finishes, the information processing device 2 outputs the captured images and the water object determination results (S109).
  • the calculation unit 21 outputs the captured images and the determination results by displaying on the display device 31 an image that includes them.
  • the calculation unit 21 outputs a plurality of captured images together with the determination result for the water objects included in each captured image. The processing of S109 corresponds to the output unit.
  • FIG. 7 is a schematic diagram showing an example of outputting the captured images and the water object determination results.
  • FIG. 7 shows an example of an image displayed on the display device 31.
  • the displayed image includes a captured image 41 created by the front camera 11, a captured image 42 created by the infrared camera 12, a captured image 43 created by the PTZ camera 13, a captured image 44 created by the starboard camera 14, and a captured image 45 created by the port camera 15.
  • a label 46 indicating the type of imaging device that created it is attached to each captured image. By viewing the labels 46, the user can confirm which imaging device created each captured image.
  • each captured image input to the information processing device 2 is associated with identification information identifying the imaging device that created it. Based on this identification information, the calculation unit 21 generates the label 46 indicating the type of imaging device and displays on the display device 31 the image with a label 46 attached to each captured image.
  • as the determination result for a water object included in a captured image, a mark indicating the position of the water object in the captured image and information indicating its type are output. For example, as shown in FIG. 7, a bounding box 53 surrounding a water object 52 included in the captured image is displayed as the mark indicating the position of the water object. A mark other than the bounding box 53, such as an arrow, may be output instead. By viewing the bounding box 53, the user can see that a water object exists around the ship 1.
  • character information 54 expressing the type of the water object 52 in text is displayed as the information indicating its type.
  • the type of a water object is determined to be one of: ship, aid to navigation, lighthouse, aquaculture facility, buoy, bridge girder, pier, other non-ship object, or unknown object.
  • a water object whose type cannot be determined is classified as an unknown object.
  • a water object identified as a ship may be further classified as a merchant ship, cruise ship, crane ship, tugboat, gravel carrier, fishing boat, pleasure boat, other ship, or unknown ship. Aids to navigation or buoys may likewise be further classified.
  • the trained model 242 may be trained so that the type of a water object 52 photographed at or below a predetermined size is determined to be unknown.
  • by viewing the character information 54, the user can recognize the types of water objects existing around the ship 1.
  • information indicating the type of a water object may also be output in a form other than the character information 54. For example, different types of water objects may be represented by changing the color of the bounding box 53 or the character information 54 according to the type.
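  • a short sketch of this overlay step using OpenCV (an implementation choice of this illustration, not something the patent specifies); the per-type colors are arbitrary:

```python
import cv2

# Illustrative colors (BGR) per type; the text only says the color may vary.
TYPE_COLORS = {"ship": (0, 255, 0), "buoy": (0, 165, 255), "unknown object": (0, 0, 255)}

def draw_determinations(image, detections):
    """Overlay a bounding box 53 and character information 54 per result.

    `detections` holds ((x1, y1, x2, y2), type_name, certainty) triples,
    matching the output of the detector sketch earlier in this description.
    """
    for (x1, y1, x2, y2), type_name, certainty in detections:
        color = TYPE_COLORS.get(type_name, (255, 255, 255))
        cv2.rectangle(image, (int(x1), int(y1)), (int(x2), int(y2)), color, 2)
        cv2.putText(image, f"{type_name} {certainty:.2f}",
                    (int(x1), int(y1) - 5),
                    cv2.FONT_HERSHEY_SIMPLEX, 0.6, color, 2)
    return image
```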
  • the captured image 41 created by the front camera 11 and the captured image 42 created by the infrared camera 12 cover almost the same area. However, since the light used differs, a water object 52 included in one captured image may not appear in the other.
  • the captured image 41 created by the front camera 11, the captured image 44 created by the starboard camera 14, and the captured image 45 created by the port camera 15 cover different areas. However, if the areas photographed by two imaging devices overlap, the same water object 52 may be included in both captured images.
  • FIG. 7 shows an example in which the captured image 41 created by the front camera 11 and the captured image 44 created by the starboard camera 14 include the same water object 52, a fishing boat.
  • FIG. 7 also shows an example in which the captured image 41 created by the front camera 11 and the captured image 45 created by the port camera 15 include the same water object 52, a buoy.
  • the image shown in FIG. 7 also includes a captured image 43 in which one water object 52 included in the captured image 41 created by the front camera 11 has been magnified and photographed by the PTZ camera 13.
  • in this example, that water object 52 appears very small in the captured image 41, its type is determined to be unknown, and its degree of certainty is within the predetermined range.
  • the captured image 43 is therefore created by magnifying and photographing the water object with the PTZ camera 13, and a determination result is obtained from it.
  • in the captured image 43 created by the PTZ camera 13, the water object 52 included in the captured image 41 appears enlarged.
  • the water object 52 whose type was determined to be unknown in the determination result based on the captured image 41 is determined to be a cruise ship in the determination result based on the captured image 43.
  • in the captured image 41, character information 54 indicating that the water object is an unknown object is displayed.
  • when a determination result indicating that the captured image 43 does not include a water object is obtained, the calculation unit 21 displays information to that effect on the display device 31. At this time, the calculation unit 21 may display that information without displaying the captured image 43 itself.
  • since the imaging devices photograph regions that differ from one another, a wider region can be photographed than with a single imaging device. Therefore, water objects present in a wide area around the ship 1 can be effectively identified.
  • by using a plurality of imaging devices of different types, a water object that one imaging device cannot photograph, or photographs only unclearly, can sometimes be photographed clearly by an imaging device of another type. Therefore, more water objects can be effectively identified than with a single imaging device. For example, by using the PTZ camera 13 to magnify and photograph a water object that appears unclear when photographed by another imaging device, an accurate determination result can be obtained.
  • the information processing device 2 then outputs the radar image, the captured image, and the determination result of the object on the water (S110).
  • the radar 16 scans the surroundings of the ship 1 with radio waves, identifies water objects based on the reflected waves, generates a radar image including the determination result, and inputs the radar image to the information processing device 2.
  • the information processing device 2 acquires the radar image by receiving the input radar image at the interface unit 25.
  • the calculation unit 21 stores the acquired radar image in the storage unit 24. In S110, the calculation unit 21 displays on the display device 31 an image including the radar image, the captured images, and the water object determination results.
  • FIG. 8 is a schematic diagram showing an example of outputting a radar image, a photographed image, and a determination result of an object on the water.
  • FIG. 8 shows an example of an image displayed on the display device 31, including the radar image 6, the photographed image, and the determination result of the object on the water.
  • the displayed images include the radar image 6, the captured image 41 created by the front camera 11, the captured image 44 created by the starboard camera 14, and the captured image 45 created by the port camera 15.
  • the captured image 41 is placed above the radar image 6, the captured image 44 is placed on its right side, and the captured image 45 is placed on its left side.
  • each captured image is displayed alongside the radar image 6, making it easy to understand the correspondence between the contents of the radar image 6 and the contents of the captured images.
  • other captured images may also be displayed alongside the radar image 6.
  • the radar image 6 includes images 61 of the water objects present around the ship 1.
  • the radar image 6 also includes the determination results for the water objects identified by the radar 16.
  • the determination results superimposed on the captured images and those superimposed on the radar image 6 are output in the same manner. That is, as the determination result for a water object included in the radar image 6, a mark indicating the position of the water object in the radar image 6 and information indicating its type are output. For example, a bounding box 53 surrounding an image 61 of a water object and character information 63 indicating the type of the water object 52 are displayed, in the same manner as the determination results superimposed on the captured images.
  • the content of the character information 63 expressing the type of a water object is the same as the content of the character information 54 expressing the type of the corresponding water object 52.
  • the color of the bounding box 53 or the character information 63 relating to an image 61 of a water object may be adjusted to match the color of the bounding box 53 or the character information 54 relating to the corresponding water object 52.
  • as the water object determination results superimposed on the radar image 6, determination results based on the AIS data acquired from the AIS 17 may be used instead.
  • by outputting both in this way, the water object determination results obtained by the two methods can be compared.
  • whether the determination results obtained by the two methods match is easily ascertained, and errors in the determination results can also be detected. For example, if an image 61 of a water object is included in the radar image 6 but the corresponding water object 52 is not included in any captured image, it can be determined that the image 61 is a false echo. In this way, water objects are identified more reliably.
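  • a rough sketch of such a cross-check; the patent does not specify how radar contacts are matched to camera detections, so the bearing-tolerance rule below is an assumption of this illustration:

```python
BEARING_TOLERANCE_DEG = 3.0  # assumed matching rule, not from the patent

def flag_false_echoes(radar_bearings: list[float],
                      camera_bearings: list[float]) -> list[float]:
    """Return radar contacts that no captured image corroborates, i.e.
    candidates for false echoes in the radar image 6."""
    def close(a: float, b: float) -> bool:
        return abs((a - b + 180.0) % 360.0 - 180.0) <= BEARING_TOLERANCE_DEG

    return [rb for rb in radar_bearings
            if not any(close(rb, cb) for cb in camera_bearings)]
```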
  • the captured image 42 created by the infrared camera 12 and the captured image 43 created by the PTZ camera 13 may be further displayed.
  • the information processing device 2 next accepts designation of a portion of the radar image 6 (S111).
  • the calculation unit 21 receives the designation through the user's operation of the operation device 32.
  • the calculation unit 21 displays on the display device 31, superimposed on the radar image 6, a designating figure such as a cursor or a bounding box for designating a portion of the radar image 6.
  • the calculation unit 21 changes the position of the designating figure in accordance with the user's operation received by the operation device 32 and accepts the designation of a portion of the radar image 6.
  • the information processing device 2 then outputs the determination result for the water object included in the portion of the captured image that corresponds to the designated portion of the radar image 6 (S112).
  • in S112, an emphasizing figure such as a bounding box for emphasizing the corresponding portion is superimposed on the captured image and displayed on the display device 31.
  • FIG. 9 is a schematic diagram showing an example of the radar image 6 with a designated portion and a captured image with the corresponding portion emphasized. Superimposed on the radar image 6, a designation image 64 is displayed that designates a portion including an image 61 of one water object.
  • the designation image 64 is a figure enclosing the designated portion with a double line.
  • captured images 41 and 43 that include the corresponding portion are displayed, and an emphasizing figure 55 for emphasizing the corresponding portion is displayed superimposed on the captured images 41 and 43.
  • the emphasizing figure 55 is a figure enclosing the corresponding portion with a double line. Character information 54 is displayed as the determination result for the water object 52 included in the corresponding portion.
  • in this way, a portion of the radar image 6 and the corresponding portion of the captured image are output side by side, and the user can visually confirm how each portion of the radar image 6 actually appears.
  • the user can confirm in the captured image the water object 52 corresponding to an image 61 included in the radar image 6.
  • the user can thus easily compare the result of identifying a water object using the radar 16 with the result of identifying it based on the captured images.
  • a captured image that does not include a portion corresponding to the designated portion of the radar image 6 may also be displayed. After S112 ends, the information processing device 2 ends the process of outputting the water object determination results.
  • the processing of S109, the processing of S110, and the processing of S111-S112 may be executed in an order different from the example shown in FIG. 5, or may be executed alternately and repeatedly. Any of the processing of S109, S110, and S111-S112 may be omitted.
  • the processing of S101-S109 is executed as needed, and the water object determination results are used for the navigation of the ship 1.
  • FIG. 10 is a flowchart showing an example of processing for re-learning the trained model 242 .
  • the information processing device 2 generates training data that associates a captured image created by one of the imaging devices with a water object determination result (S21).
  • the calculation unit 21 generates training data including data sets that associate a captured image with a water object determination result based on a captured image taken by an imaging device different from the one that took the first image.
  • for example, the captured image 41 created by the front camera 11 is associated with the water object determination result based on the captured image 43 created by the PTZ camera 13.
  • the calculation unit 21 may also generate training data including data sets that associate a captured image with a water object determination result obtained using the radar 16, or with one obtained using the AIS 17. In S21, the calculation unit 21 generates training data including a plurality of such data sets and stores the training data in the storage unit 24.
  • the information processing device 2 next re-learns the trained model 242 using the training data (S22).
  • the calculation unit 21 inputs the captured images included in the training data to the trained model 242 and re-learns the trained model 242.
  • the trained model 242 outputs a water object determination result in response to each input captured image.
  • the calculation unit 21 acquires the determination result output by the trained model 242 and adjusts the calculation parameters of the trained model 242 so that the difference between the output determination result and the determination result associated with the input captured image in the training data becomes small. That is, the parameters are adjusted so that a determination result substantially the same as the one associated with the captured image is output.
  • the calculation unit 21 re-learns the trained model 242 by repeating this process with the plurality of data sets included in the training data and adjusting the parameters of the trained model 242.
  • the calculation unit 21 stores data recording the final adjusted parameters in the storage unit 24. In this way, a re-learned trained model 242 is generated.
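  • a schematic PyTorch-style sketch of this parameter adjustment; the patent states only the goal (make the difference small), so the loss and optimizer below are assumptions of this illustration:

```python
import torch

def relearn(model: torch.nn.Module, dataloader, epochs: int = 5) -> None:
    """S22 in miniature: adjust the parameters so the model's determination
    result approaches the result paired with each captured image in S21."""
    criterion = torch.nn.MSELoss()  # placeholder for a real detection loss
    optimizer = torch.optim.SGD(model.parameters(), lr=1e-4)
    model.train()
    for _ in range(epochs):
        for image, target in dataloader:
            optimizer.zero_grad()
            loss = criterion(model(image), target)  # the "difference"
            loss.backward()
            optimizer.step()
```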
  • the information processing device 2 ends the process of re-learning the trained model 242 .
  • by re-learning the trained model 242 using, as training data, captured images associated with water object determination results based on captured images taken by different imaging devices, the trained model 242 can be adjusted so as to output more accurate determination results.
  • by using determination results obtained with the radar 16 or the AIS 17, the trained model 242 can be adjusted using determination results of a kind different from those based on captured images. Note that the processing of S21 and S22 may be executed by a device other than the information processing device 2.
  • as described above, in the present embodiment, captured images taken by a plurality of imaging devices installed on the ship 1 are input to the trained model 242, and the trained model 242 outputs the water object determination results.
  • by using the trained model 242, water objects can be identified without depending on visual observation by the crew.
  • by using a plurality of imaging devices installed at a plurality of positions on the ship 1, a wide area can be photographed, and water objects existing in a wide area around the ship 1 can be effectively identified.
  • by using a plurality of imaging devices of different types, many water objects can be effectively identified. Effective identification of water objects facilitates the safe navigation of the ship 1.
  • although the radar 16 and the AIS 17 are connected to the information processing device 2 in this embodiment, the radar 16 and the AIS 17 may be independent of the information processing device 2.
  • for example, the radar 16 and the AIS 17 may be connected to the display device 31 without going through the information processing device 2, and the radar image 6 and the water object determination results obtained using the radar 16 or the AIS 17 may be displayed independently of the processing by the information processing device 2.

Landscapes

  • Engineering & Computer Science (AREA)
  • Ocean & Marine Engineering (AREA)
  • Radar, Positioning & Navigation (AREA)
  • Remote Sensing (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Image Processing (AREA)

Abstract

[Problem] To provide an information processing method, an information processing device, and a computer program that enable effective identification of objects on water. [Solution] The information processing method for identifying objects on water around a ship comprises: acquiring captured images obtained by photographing the surroundings of the ship with a plurality of photographing devices of different types installed on the ship; inputting the captured image acquired from each of the plurality of photographing devices into a trained model that, when a captured image is input, outputs results of identifying the objects on water included in the image; acquiring the identification results output by the trained model; and outputting the identification results corresponding to each of the plurality of photographing devices.

Description

情報処理方法、情報処理装置及びコンピュータプログラムInformation processing method, information processing device and computer program
 本発明は、船舶の周囲にある水上物体を判別するための情報処理方法、情報処理装置及びコンピュータプログラムに関する。 The present invention relates to an information processing method, an information processing device, and a computer program for determining water objects around a ship.
 船舶においては、航行のために、周囲に存在する他の船舶等の水上物体を判別することが必要である。従来、水上物体を判別するための技術が開発されている。特許文献1には、船舶の外部を撮影した画像とレーダーによる判別結果とを関連付けて表示することにより、水上物体の判別を容易にする技術が開示されている。 For ships to navigate, it is necessary to distinguish between water objects such as other ships that exist in the surrounding area. Conventionally, techniques have been developed for discriminating objects on the water. Patent Literature 1 discloses a technique for facilitating identification of an object on the water by displaying an image of the exterior of a ship photographed and a determination result by radar in association with each other.
米国特許出願公開第2020/0018848号明細書U.S. Patent Application Publication No. 2020/0018848
 水上物体を判別するための従来の技術は、船員の目視による判別に比べて、まだ不十分である。例えば、レーダーを用いた技術では、偽像の発生、又は判別できない水上物体がある等の問題がある。このため、より効果的に水上物体を判別するための技術が求められている。  Conventional techniques for identifying objects on the water are still inadequate compared to visual identification by sailors. For example, techniques using radar have problems such as the occurrence of false images or unidentifiable water objects. Therefore, there is a demand for a technique for more effectively discriminating objects on the water.
 本発明は、斯かる事情に鑑みてなされたものであって、その目的とするところは、効果的に水上物体を判別することを可能にする情報処理方法、情報処理装置及びコンピュータプログラムを提供することにある。 The present invention has been made in view of such circumstances, and its object is to provide an information processing method, an information processing apparatus, and a computer program that enable effective discrimination of objects on the water. That's what it is.
 本発明に係る情報処理方法は、船舶に設置された種類の異なる複数の撮影装置が前記船舶の周囲を撮影した撮影画像を取得し、撮影画像を入力した場合に前記撮影画像に含まれる水上物体の判別結果を出力する学習済モデルへ、前記複数の撮影装置の夫々から取得した撮影画像を入力し、前記学習済モデルが出力した水上物体の判別結果を取得し、前記複数の撮影装置の夫々に含まれる水上物体の判別結果を出力することを特徴とする。 In the information processing method according to the present invention, when a plurality of photographing devices of different types installed on a ship obtain photographed images photographing the surroundings of the ship, and the photographed images are input, water objects included in the photographed images are acquired. a photographed image obtained from each of the plurality of photographing devices is input to the trained model that outputs the discrimination result of the above; acquiring the discrimination result of the object on the water outputted by the trained model; is characterized by outputting the discrimination result of the object on the water included in .
 本発明に係る情報処理装置は、種類の異なる複数の撮影装置が船舶の周囲を撮影した撮影画像を取得する画像取得部と、撮影画像を入力した場合に前記撮影画像に含まれる水上物体の判別結果を出力する学習済モデルへ、前記複数の撮影装置の夫々から取得した撮影画像を入力し、前記学習済モデルが出力した水上物体の判別結果を取得する判別結果取得部と、前記複数の撮影装置の夫々に対応する前記判別結果を出力する出力部とを備えることを特徴とする。 An information processing apparatus according to the present invention includes an image acquisition unit that acquires captured images of the surroundings of a ship captured by a plurality of different types of capturing devices; a discrimination result acquiring unit that inputs captured images acquired from each of the plurality of photographing devices to a trained model that outputs results, and acquires a discrimination result of a water object output by the trained model; and an output unit for outputting the determination result corresponding to each of the devices.
 本発明に係るコンピュータプログラムは、種類の異なる複数の撮影装置が船舶の周囲を撮影した撮影画像を取得し、撮影画像を入力した場合に前記撮影画像に含まれる水上物体の判別結果を出力する学習済モデルへ、前記複数の撮影装置の夫々から取得した撮影画像を入力し、前記学習済モデルが出力した水上物体の判別結果を取得し、前記複数の撮影装置の夫々に対応する前記判別結果を出力する処理をコンピュータに実行させることを特徴とする。 A computer program according to the present invention acquires photographed images of the surroundings of a ship taken by a plurality of photographing devices of different types, and when the photographed images are input, learning to output the discrimination results of the objects on the water included in the photographed images. Captured images obtained from each of the plurality of photographing devices are input to the trained model, discrimination results of water objects output by the trained model are obtained, and the discrimination results corresponding to each of the plurality of photographing devices are obtained. It is characterized by having a computer execute processing to output.
 本発明の一形態においては、船舶に設置された複数の撮影装置で撮影した撮影画像を学習済モデルへ入力し、学習済モデルが出力した水上物体の判別結果を出力する。学習済モデルを利用することにより、船員の目視に頼らずに水上物体を判別することができる。船舶内の複数の位置に設置された複数の撮影装置を用いることにより、船舶の周囲の広い領域に存在する水上物体を効果的に判別することができる。また、種類の異なる複数の撮影装置を用いることにより、単一の撮影装置を利用して判別することができる水上物体よりも多くの水上物体を効果的に判別することが可能となる。 In one embodiment of the present invention, captured images captured by a plurality of image capturing devices installed on a ship are input to a trained model, and the learned model outputs the discrimination results of water objects. By using the trained model, it is possible to discriminate objects on the water without relying on the crew's visual observation. By using a plurality of imaging devices installed at a plurality of locations within the vessel, water objects present in a wide area around the vessel can be effectively identified. Also, by using a plurality of imaging devices of different types, it is possible to effectively identify more aquatic objects than can be identified using a single imaging device.
 本発明にあっては、船舶の周囲に存在する水上物体を効果的に判別し、船舶の安全な航行を容易にする等、優れた効果を奏する。  In the present invention, it is possible to effectively discriminate water objects existing around the ship, and to facilitate safe navigation of the ship.
船舶を示す模式図である。It is a schematic diagram which shows a ship. 船舶の周囲にある水上物体を判別するための情報処理システムの構成例を示すブロック図である。1 is a block diagram showing a configuration example of an information processing system for determining water objects around a ship; FIG. 情報処理装置の内部の機能構成例を示すブロック図である。2 is a block diagram showing an example of the internal functional configuration of the information processing apparatus; FIG. 学習済モデルの機能を示す概念図である。FIG. 4 is a conceptual diagram showing functions of a trained model; 水上物体の判別結果を出力する処理の例を示すフローチャートである。9 is a flow chart showing an example of processing for outputting a determination result of an object on the water; 撮影画像の例を示す模式図である。FIG. 4 is a schematic diagram showing an example of a captured image; 撮影画像及び水上物体の判別結果を出力した例を示す模式図である。FIG. 10 is a schematic diagram showing an example of outputting a photographed image and a discrimination result of an object on the water; レーダー画像、撮影画像及び水上物体の判別結果を出力した例を示す模式図である。FIG. 10 is a schematic diagram showing an example of outputting a radar image, a photographed image, and a determination result of an object on the water; 一部分を指定されたレーダー画像及び対応部分を強調した撮影画像の例を示す模式図である。FIG. 4 is a schematic diagram showing an example of a radar image with a designated portion and a captured image with the corresponding portion emphasized; 学習済モデルの再学習を行う処理の例を示すフローチャートである。7 is a flowchart illustrating an example of processing for re-learning a trained model;
 以下本発明をその実施の形態を示す図面に基づき具体的に説明する。
 図1は、船舶を示す模式図である。船舶1には、複数の撮影装置が備えられている。本実施形態では、複数の撮影装置が船舶1の周囲を撮影し、撮影画像に含まれる水上物体を判別する。水上物体は、他の船舶又はブイ等、船舶1の周囲の水上に存在する物体である。
BEST MODE FOR CARRYING OUT THE INVENTION The present invention will be specifically described below with reference to the drawings showing its embodiments.
FIG. 1 is a schematic diagram showing a ship. A ship 1 is equipped with a plurality of photographing devices. In this embodiment, a plurality of photographing devices photograph the surroundings of the ship 1 and identify objects on the water included in the photographed images. A water object is an object that is on the water around the ship 1, such as another ship or a buoy.
 複数の撮影装置には、前方カメラ11と、赤外線カメラ12と、パン(pan )、チルト(tilt)及びズーム(zoom)が可能なPTZカメラ13と、右舷カメラ14と、左舷カメラ15とが含まれる。前方カメラ11は、可視光を用いて、船舶1の前方にある領域を撮影する。赤外線カメラ12は、赤外線を用いて、船舶1の前方にある領域を撮影する。赤外線カメラ12を用いることにより、夜間における水上物体等、可視光を用いた場合には撮影が困難な物体の撮影を容易にする。PTZカメラ13は、任意の方向にある領域を任意の倍率で撮影することができる。前方カメラ11、赤外線カメラ12、及びPTZカメラ13は、船舶1の正面に設置されている。 The multiple imaging devices include a front camera 11, an infrared camera 12, a PTZ camera 13 capable of pan, tilt and zoom, a starboard camera 14, and a port camera 15. be The front camera 11 captures an area in front of the ship 1 using visible light. The infrared camera 12 captures an area in front of the ship 1 using infrared rays. By using the infrared camera 12, it becomes easy to photograph an object such as an object on water at night, which is difficult to photograph using visible light. The PTZ camera 13 can photograph an area in any direction at any magnification. A front camera 11 , an infrared camera 12 , and a PTZ camera 13 are installed in front of the ship 1 .
 右舷カメラ14は船舶1の右舷に設置され、左舷カメラ15は船舶1の左舷に設置されている。右舷カメラ14は、可視光を用いて、船舶1の右舷に面した領域を撮影する。左舷カメラ15は、可視光を用いて、船舶1の左舷に面した領域を撮影する。前方カメラ11、赤外線カメラ12、右舷カメラ14及び左舷カメラ15が撮影する領域の船舶1に対する相対的な位置は、固定されている。このように、複数の撮影装置は、船舶1内の複数の位置に設置された撮影装置を含み、種類の異なる撮影装置を含む。 The starboard camera 14 is installed on the starboard side of the ship 1 , and the port camera 15 is installed on the port side of the ship 1 . The starboard camera 14 uses visible light to capture an area facing the starboard side of the ship 1 . The port camera 15 uses visible light to capture an area facing the port side of the ship 1 . The relative positions of the areas photographed by the front camera 11, the infrared camera 12, the starboard camera 14, and the port camera 15 with respect to the ship 1 are fixed. In this way, the plurality of photographing devices include photographing devices installed at a plurality of positions within the ship 1, and include photographing devices of different types.
 前方カメラ11、右舷カメラ14及び左舷カメラ15を用いることにより、船舶1の周囲にある複数の領域を撮影し、撮影画像に基づいた水上物体の判別漏れを抑制することができる。前方カメラ11、右舷カメラ14及び左舷カメラ15の画角は、広角であることが望ましい。例えば、水平面内での向きを、船舶1の正面を0度とした角度で表し、船舶1の周囲の内、300度~360度及び0度~60度の領域を前方カメラ11が撮影し、30度~150度の領域を右舷カメラ14が撮影し、210度~330度の領域を左舷カメラ15が撮影する。前方カメラ11、右舷カメラ14及び左舷カメラ15の画角が広角であることにより、一つのカメラで広い領域を撮影することができる。このため、効率的に水上物体の判別を行うことができる。 By using the front camera 11, the starboard camera 14, and the port camera 15, it is possible to photograph a plurality of areas around the ship 1 and prevent omissions in distinguishing water objects based on the photographed images. The angles of view of the front camera 11, the starboard camera 14, and the port camera 15 are preferably wide. For example, the orientation in the horizontal plane is represented by an angle where the front of the ship 1 is 0 degrees, and the front camera 11 photographs the areas of 300 degrees to 360 degrees and 0 degrees to 60 degrees in the circumference of the ship 1, The starboard camera 14 photographs an area of 30 degrees to 150 degrees, and the port camera 15 photographs an area of 210 degrees to 330 degrees. Since the front camera 11, the starboard camera 14, and the port camera 15 have wide angles of view, a wide area can be photographed with one camera. Therefore, it is possible to efficiently discriminate an object on the water.
 なお、前方カメラ11、赤外線カメラ12、右舷カメラ14又は左舷カメラ15は、撮影する領域の位置を変更することができる形態であってもよい。船舶1に備えられた複数の撮影装置には、前方カメラ11、赤外線カメラ12、PTZカメラ13、右舷カメラ14及び左舷カメラ15以外のカメラが含まれていてもよい。複数の撮影装置には、前方カメラ11、赤外線カメラ12、PTZカメラ13、右舷カメラ14及び左舷カメラ15のいずれかが含まれていなくてもよい。 Note that the front camera 11, the infrared camera 12, the starboard camera 14, or the port camera 15 may be configured so that the position of the area to be photographed can be changed. The plurality of imaging devices provided on the ship 1 may include cameras other than the front camera 11 , the infrared camera 12 , the PTZ camera 13 , the starboard camera 14 and the port camera 15 . Any one of the front camera 11, the infrared camera 12, the PTZ camera 13, the starboard camera 14, and the port camera 15 may not be included in the plurality of imaging devices.
 図2は、船舶1の周囲にある水上物体を判別するための情報処理システム200の構成例を示すブロック図である。情報処理システム200は、情報処理装置2を備えている。情報処理装置2は、コンピュータである。情報処理装置2には、前方カメラ11、赤外線カメラ12、PTZカメラ13、右舷カメラ14及び左舷カメラ15を含む複数の撮影装置が接続されている。また、情報処理装置2には、船舶1に備えられているレーダー16及びAIS(Automatic Identification System )17が接続されている。レーダー16は、船舶1の周囲を電波で走査し、電波の反射波に基づいて水上物体を判別し、判別結果を含んだレーダー画像を生成する。AIS17は、船舶に関する船舶情報を船舶間で交換するための装置であり、無線通信により、他の船舶を識別する情報を含む船舶情報を他の船舶から受信する。 FIG. 2 is a block diagram showing a configuration example of an information processing system 200 for identifying water objects around the ship 1. As shown in FIG. The information processing system 200 includes an information processing device 2 . The information processing device 2 is a computer. A plurality of photographing devices including a front camera 11 , an infrared camera 12 , a PTZ camera 13 , a starboard camera 14 and a port camera 15 are connected to the information processing device 2 . A radar 16 and an AIS (Automatic Identification System) 17 provided on the ship 1 are also connected to the information processing device 2 . The radar 16 scans the surroundings of the vessel 1 with radio waves, discriminates objects on the water based on the reflected waves of the radio waves, and generates a radar image including the discrimination result. The AIS 17 is a device for exchanging ship information about ships between ships, and receives ship information including information for identifying other ships from other ships by wireless communication.
 更に、情報処理システム200は、表示装置31と、操作装置32とを備えている。表示装置31は、画像を表示する。表示装置31は、例えば、液晶ディスプレイ又はELディスプレイ(Electroluminescent Display)である。操作装置32は、船舶1の乗員等の使用者からの操作を受け付けることにより、情報の入力を受け付ける。操作装置32は、例えば、タッチパネル、キーボード又はポインティングデバイスである。表示装置31及び操作装置32は、一体になっていてもよい。 Furthermore, the information processing system 200 includes a display device 31 and an operation device 32 . The display device 31 displays images. The display device 31 is, for example, a liquid crystal display or an EL display (Electroluminescent Display). The operation device 32 receives input of information by receiving an operation from a user such as a crew member of the ship 1 . The operating device 32 is, for example, a touch panel, keyboard, or pointing device. The display device 31 and the operation device 32 may be integrated.
FIG. 3 is a block diagram showing an example of the internal functional configuration of the information processing device 2. The information processing device 2 executes an information processing method for discriminating water objects around the ship 1. The information processing device 2 includes an arithmetic unit 21, a memory 22, a drive unit 23, a storage unit 24, and an interface unit 25. The arithmetic unit 21 is configured using, for example, a CPU (Central Processing Unit), a GPU (Graphics Processing Unit), or a multi-core CPU. The display device 31 and the operation device 32 are connected to the arithmetic unit 21. The memory 22 stores temporary data generated during computation and is, for example, a RAM (Random Access Memory). The drive unit 23 reads information from a recording medium 20 such as an optical disc or a portable memory. The storage unit 24 is nonvolatile and is, for example, a hard disk or a nonvolatile semiconductor memory.
The arithmetic unit 21 causes the drive unit 23 to read the computer program 241 recorded on the recording medium 20 and stores the read computer program 241 in the storage unit 24. The arithmetic unit 21 executes the processing required of the information processing device 2 in accordance with the computer program 241. The computer program 241 may be a computer program product. The computer program 241 may be downloaded from outside the information processing device 2, or may be stored in the information processing device 2 in advance. In these cases, the information processing device 2 need not include the drive unit 23.
The front camera 11, the infrared camera 12, the PTZ camera 13, the starboard camera 14, and the port camera 15 are connected to the interface unit 25. Each of these cameras creates captured images of the area around the ship 1 that it is responsible for and inputs them to the information processing device 2. The interface unit 25 accepts the input captured images; by receiving them at the interface unit 25, the information processing device 2 acquires the captured images. The cameras create captured images either continuously or repeatedly at regular intervals, and the information processing device 2 acquires the captured images accordingly.
The radar 16 and the AIS 17 are also connected to the interface unit 25. The radar 16 generates radar images and inputs them to the information processing device 2, which acquires each radar image by receiving it at the interface unit 25. The radar 16 creates radar images repeatedly, and the information processing device 2 acquires the radar images repeatedly. The AIS 17 receives ship information from other ships and inputs AIS data based on the received ship information to the information processing device 2. The AIS data includes information indicating the types of the other ships. The information processing device 2 acquires the AIS data by receiving it at the interface unit 25. Each time the AIS 17 receives ship information, the AIS 17 inputs the corresponding AIS data to the information processing device 2, and the information processing device 2 acquires it.
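The content of the AIS data is characterized here only as identifying the other ship and indicating its type; a minimal sketch of such a record, with purely illustrative field names, might look as follows:

    from dataclasses import dataclass

    # Hypothetical sketch of one AIS record acquired by the information
    # processing device 2; the field names are assumptions, the text
    # above requires only an identifier and a ship type.
    @dataclass
    class AISRecord:
        mmsi: str        # identifier of the transmitting ship
        ship_type: str   # e.g. "cargo ship", "fishing vessel", "tug"
        latitude: float
        longitude: float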
The information processing device 2 may be configured from a plurality of computers, with data stored in a distributed manner across those computers and processing executed by them in a distributed manner. The information processing device 2 may also be realized by a plurality of virtual machines provided within a single computer.
The information processing device 2 includes a trained model 242 that has already been trained. The trained model 242 is realized by the arithmetic unit 21 executing information processing in accordance with the computer program 241. For example, the storage unit 24 stores data recording the parameters of the previously trained model 242, and the arithmetic unit 21 realizes the trained model 242 by executing information processing in accordance with the computer program 241 using those parameters. The trained model 242 may instead be implemented in hardware. Alternatively, the trained model 242 may be provided outside the information processing device 2, and the information processing device 2 may execute its processing using that external trained model 242.
FIG. 4 is a conceptual diagram showing the function of the trained model 242. A captured image created by the front camera 11, the infrared camera 12, the PTZ camera 13, the starboard camera 14, or the port camera 15 is input to the trained model 242. The trained model 242 has been trained in advance by machine learning so that, when a captured image is input, it outputs the result of discriminating the water objects included in that image. The discrimination result output by the trained model 242 includes the position of each water object within the captured image, the type of the water object, and the confidence of that type. For example, the trained model 242 is configured using YOLO (You Only Look Once); it may instead be a model using a method other than YOLO, such as R-CNN (Region Based Convolutional Neural Networks) or segmentation.
The trained model 242 may include a plurality of trained models in one-to-one correspondence with the front camera 11, the infrared camera 12, the PTZ camera 13, the starboard camera 14, and the port camera 15. Each of these trained models has been individually trained in advance so as to output water-object discrimination results when given a captured image created by its corresponding photographing device.
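A minimal sketch of this inference step, assuming a generic detector interface (the class and function names below are hypothetical; the text above only requires that the model return, per water object, a position in the image, a type, and a confidence):

    from dataclasses import dataclass

    @dataclass
    class Detection:
        box: tuple[int, int, int, int]  # x, y, width, height in the image
        label: str                      # e.g. "fishing vessel", "buoy"
        confidence: float               # certainty of the type, 0.0 to 1.0

    class WaterObjectDetector:
        """Stand-in for the trained model 242 (e.g. a YOLO network)."""
        def detect(self, image) -> list[Detection]:
            raise NotImplementedError

    # Variant with one trained model per photographing device: route each
    # captured image to the model corresponding to its camera.
    detectors: dict[str, WaterObjectDetector] = {
        camera: WaterObjectDetector()
        for camera in ("front", "infrared", "ptz", "starboard", "port")
    }

    def discriminate(camera_id: str, image) -> list[Detection]:
        return detectors[camera_id].detect(image)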
The information processing device 2 performs processing to discriminate water objects existing around the ship 1 and to output the discrimination results. FIG. 5 is a flowchart showing an example of this processing. Hereinafter, "step" is abbreviated as S. The arithmetic unit 21 of the information processing device 2 executes the processing in accordance with the computer program 241.
The front camera 11, the infrared camera 12, the PTZ camera 13, the starboard camera 14, and the port camera 15 create captured images by photographing and input them to the information processing device 2. The information processing device 2 acquires the captured images by receiving, at the interface unit 25, the images input from the respective photographing devices (S101). The information processing device 2 may acquire only the captured images created by some of the photographing devices. The arithmetic unit 21 stores the captured images in the storage unit 24.
FIG. 6 is a schematic diagram showing an example of a captured image, here a captured image 41 created by the front camera 11. The captured image 41 includes part of the hull 51 of the ship 1 as well as a plurality of water objects 52. The processing of S101 corresponds to the image acquisition unit.
The information processing device 2 inputs the captured images to the trained model 242 (S102). In S102, the arithmetic unit 21 inputs the captured images to the trained model 242 and causes it to execute its processing. When the trained model 242 includes a plurality of trained models in one-to-one correspondence with the photographing devices, the arithmetic unit 21 inputs the captured image from each photographing device to the trained model corresponding to that device. In response to the input, the trained model 242 outputs the result of discriminating the water objects included in the captured image. The information processing device 2 acquires the discrimination result output by the trained model 242 (S103), and the arithmetic unit 21 stores it in the storage unit 24. The discrimination result includes the position of each water object within the captured image, the type of the water object, and the confidence of that type.
The information processing device 2 then determines whether the confidence of the water-object type included in the discrimination result falls within a predetermined range (S104). The predetermined range is one in which the confidence is relatively low. For example, it is the range at or below a predetermined threshold; or, taking a second threshold smaller than that threshold, it is the range from the second threshold up to the threshold. The predetermined range is stored in advance in the storage unit 24 or included in the computer program 241. In S104, the arithmetic unit 21 compares the confidence with the predetermined range. When the discrimination result contains confidences for a plurality of water objects, the arithmetic unit 21 determines, for each water object, whether its confidence falls within the predetermined range. When the same water object appears in a plurality of captured images, the arithmetic unit 21 may determine whether the maximum of the confidences obtained from those images falls within the predetermined range, or whether their mean or variance does.
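Sketched in code, the S104 test reduces to an interval check (the two constants are assumptions; the text above leaves the threshold values unspecified):

    # Hypothetical sketch of the S104 decision: a detection warrants a
    # zoomed second look when its confidence is low but not negligible.
    SECOND_THRESHOLD = 0.2  # at or below this, the type is "unknown object"
    THRESHOLD = 0.6         # above this, the type is trusted as-is

    def needs_zoom(confidences: list[float]) -> bool:
        # For an object seen in several captured images, judge on the
        # best view; the mean or variance could be used instead.
        best = max(confidences)
        return SECOND_THRESHOLD <= best <= THRESHOLD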
When the confidence falls within the predetermined range (S104: YES), the information processing device 2 controls the PTZ camera 13 so as to zoom in on and photograph the water object whose type confidence falls within that range (S105). In S105, the arithmetic unit 21 identifies, based on the captured image, the position of that water object, and transmits, from the interface unit 25 to the PTZ camera 13, a control signal for pointing the PTZ camera 13 at the identified position. In accordance with the control signal, the PTZ camera 13 adjusts its orientation and photographs the water object at an enlarged scale.
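One way to derive such a control signal, assuming a simple pinhole model with a known camera bearing and horizontal field of view (the geometry and names are illustrative, not specified in the patent):

    # Hypothetical sketch: convert an object's horizontal pixel position
    # in the first camera's image into a pan angle for the PTZ camera 13.
    def pan_angle_for(pixel_x: int, image_width: int,
                      camera_bearing_deg: float, hfov_deg: float) -> float:
        # Offset from the image centre as a fraction in [-0.5, 0.5].
        offset = pixel_x / image_width - 0.5
        # Linear spread of the field of view across the image width.
        return (camera_bearing_deg + offset * hfov_deg) % 360

    # An object at the right edge of the front camera's 120-degree image
    # lies about 60 degrees to starboard of the bow.
    print(pan_angle_for(1920, 1920, camera_bearing_deg=0.0, hfov_deg=120.0))  # 60.0

In practice the tilt angle and zoom factor would be derived analogously from the object's vertical position and apparent size in the image.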
The PTZ camera 13 creates a captured image by photographing and inputs it to the information processing device 2. The information processing device 2 acquires the captured image by receiving it at the interface unit 25 (S106), and the arithmetic unit 21 stores it in the storage unit 24. This captured image contains, at an enlarged scale, the water object whose type confidence fell within the predetermined range. The captured image acquired in S101 corresponds to the first captured image, the photographing device that created it corresponds to the first camera, and the captured image acquired in S106 corresponds to the second captured image.
The information processing device 2 then inputs the captured image acquired in S106 to the trained model 242 (S107). When the trained model 242 includes a plurality of trained models in one-to-one correspondence with the photographing devices, the arithmetic unit 21 inputs the captured image to the trained model for the PTZ camera 13. The trained model 242 outputs the result of discriminating the water objects included in the input image, and the information processing device 2 acquires that result (S108). In S108, the arithmetic unit 21 stores the discrimination result in the storage unit 24. Because the water object is enlarged, its features are more prominent in the captured image, so a discrimination result with higher confidence can be obtained from it. In S108, a discrimination result indicating that the captured image contains no water object may also be obtained. The processing of S107 to S108 corresponds to the discrimination result acquisition unit.
When the confidence does not fall within the predetermined range in S104 (S104: NO), or after S108 has finished, the information processing device 2 outputs the captured images and the water-object discrimination results (S109). In S109, the arithmetic unit 21 outputs them by displaying, on the display device 31, an image containing the captured images and the discrimination results. In S109, the arithmetic unit 21 outputs a plurality of captured images together with the discrimination results for the water objects included in each. The processing of S109 corresponds to the output unit.
FIG. 7 is a schematic diagram showing an example of the output of the captured images and the water-object discrimination results, as displayed on the display device 31. The displayed image includes the captured image 41 created by the front camera 11, the captured image 42 created by the infrared camera 12, the captured image 43 created by the PTZ camera 13, the captured image 44 created by the starboard camera 14, and the captured image 45 created by the port camera 15.
Each captured image is accompanied by a label 46 indicating the type of photographing device that created it. By viewing the label 46, the user can confirm which photographing device created each captured image. Each captured image input to the information processing device 2 is associated with identification information identifying its photographing device. Based on this identification information, the arithmetic unit 21 generates the label 46 indicating the type of the photographing device that created the captured image, and displays on the display device 31 an image in which each captured image is accompanied by its label 46.
As the discrimination result for a water object, a mark indicating the object's position within the captured image and information indicating the object's type are output. For example, as shown in FIG. 7, a bounding box 53 surrounding a water object 52 included in the captured image is displayed as the mark indicating the object's position within the captured image. A mark other than the bounding box 53, such as an arrow, may be output instead. By viewing the bounding boxes 53, the user can see that water objects exist around the ship 1.
For example, as shown in FIG. 7, character information 54 expressing the type of the water object 52 in text is displayed as the information indicating the object's type. The type of a water object is discriminated as, for example, a ship, an aid to navigation, a lighthouse, an aquaculture facility, a buoy, a bridge main girder, a bridge pier, another non-ship object, or an unknown object. For example, when the confidence is at or below the second threshold, the type of the water object is discriminated as an unknown object. A water object discriminated as a ship is further discriminated as, for example, a merchant ship, a cruise ship, a crane ship, a tugboat, a gravel carrier, a fishing vessel, a pleasure boat, another ship, or an unknown ship; aids to navigation and buoys may be classified further still. The trained model 242 may also be trained so that, when the confidence falls within the predetermined range, a water object 52 appearing at or below a predetermined size is discriminated as being of unknown type. By viewing the character information 54, the user can learn the types of the water objects existing around the ship 1. Information indicating the type of a water object may be output in a form other than the character information 54; for example, differences in type may be expressed by changing the color of the bounding box 53 or the character information 54 according to the type.
The captured image 41 created by the front camera 11 and the captured image 42 created by the infrared camera 12 cover almost the same area. However, because the two cameras use different kinds of light, a water object 52 included in one of the images may not be included in the other. The captured image 41 created by the front camera 11, the captured image 44 created by the starboard camera 14, and the captured image 45 created by the port camera 15 cover mutually different areas. However, when the areas photographed by two devices overlap, the same water object 52 may be included in both captured images.
FIG. 7 shows an example in which the captured image 41 created by the front camera 11 and the captured image 44 created by the starboard camera 14 both contain the same water object 52, a fishing vessel, and in which the captured image 41 and the captured image 45 created by the port camera 15 both contain the same water object 52, a buoy.
The image shown in FIG. 7 also includes a captured image 43 in which one water object 52 appearing in the captured image 41 created by the front camera 11 has been photographed at an enlarged scale by the PTZ camera 13. That water object 52 appears very small in the captured image 41, its type is discriminated as unknown, and its confidence falls within the predetermined range. Through the processing of S105 to S108, the PTZ camera 13 creates the captured image 43 by zooming in on the object, and a discrimination result for the object is acquired. In the captured image 43 created by the PTZ camera 13, the water object 52 from the captured image 41 is enlarged. The type of the water object 52, discriminated as unknown in the result based on the captured image 41, is discriminated as a cruise ship in the result based on the captured image 43. When the result based on the captured image 43 discriminates the object's type as an unknown object, character information 54 indicating that the water object is an unknown object is displayed. When the result indicates that the captured image 43 contains no water object, the arithmetic unit 21 displays, on the display device 31, information indicating that the captured image 43 contains no water object; in this case, the arithmetic unit 21 may display that information without displaying the captured image 43 itself.
By using a plurality of photographing devices installed at a plurality of positions on the ship 1, the devices photograph mutually different areas, making it possible to cover a wider area than a single photographing device could. This makes it possible to effectively discriminate water objects present in a wide area around the ship 1. In addition, by using a plurality of photographing devices of different types, a water object that one device cannot photograph, or photographs only indistinctly, can sometimes be photographed clearly by a device of another type. It therefore becomes possible to effectively discriminate more water objects than could be discriminated with a single photographing device. For example, by using the PTZ camera 13 to zoom in on a water object that appears indistinct when photographed by the other devices, an accurate discrimination result can be obtained.
The information processing device 2 then outputs the radar image, the captured images, and the water-object discrimination results (S110). The radar 16 scans the surroundings of the ship 1 with radio waves, discriminates water objects based on the reflected waves, generates a radar image containing the discrimination results, and inputs the radar image to the information processing device 2. The information processing device 2 acquires the radar image by receiving it at the interface unit 25, and the arithmetic unit 21 stores the acquired radar image in the storage unit 24. In S110, the arithmetic unit 21 displays, on the display device 31, an image containing the radar image, the captured images, and the water-object discrimination results.
FIG. 8 is a schematic diagram showing an example of the output of the radar image, the captured images, and the water-object discrimination results, as displayed on the display device 31. The displayed image includes the radar image 6, the captured image 41 created by the front camera 11, the captured image 44 created by the starboard camera 14, and the captured image 45 created by the port camera 15. The captured image 41 is placed on the forward side of the radar image 6, the captured image 44 on its right, and the captured image 45 on its left. Displaying each captured image alongside the radar image 6 in this way makes the correspondence between the contents of the radar image 6 and the contents of the captured images easy to understand. Other captured images may also be displayed alongside the radar image 6.
The radar image 6 contains images 61 of the water objects existing around the ship 1, together with the discrimination results obtained by the radar 16. When a discrimination result obtained by the radar 16 matches the discrimination result output by the trained model 242, the two results are output in the same manner. That is, as the discrimination result for a water object included in the radar image 6, a mark indicating the object's position within the radar image 6 and information indicating its type are output. For example, in the same way as the discrimination results overlaid on the captured images, a bounding box 53 surrounding the image 61 of the water object and character information 63 expressing the type of the water object 52 in text are displayed. The content of the character information 63 representing the type of the water object is the same as that of the character information 54 for the corresponding water object 52. The color of the bounding box 53 or character information 63 associated with the image 61 may be adjusted to match the color of the bounding box 53 or character information 54 of the corresponding water object 52. As the discrimination result overlaid on the radar image 6, a result based on the AIS data obtained from the AIS 17 may be used instead.
By outputting both the discrimination results obtained using the radar 16 and those based on the captured images, the results obtained by the two methods can be compared. Outputting both kinds of results in the same manner makes it easy to confirm that they agree. Errors in the discrimination results can also be detected: for example, when the radar image 6 contains an image 61 of a water object but no corresponding water object 52 appears in the captured images, the image 61 can be judged to be a false echo. In this way, water objects are discriminated more reliably. In S110, the captured image 42 created by the infrared camera 12 and the captured image 43 created by the PTZ camera 13 may also be displayed.
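The false-echo check can be sketched as follows (hypothetical code; the patent does not fix how radar contacts are matched to camera detections, so agreement of bearings within a tolerance is assumed here):

    # Hypothetical sketch: flag radar contacts with no camera detection
    # at a similar bearing as possible false echoes.
    def angular_gap(a: float, b: float) -> float:
        d = abs(a - b) % 360
        return min(d, 360 - d)

    def false_echo_bearings(radar_bearings: list[float],
                            camera_bearings: list[float],
                            tolerance_deg: float = 5.0) -> list[float]:
        return [r for r in radar_bearings
                if all(angular_gap(r, c) > tolerance_deg
                       for c in camera_bearings)]

    # The contact at 90 degrees has no camera counterpart and is suspect.
    print(false_echo_bearings([10.0, 90.0], [8.5, 182.0]))  # [90.0]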
The information processing device 2 then accepts designation of a portion of the radar image 6 (S111). In S111, the arithmetic unit 21 accepts the designation through the user's operation of the operation device 32. For example, the arithmetic unit 21 displays on the display device 31, overlaid on the radar image 6, a designation figure such as a cursor or a bounding box for designating a portion of the radar image 6. The arithmetic unit 21 moves the designation figure in accordance with the user's operations received by the operation device 32 and thereby accepts the designation of a portion of the radar image 6.
The information processing device 2 then outputs the discrimination results for the water objects included in the portion of the captured images corresponding to the designated portion of the radar image 6 (S112). In S112, an emphasis figure such as a bounding box for emphasizing the corresponding portion of a captured image is displayed on the display device 31, overlaid on that captured image.
FIG. 9 is a schematic diagram showing an example of the radar image 6 with a portion designated and of captured images with the corresponding portion emphasized. Overlaid on the radar image 6, a designation image 64 for designating a portion containing the image 61 of one water object is displayed. In the example shown in FIG. 9, the designation image 64 is a figure enclosing the designated portion with a double line. The captured images 41 and 43 containing the corresponding portion are displayed, and an emphasis figure 55 for emphasizing the corresponding portion is displayed overlaid on the captured images 41 and 43. In the example shown in FIG. 9, the emphasis figure 55 is a figure enclosing the corresponding portion with a double line. Character information 54 is displayed as the discrimination result for the water object 52 included in the corresponding portion.
The designated portion of the radar image 6 and the corresponding portion of the captured images are output side by side, so the user can see how each part of the radar image 6 appears to the eye. For example, the user can confirm, in the captured images, the water object 52 corresponding to an image 61 included in the radar image 6, and can easily compare the discrimination result obtained using the radar 16 with the discrimination result based on the captured images. In S112, captured images that do not contain a portion corresponding to the designated portion of the radar image 6 may also be displayed. After S112 has finished, the information processing device 2 ends the processing for outputting the water-object discrimination results.
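Under the same pinhole assumptions used for the PTZ example above, mapping a designated radar portion to its counterpart in a captured image amounts to converting the selected bearing span into a horizontal pixel span (a hypothetical sketch, not the patent's stated method):

    # Hypothetical sketch: convert a designated bearing span on the radar
    # image into the corresponding horizontal pixel span of a camera image.
    def pixel_span(bearing_lo: float, bearing_hi: float,
                   camera_bearing_deg: float, hfov_deg: float,
                   image_width: int) -> tuple[int, int]:
        def to_pixel(bearing: float) -> int:
            # Signed offset from the camera axis, wrapped to [-180, 180).
            rel = (bearing - camera_bearing_deg + 180.0) % 360 - 180.0
            frac = rel / hfov_deg + 0.5  # 0 = left edge, 1 = right edge
            return round(min(max(frac, 0.0), 1.0) * image_width)
        return to_pixel(bearing_lo), to_pixel(bearing_hi)

    # A designation spanning bearings 350-10 degrees maps to a band
    # around the centre of the front camera's 1920-pixel-wide image.
    print(pixel_span(350.0, 10.0, camera_bearing_deg=0.0,
                     hfov_deg=120.0, image_width=1920))  # (800, 1120)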
The processing of S109, the processing of S110, and the processing of S111 to S112 may be executed in an order different from the example shown in FIG. 5; they may be executed in any order according to the user's operations received by the operation device 32, and may be executed repeatedly in alternation. Execution of any of the processing of S109, S110, and S111 to S112 may be omitted. The processing of S101 to S109 is executed as needed, and the water-object discrimination results are used for the navigation of the ship 1.
The information processing device 2 can retrain the trained model 242 using the water-object discrimination results. FIG. 10 is a flowchart showing an example of the processing for retraining the trained model 242. The information processing device 2 generates training data that associates a captured image created by one of the photographing devices with a water-object discrimination result (S21). In S21, the arithmetic unit 21 generates training data containing data sets that each associate a captured image with a discrimination result based on a captured image taken by a photographing device different from the one that took the first image. For example, the captured image 41 created by the front camera 11 is associated with the discrimination result based on the captured image 43 created by the PTZ camera 13.
In S21, the arithmetic unit 21 also generates training data containing data sets that associate captured images with discrimination results obtained using the radar 16, and training data containing data sets that associate captured images with discrimination results obtained using the AIS 17. In S21, the arithmetic unit 21 generates training data containing a plurality of such data sets, each associating a captured image with a water-object discrimination result, and stores the training data in the storage unit 24.
The information processing device 2 then retrains the trained model 242 using the training data (S22). In S22, the arithmetic unit 21 inputs the captured images contained in the training data to the trained model 242 and retrains it. The trained model 242 outputs a water-object discrimination result in response to each input captured image. The arithmetic unit 21 acquires the discrimination result output by the trained model 242 and adjusts the computation parameters of the trained model 242 so as to reduce the error between that output and the discrimination result associated, in the training data, with the input captured image. That is, the parameters are adjusted so that the model outputs a discrimination result substantially identical to the one associated with the captured image.
The arithmetic unit 21 retrains the trained model 242 by repeating this processing over the plurality of data sets contained in the training data and adjusting the parameters of the trained model 242. The arithmetic unit 21 stores data recording the final adjusted parameters in the storage unit 24. In this way, a retrained trained model 242 is generated. After S22 has finished, the information processing device 2 ends the processing for retraining the trained model 242.
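A minimal sketch of this S21-S22 cycle, assuming a gradient-trained detector behind a hypothetical interface (forward, loss, update_parameters, and save_parameters are stand-ins; the patent specifies only that the parameters are adjusted to reduce the error against the associated results):

    # Hypothetical sketch of the retraining loop: each data set pairs a
    # captured image with a discrimination result obtained from another
    # source (another photographing device, the radar 16, or the AIS 17).
    def retrain(model, datasets, epochs: int = 3) -> None:
        for _ in range(epochs):
            for image, target_result in datasets:
                predicted = model.forward(image)
                loss = model.loss(predicted, target_result)
                model.update_parameters(loss)  # e.g. one SGD step
        # Persist the adjusted parameters, as the storage unit 24 does.
        model.save_parameters("model_242.params")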
In the processing of S21 to S22, retraining the trained model 242 on training data that pairs captured images with discrimination results based on images taken by a different photographing device makes it possible to adjust the trained model 242 so that it outputs more accurate discrimination results. Furthermore, by using captured images paired with discrimination results obtained using the radar 16 or the AIS 17 as training data, the trained model 242 can be adjusted using discrimination results that are independent of the image-based results. The processing of S21 to S22 may also be executed by a device other than the information processing device 2.
As described in detail above, in the present embodiment, captured images taken by the plurality of photographing devices installed on the ship 1 are input to the trained model 242, and the water-object discrimination results output by the trained model 242 are output. By using the trained model 242, water objects can be discriminated without relying on visual observation by the crew. Using a plurality of photographing devices installed at a plurality of positions on the ship 1 makes it possible to photograph a wide area and thus to effectively discriminate water objects present in a wide area around the ship 1. Using a plurality of photographing devices of different types also makes it possible to effectively discriminate many water objects. By discriminating water objects effectively, safe navigation of the ship 1 can be facilitated.
Although this embodiment shows a form in which the radar 16 and the AIS 17 are connected to the information processing device 2, the radar 16 and the AIS 17 may be independent of the information processing device 2. For example, the radar 16 and the AIS 17 may be connected to the display device 31 without passing through the information processing device 2, and the radar image 6 and the water-object discrimination results obtained using the radar 16 or the AIS 17 may be displayed independently of the processing performed by the information processing device 2.
The present invention is not limited to the contents of the embodiment described above, and various modifications are possible within the scope indicated in the claims. That is, embodiments obtained by combining technical means appropriately modified within the scope of the claims are also included in the technical scope of the present invention.
1 ship
11 front camera
12 infrared camera
13 PTZ camera
14 starboard camera
15 port camera
2 information processing device
20 recording medium
242 trained model
31 display device
41, 42, 43, 44, 45 captured images
52 water object
53 bounding box
54 character information
6 radar image
61 image of a water object

Claims (12)

  1. An information processing method comprising:
     acquiring captured images of the surroundings of a ship taken by a plurality of photographing devices of different types installed on the ship;
     inputting the captured images acquired from each of the plurality of photographing devices to a trained model that, when a captured image is input, outputs a discrimination result of the water objects included in the captured image;
     acquiring the discrimination results of the water objects output by the trained model; and
     outputting the discrimination results of the water objects for each of the plurality of photographing devices.
  2. The information processing method according to claim 1, wherein the discrimination result includes the type of the water object.
  3. The information processing method according to claim 1 or 2, wherein:
     the plurality of photographing devices include a first camera and a PTZ camera capable of panning, tilting, and zooming;
     the discrimination result output by the trained model includes a confidence of the type of the water object;
     a first captured image acquired from the first camera is input to the trained model, and the discrimination result output by the trained model is acquired;
     when the confidence included in the acquired discrimination result falls within a predetermined range, a second captured image, in which the water object included in the first captured image is photographed at an enlarged scale, is acquired from the PTZ camera;
     the second captured image is input to the trained model, and the discrimination result output by the trained model is acquired; and
     the acquired discrimination result is output.
  4. The information processing method according to claim 1 or 2, wherein:
     the plurality of photographing devices include a front camera that photographs an area ahead of the ship, an infrared camera that photographs using infrared light, a PTZ camera capable of panning, tilting, and zooming, a starboard camera that photographs an area facing the starboard side of the ship, and a port camera that photographs an area facing the port side of the ship; and
     the discrimination result is output together with information indicating the type of the photographing device.
  5. The information processing method according to any one of claims 1 to 4, wherein the discrimination result is output by outputting the captured image, a mark indicating the position, within the captured image, of the water object included in the captured image, and information indicating the type of the water object.
  6. The information processing method according to any one of claims 1 to 5, wherein a radar image containing discrimination results of water objects obtained using a radar installed on the ship and the discrimination results output by the trained model in response to captured images of an area included in the range scanned by the radar are output for comparison.
  7. The information processing method according to claim 6, wherein:
     designation of a portion of the radar image is accepted; and
     the discrimination result, included in the discrimination results output by the trained model, of the water object included in the portion of the captured image corresponding to the designated portion is output.
  8. The information processing method according to claim 6 or 7, wherein, when the discrimination result of a water object obtained using the radar matches the discrimination result of the water object output by the trained model, both discrimination results are output in the same manner.
  9. The information processing method according to any one of claims 1 to 8, wherein the trained model is retrained using training data containing a captured image and a discrimination result of a water object based on a captured image taken by a photographing device of a type different from the photographing device that took the former captured image.
  10. The information processing method according to any one of claims 1 to 9, wherein the trained model is retrained using training data containing a captured image taken by the photographing device and a discrimination result of a water object obtained using a radar or an AIS (Automatic Identification System) installed on the ship.
  11. An information processing device comprising:
     an image acquisition unit that acquires captured images of the surroundings of a ship taken by a plurality of photographing devices of different types;
     a discrimination result acquisition unit that inputs the captured images acquired from each of the plurality of photographing devices to a trained model that, when a captured image is input, outputs a discrimination result of the water objects included in the captured image, and that acquires the discrimination results of the water objects output by the trained model; and
     an output unit that outputs the discrimination results corresponding to each of the plurality of photographing devices.
  12. A computer program causing a computer to execute processing of:
     acquiring captured images of the surroundings of a ship taken by a plurality of photographing devices of different types;
     inputting the captured images acquired from each of the plurality of photographing devices to a trained model that, when a captured image is input, outputs a discrimination result of the water objects included in the captured image;
     acquiring the discrimination results of the water objects output by the trained model; and
     outputting the discrimination results corresponding to each of the plurality of photographing devices.
PCT/JP2022/015560 2021-12-02 2022-03-29 Information processing method, information processing device, and computer program WO2023100389A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
JP2021196343 2021-12-02
JP2021-196343 2021-12-02

Publications (1)

Publication Number Publication Date
WO2023100389A1 true WO2023100389A1 (en) 2023-06-08

Family

ID=86611781

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/JP2022/015560 WO2023100389A1 (en) 2021-12-02 2022-03-29 Information processing method, information processing device, and computer program

Country Status (1)

Country Link
WO (1) WO2023100389A1 (en)

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP6236549B1 (en) * 2016-06-02 2017-11-22 日本郵船株式会社 Ship navigation support device
JP2021103396A (en) * 2019-12-25 2021-07-15 知洋 下川部 Management server in ship-loaded marine navigation support system, ship-loaded marine navigation support method, and ship-loaded marine navigation support program


Similar Documents

Publication Publication Date Title
US11900668B2 (en) System and method for identifying an object in water
JP6932487B2 (en) Mobile monitoring device
KR102530691B1 (en) Device and method for monitoring a berthing
KR20200039589A (en) Device and method for monitoring ship and port
KR101666466B1 (en) Marine risk management system and marine risk management method using marine object distance measuring system with monocular camera
CN115147594A (en) Ship image trajectory tracking and predicting method based on ship bow direction identification
KR102520844B1 (en) Method and device for monitoring harbor and ship considering sea level
KR102311089B1 (en) Apparatus and method for monitoring the ocean using smart marine buoys
Wisernig et al. Augmented reality visualization for sailboats (ARVS)
WO2023100389A1 (en) Information processing method, information processing device, and computer program
US20230351764A1 (en) Autonomous cruising system, navigational sign identifying method, and non-transitory computer-readable medium
Sumimoto et al. Machine vision for detection of the rescue target in the marine casualty
CN114332682B (en) Marine panorama defogging target identification method
KR20210055904A (en) Apparatus for electronic surveillance of voyage
JP2005316958A (en) Red eye detection device, method, and program
KR102315080B1 (en) Apparatus and method for monitoring the ocean using smart marine buoys
CN115082811A (en) Method for identifying and measuring distance of marine navigation ship according to image data
KR102295283B1 (en) Smart navigation support apparatus
CN115035466A (en) Infrared panoramic radar system for safety monitoring
US20240104746A1 (en) Vessel tracking and monitoring system and operating method thereof
JPWO2020208742A1 (en) Polygon detection device, polygon detection method, and polygon detection program
WO2023167071A1 (en) Information processing device, monitoring device, information processing method, and program
EP4296968A1 (en) Method for labelling a water surface within an image, method for providing a training dataset for training, validating, and/or testing a machine learning algorithm, machine learning algorithm for detecting a water surface in an image, and water surface detection system
KR102613592B1 (en) Method distant ship detection and detailed type identification using images
WO2022137953A1 (en) Sea mark identification device, autonomous navigation system, sea mark identification method, and program

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 22900825

Country of ref document: EP

Kind code of ref document: A1

ENP Entry into the national phase

Ref document number: 2023564729

Country of ref document: JP

Kind code of ref document: A