WO2020075429A1 - Surroundings monitoring device and method - Google Patents

Surroundings monitoring device and method

Info

Publication number
WO2020075429A1
Authority
WO
WIPO (PCT)
Prior art keywords
target
monitoring device
information
image
unit
Prior art date
Application number
PCT/JP2019/035308
Other languages
English (en)
Japanese (ja)
Inventor
浩二 西山
Original Assignee
古野電気株式会社
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 古野電気株式会社 filed Critical 古野電気株式会社
Priority to CN201980065172.8A priority Critical patent/CN112840641A/zh
Priority to JP2020550232A priority patent/JP7346436B2/ja
Publication of WO2020075429A1 publication Critical patent/WO2020075429A1/fr

Links

Images

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T19/00Manipulating 3D models or images for computer graphics
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/10Segmentation; Edge detection
    • G06T7/11Region-based segmentation
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/60Control of cameras or camera modules
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N7/00Television systems
    • H04N7/18Closed-circuit television [CCTV] systems, i.e. systems in which the video signal is not broadcast

Definitions

  • The present invention mainly relates to a surroundings monitoring device that uses camera images.
  • Patent Document 1 discloses an intruding-object monitoring system that uses an imaging device such as a camera as its image input means. Patent Document 1 also discloses that, when an intruding ship on the sea is to be detected, a mask area is set for automatically removing false detections such as light reflected on the sea surface.
  • However, since the mask area of Patent Document 1 is created on the basis of changes in pixel values, the mask area cannot be set correctly when reflections of light on the sea are hard to distinguish from a ship, and there was room for improvement with respect to false detections and missed detections.
  • The present invention has been made in view of the above circumstances, and its object is to realize a surroundings monitoring device capable of highly accurate target detection.
  • One aspect of the present invention provides a surroundings monitoring device having the following configuration. That is, the surroundings monitoring device includes a nautical chart data storage unit, a captured video input unit, an area setting unit, a target information acquisition unit, and a composite video generation unit.
  • The nautical chart data storage unit stores nautical chart data.
  • The captured video input unit receives video captured by a camera.
  • The area setting unit sets a detection target area based on the nautical chart data.
  • The target information acquisition unit acquires target information of a target detected in the detection target area.
  • The composite video generation unit generates a composite video in which the target information is combined at the position in the captured video corresponding to the position where the target is detected.
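  • As an illustration only (not part of the original disclosure), the following minimal Python sketch shows one way the five units above could be composed; every class, method, and attribute name is a hypothetical assumption.

```python
from dataclasses import dataclass
from typing import List, Tuple

LatLon = Tuple[float, float]  # (latitude, longitude)

@dataclass
class Target:
    position: LatLon   # position at which the target was detected
    speed: float       # speed over ground, e.g. in knots
    source: str        # "AIS", "radar", or "image"

class SurroundingsMonitor:
    """Hypothetical composition of the five units listed above."""

    def __init__(self, chart_store, video_input, area_setter, target_acquirer, compositor):
        self.chart_store = chart_store          # nautical chart data storage unit
        self.video_input = video_input          # captured video input unit
        self.area_setter = area_setter          # area setting unit
        self.target_acquirer = target_acquirer  # target information acquisition unit
        self.compositor = compositor            # composite video generation unit

    def step(self):
        frame = self.video_input.read_frame()
        # The detection target area is derived from the stored chart data.
        area = self.area_setter.detection_area(self.chart_store.chart_data())
        targets: List[Target] = self.target_acquirer.targets_in(area)
        # Combine each target's information at the image position that
        # corresponds to the position where the target was detected.
        return self.compositor.compose(frame, targets)
```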
  • Preferably, the detection target area is an area demarcated by a boundary line represented by a plurality of pieces of position information.
  • This allows the shape of the detection target area to be determined flexibly.
  • Preferably, the detection target area is an area demarcated by a water-land boundary line or a depth contour line included in the chart data.
  • Alternatively, the detection target area may be an area demarcated by a boundary line offset from the water-land boundary line included in the chart data by a predetermined distance toward the water area.
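  • As an illustration of the offset boundary just described, here is a minimal sketch assuming the chart's land areas are available as shapely polygons in a projected, metre-based coordinate system; the 500 m offset and all names are examples, not part of the original disclosure.

```python
from shapely.geometry import Polygon
from shapely.ops import unary_union

def detection_area(water_region: Polygon, land_polygons, offset_m: float = 500.0):
    """Water area whose boundary lies offset_m off the shoreline toward the water."""
    land = unary_union(land_polygons)
    # Growing the land outward by offset_m moves the water-land boundary
    # a predetermined distance toward the water area.
    buffered_land = land.buffer(offset_m)
    return water_region.difference(buffered_land)
```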
  • Preferably, the target information acquisition unit acquires target information of a target detected by performing image recognition within the detection target area.
  • Preferably, the image recognition is performed with parameters that are varied according to the water depth included in the chart data.
  • Preferably, the captured video input unit receives video captured by a camera installed on the ground.
  • the target information acquisition unit acquires target information of a target detected by a radar device in the detection target area.
  • the target information preferably represents at least one of the position of the target, the speed of the target, and the size of the target.
  • Preferably, the above surroundings monitoring device has the following configuration. That is, the target information acquisition unit can acquire target information based on AIS information.
  • The composite video generation unit can combine, in the same video, target information based on the AIS information and target information based on information other than the AIS information.
  • When the composite video generation unit obtains, for the same target, both target information based on the AIS information and target information based on information other than the AIS information, it is preferable that the target information based on the AIS information is combined preferentially.
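  • A minimal sketch of this preference, with hypothetical names and an assumed distance-based rule for deciding that two reports describe the same target:

```python
# Keep AIS-based target information; add a radar- or image-based target only if
# no AIS-based target already represents it. The matching predicate and the
# 50 m threshold are assumptions for illustration.
def merge_targets(ais_targets, other_targets, same_target, max_dist_m=50.0):
    merged = list(ais_targets)
    for t in other_targets:
        if not any(same_target(t, a, max_dist_m) for a in ais_targets):
            merged.append(t)
    return merged
```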
  • Preferably, the above video generation device has the following configuration. That is, the composite video generation unit can combine an azimuth scale indicating azimuth with the captured video, and can automatically change the vertical position at which the azimuth scale is combined with the captured video.
  • Another aspect of the present invention provides the following surroundings monitoring method. That is, nautical chart data is stored. Video captured by an imaging device is received as input.
  • A detection target area is set based on the nautical chart data.
  • Target information of a target detected in the detection target area is acquired.
  • A composite video is generated in which the target information is combined at the position in the captured video corresponding to the position where the target is detected.
  • FIG. 4 is a conceptual diagram illustrating three-dimensional scene data constructed by arranging virtual reality objects in a three-dimensional virtual space, and a projection screen arranged in the three-dimensional virtual space.
  • FIG. 5 is a diagram showing a composite video output by the data compositing unit.
  • A conceptual diagram showing how a virtual reality object representing the water-land boundary line is arranged in the three-dimensional virtual space.
  • FIG. 1 is a block diagram showing the overall configuration of a perimeter monitoring device 1 according to an embodiment of the present invention.
  • FIG. 2 is a side view showing various devices provided in the port surveillance facility 4.
  • the perimeter monitoring device 1 shown in FIG. 1 is installed in a port monitoring facility 4 as shown in FIG. 2, for example.
  • The surroundings monitoring device 1 can generate, on the basis of video captured by the camera (imaging device) 3, video on which information supporting the monitoring of the movement of ships (monitoring targets) in the port is superimposed. The surroundings monitoring device 1 therefore also functions as a video generation device.
  • the video generated by the peripheral monitoring device 1 is displayed on the display 2.
  • the display 2 can be configured, for example, as a display arranged on a monitoring desk where an operator in the port monitoring facility 4 performs monitoring work.
  • the display 2 is not limited to the above, and may be, for example, a display of a portable computer carried by a surveillance assistant who monitors the surroundings from the port surveillance facility 4.
  • The perimeter monitoring device 1 generates the composite video output to the display 2 by combining the video of the surroundings captured by the camera 3 installed in the port monitoring facility 4 with graphics that virtually represent additional display information (described in detail later) about the surroundings.
  • the camera 3 is configured as a visible light video camera that takes a picture around the port surveillance facility 4.
  • The camera 3 is configured as a wide-angle video camera and is mounted facing slightly downward in a location with good visibility.
  • This camera 3 has a live output function and can generate moving image data (video data) of the shooting result in real time and output it to the surroundings monitoring device 1.
  • The camera 3 performs fixed-point shooting in principle.
  • However, the camera 3 is attached via a rotation mechanism (not shown), and its shooting direction can be changed when a signal instructing a pan/tilt operation is input from the peripheral monitoring device 1.
  • the peripheral monitoring device 1 of the present embodiment is electrically connected to the camera 3, the AIS receiver 9 as various devices, the radar device (target detection unit) 12, and the like.
  • the AIS receiver 9 receives the AIS information transmitted from the ship.
  • The AIS information includes various information such as the position (latitude/longitude) of ships navigating the monitored port, the length and width of each ship, the type and identification information of each ship, and the ship's speed, course, and destination.
  • the radar device 12 can detect a target such as a ship existing in the monitored port. Further, the radar device 12 has a known target tracking function (Target Tracking, TT) capable of capturing and tracking a target, and obtains the position and velocity vector (TT information) of the target.
  • the peripheral monitoring device 1 is connected to a keyboard 36 and a mouse 37 operated by the user. By operating the keyboard 36 and the mouse 37, the user can give various instructions regarding image generation. This instruction includes a pan / tilt operation of the camera 3 and the like.
  • The peripheral monitoring device 1 includes a captured video input unit 21, an additional display information acquisition unit (target information acquisition unit) 17, a camera position/orientation setting unit 25, a radar position/direction setting unit 26, an image recognition unit (target detection unit) 28, an image recognition area setting unit (area setting unit) 31, a tracking target area setting unit (area setting unit) 32, a nautical chart data storage unit 33, and a composite video generation unit 20.
  • the peripheral monitoring device 1 is configured as a known computer, and includes a CPU, a ROM, a RAM, an HDD, and the like, although not shown. Furthermore, the peripheral monitoring device 1 includes a GPU for performing three-dimensional image processing, which will be described later, at high speed. Then, for example, the HDD stores software for executing the perimeter monitoring method of the present invention.
  • Through the cooperation of this hardware and software, the peripheral monitoring device 1 can be made to function as the captured video input unit 21, the additional display information acquisition unit 17, the camera position/orientation setting unit 25, the radar position/direction setting unit 26, the image recognition unit 28, the image recognition area setting unit 31, the tracking target area setting unit 32, the nautical chart data storage unit 33, the composite video generation unit 20, and so on.
  • Reference numeral 35 is a processing circuit.
  • the captured video input unit 21 can input the video data output by the camera 3 at, for example, 30 frames per second.
  • the captured video input unit 21 outputs the input video data to the image recognition unit 28 and the composite video generation unit 20 (data composition unit 23 described later).
  • The additional display information acquisition unit 17 acquires information to be displayed in addition to the video captured by the camera 3 (additional display information), based on the information input to the peripheral monitoring device 1 from the AIS receiver 9 and the radar device 12 and on the information acquired by the image recognition unit 28.
  • Various types of additional display information are conceivable; for example, the position and speed of a ship obtained from the AIS receiver 9, the position and speed of a target obtained from the radar device 12, and the position and speed of a target obtained from the image recognition unit 28.
  • the additional display information acquisition unit 17 outputs the acquired information to the composite video generation unit 20. The details of the additional display information will be described later.
  • the camera position / direction setting unit 25 can set the position (shooting position) and direction of the camera 3 in the port monitoring facility 4.
  • the information on the position of the camera 3 is specifically information indicating latitude, longitude, and altitude.
  • the information on the orientation of the camera 3 is specifically information indicating an azimuth angle and a depression angle. These pieces of information can be obtained by, for example, performing measurements during installation work of the camera 3.
  • As described above, the orientation of the camera 3 can be changed within a predetermined angle range; when a pan/tilt operation of the camera 3 is performed, the latest orientation information is set in the camera position/orientation setting unit 25 accordingly.
  • the camera position / orientation setting unit 25 outputs the set information to the composite video generation unit 20, the additional display information acquisition unit 17, and the image recognition area setting unit 31.
  • the radar position / direction setting unit 26 can set the position and direction of the antenna of the radar device 12.
  • the information on the position of the antenna is specifically information indicating latitude and longitude.
  • the information on the direction of the antenna is information indicating the azimuth.
  • the antenna normally rotates in a horizontal plane, but the orientation of the antenna here means the azimuth (reference azimuth) that is the reference of the detection direction of the radar device 12.
  • the radar position / direction setting unit 26 outputs the set information to the additional display information acquisition unit 17 and the tracking target area setting unit 32.
  • The image recognition unit 28 cuts out, from the video acquired from the captured video input unit 21, portions that appear to be a ship or the like, and collates them with a target image database registered in advance to recognize targets such as ships, divers, and drifting objects. Specifically, the image recognition unit 28 detects moving objects by an inter-frame difference method or the like, cuts out the regions where differences occur, and collates them with the image database. As the matching method, a known appropriate method such as template matching can be used; image recognition can also be realized using another publicly known method, for example a neural network. The image recognition unit 28 outputs, for each recognized target, information on the position recognized on the image to the additional display information acquisition unit 17.
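  • A minimal sketch of this detection flow using OpenCV (inter-frame differencing followed by template matching of each cut-out region against a pre-registered image database); the thresholds and the database structure are assumptions for illustration, not the patented implementation.

```python
import cv2

def detect_moving_targets(prev_gray, cur_gray, templates, match_thresh=0.7):
    """templates: dict like {"vessel": gray_image, "diver": gray_image, ...}."""
    diff = cv2.absdiff(cur_gray, prev_gray)                    # inter-frame difference
    _, mask = cv2.threshold(diff, 25, 255, cv2.THRESH_BINARY)
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,    # OpenCV 4 return values
                                   cv2.CHAIN_APPROX_SIMPLE)
    detections = []
    for c in contours:
        x, y, w, h = cv2.boundingRect(c)
        if w < 8 or h < 8:
            continue                                           # ignore tiny changes (noise, glitter)
        patch = cur_gray[y:y + h, x:x + w]                     # cut out the changed region
        for label, tmpl in templates.items():
            if tmpl.shape[0] > h or tmpl.shape[1] > w:
                continue                                       # template must fit inside the patch
            score = cv2.matchTemplate(patch, tmpl, cv2.TM_CCOEFF_NORMED).max()
            if score >= match_thresh:
                detections.append((label, (x, y, w, h), float(score)))
                break
    return detections
```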
  • The image recognition area setting unit 31 sets, within the video input from the captured video input unit 21 to the image recognition unit 28, the area in which the image recognition unit 28 performs image recognition.
  • Since the image recognition unit 28 recognizes objects floating on the water surface, the image recognition area setting unit 31 normally sets only the part of the video in which the water surface appears as the target area for image recognition.
  • the boundary that separates the inside and outside of the target area for image recognition can be represented by, for example, a closed polygonal line figure.
  • the image recognition area setting unit 31 stores position information (latitude and longitude) of a plurality of vertices of a polygonal line figure.
  • The camera 3 is usually installed so that the water surface occupies a large part of the image, but as in the example of FIG. 3, the field of view of the camera 3 may include land as well as the water surface.
  • By setting the image recognition target area in the image recognition area setting unit 31 so as to exclude the land portion, it is possible to prevent the image recognition unit 28 from recognizing, for example, a vehicle traveling on a road as a target. As a result, recognition accuracy can be improved and the processing load reduced.
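  • A minimal sketch of restricting recognition to such an area, assuming the polygonal boundary's vertices have already been projected from latitude/longitude into pixel coordinates (the projection itself depends on the camera model and is omitted here):

```python
import cv2
import numpy as np

def apply_recognition_area(frame_gray, polygon_px):
    """Zero out everything outside the closed polygon so the land portion is ignored."""
    mask = np.zeros(frame_gray.shape[:2], dtype=np.uint8)
    pts = np.asarray(polygon_px, dtype=np.int32).reshape(-1, 1, 2)
    cv2.fillPoly(mask, [pts], 255)                   # inside of the polygon = 255
    return cv2.bitwise_and(frame_gray, frame_gray, mask=mask)
```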
  • The tracking target area setting unit 32 in FIG. 1 sets the area within which the radar device 12 tracks targets using the target tracking function described above. Since the radar device 12 is intended to detect objects floating on the water surface, the tracking target area setting unit 32, like the image recognition area setting unit 31, sets the tracking target area so as to exclude the land portion. This makes it possible to properly detect only targets on the water surface.
  • the boundary separating the inside and outside of the target area for tracking by the radar device 12 can be represented by a closed polygonal line shape like the target area for image recognition.
  • the tracking target area setting unit 32 stores position information (latitude and longitude) of a plurality of vertices of a polygonal line figure.
  • the nautical chart data storage unit 33 stores nautical chart data.
  • As the nautical chart data, for example, electronic navigational charts are used.
  • The chart data includes vector data of the boundary line between the water surface and the land (water-land boundary line).
  • The method of expressing the boundary line is arbitrary; for example, it is conceivable that the contour of each land area is expressed as a closed polygonal-line figure and the position information (latitude and longitude) of each vertex is described in order.
  • The nautical chart data storage unit 33 outputs the vector data indicating the water-land boundary line to the image recognition area setting unit 31 and the tracking target area setting unit 32. This facilitates the setting of areas by the image recognition area setting unit 31 and the tracking target area setting unit 32.
  • the composite video generation unit 20 generates a composite video displayed on the display 2.
  • the synthetic image generation unit 20 includes a three-dimensional scene generation unit (display data generation unit) 22 and a data synthesis unit (display output unit) 23.
  • The 3D scene generation unit 22 arranges virtual reality objects 41v, 42v, ... corresponding to the additional display information in the three-dimensional virtual space 40 to build a virtual-reality 3D scene.
  • Three-dimensional scene data (three-dimensional display data) 48, which is the data of this three-dimensional scene, is thereby generated. The details of the 3D scene will be described later.
  • The data synthesizing unit 23 renders the three-dimensional scene data 48 generated by the three-dimensional scene generation unit 22 to produce graphics that express the additional display information three-dimensionally, and combines them with the video captured by the camera 3 to output the composite video shown in FIG. 5. As shown in FIG. 5, in this composite video, graphics 41f, 42f, ... showing the additional display information are superimposed on the video captured by the camera 3.
  • the data synthesizing unit 23 outputs the generated synthetic video to the display 2. The details of the graphic generation processing and the data synthesis processing will be described later.
  • FIG. 3 is a conceptual diagram illustrating an example of additional display information to be displayed in the peripheral monitoring device 1.
  • the additional display information is information that is displayed in addition to the image captured by the camera 3, and various information can be considered depending on the purpose and function of the device connected to the peripheral monitoring device 1.
  • For example, the position and orientation of a ship obtained from the received AIS information can be used as additional display information.
  • The position and speed of a target detected by the radar device 12 can also be used as additional display information.
  • the additional display information includes the position and speed of the target identified by the image recognition unit 28 performing image recognition.
  • The position and velocity vector of a target included in the information obtained from the radar device 12 are relative to the position and orientation of the antenna of the radar device 12. Therefore, the additional display information acquisition unit 17 converts the position and velocity vector of the target obtained from the radar device 12 into an earth reference based on the information obtained from the radar position/direction setting unit 26.
  • the additional display information acquisition unit 17 converts the position and velocity vector of the target obtained from the image recognition unit 28 into the earth reference based on the information obtained from the camera position / direction setting unit 25.
  • each additional display information includes at least information indicating the position (latitude and longitude) of a point on the sea surface (water surface) where it is arranged.
  • the additional display information indicating the vessels 41r, 42r, 43r includes information indicating the positions of the vessels 41r, 42r, 43r.
  • FIG. 4 is a conceptual diagram illustrating the three-dimensional scene data 48 generated by arranging the virtual reality objects 41v, 42v, ... in the three-dimensional virtual space 40, and the projection screen 51 arranged in the three-dimensional virtual space 40.
  • the three-dimensional virtual space 40 in which the virtual reality objects 41v, 42v, ... Are arranged by the three-dimensional scene generation unit 22 is configured by an orthogonal coordinate system as shown in FIG.
  • the origin of this Cartesian coordinate system is set to a point of zero altitude just below the installation position of the camera 3.
  • the xz plane which is a horizontal plane including the origin, is set so as to simulate the sea surface (water surface).
  • the coordinate axes are determined such that the + z direction always matches the azimuth angle of the camera 3, the + x direction is the right direction, and the + y direction is the up direction.
  • Each point (coordinate) in the three-dimensional virtual space 40 is set so as to correspond to the actual position around the camera 3.
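  • A minimal sketch, under a flat-earth (equirectangular) approximation, of placing a target's latitude/longitude into this coordinate system: origin at sea level directly below the camera, xz plane on the water surface, +z aligned with the camera azimuth, +x to the right, +y up. The conversion constant and names are illustrative assumptions.

```python
import math

M_PER_DEG_LAT = 111_320.0  # rough metres per degree of latitude

def latlon_to_scene(lat, lon, cam_lat, cam_lon, cam_azimuth_deg):
    north = (lat - cam_lat) * M_PER_DEG_LAT
    east = (lon - cam_lon) * M_PER_DEG_LAT * math.cos(math.radians(cam_lat))
    a = math.radians(cam_azimuth_deg)              # azimuth clockwise from true north
    x = east * math.cos(a) - north * math.sin(a)   # to the right of the camera
    z = east * math.sin(a) + north * math.cos(a)   # along the camera azimuth
    return (x, 0.0, z)                             # y = 0: on the simulated sea surface
```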
  • FIG. 4 shows an example in which virtual reality objects 41v, 42v, 43v are arranged in the three-dimensional virtual space 40 corresponding to the situation of the port shown in FIG.
  • The virtual reality objects 41v, 42v, ... each consist of a downward cone indicating the position of a recognized target (that is, the ships 41r, 42r, 43r) and an arrow indicating the velocity vector of that target.
  • the downward cone represents that the target is located just below it.
  • the direction of the arrow expresses the direction of the speed of the target, and the length of the arrow expresses the magnitude of the speed.
  • Each virtual reality object 41v, 42v, 43v is arranged on the xz plane or slightly above it so that the relative position of the additional display information represented by the virtual reality object 41v, 42v, 43v to the camera 3 is reflected.
  • When determining these arrangement positions, calculations are performed using the position and orientation of the camera 3 set by the camera position/orientation setting unit 25 shown in FIG. 1.
  • the 3D scene generation unit 22 generates the 3D scene data 48 as described above.
  • Since the virtual reality objects 41v, 42v, ... are arranged on an azimuth basis with the point directly below the camera 3 as the origin, when the azimuth of the camera 3 changes from the state of FIG. 4, a new three-dimensional scene in which the objects 41v, 42v, ... are rearranged is constructed and the three-dimensional scene data 48 is updated.
  • Also, when the contents of the additional display information change, for example when the ships 41r, 42r, 43r move from the state of FIG. 3, the three-dimensional scene data 48 is updated so as to reflect the latest additional display information.
  • the data synthesis unit 23 arranges a projection screen 51 in the three-dimensional virtual space 40, which defines the position and range of the image captured by the camera 3.
  • the data composition unit 23 arranges the viewpoint camera 55 so as to simulate the position and orientation of the camera 3 in the real space in the three-dimensional virtual space 40.
  • the data synthesizing unit 23 also arranges the projection screen 51 so as to face the viewpoint camera 55.
  • The position of the camera 3 can be obtained based on the setting values of the camera position/orientation setting unit 25 shown in FIG. 1.
  • In the present embodiment, the azimuth angle of the viewpoint camera 55 in the three-dimensional virtual space 40 does not change even when the azimuth of the camera 3 changes due to a pan operation. Instead, when the camera 3 pans, the data synthesizing unit 23 rearranges the virtual reality objects 41v, 42v, ... in the three-dimensional virtual space 40 by rotating them in the horizontal plane about the origin by the amount of the azimuth change.
  • the depression angle of the viewpoint camera 55 is controlled so as to be always equal to the depression angle of the camera 3.
  • The data synthesizing unit 23 changes the position and orientation of the projection screen 51 arranged in the three-dimensional virtual space 40 in response to changes in the depression angle caused by tilt operations of the camera 3 (i.e., changes in the depression angle of the viewpoint camera 55), so that the projection screen 51 is always kept facing the viewpoint camera 55.
  • The data synthesizing unit 23 then performs a known rendering process on the three-dimensional scene data 48 and the projection screen 51 to generate a two-dimensional image. More specifically, the data synthesizing unit 23 arranges the viewpoint camera 55 in the three-dimensional virtual space 40 and defines a view frustum 56, which determines the range subject to rendering, with the viewpoint camera 55 as its apex and the line-of-sight direction as its central axis. The data synthesizing unit 23 then converts the vertex coordinates of the polygons located inside the view frustum 56, among the polygons forming each object (the virtual reality objects 41v, 42v, ... and the projection screen 51), into coordinates of the two-dimensional composite video by perspective projection.
  • Since the projection screen 51 is curved along a spherical shell centered on the viewpoint camera 55, distortion of the captured video due to perspective projection can be prevented.
  • Since the camera 3 is a wide-angle camera, lens distortion occurs in the captured video as shown in FIG. 3, but this lens distortion is removed when the captured video is pasted onto the projection screen 51.
  • The method of removing the lens distortion is arbitrary; for example, it is conceivable to use a lookup table in which the position of each pixel before correction is associated with its position after correction. As a result, the three-dimensional virtual space 40 shown in FIG. 4 and the captured video can be well matched.
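  • A minimal sketch of such a lookup-table correction using OpenCV's remap: map_x and map_y give, for every pixel of the corrected image, the corresponding pixel position in the distorted camera image. How the tables are built (for instance from a calibrated camera model via cv2.initUndistortRectifyMap) is assumed to be done once beforehand.

```python
import cv2

def undistort_with_lut(distorted_frame, map_x, map_y):
    # map_x, map_y: float32 arrays with the shape of the corrected image.
    return cv2.remap(distorted_frame, map_x, map_y, interpolation=cv2.INTER_LINEAR)
```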
  • FIG. 5 shows the result of compositing the two-dimensional image obtained by rendering the three-dimensional scene data 48 of FIG. 4 with the captured video shown in FIG. 3.
  • In FIG. 5, the part in which the video captured by the camera 3 appears is shown with broken lines for convenience so that it can easily be distinguished from the other parts (the same applies to the other figures).
  • graphics 41f, 42f, 43f expressing additional display information are arranged so as to overlap the captured video image.
  • The graphics 41f, 42f, ... are generated as the result of rendering the three-dimensional shapes of the virtual reality objects 41v, 42v, .... Therefore, even when the graphics 41f, 42f, ... are superimposed on the realistic video from the camera 3, an unnatural appearance is unlikely to occur.
  • the lens distortion is removed from the captured image input from the camera 3 when projected on the projection screen 51 of the three-dimensional virtual space 40.
  • However, the data synthesizing unit 23 adds lens distortion back to the composite image after rendering, by an inverse conversion using the above-mentioned lookup table.
  • As a comparison of FIG. 5 with FIG. 3 shows, this makes it possible to obtain a composite video that does not easily feel unnatural in relation to the camera video before compositing.
  • However, the addition of lens distortion may also be omitted.
  • the data synthesizing unit 23 further synthesizes character information describing useful information for monitoring at positions near the figures 41f, 42f, 43f in the synthesized video.
  • the content of the character information is arbitrary, and various contents can be displayed, for example, information for identifying the ship, information indicating the size of the ship, information indicating from which device the information was acquired, and the like.
  • the information for identifying the ship can be obtained from the AIS information, for example.
  • The information indicating the size of the ship can be obtained from the AIS information, but it can also be obtained by calculation from the size of the image region detected by the image recognition unit 28 during image recognition, or from the size of the tracked echo obtained by the radar device 12. As a result, a monitoring screen rich in information can be realized.
  • For the ship 41r, additional display information based on the AIS information obtained from the AIS receiver 9 is displayed. The image recognition unit 28 may recognize the same ship 41r, or the radar device 12 may track it; even in this case, however, the composite video generation unit 20 preferentially displays for the ship 41r the additional display information based on the AIS information. As a result, information with high reliability can be displayed preferentially.
  • An azimuth scale 46 is displayed in the composite video of FIG. 5.
  • The azimuth scale 46 is formed in an arc shape so as to connect the left end and the right end of the screen.
  • On the azimuth scale 46, numerical values indicating the azimuths corresponding to the video are written. This allows the user to intuitively understand which azimuth the camera 3 is looking toward.
  • The data composition unit 23 automatically moves the vertical position at which the azimuth scale 46 is composited in the image. This makes it possible to prevent the azimuth scale 46 from interfering with other displays.
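  • A minimal sketch of laying out such azimuth labels across the image width, assuming a simple pinhole horizontal projection with the camera azimuth at the image centre and a known horizontal field of view; the 10-degree tick spacing is an example value.

```python
import math

def azimuth_ticks(image_width, cam_azimuth_deg, hfov_deg, step_deg=10):
    """Return (azimuth, x_pixel) pairs for the ticks visible in the frame."""
    f = (image_width / 2) / math.tan(math.radians(hfov_deg / 2))  # focal length in pixels
    a = math.ceil((cam_azimuth_deg - hfov_deg / 2) / step_deg) * step_deg
    ticks = []
    while a <= cam_azimuth_deg + hfov_deg / 2:
        rel = math.radians(a - cam_azimuth_deg)    # angle from the image centre
        x = image_width / 2 + f * math.tan(rel)    # pinhole horizontal projection
        ticks.append((a % 360, int(round(x))))
        a += step_deg
    return ticks
```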
  • the image recognition area setting unit 31 sets the area where the image recognition unit 28 performs image recognition.
  • the user can specify the above-mentioned area by specifying the target area for image recognition with the mouse 37 or the like while displaying the image captured by the camera 3 on the display 2.
  • When the user designates a target area for image recognition, the nautical chart data storage unit 33 outputs, from the stored chart data, the vector data representing the water-land boundary line to the image recognition area setting unit 31.
  • the image recognition area setting unit 31 outputs this boundary line vector data to the composite video generation unit 20.
  • The 3D scene generation unit 22 of the composite video generation unit 20 generates 3D scene data 48 in which a virtual reality object 49v representing the water-land boundary line is arranged on the xz plane.
  • The virtual reality object 49v is arranged so as to reflect the relative position of the water-land boundary line with respect to the camera 3, using the azimuth angle of the camera 3 as a reference.
  • the data synthesizing unit 23 performs the rendering process exactly as described above, and outputs a synthetic image as shown in FIG. 7 to the display.
  • the graphic 49f as a result of the rendering of the virtual reality object 49v is arranged on the composite image as if it were placed on the water surface of the image captured by the camera 3. Since the boundary line vector data should match the actual boundary line, the shape of the water surface reflected by the camera 3 and the shape of the figure 49f are matched.
  • the user can easily and appropriately set the image recognition area with reference to the graphic 49f.
  • the user can directly specify the area surrounded by the graphic 49f shown in FIG. 7 as the image recognition area.
  • the user can also specify the image recognition area after modifying the area by a mouse operation or the like so as to correct the deviation from the captured video.
  • the boundary vector data stored in the nautical chart data storage unit 33 can also be used in the tracking target region setting unit 32 to set the tracking target region.
  • the tracking target area can be calculated and set so as to be limited to the water area based on the position / direction of the antenna set in the radar position / direction setting unit 26 and the boundary line vector data. As a result, the user can easily and accurately set the tracking target area.
  • The boundary of the target area for image recognition and the boundary of the target area for radar tracking can also be defined using other boundary lines instead of the water-land boundary line.
  • For example, depth contour lines corresponding to a predetermined water depth may be used as the boundary.
  • Alternatively, a boundary line may be created by offsetting the water-land boundary line toward the water area, and that line may be used as the boundary.
  • The parameters of image recognition or radar tracking may also be changed according to information included in the chart data. For example, since large vessels cannot navigate in shallow water, when the water depth acquired from the nautical chart data is small, it is conceivable that the image recognition unit 28 performs template matching only against the small-vessel image database, or that the radar device 12 targets only small radar echoes for tracking. Appropriate monitoring can thereby be realized.
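  • A minimal sketch of this depth-dependent behaviour; the 5 m threshold and the grouping of templates are assumptions for illustration only.

```python
SHALLOW_DEPTH_M = 5.0  # assumed threshold below which large vessels cannot navigate

def templates_for_depth(depth_m, small_vessel_templates, all_templates):
    """Pick the template database to match against, based on chart water depth."""
    if depth_m is not None and depth_m < SHALLOW_DEPTH_M:
        return small_vessel_templates   # shallow water: only small craft expected
    return all_templates
```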
  • FIG. 8 is a flowchart showing a series of processes performed in the peripheral monitoring device 1.
  • the peripheral monitoring device 1 stores the chart data input from the outside in the chart data storage unit 33 as a preparatory work (step S101).
  • the peripheral monitoring device 1 inputs the captured image captured by the camera 3 from the captured image input unit 21 (step S102).
  • the composite video generation unit 20 of the periphery monitoring device 1 generates a composite video that combines the captured video and the boundary line vector data acquired from the nautical chart data storage unit 33, and outputs the composite video to the display 2 (step S103).
  • Next, the user appropriately sets the image recognition area while looking at the screen of the display 2 (step S104). Steps S103 and S104 may be omitted, in which case the perimeter monitoring device 1 automatically sets the boundary of the image recognition area to coincide with the boundary line vector data.
  • the peripheral monitoring device 1 inputs the captured image captured by the camera 3 from the captured image input unit 21 (step S105). Subsequently, the image recognition unit 28 of the perimeter monitoring device 1 performs image recognition on the image recognition area and acquires target information of the target detected by this (step S106). Next, the composite video generation unit 20 generates a composite video in which the target information is combined at the position of the captured video corresponding to the position where the target is detected, and outputs the composite video to the display 2 (step S107). After that, the process returns to step S105, and the processes of steps S105 to S107 are repeated.
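  • A minimal sketch of this processing flow (steps S101 to S107) with hypothetical component names, corresponding to the flowchart of FIG. 8:

```python
def run_monitoring(device, display, chart_data):
    device.chart_store.store(chart_data)                          # S101: store chart data
    frame = device.video_input.read_frame()                       # S102: input captured video
    display.show(device.compositor.compose_boundary(frame))       # S103: composite boundary line
    area = device.area_setter.confirm_area()                      # S104: area set (by user or automatically)
    while True:
        frame = device.video_input.read_frame()                   # S105: input captured video
        targets = device.image_recognizer.detect(frame, area)     # S106: recognize targets in the area
        display.show(device.compositor.compose(frame, targets))   # S107: composite target information
```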
  • As described above, the surroundings monitoring device 1 of the present embodiment includes the nautical chart data storage unit 33, the captured video input unit 21, the image recognition area setting unit 31, the additional display information acquisition unit 17, and the composite video generation unit 20.
  • the chart data storage unit 33 stores chart data.
  • the captured image input unit 21 inputs a captured image of the camera 3.
  • the image recognition area setting unit 31 sets a detection target area based on the chart data.
  • the additional display information acquisition unit 17 acquires the target information of the target detected in the detection target area.
  • the combined image generation unit 20 generates a combined image in which the target information is combined at the position of the captured image corresponding to the position where the target is detected.
  • the above pan / tilt function may be omitted in the camera 3, and the shooting direction may be unchangeable.
  • In the above embodiment, the virtual reality objects 41v, 42v, ... are arranged with the point directly below the camera 3 as the origin and with the camera orientation as the reference, as described with reference to FIG. 4.
  • However, the virtual reality objects 41v, 42v, ... may instead be arranged on a true-north reference in which the +z direction is always true north, rather than on the camera orientation reference.
  • In this case, when the azimuth changes due to a pan operation of the camera 3, the position and orientation of the camera in the three-dimensional virtual space 40 are changed accordingly instead.
  • Furthermore, instead of the position of the camera 3, a fixed point appropriately set on the earth may be used as the origin of the three-dimensional virtual space 40; for example, the +z direction may be set to true north and the +x direction to true east.
  • the device (information source of additional display information) connected to the peripheral monitoring device 1 is not limited to the one described in FIG. 1, and may include other devices. Examples of such devices include infrared cameras and acoustic sensors.
  • the target area set by the image recognition area setting unit 31 and the target area set by the tracking target area setting unit 32 can be represented by another graphic, for example, a smooth curve, instead of the polygonal line graphic. Further, the target area can be set in the form of raster data (mask image) instead of vector data.
  • The graphics representing the additional display information are not limited to those shown in FIG. 5.
  • For example, a three-dimensional model of a ship may be arranged in a direction that matches the heading of the ship obtained from the AIS information or the like.
  • The size of the three-dimensional model of the ship placed in the three-dimensional virtual space 40 may also be changed according to the size of the ship obtained from the AIS information or the like.
  • the peripheral monitoring device 1 is not limited to a facility on the ground, and may be provided in a moving body such as a ship.
  • 1 surroundings monitoring device; 3 camera; 17 additional display information acquisition unit (target information acquisition unit); 20 composite video generation unit; 21 captured video input unit; 28 image recognition unit; 31 image recognition area setting unit (area setting unit); 32 tracking target area setting unit (area setting unit); 33 nautical chart data storage unit
  • All processes described herein may be embodied by software code modules executed by a computing system including one or more computers or processors and may be fully automated.
  • the code modules may be stored on any type of non-transitory computer readable medium or other computer storage device. Some or all of the methods may be embodied in dedicated computer hardware.
  • the various exemplary logic blocks and modules described in connection with the embodiments disclosed herein can be implemented or executed by a machine such as a processor.
  • the processor may be a microprocessor, but in the alternative, the processor may be a controller, microcontroller, or state machine, combinations thereof, or the like.
  • the processor can include electrical circuitry configured to process computer-executable instructions.
  • The processor can also include an application-specific integrated circuit (ASIC), a field-programmable gate array (FPGA), or another programmable device that performs logic operations without processing computer-executable instructions.
  • A processor may also be implemented as a combination of computing devices, for example a combination of a digital signal processor (DSP) and a microprocessor, multiple microprocessors, one or more microprocessors in combination with a DSP core, or any other such configuration. Although described herein primarily in terms of digital technology, a processor may also include primarily analog components. For example, some or all of the signal processing algorithms described herein may be implemented by analog circuits or mixed analog and digital circuits.
  • A computing environment can include any type of computer system, including, but not limited to, a computer system based on a microprocessor, a mainframe computer, a digital signal processor, a portable computing device, a device controller, or a computational engine within an apparatus.
  • Conditional language such as “can”, “could”, “might”, or “may”, unless specifically stated otherwise, is understood in the context generally used to convey that certain embodiments include certain features, elements and/or steps while other embodiments do not. Accordingly, such conditional language is not generally intended to imply that features, elements and/or steps are in any way required for one or more embodiments, or that one or more embodiments necessarily include logic for deciding whether these features, elements and/or steps are included in, or are to be performed in, any particular embodiment.
  • Disjunctive language such as the phrase “at least one of X, Y, and Z”, unless specifically stated otherwise, is understood in the context generally used to indicate that an item, term, etc. may be X, Y, or Z, or any combination thereof (e.g., X, Y, and/or Z). Accordingly, such disjunctive language is not generally intended to imply that certain embodiments require at least one of X, at least one of Y, and at least one of Z to each be present.
  • Articles such as “a” or “an” should generally be construed to include one or more of the described items unless expressly stated otherwise. Accordingly, a phrase such as “a device configured to” is intended to include one or more of the recited devices, and such one or more recited devices can also be collectively configured to carry out the stated recitations. For example, “a processor configured to carry out A, B, and C” can include a first processor configured to carry out A working in conjunction with a second processor configured to carry out B and C.
  • The term “horizontal”, as used herein, is defined as a plane parallel to the plane or surface of the floor of the area in which the system being described is used, or to the plane in which the method being described is performed, regardless of its orientation.
  • The term “floor” can be replaced with the terms “ground” or “water surface”.
  • The term “vertical” refers to the direction perpendicular to the horizontal as defined above. Terms such as “above”, “below”, “bottom”, “top”, “side”, “higher”, “lower”, “upward”, “over”, and “under” are defined with respect to the horizontal plane.
  • As used herein, the terms “attached”, “connected”, “mated”, and other such relational terms should be construed, unless otherwise noted, to include removable, movable, fixed, adjustable, and/or releasable connections or attachments. Connections/couplings include direct connections and/or connections having an intermediate structure between the two components discussed.
  • Numbers preceded by a term such as “approximately”, “about”, and “substantially” as used herein include the recited numbers and also represent an amount close to the stated amount that still performs a desired function or achieves a desired result. For example, unless otherwise expressly stated, “approximately”, “about”, and “substantially” refer to a value within less than 10% of the stated value.
  • Features of embodiments disclosed herein preceded by a term such as “approximately”, “about”, and “substantially” represent features that have some variability while still performing the desired function or achieving the desired result for those features.

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Computer Hardware Design (AREA)
  • Software Systems (AREA)
  • General Engineering & Computer Science (AREA)
  • Computer Graphics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Traffic Control Systems (AREA)
  • Processing Or Creating Images (AREA)
  • Image Processing (AREA)
  • Closed-Circuit Television Systems (AREA)
  • Studio Devices (AREA)
  • Image Analysis (AREA)

Abstract

The problem addressed by the present invention is to realize a surroundings monitoring device capable of detecting a target with high accuracy. The solution according to the invention is a surroundings monitoring device (1) comprising a nautical chart data storage unit (33), a captured video input unit (21), an image recognition area setting unit (31), an additional display information acquisition unit (17), and a composite video generation unit (20). The nautical chart data storage unit (33) stores nautical chart data. The captured video input unit (21) accepts as input video captured by a camera (3). The image recognition area setting unit (31) sets a detection target area on the basis of the nautical chart data. The additional display information acquisition unit (17) acquires target information concerning a target detected in the detection target area. The composite video generation unit (20) generates a composite video in which the target information is composited at the position in the captured video that corresponds to the position at which the target is detected.
PCT/JP2019/035308 2018-10-09 2019-09-09 Surroundings monitoring device and method WO2020075429A1 (fr)

Priority Applications (2)

Application Number Priority Date Filing Date Title
CN201980065172.8A 2018-10-09 2019-09-09 Surroundings monitoring device and surroundings monitoring method
JP2020550232A 2018-10-09 2019-09-09 Surroundings monitoring device and surroundings monitoring method

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
JP2018-191014 2018-10-09
JP2018191014 2018-10-09

Publications (1)

Publication Number Publication Date
WO2020075429A1 true WO2020075429A1 (fr) 2020-04-16

Family

ID=70163782

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/JP2019/035308 WO2020075429A1 (fr) Surroundings monitoring device and method 2018-10-09 2019-09-09

Country Status (3)

Country Link
JP (2) JP7346436B2 (fr)
CN (1) CN112840641A (fr)
WO (1) WO2020075429A1 (fr)

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2000152220A (ja) * 1998-11-17 2000-05-30 Oki Electric Ind Co Ltd Method for controlling a surveillance ITV camera
JP2005289264A (ja) * 2004-04-01 2005-10-20 Furuno Electric Co Ltd Ship navigation support device
JP2014099055A (ja) * 2012-11-14 2014-05-29 Canon Inc Detection device, detection method, and program
JP2014529322A (ja) * 2011-07-21 2014-11-06 コリア インスティチュートオブ オーシャン サイエンス アンド テクノロジー Augmented reality system for ships using a ceiling-movable transparent display, and method for implementing the same
JP2015088816A (ja) * 2013-10-29 2015-05-07 セコム株式会社 Image monitoring system
WO2017208422A1 (fr) * 2016-06-02 2017-12-07 日本郵船株式会社 Ship navigation support device
JP2018019359A (ja) * 2016-07-29 2018-02-01 キヤノン株式会社 Ship monitoring device

Family Cites Families (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP3365182B2 (ja) * 1995-12-27 2003-01-08 三菱電機株式会社 Video monitoring device
JP2002176641A (ja) 2000-12-05 2002-06-21 Matsushita Electric Ind Co Ltd Surrounding video presentation device
JP2004227045A (ja) 2003-01-20 2004-08-12 Wham Net Service Co Ltd Fishing ground intruder monitoring system and fishing ground intruder monitoring method
JP3777411B2 (ja) * 2003-08-08 2006-05-24 今津隼馬 Ship navigation support device
JP2005208011A (ja) 2004-01-26 2005-08-04 Mitsubishi Heavy Ind Ltd Monitoring system and monitoring method
JP2010041530A (ja) 2008-08-07 2010-02-18 Sanyo Electric Co Ltd Maneuvering support device
KR101048508B1 (ko) * 2011-04-11 2011-07-11 (주)에디넷 Real-time port video control system using smart devices, and method therefor
KR101880437B1 (ko) * 2016-10-28 2018-08-17 한국해양과학기술원 Unmanned ship control system providing a wide viewing angle by using real camera video and virtual camera video
CN107609564B (zh) * 2017-09-19 2020-01-10 浙江大学 Underwater target image recognition method based on joint segmentation and a Fourier descriptor library


Also Published As

Publication number Publication date
JPWO2020075429A1 (ja) 2021-09-02
JP7492059B2 (ja) 2024-05-28
JP2023106401A (ja) 2023-08-01
JP7346436B2 (ja) 2023-09-19
CN112840641A (zh) 2021-05-25

Similar Documents

Publication Publication Date Title
US11270458B2 (en) Image generating device
JP7488925B2 (ja) Video generation device and video generation method
US11415991B2 (en) Image generating device and image generating method
WO2015098807A1 (fr) Image capturing system with real-time combination of a subject and a three-dimensional virtual space
JP6720409B2 (ja) Video generation device
US11548598B2 (en) Image generating device and method of generating image
JP7313811B2 (ja) Image processing device, image processing method, and program
WO2020075429A1 (fr) Surroundings monitoring device and method
JP7365355B2 (ja) Video generation device and video generation method

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 19870907

Country of ref document: EP

Kind code of ref document: A1

ENP Entry into the national phase

Ref document number: 2020550232

Country of ref document: JP

Kind code of ref document: A

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 19870907

Country of ref document: EP

Kind code of ref document: A1