WO2020075430A1 - Image generation device and image generation method - Google Patents

Image generation device and image generation method Download PDF

Info

Publication number
WO2020075430A1
WO2020075430A1 PCT/JP2019/035309 JP2019035309W WO2020075430A1 WO 2020075430 A1 WO2020075430 A1 WO 2020075430A1 JP 2019035309 W JP2019035309 W JP 2019035309W WO 2020075430 A1 WO2020075430 A1 WO 2020075430A1
Authority
WO
WIPO (PCT)
Prior art keywords
information
image
generation device
dimensional
video
Prior art date
Application number
PCT/JP2019/035309
Other languages
French (fr)
Japanese (ja)
Inventor
昌裕 箕輪
祐弥 髙島
浩二 西山
Original Assignee
古野電気株式会社
Priority date
Filing date
Publication date
Application filed by 古野電気株式会社 (Furuno Electric Co., Ltd.)
Priority to JP2020550233A (patent JP7365355B2)
Publication of WO2020075430A1

Classifications

    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01W METEOROLOGY
    • G01W 1/00 Meteorology
    • G01W 1/02 Instruments for indicating weather conditions by measuring two or more variables, e.g. humidity, pressure, temperature, cloud cover or wind speed
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 19/00 Manipulating 3D models or images for computer graphics

Definitions

  • the present invention mainly relates to an image generation device that synthesizes a figure on a camera image.
  • Patent Document 1 discloses a distribution data display device that generates a graphic image in which the intensity distribution status of meteorological data is projected according to the size of the elevation angle, and superimposes it on the camera image for display. Patent Document 1 mentions a configuration in which, when the meteorological data is hierarchized in the altitude direction, the distribution status for each altitude is selectively displayed.
  • The configuration of Patent Document 1 above is premised on expressing the planar distribution of the meteorological data when the camera is oriented so as to look up at the sky. Therefore, with the display device of Patent Document 1, for example, the precipitation distribution according to altitude between a rain cloud and the surface of the earth cannot be seen at a glance, and there is room for improvement in that it lacks realism and a sense of presence.
  • the present invention has been made in view of the above circumstances, and an object thereof is to provide a video generation device capable of showing the distribution of meteorological data in a more easily understandable form.
  • the video generation device includes a captured video input unit, a weather information acquisition unit, a weather information graphic generation unit, and a synthetic video generation unit.
  • the captured image input unit inputs a captured image of the camera.
  • the weather information acquisition unit acquires weather information.
  • When the weather information includes a plurality of pieces of three-dimensional position information, the weather information figure generation unit generates a weather information figure, which is a two-dimensional figure obtained by converting the outer shape of a three-dimensional shape based on the plurality of pieces of three-dimensional position information.
  • the synthetic image generation unit generates a synthetic image in which the weather information graphic is synthesized with the captured image.
  • the user can easily understand the weather information together with the camera image, including the outline information in the height direction.
  • Preferably, the weather information graphic generation unit generates the weather information graphic by rendering the three-dimensional shape in two dimensions.
  • the weather information graphic can be realized by three-dimensional computer graphics.
  • Preferably, the weather information graphic generation unit generates the weather information graphic by two-dimensionally rendering a virtual reality object, which is the three-dimensional shape arranged in a three-dimensional space, from the position and orientation of a viewpoint corresponding to the captured image.
  • the weather information includes precipitation echo information acquired by a weather radar.
  • Based on the precipitation echo information, the weather information graphic generation unit can generate a graphic indicating a region on the ground or on the water where precipitation is present.
  • the weather information includes the position of the sun.
  • the weather information includes a three-dimensional shape of a cloud.
  • Preferably, the weather information graphic generation unit can generate, based on the three-dimensional shape of the cloud and the position of the sun, a graphic indicating an area on the ground or on the water where sunlight is blocked by the cloud.
  • Preferably, the weather information includes at least one of wind direction distribution information, wind speed distribution information, temperature distribution information, atmospheric pressure distribution information, water vapor amount distribution information, lightning location information, predicted solar radiation distribution information, and predicted precipitation distribution information.
  • the above-mentioned video generation device preferably has the following configuration. That is, this video generation device includes a target information input unit that inputs target information including the position of the target on the water.
  • The synthetic image generation unit can generate a composite video in which the weather information figure is combined with the captured video, together with a target information figure, which is a figure indicating the position of the target converted so as to geometrically match the captured video.
  • the target information includes information obtained by detecting the target with the target monitoring radar.
  • the target information includes information obtained from the automatic ship identification device.
  • the above-mentioned video generation device preferably has the following configuration. That is, this video generation device includes a caution degree acquisition unit that acquires the relationship between the azimuth and the caution degree indicating the degree of caution for the azimuth, based on at least the weather information.
  • the synthetic video generation unit can synthesize a caution level chart showing a relationship between the orientation and the caution level with the captured video.
  • the above-mentioned video generation device preferably has the following configuration. That is, the synthetic image generation unit can synthesize an azimuth scale indicating an azimuth with the captured image. The vertical position at which the azimuth scale is combined with the captured image is automatically changed.
  • the user can easily grasp the direction by the azimuth scale and avoid the display of the azimuth scale from interfering with the grasp of the situation.
  • The following video generation method is provided. That is, an image captured by a camera is input. Weather information is acquired.
  • When the weather information includes a plurality of pieces of three-dimensional position information, a weather information graphic is generated, which is a two-dimensional figure obtained by converting the outer shape of a three-dimensional shape based on the plurality of pieces of three-dimensional position information so as to geometrically match the captured image.
  • a composite image is generated by combining the weather information graphic with the captured image.
  • the user can easily understand the weather information including the information in the height direction together with the camera image.
  • FIG. 3 is a conceptual diagram illustrating 3D scene data constructed by arranging virtual reality objects in a 3D virtual space, and a projection screen arranged in the 3D virtual space.
  • FIG. 1 is a block diagram showing the overall configuration of a perimeter monitoring device 1 according to an embodiment of the present invention.
  • FIG. 2 is a side view showing various devices provided in the port monitoring facility 4 and the like.
  • the perimeter monitoring device (video generation device) 1 shown in FIG. 1 is installed in the port monitoring facility 4 shown in FIG. 2, for example.
  • the surroundings monitoring device 1 can generate a video on which information for supporting the weather condition monitoring and the motion monitoring of a ship in a port is superimposed on the basis of the video captured by the camera (imaging device) 3.
  • the video generated by the peripheral monitoring device 1 is displayed on the display 2.
  • the display 2 can be configured, for example, as a display placed on a monitoring desk where an operator performs monitoring work in the port monitoring facility 4.
  • the display 2 is not limited to the above, and may be, for example, a display of a portable computer carried by an operator or its assistant.
  • the surroundings monitoring device 1 combines a surrounding image captured by the camera 3 installed in the harbor monitoring facility 4 and a figure that virtually represents additional display information (detailed later) regarding the surroundings. By doing so, a composite video that is an output video to the display 2 is generated.
  • the camera 3 is configured as a visible light video camera that takes a picture around the port surveillance facility 4.
  • the camera 3 is configured as a wide-angle type camera, and is mounted slightly downward in a place with high visibility.
  • This camera 3 has a live output function and can generate moving image data (image data) as a shooting result in real time and output it to the surroundings monitoring device 1.
  • FIG. 3 shows an example of a captured image input from the camera 3 to the peripheral monitoring device 1.
  • The camera 3 performs fixed-point shooting in principle.
  • the camera 3 is attached via a rotation mechanism (not shown), and the photographing direction can be changed by inputting a signal instructing a pan / tilt operation from the peripheral monitoring device 1.
  • The surroundings monitoring device 1 of the present embodiment is electrically connected to various devices, such as a weather radar 15, an omnidirectional camera 16, a ship monitoring radar (target monitoring radar) 12, and an AIS receiver (automatic ship identification device) 9.
  • The weather radar 15 detects raindrops and rain clouds by transmitting and receiving radio waves from its antenna while changing the elevation angle and azimuth angle.
  • the meteorological radar 15 obtains the elevation angle and the azimuth angle at which the raindrops and rainclouds are present, the distances to the raindrops and rain clouds, and the amount of precipitation by performing known calculations.
  • the weather radar 15 outputs precipitation echo information, which is a three-dimensional distribution of water content according to latitude, longitude, and altitude, to the surroundings monitoring device 1.
  • the weather radar 15 corresponds to a weather information generation device.
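As an illustration of how the precipitation echo information described above might be held in software, the following Python sketch stores water-content values on a latitude/longitude/altitude grid. The class, field names, and grid extent are hypothetical; the patent does not prescribe any particular data format.

```python
from dataclasses import dataclass
import numpy as np

@dataclass
class PrecipEchoGrid:
    """Hypothetical container for 3-D precipitation echo data on a lat/lon/alt grid."""
    lat: np.ndarray      # shape (NLAT,), degrees north
    lon: np.ndarray      # shape (NLON,), degrees east
    alt: np.ndarray      # shape (NALT,), metres above sea level
    water: np.ndarray    # shape (NLAT, NLON, NALT), e.g. precipitation intensity in mm/h

    def intensity_at(self, i_lat: int, i_lon: int, i_alt: int) -> float:
        """Return the stored intensity of one grid cell."""
        return float(self.water[i_lat, i_lon, i_alt])

# Example: an empty 100 x 100 x 20 grid over an arbitrary harbour area (placeholder extent)
grid = PrecipEchoGrid(
    lat=np.linspace(34.60, 34.80, 100),
    lon=np.linspace(135.10, 135.30, 100),
    alt=np.linspace(0.0, 10_000.0, 20),
    water=np.zeros((100, 100, 20)),
)
```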
  • the omnidirectional camera 16 is composed of an upward facing camera equipped with a fish-eye lens, and observes the whole sky at a fixed point.
  • The omnidirectional cameras 16 are installed in three or more places, and by analyzing the images of the clouds captured by each omnidirectional camera 16, the three-dimensional distribution of clouds according to latitude, longitude, and altitude is obtained.
  • the three-dimensional distribution of clouds is output to the peripheral monitoring device 1.
  • The omnidirectional cameras 16 correspond to weather information generation devices.
  • Information such as temperature, pressure, humidity, solar radiation, wind direction, and wind speed output from a weather observation station installed at an appropriate location in the area monitored by the camera 3 can be input to the surroundings monitoring device 1.
  • information on the height of waves output from a wave height meter installed at an appropriate place in the sea can be input to the surroundings monitoring device 1.
  • information on the amount of water vapor measured by the weather observation satellite by the microwave radiometer can be input to the peripheral monitoring device 1.
  • the location of the lightning measured by a known lightning monitoring system observing the lightning discharge can be input to the peripheral monitoring device 1.
  • the ship monitoring radar 12 can detect targets such as ships existing in the monitored port by transmitting and receiving radio waves from the antenna.
  • The ship monitoring radar 12 has a known target tracking function (Target Tracking, TT) capable of acquiring and tracking a target, and can obtain the position and velocity vector (TT information) of the target.
  • the ship monitoring radar 12 corresponds to a target detection device.
  • the AIS receiver 9 receives the AIS information transmitted from the ship.
  • The AIS information includes various information such as the position (latitude/longitude) of a ship navigating the monitored port, the length and width of the ship, the type and identification information of the ship, the ship speed, the course, and the destination.
  • the AIS receiver 9 corresponds to a target detection device.
  • the peripheral monitoring device 1 is connected to a keyboard 36 and a mouse 37 operated by the user. By operating the keyboard 36 and the mouse 37, the user can give various instructions regarding image generation. This instruction includes a pan / tilt operation of the camera 3 and the like.
  • The peripheral monitoring device 1 includes a captured image input unit 21, a weather information input unit 22, a target information input unit 23, an additional display information acquisition unit 17, a camera position/orientation setting unit 25, a sun position acquisition unit 26, an image recognition unit 28, and an augmented reality video generation unit 30.
  • The peripheral monitoring device 1 is configured as a known computer and includes a CPU, a ROM, a RAM, an HDD, and the like, which are not shown. Further, the peripheral monitoring device 1 is equipped with a GPU for performing the three-dimensional image processing described later at high speed. The HDD stores, for example, software for executing the image generation method of the present invention. Through the cooperation of this hardware and software, the peripheral monitoring device 1 can function as the captured image input unit 21, the weather information input unit 22, the target information input unit 23, the additional display information acquisition unit 17, the camera position/orientation setting unit 25, the sun position acquisition unit 26, the image recognition unit 28, the augmented reality video generation unit 30, and the like.
  • the captured video input unit 21 can input the video data output by the camera 3 at, for example, 30 frames per second.
  • the captured video input unit 21 outputs the input video data to the image recognition unit 28 and the augmented reality video generation unit 30 (a rendering unit 32 described below).
  • the weather information input unit 22 can input various weather information output by the weather radar 15 and the omnidirectional camera 16. Therefore, the weather information input unit 22 corresponds to a weather information acquisition unit.
  • the weather information input unit 22 outputs the input weather information to the additional display information acquisition unit 17.
  • the target information input unit 23 can input information (hereinafter referred to as target information) about targets such as ships output by the ship monitoring radar 12 and the AIS receiver 9.
  • the target information input unit 23 corresponds to the target information acquisition unit.
  • the target information input unit 23 outputs the input target information to the additional display information acquisition unit 17.
  • The additional display information acquisition unit 17 acquires the information to be additionally displayed on the video captured by the camera 3 (additional display information, i.e., the weather information and the target information), based on the information input to the peripheral monitoring device 1 from the weather information input unit 22 and the target information input unit 23, the information acquired by the image recognition unit 28, and the like.
  • Various kinds of additional display information are conceivable, for example, the three-dimensional distribution of precipitation obtained from the weather radar 15, the three-dimensional distribution of clouds obtained from the omnidirectional cameras 16, and the position of the sun obtained from the sun position acquisition unit 26 described later.
  • the additional display information acquisition unit 17 outputs the acquired information to the augmented reality video generation unit 30. The details of the additional display information will be described later.
  • the camera position / direction setting unit 25 can set the position (shooting position) and direction of the camera 3 in the port monitoring facility 4.
  • the information on the position of the camera 3 is specifically information indicating latitude, longitude, and altitude.
  • the information on the orientation of the camera 3 is specifically information indicating an azimuth angle and a depression angle. These pieces of information can be obtained by, for example, performing measurements during installation work of the camera 3.
  • The orientation of the camera 3 can be changed within a predetermined angle range as described above; when a pan/tilt operation of the camera 3 is performed, the latest orientation information is accordingly set in the camera position/orientation setting unit 25.
  • the camera position / orientation setting unit 25 outputs the set information to the sun position acquisition unit 26, the additional display information acquisition unit 17, and the augmented reality video generation unit 30.
  • the sun position acquisition unit 26 calculates the position of the sun (azimuth and elevation) by calculation based on a known formula based on the position where the camera 3 is installed and the current time. Since the position of the sun is a type of weather information, the sun position acquisition unit 26 corresponds to the weather information acquisition unit. The sun position acquisition unit 26 outputs the acquired sun position to the additional display information acquisition unit 17.
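The patent only refers to a "known formula" for the sun position; the sketch below shows one common textbook approximation that computes solar azimuth and elevation from the installation position and the UTC time. It ignores the equation of time and atmospheric refraction, so it is an illustrative stand-in rather than a drop-in implementation.

```python
import math
from datetime import datetime, timezone

def sun_position(lat_deg: float, lon_deg: float, when_utc: datetime) -> tuple[float, float]:
    """Approximate solar azimuth and elevation (degrees) for a site and UTC time."""
    lat = math.radians(lat_deg)
    doy = when_utc.timetuple().tm_yday
    frac_hour = when_utc.hour + when_utc.minute / 60 + when_utc.second / 3600

    # Solar declination (Cooper's approximation)
    decl = math.radians(23.44) * math.sin(math.radians(360.0 / 365.0 * (doy - 81)))

    # Hour angle from local mean solar time (equation of time ignored)
    solar_time = frac_hour + lon_deg / 15.0
    hour_angle = math.radians(15.0 * (solar_time - 12.0))

    # Elevation above the horizon
    sin_el = (math.sin(lat) * math.sin(decl)
              + math.cos(lat) * math.cos(decl) * math.cos(hour_angle))
    elevation = math.asin(max(-1.0, min(1.0, sin_el)))

    # Azimuth measured clockwise from north (east in the morning, west in the afternoon)
    cos_az = ((math.sin(decl) - math.sin(elevation) * math.sin(lat))
              / (math.cos(elevation) * math.cos(lat)))
    az = math.acos(max(-1.0, min(1.0, cos_az)))
    azimuth = az if hour_angle < 0 else 2 * math.pi - az
    return math.degrees(azimuth), math.degrees(elevation)

# Example: an arbitrary harbour site at 03:00 UTC (noon JST)
print(sun_position(34.7, 135.2, datetime(2019, 9, 9, 3, 0, tzinfo=timezone.utc)))
```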
  • The image recognition unit 28 cuts out portions of the image acquired from the captured image input unit 21 that appear to be a ship or the like, and collates them with a pre-registered target image database to recognize targets such as ships, divers, and drifting objects. Specifically, the image recognition unit 28 detects a moving object by an inter-frame difference method or the like, cuts out the region where a difference occurs, and collates it with the image database. As the matching method, a known appropriate method such as template matching can be used. Image recognition can also be realized by another publicly known method, for example, a neural network. The image recognition unit 28 outputs, for each recognized target, information on the position recognized on the image to the additional display information acquisition unit 17. The image recognition unit 28, like the target information input unit 23, corresponds to a target information acquisition unit.
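A minimal sketch of the inter-frame difference step mentioned above, using OpenCV. The threshold values and the follow-up matching of each cropped region against a target image database (for example with cv2.matchTemplate or a neural network) are illustrative assumptions.

```python
import cv2
import numpy as np

def detect_moving_regions(prev_frame: np.ndarray, cur_frame: np.ndarray,
                          thresh: int = 25, min_area: int = 200) -> list[tuple[int, int, int, int]]:
    """Return bounding boxes (x, y, w, h) of regions that changed between two frames.

    Each box could then be cropped and compared against a registered target image
    database, as described in the embodiment above.
    """
    g1 = cv2.cvtColor(prev_frame, cv2.COLOR_BGR2GRAY)
    g2 = cv2.cvtColor(cur_frame, cv2.COLOR_BGR2GRAY)
    diff = cv2.absdiff(g1, g2)                                   # inter-frame difference
    _, mask = cv2.threshold(diff, thresh, 255, cv2.THRESH_BINARY)
    mask = cv2.dilate(mask, None, iterations=2)                  # close small gaps
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    return [cv2.boundingRect(c) for c in contours if cv2.contourArea(c) >= min_area]
```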
  • the augmented reality video generation unit 30 generates a composite video to be displayed on the display 2.
  • the augmented reality video generation unit 30 includes a three-dimensional scene generation unit 31 and a rendering unit 32.
  • The 3D scene generation unit 31 arranges virtual reality objects 41v, 42v, ... corresponding to the additional display information in a 3D virtual space 40 to build a 3D scene of virtual reality. As a result, three-dimensional scene data (three-dimensional display data) 50 describing this scene is generated. The details of the 3D scene will be described later.
  • The rendering unit 32 draws the three-dimensional scene data 50 generated by the three-dimensional scene generation unit 31, thereby generating two-dimensional figures 41f, 42f, ... obtained by converting the outer shapes of the virtual reality objects 41v, 42v, .... At the same time as generating the two-dimensional figures 41f, 42f, ..., the rendering unit 32 generates an image in which these figures are combined with the video captured by the camera 3. Therefore, the rendering unit 32 functions as a weather information graphic generation unit and as a composite video generation unit. As shown in FIG. 5, in this composite video, the graphics 41f, 42f, ... showing the additional display information are superimposed on the video captured by the camera 3. The rendering unit 32 (augmented reality video generation unit 30) outputs the generated composite video to the display 2. The details of the graphic generation processing and the data synthesis processing will be described later.
  • the additional display information is information that is additionally displayed on the video image captured by the camera 3, and various information can be considered depending on the purpose and function of the device connected to the peripheral monitoring device 1.
  • the three-dimensional precipitation distribution can be used as additional display information.
  • the three-dimensional cloud distribution can be used as additional display information. The latest information regarding the precipitation distribution and the cloud distribution is input to the peripheral monitoring device 1 from each device.
  • the position and speed of the detected target can be used as additional display information.
  • From the AIS receiver 9, the received AIS information described above (for example, the position and orientation of a ship) can be used as additional display information. These pieces of information are input from the respective devices to the peripheral monitoring device 1 in real time.
  • the additional display information includes the position and speed of the target identified by the image recognition unit 28 performing image recognition.
  • the position and velocity vector of the target included in the information obtained from the ship monitoring radar 12 are relative to the position and orientation of the antenna of the ship monitoring radar 12.
  • The additional display information acquisition unit 17 converts the position and velocity vector of the target obtained from the ship monitoring radar 12 into an earth reference, based on the antenna position and orientation information preset in the surroundings monitoring device 1.
  • the position and velocity vector of the target obtained from the image recognition unit 28 are relative to the position and orientation of the camera 3.
  • the additional display information acquisition unit 17 converts the position and velocity vector of the target obtained from the image recognition unit 28 into the earth reference based on the information obtained from the camera position / orientation setting unit 25.
  • the additional display information related to weather includes information indicating the position (latitude and longitude) of the point where it is located. Further, the additional display information related to weather may include information indicating altitude.
  • the additional display information indicating the cloud 41r includes information indicating the position (latitude, longitude, and altitude) of each point when the cloud is expressed as a set of many points.
  • the additional display information indicating the rain 42r includes information indicating the positions (latitude, longitude, and altitude) of a plurality of points included in the shape of the rain distribution.
  • In the harbor, a large vessel 46r is sailing toward the inner right side of the camera image; this vessel 46r is detected by the ship monitoring radar 12, the AIS receiver 9, and the image recognition unit 28.
  • the additional display information related to the target such as a ship includes at least information indicating the position (latitude and longitude) of the point on the sea surface (water surface) on which it is placed.
  • the additional display information indicating the ship 46r includes information indicating the position of the ship 46.
  • FIG. 4 shows 3D scene data 50 generated by arranging virtual reality objects 41v, 42v, ... In the 3D virtual space 40, and a projection screen 51 arranged in the 3D virtual space 40. It is a conceptual diagram explaining.
  • the three-dimensional virtual space 40 in which the virtual reality objects 41v, 42v, ... Are arranged by the three-dimensional scene generation unit 31 is configured by an orthogonal coordinate system as shown in FIG.
  • the origin of this Cartesian coordinate system is set at a point of zero altitude just below the installation position of the camera 3.
  • the xz plane which is a horizontal plane including the origin, is set so as to simulate the sea surface (water surface) or the ground.
  • the coordinate axes are determined so that the + z direction always matches the azimuth angle of the camera 3, the + x direction is the right direction, and the + y direction is the up direction.
  • Each point (coordinate) in this three-dimensional virtual space 40 is set so as to correspond to the actual position around the camera 3.
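One way the correspondence between geographic positions and points in the three-dimensional virtual space 40 could be realized is a local flat-earth approximation, sketched below. The function and its details are illustrative assumptions consistent with the axis conventions described above (+y up, +z along the camera azimuth, origin at zero altitude directly below the camera).

```python
import math

EARTH_RADIUS_M = 6_371_000.0   # mean Earth radius; adequate for a local approximation

def to_virtual_space(lat: float, lon: float, alt: float,
                     cam_lat: float, cam_lon: float,
                     cam_azimuth_deg: float) -> tuple[float, float, float]:
    """Map a geographic position (degrees, metres) to (x, y, z) in the virtual space 40."""
    # North/east offsets in metres from the point directly below the camera
    north = math.radians(lat - cam_lat) * EARTH_RADIUS_M
    east = math.radians(lon - cam_lon) * EARTH_RADIUS_M * math.cos(math.radians(cam_lat))

    # Rotate the horizontal plane so that +z coincides with the camera azimuth
    a = math.radians(cam_azimuth_deg)
    z = north * math.cos(a) + east * math.sin(a)     # forward along the camera azimuth
    x = -north * math.sin(a) + east * math.cos(a)    # to the right of the camera azimuth
    y = alt                                          # height above the sea surface / ground
    return x, y, z
```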
  • FIG. 4 shows an example in which virtual reality objects 41v, 42v, ... Are arranged in the three-dimensional virtual space 40 corresponding to the situation of the port shown in FIG.
  • the virtual reality object 41v related to the cloud is represented as a collection of small spheres.
  • Each omnidirectional camera 16 can acquire the presence/absence of cloud and the cloud thickness for each azimuth and elevation.
  • The thickness of a cloud can be estimated from the color density of the cloud in the captured image.
  • The three-dimensional shape of the cloud can be acquired by combining the images captured by the plurality of omnidirectional cameras 16 at the same time and processing them three-dimensionally.
  • The virtual reality object 42v related to rain is represented as a three-dimensional shape having contours of iso-intensity surfaces for multiple levels of precipitation intensity.
  • The weather radar 15 can acquire the precipitation intensity for each azimuth angle and elevation angle based on the radar echo.
  • the above-mentioned three-dimensional shape can be obtained by converting this data into a distribution of precipitation intensity on a three-dimensional grid and performing three-dimensional interpolation if necessary.
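The conversion from radar echo samples to iso-intensity surfaces could, for example, proceed by binning the samples onto a regular 3-D grid and running marching cubes, as in the sketch below (NumPy/scikit-image). The argument names, grid resolution, nearest-cell binning, and iso level are illustrative assumptions; a real implementation would interpolate the sparse polar samples more carefully.

```python
import numpy as np
from skimage import measure   # marching cubes, for extracting iso-intensity surfaces

def echo_to_isosurface(az_deg, el_deg, rng_m, intensity, grid_res=500.0, level=1.0):
    """Turn radar echo samples (azimuth, elevation, slant range, intensity) into one
    iso-intensity surface mesh, roughly in the manner described above."""
    az = np.radians(np.asarray(az_deg))
    el = np.radians(np.asarray(el_deg))
    r = np.asarray(rng_m)
    inten = np.asarray(intensity)

    # Radar-centred Cartesian coordinates (east, north, up), in metres
    e = r * np.cos(el) * np.sin(az)
    n = r * np.cos(el) * np.cos(az)
    u = r * np.sin(el)

    # Bin the samples onto a regular 3-D grid, keeping the largest intensity per cell
    pts = np.stack([e, n, u], axis=1)
    mins = pts.min(axis=0)
    shape = np.ceil((pts.max(axis=0) - mins) / grid_res).astype(int) + 1
    vol = np.zeros(shape)
    idx = np.floor((pts - mins) / grid_res).astype(int)
    np.maximum.at(vol, (idx[:, 0], idx[:, 1], idx[:, 2]), inten)

    # Contour the gridded intensity at the requested precipitation level
    verts, faces, _, _ = measure.marching_cubes(vol, level=level, spacing=(grid_res,) * 3)
    return verts + mins, faces   # triangle mesh usable as the rain object 42v
```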
  • A figure showing a region on the ground or on the water where precipitation is present is expressed as a polygon placed on the plane (xz plane) representing the ground or the water surface.
  • a virtual reality object 43v indicating the sun is also arranged in the three-dimensional virtual space 40.
  • the azimuth and elevation of the sun viewed from the origin in the three-dimensional virtual space 40 can be easily obtained based on the azimuth of the camera 3 and the azimuth and elevation of the sun acquired by the sun position acquisition unit 26.
  • the virtual reality object 43v is arranged at a position immediately in front of a projection screen 51 described later. This makes it possible to prevent the later-described graphic 43f indicating the sun from being hidden by the camera image.
  • The virtual reality object 44v relating to the area on the ground or on the water where sunlight is blocked by clouds is represented as a polygon placed on the plane (xz plane) representing the ground or the water surface. This polygonal area can be obtained by projecting the virtual reality object 41v of the cloud onto the xz plane along the direction of sunlight.
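A minimal sketch of the projection just described: each point of the cloud object 41v is followed along the direction of sunlight down to the xz plane, and the resulting ground points outline the shaded area 44v (for example after taking their convex hull). Coordinate conventions follow the virtual space described earlier; the function itself is an assumption for illustration.

```python
import math
import numpy as np

def project_cloud_shadow(cloud_points: np.ndarray, sun_azimuth_deg: float,
                         sun_elevation_deg: float) -> np.ndarray:
    """Project cloud points (N, 3) onto the xz plane (y = 0) along the sunlight direction.

    The sun azimuth is assumed here to be expressed relative to the +z axis of the
    virtual space (i.e. already corrected for the camera azimuth).
    """
    el = math.radians(sun_elevation_deg)
    az = math.radians(sun_azimuth_deg)
    if el <= 0.0:
        return np.empty((0, 2))              # sun at or below the horizon: no shadow polygon

    # Horizontal displacement per unit height when following a sun ray down to y = 0
    run = 1.0 / math.tan(el)
    dx, dz = math.sin(az) * run, math.cos(az) * run

    x = cloud_points[:, 0] - cloud_points[:, 1] * dx
    z = cloud_points[:, 2] - cloud_points[:, 1] * dz
    return np.stack([x, z], axis=1)          # (N, 2) points outlining the shaded area
```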
  • the virtual reality object 45v representing the wind direction is represented as a three-dimensional figure of an arrow.
  • the position of the virtual reality object 45v corresponds to the position of the anemometer of the above-mentioned meteorological observation station.
  • the direction of the arrow indicates the wind direction, and the length of the arrow indicates the strength of the wind.
  • The virtual reality object 46v relating to the target to be monitored includes a downward cone indicating the position of the recognized target (that is, the vessel 46r) and an arrow indicating the velocity vector of the target.
  • the downward cone represents that the target is located just below it.
  • the direction of the arrow expresses the direction of the speed of the target, and the length of the arrow expresses the magnitude of the speed.
  • Each virtual reality object 41v, 42v, ... Is arranged so that the relative position of the additional display information represented by the virtual reality object 41v, 42v, ... With respect to the camera 3 is reflected.
  • For this arrangement, calculations are performed using the position and orientation of the camera 3 set by the camera position/orientation setting unit 25 shown in FIG. 1.
  • the three-dimensional scene generation unit 31 generates the three-dimensional scene data 50 as described above.
  • Since the virtual reality objects 41v, 42v, ... are arranged on an azimuth basis with the origin directly below the camera 3, when the azimuth of the camera 3 changes, a new three-dimensional scene in which the objects 41v, 42v, ... are rearranged is constructed, and the three-dimensional scene data 50 is updated.
  • When the content of the additional display information changes, for example because the shape of the cloud 41r changes from the state of FIG. 3 or the ship 46r moves, the three-dimensional scene data 50 is updated so as to reflect the latest additional display information.
  • the rendering unit 32 arranges a projection screen 51 in the three-dimensional virtual space 40, which defines a position and a range on which the image captured by the camera 3 is projected.
  • the rendering unit 32 arranges the viewpoint camera 55 so as to simulate the position and orientation of the camera 3 in the real space in the three-dimensional virtual space 40.
  • the rendering unit 32 also arranges the projection screen 51 so as to face the viewpoint camera 55.
  • The position of the camera 3 can be obtained based on the set value of the camera position/orientation setting unit 25 shown in FIG. 1.
  • The azimuth angle of the viewpoint camera 55 in the three-dimensional virtual space 40 does not change even if the azimuth angle of the camera 3 changes due to a pan operation. Instead, when the pan operation of the camera 3 is performed, the rendering unit 32 rearranges the virtual reality objects 41v, 42v, ... in the three-dimensional virtual space 40 by rotating them in the horizontal plane around the y axis by the change in azimuth angle.
  • the depression angle of the viewpoint camera 55 is controlled so as to be always equal to the depression angle of the camera 3.
  • In conjunction with a change in the depression angle due to the tilt operation of the camera 3 (i.e., a change in the depression angle of the viewpoint camera 55), the rendering unit 32 changes the position and orientation of the projection screen 51 arranged in the three-dimensional virtual space 40 so that it always keeps facing the viewpoint camera 55.
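The pan/tilt handling described above could be sketched as follows: a pan of the camera 3 rotates the scene about the y axis while the viewpoint camera's azimuth stays fixed, whereas a tilt changes the viewpoint camera's depression angle and re-orients the projection screen 51. The object attributes and the face_towards call are hypothetical, not an API defined by the patent.

```python
import numpy as np

def apply_pan(object_positions: np.ndarray, delta_azimuth_deg: float) -> np.ndarray:
    """Pan handling: rotate every virtual reality object (N, 3 positions) about the y axis
    by the change in camera azimuth, so +z keeps pointing along the new camera azimuth."""
    a = np.radians(-delta_azimuth_deg)   # camera pans right -> scene rotates the other way
    rot_y = np.array([[np.cos(a), 0.0, np.sin(a)],
                      [0.0,       1.0, 0.0      ],
                      [-np.sin(a), 0.0, np.cos(a)]])
    return object_positions @ rot_y.T

def apply_tilt(viewpoint_camera, projection_screen, new_depression_deg: float) -> None:
    """Tilt handling: the viewpoint camera's depression angle follows the real camera 3,
    and the projection screen 51 is re-oriented so it keeps facing the viewpoint."""
    viewpoint_camera.depression_deg = new_depression_deg
    projection_screen.face_towards(viewpoint_camera)   # hypothetical helper
```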
  • The rendering unit 32 generates a two-dimensional image by performing known rendering processing on the three-dimensional scene data 50 and the projection screen 51. More specifically, the rendering unit 32 places the viewpoint camera 55 in the three-dimensional virtual space 40 and defines a view frustum 56 that determines the range to be rendered, with the viewpoint camera 55 as its apex and the line-of-sight direction as its central axis.
  • Subsequently, of the polygons forming each object (the virtual reality objects 41v, 42v, ... and the projection screen 51), the vertex coordinates of the polygons located inside the view frustum 56 are converted by perspective projection into the coordinates of a two-dimensional virtual screen corresponding to the display area of the composite image on the display 2. Then, based on the vertices arranged on this virtual screen, a two-dimensional image is generated by performing pixel generation and processing at a predetermined resolution.
  • The two-dimensional image generated in this way includes figures obtained by drawing the three-dimensional scene data 50 (in other words, figures resulting from rendering the virtual reality objects 41v, 42v, ...). Further, in the process of generating the two-dimensional image, the video captured by the camera 3 is placed at the position corresponding to the projection screen 51. As a result, the rendering unit 32 realizes image synthesis.
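For reference, a stripped-down version of the vertex projection step is sketched below: a vertex given relative to the viewpoint camera 55 is rotated into the tilted camera frame and perspective-projected onto the virtual screen. The field-of-view parameter and the handling of vertices behind the viewpoint are illustrative simplifications of the "known rendering processing" mentioned above.

```python
import numpy as np

def project_vertex(p_rel: np.ndarray, fov_y_deg: float, width: int, height: int,
                   depression_deg: float):
    """Project one vertex onto the two-dimensional virtual screen.

    p_rel is the vertex position relative to the viewpoint camera 55 in virtual-space
    axes (+x right, +y up, +z along the camera azimuth). The camera looks along +z
    tilted down by depression_deg. Returns pixel coordinates (u, v), or None when the
    vertex lies behind the viewpoint (outside the near side of the view frustum 56).
    """
    d = np.radians(depression_deg)
    # Rotate the scene into the tilted camera frame (camera forward becomes +z)
    rot_x = np.array([[1.0, 0.0,        0.0       ],
                      [0.0, np.cos(d),  np.sin(d) ],
                      [0.0, -np.sin(d), np.cos(d)]])
    p_cam = rot_x @ p_rel
    if p_cam[2] <= 0.0:
        return None

    # Perspective divide and mapping to pixel coordinates on the virtual screen
    f = 0.5 * height / np.tan(np.radians(fov_y_deg) / 2.0)
    u = width / 2.0 + f * p_cam[0] / p_cam[2]
    v = height / 2.0 - f * p_cam[1] / p_cam[2]
    return u, v
```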
  • Since the projection screen 51 has a curved shape along a spherical shell centered on the viewpoint camera 55, distortion of the captured image due to perspective projection can be prevented.
  • the camera 3 is a wide-angle camera, and a lens distortion occurs in the captured image as shown in FIG. 3, but the lens distortion is removed when the captured image is attached to the projection screen 51.
  • the method for removing the lens distortion is arbitrary, but for example, it is conceivable to use a look-up table in which the positions of pixels before correction and the positions of pixels after correction are associated with each other. Thereby, the three-dimensional virtual space 40 shown in FIG. 4 and the captured image can be well matched.
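Such a lookup table can be applied efficiently with a remap operation; the sketch below assumes the table has already been built, for example from an offline calibration of the wide-angle camera 3, which the patent leaves open.

```python
import cv2
import numpy as np

def undistort_with_lut(frame: np.ndarray, map_x: np.ndarray, map_y: np.ndarray) -> np.ndarray:
    """Remove lens distortion using a precomputed lookup table.

    map_x/map_y give, for every corrected (output) pixel, the corresponding source
    pixel position in the distorted frame, i.e. the position 'before correction'."""
    return cv2.remap(frame, map_x, map_y, interpolation=cv2.INTER_LINEAR)

# One conceivable way to build the table offline from a standard camera calibration
# (camera_matrix and dist_coeffs would come from e.g. cv2.calibrateCamera):
# map_x, map_y = cv2.initUndistortRectifyMap(camera_matrix, dist_coeffs, None,
#                                            camera_matrix, (width, height), cv2.CV_32FC1)
```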
  • FIG. 5 shows the result of combining the above-described two-dimensional image by rendering the three-dimensional scene data 50 of FIG. 4 with the captured video shown in FIG.
  • the part in which the image captured by the camera 3 appears is shown by a broken line for convenience so that it can be easily distinguished from other parts.
  • figures 41f, 42f, ... Representing additional display information are arranged so as to overlap the captured video image.
  • The figures 41f, 42f, ... are generated as a result of drawing the three-dimensional shapes of the virtual reality objects 41v, 42v, ..., which compose the three-dimensional scene data 50 shown in FIG. 4, from a viewpoint having the same position and orientation as the camera 3. Therefore, geometrical consistency is maintained, and even when the graphics 41f, 42f, ... are superimposed on the realistic image captured by the camera 3, they do not cause a sense of visual discomfort.
  • the lens distortion is removed from the captured image input from the camera 3 when it is projected on the projection screen 51 of the three-dimensional virtual space 40.
  • the rendering unit 32 again adds lens distortion to the synthesized image after rendering by inverse conversion using the above-mentioned lookup table.
  • As is apparent from a comparison of FIG. 5 and FIG. 3, the resulting composite image does not cause a sense of discomfort in relation to the camera image before composition.
  • However, the reapplication of lens distortion may be omitted.
  • the rendering unit 32 further synthesizes character information describing useful information for understanding the situation at positions near the figures 41f, 42f, ... In the synthesized video.
  • the content of character information can be arbitrary.
  • Examples regarding weather include how the weather will change (for example, in how many minutes it will clear up), the distinction between rain clouds and ordinary clouds, areas with a lot of water vapor, snowfall areas, areas where sunlight is blocked by clouds, and areas where the temperature is high, such as around a factory.
  • Examples of the ship include information for identifying the ship (ship name, etc.), information indicating the size of the ship, and the like. As a result, it is possible to realize a monitoring screen with abundant information.
  • In the composite video, a target information graphic 46f, which is a graphic representing additional display information based on the detection results obtained by the ship monitoring radar 12, the AIS receiver 9, and the image recognition unit 28, is combined with the captured video.
  • An azimuth scale 48f is displayed in the composite image of FIG. 5.
  • the azimuth scale 48f is formed in an arc shape so as to connect the left end and the right end of the screen. Numerical values indicating the azimuth corresponding to the image are written on the azimuth scale 48f. This allows the user to intuitively understand the direction in which the camera 3 is looking.
  • The rendering unit 32 automatically moves the position at which the azimuth scale 48f is combined in the vertical direction of the image. As a result, the azimuth scale 48f can be prevented from interfering with other displays.
  • the azimuth scale 48f in the composite image of FIG. 5 is obtained by arranging the virtual reality object 48v of the azimuth scale in the three-dimensional scene data 50 shown in FIG. 4 and rendering this by the rendering unit 32.
  • In the composite image, a caution level chart 49f is displayed, which indicates for each azimuth the degree to which one should be vigilant in terms of the safety of ship navigation.
  • The additional display information acquisition unit 17 obtains the degree of caution for each azimuth around the installation position of the camera 3, considering various factors that affect safety, such as the strength of waves and the distribution of wind strength.
  • the 3D scene generation unit 31 generates the virtual reality object 49v of the alertness chart based on the information output from the additional display information acquisition unit 17, and arranges it in the 3D virtual space 40.
  • the alertness chart 49f can be combined with the camera image. Therefore, the additional display information acquisition unit 17 corresponds to a warning level acquisition unit.
  • the alert level chart 49f allows the operator to easily notice a situation that requires attention.
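The patent does not specify how the caution degree is computed from the various factors; the toy scoring below merely illustrates the idea of combining per-azimuth factors (here wind speed and wave height, with made-up weights) into one caution value per azimuth sector.

```python
import numpy as np

def caution_by_azimuth(wind_speed: np.ndarray, wave_height: np.ndarray,
                       w_wind: float = 0.6, w_wave: float = 0.4) -> np.ndarray:
    """Toy per-azimuth caution score in [0, 1].

    wind_speed and wave_height are per-azimuth arrays (e.g. 360 one-degree sectors)
    around the camera installation position; the weights and normalisation are
    illustrative assumptions, not values given in the patent."""
    norm = lambda v: (v - v.min()) / (np.ptp(v) + 1e-9)   # scale each factor to 0..1
    return w_wind * norm(wind_speed) + w_wave * norm(wave_height)
```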
  • the figures included in the composite video are not limited to the information described above, and can be configured to show various information.
  • Regarding weather, conceivable examples include rain echoes on the ground, snow echoes, echoes of rain inside clouds, wind direction, wind speed, temperature, atmospheric pressure, water vapor amount, forecasts of solar radiation or precipitation, high-temperature areas around a factory, lightning locations, snowfall areas, and the like.
  • These pieces of information may be given as a two-dimensional distribution or a three-dimensional distribution.
  • the time series data accumulating the weather information may be analyzed, and the obtained information may be shown in the composite video.
  • Regarding targets such as ships, conceivable examples include the identification number such as the MMSI number included in the AIS information, and the length and width of the vessel.
  • the user can set whether or not to additionally display the above information in the composite video for each type of information. Thereby, the user can be in a state in which, for example, only the graphic 42f of the precipitation distribution is combined with the camera image depending on the situation. By setting to display only the required virtual reality figure, it is possible to prevent the display from being crowded.
  • FIG. 6 is a flowchart showing a series of processes performed in the peripheral monitoring device 1.
  • First, the peripheral monitoring device 1 inputs the image captured by the camera 3 from the captured image input unit 21 (step S101). Subsequently, the peripheral monitoring device 1 inputs various weather information from the weather radar 15 and the like (step S102). Next, the augmented reality video generation unit 30 generates two-dimensional figures obtained by converting the outer shapes of the three-dimensional shapes of the weather information (step S103). At the same time, the augmented reality video generation unit 30 generates a composite video by combining the obtained two-dimensional graphics (weather information graphics) with the captured video, and outputs the composite video (step S104). After that, the process returns to step S101, and the processes of steps S101 to S104 are repeated, as outlined in the sketch below.
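Schematically, the repeated steps S101 to S104 correspond to a loop of the following form; the four objects and their methods are placeholders standing in for the units of this embodiment, not an actual API.

```python
def monitoring_loop(camera, weather_sources, generator, display):
    """Schematic version of the repeated steps S101 to S104 in FIG. 6."""
    while True:
        frame = camera.read_frame()                            # S101: input captured image
        weather = [src.latest() for src in weather_sources]    # S102: input weather information
        figures = generator.render_weather_figures(weather)    # S103: 2-D weather information figures
        display.show(generator.compose(frame, figures))        # S104: output composite video
```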
  • the surroundings monitoring device 1 of the present embodiment includes the captured image input unit 21, the weather information input unit 22, and the rendering unit 32.
  • the captured image input unit 21 inputs a captured image of the camera 3.
  • the weather information input unit 22 inputs weather information including a plurality of three-dimensional position information.
  • The rendering unit 32 generates weather information graphics 41f, 42f, 43f, 44f, 45f, which are two-dimensional figures obtained by converting the outer shapes of the three-dimensional shapes (the virtual reality objects 41v, 42v, 43v, 44v, 45v) based on the plurality of pieces of three-dimensional position information included in the weather information so as to geometrically match the captured image.
  • the rendering unit 32 generates a composite video in which the weather information graphics 41f, 42f, ... Are composited with the captured video.
  • Thereby, the operator can easily understand the weather information, including the outline information in the height direction, together with the camera image.
  • the rendering unit 32 simultaneously generates a two-dimensional figure based on the virtual reality objects 41v, 42v, ... And synthesizes the two-dimensional figure and the camera image. However, these processes may be performed separately.
  • the augmented reality video generation unit 30 further includes a two-dimensional synthesis unit.
  • the rendering unit 32 renders only the virtual reality objects 41v, 42v, ... With the projection screen 51 omitted to generate a two-dimensional image.
  • The two-dimensional image includes the two-dimensional figures (the weather information figures and the target information figures) obtained by converting the virtual reality objects 41v, 42v, .... After that, the two-dimensional synthesis unit superimposes the image of this rendering result on the camera image.
  • the rendering unit 32 corresponds to the weather information graphic generation unit
  • the two-dimensional synthesis unit corresponds to the synthetic video generation unit.
  • the augmented reality image generation unit 30 may arrange vector data of a coastline obtained from a nautical chart, for example, in the three-dimensional scene data 50 as a virtual reality object for rendering. In this case, the figure of the coastline can be displayed on the composite image.
  • the augmented reality video generation unit 30 may arrange a three-dimensional shape representing a map as virtual reality objects in three-dimensional scene data for rendering.
  • a figure of a road or a building can be displayed on the composite image.
  • A graphic or text indicating a special warning may be displayed on the composite video to alert the operator.
  • The device may be configured so that the operator can display past or future weather information by specifying a date and time.
  • Past weather information can be obtained by accumulating the acquired current weather information in an appropriate storage unit.
  • Future weather information can be obtained by the surroundings monitoring device 1 estimating it from past and present weather information by calculation. For example, it is conceivable that, by the operator specifying a past or future date and time using the keyboard 36, the mouse 37, etc., the weather information graphic indicating the past weather information or the estimated future weather information is superimposed on the current camera image and displayed.
  • the above pan / tilt function may be omitted in the camera 3, and the shooting direction may be unchangeable.
  • In the above embodiment, the virtual reality objects 41v, 42v, ... are arranged on a camera-orientation basis with the position of the camera 3 as the origin, as described with reference to FIG. 4.
  • However, the virtual reality objects 41v, 42v, ... may be arranged not on the camera orientation reference but on a true north reference in which the +z direction is always true north.
  • In this case, when the azimuth changes due to a pan operation of the camera 3, the position and orientation of the viewpoint camera 55 (which simulates the camera 3) in the three-dimensional virtual space 40 are changed accordingly instead of rearranging the virtual reality objects 41v, 42v, ....
  • Further, the origin of the three-dimensional virtual space 40 may be a fixed point appropriately set on the earth instead of a point directly below the camera 3, with, for example, the +z direction being true north and the +x direction being true east.
  • the figure representing the additional display information is not limited to the one shown in FIG. 5, and can be changed to various figures.
  • the virtual reality object 41v representing a cloud can be configured as a three-dimensional model having a complicated contour shape imitating a cloud lump instead of a collection of small spheres.
  • A three-dimensional model of a ship may be used as the virtual reality object representing the target; this model is arranged in a direction that matches the heading of the ship obtained from the AIS information or the like.
  • the size of the three-dimensional model of the ship arranged in the three-dimensional virtual space 40 may be changed according to the size of the ship obtained from the AIS information or the like.
  • At least one of the ship monitoring radar 12 and the AIS receiver 9 may not be connected to the peripheral monitoring device 1. Further, the image recognition unit 28 may be omitted from the peripheral monitoring device 1.
  • the peripheral monitoring device 1 is not limited to a facility on the ground, and may be provided in a moving body such as a ship.
  • All processes described herein may be embodied by software code modules executed by a computing system including one or more computers or processors and may be fully automated.
  • the code modules may be stored on any type of non-transitory computer readable medium or other computer storage device. Some or all of the methods may be embodied in dedicated computer hardware.
  • the various exemplary logic blocks and modules described in connection with the embodiments disclosed herein can be implemented or executed by a machine such as a processor.
  • the processor may be a microprocessor, but in the alternative, the processor may be a controller, microcontroller, or state machine, combinations thereof, or the like.
  • the processor can include electrical circuitry configured to process computer-executable instructions.
  • Alternatively, the processor may include an application-specific integrated circuit (ASIC), a field-programmable gate array (FPGA), or another programmable device that performs logical operations without processing computer-executable instructions.
  • A processor can also be implemented as a combination of computing devices, such as a combination of a digital signal processor (DSP) and a microprocessor, multiple microprocessors, one or more microprocessors in combination with a DSP core, or any other such configuration. Although described herein primarily in terms of digital technology, a processor may also include primarily analog components. For example, some or all of the signal processing algorithms described herein may be implemented by analog circuits or mixed analog and digital circuits.
  • A computing environment can include any type of computer system, including, but not limited to, a computer system based on a microprocessor, a mainframe computer, a digital signal processor, a portable computing device, a device controller, or a computational engine within an apparatus.
  • Unless specifically stated otherwise, conditional language such as "can," "could," "might," or "may" is understood within the context as generally used to convey that certain embodiments include certain features, elements, and/or steps while other embodiments do not. Accordingly, such conditional language is not generally intended to imply that features, elements, and/or steps are in any way required for one or more embodiments, or that one or more embodiments necessarily include logic for deciding whether these features, elements, and/or steps are included in or are to be performed in any particular embodiment.
  • Unless specifically stated otherwise, disjunctive language such as the phrase "at least one of X, Y, and Z" is understood within the context as generally used to indicate that an item, term, etc. may be X, Y, or Z, or any combination thereof (e.g., X, Y, and/or Z). Accordingly, such disjunctive language does not generally imply that a particular embodiment requires at least one of X, at least one of Y, and at least one of Z to each be present.
  • Articles such as "a" or "an" should generally be interpreted to include one or more of the described items unless specifically stated otherwise. Accordingly, phrases such as "a device configured to" are intended to include one or more of the recited devices, and such one or more recited devices can also be collectively configured to carry out the stated recitations. For example, "a processor configured to perform A, B, and C" can include a first processor configured to perform A working in conjunction with a second processor configured to perform B and C.
  • The term "horizontal" as used herein, regardless of its orientation, is defined as a plane parallel to the plane or surface of the floor of the area in which the described system is used, or the plane in which the described method is performed.
  • The term "floor" can be interchanged with the terms "ground" or "water surface."
  • The term "vertical" refers to a direction perpendicular to the defined horizontal. Terms such as "above," "below," "bottom," "top," "side," "higher," "lower," "upward," "over," and "under" are defined with respect to the horizontal plane.
  • As used herein, the terms "attached," "connected," "mated," and other related terms should, unless otherwise noted, be construed to include removable, movable, fixed, adjustable, and/or detachable connections or couplings. Connections/couplings include direct connections and/or connections having an intermediate structure between the two described components.
  • Numbers preceded by terms such as "approximately," "about," and "substantially" as used herein include the recited numbers and also represent an amount close to the stated amount that still performs a desired function or achieves a desired result. For example, unless otherwise expressly specified, "approximately," "about," and "substantially" refer to values within less than 10% of the stated value.
  • Features of embodiments disclosed herein that are preceded by terms such as "approximately," "about," and "substantially" represent the feature with some variability that still performs the desired function or achieves the desired result for that feature.

Abstract

[Problem] To provide an image generation device that can show the distribution of weather data in a more understandable manner. [Solution] A periphery monitoring device 1 is provided with; a photographed image input unit 21; a weather information input unit 22; and a rendering unit 32. The photographed image input unit 21 inputs a photographed image of a camera 3. The weather information input unit 22 inputs weather information which includes multiple sets of three-dimensional position information. The rendering unit 32 generates a weather information figure which is a two-dimensional figure obtained by converting the outer shape of a three-dimensional shape based on the multiple sets of three-dimensional position information included in the weather information so as to geometrically match the photographed image. The rendering unit 32 generates a synthesized image in which the weather information figure is synthesized with the photographed image.

Description

Video generation device and video generation method
The present invention mainly relates to an image generation device that synthesizes figures on a camera image.
Patent Document 1 discloses a distribution data display device that generates a graphic image in which the intensity distribution of meteorological data is projected according to the magnitude of the elevation angle, and superimposes it on a camera image for display. Patent Document 1 mentions a configuration in which, when the meteorological data is hierarchized in the altitude direction, the distribution for each altitude is selectively displayed.
Japanese Patent No. 6087712
The configuration of Patent Document 1 above is premised on expressing the planar distribution of the meteorological data when the camera is oriented so as to look up at the sky. Therefore, with the display device of Patent Document 1, for example, the precipitation distribution according to altitude between a rain cloud and the surface of the earth cannot be seen at a glance, and there is room for improvement in that it lacks realism and a sense of presence.
The present invention has been made in view of the above circumstances, and an object thereof is to provide a video generation device capable of showing the distribution of meteorological data in a more easily understandable form.
Means and effects for solving the problem
The problem to be solved by the present invention is as described above. Next, the means for solving the problem and the effects thereof will be described.
According to a first aspect of the present invention, a video generation device having the following configuration is provided. That is, the video generation device includes a captured video input unit, a weather information acquisition unit, a weather information graphic generation unit, and a synthetic video generation unit. The captured video input unit inputs a video captured by a camera. The weather information acquisition unit acquires weather information. When the weather information includes a plurality of pieces of three-dimensional position information, the weather information graphic generation unit generates a weather information graphic, which is a two-dimensional figure obtained by converting the outer shape of a three-dimensional shape based on the plurality of pieces of three-dimensional position information. The synthetic video generation unit generates a synthetic video in which the weather information graphic is combined with the captured video.
With this, the user can easily understand the weather information, including the outline information in the height direction, together with the camera image.
In the above video generation device, it is preferable that the weather information graphic generation unit generates the weather information graphic by rendering the three-dimensional shape in two dimensions.
With this, the weather information graphic can be realized by three-dimensional computer graphics.
In the above video generation device, it is preferable that the weather information graphic generation unit generates the weather information graphic by two-dimensionally rendering a virtual reality object, which is the three-dimensional shape arranged in a three-dimensional space, from the position and orientation of a viewpoint corresponding to the captured video.
With this, the weather information graphic can be generated so as to geometrically match the captured video.
 前記の映像生成装置においては、前記気象情報には、気象用レーダで取得した降水エコー情報が含まれることが好ましい。 In the image generation device, it is preferable that the weather information includes precipitation echo information acquired by a weather radar.
 これにより、ユーザは、降水の分布の3次元的な輪郭を、現実感をもって容易に理解することができる。 With this, the user can easily understand the three-dimensional contour of the distribution of precipitation with a sense of reality.
In the above video generation device, it is preferable that the weather information graphic generation unit be capable of generating, based on the precipitation echo information, a figure indicating a region on the ground or on the water where precipitation is occurring.
 これにより、ユーザは、降水がある地上/水上の領域の分布を容易に理解することができる。 With this, the user can easily understand the distribution of the ground / water area where there is precipitation.
 前記の映像生成装置においては、前記気象情報には、太陽の位置が含まれることが好ましい。 In the above image generation device, it is preferable that the weather information includes the position of the sun.
 これにより、太陽の位置を、現実感をもって容易に理解することができる。 With this, you can easily understand the position of the sun with a sense of reality.
 前記の映像生成装置においては、前記気象情報には、雲の3次元形状が含まれることが好ましい。 In the above image generation device, it is preferable that the weather information includes a three-dimensional shape of a cloud.
 これにより、雲の形状の3次元的な輪郭を、現実感をもって容易に理解することができる。 With this, you can easily understand the three-dimensional contour of the cloud shape with a sense of reality.
In the above video generation device, it is preferable that the weather information graphic generation unit be capable of generating, based on the three-dimensional shape of the cloud and the position of the sun, a figure indicating a region on the ground or on the water where sunlight is blocked by the cloud.
 これにより、ユーザは、日射に関する情報を容易に理解することができる。 With this, the user can easily understand the information about solar radiation.
In the above video generation device, it is preferable that the weather information include at least one of information on the distribution of wind direction, information on the distribution of wind speed, information on the distribution of air temperature, information on the distribution of atmospheric pressure, information on the distribution of water vapor amount, information on the position of lightning, information on the predicted distribution of solar radiation, and information on the predicted distribution of precipitation.
 これにより、ユーザは、気象に関する様々な情報を容易に理解できる。 This allows the user to easily understand various weather information.
The above video generation device preferably has the following configuration. That is, the video generation device includes a target information input unit that inputs target information including the position of a target on the water. The synthetic video generation unit is capable of generating a synthetic video in which the weather information graphic is combined with the captured video and in which a target information graphic, which is a figure indicating the position of the target converted so as to geometrically match the captured video, is also combined with the captured video.
 これにより、気象に関する情報だけでなく、物標の位置に関する情報を、現実感を伴う映像によって統合的に把握することができる。 With this, not only the information on the weather but also the information on the position of the target can be comprehensively grasped by the video with a sense of reality.
 前記の映像生成装置においては、前記物標情報に、物標監視用レーダで物標を探知して得られた情報が含まれることが好ましい。 In the above-mentioned image generation device, it is preferable that the target information includes information obtained by detecting the target with the target monitoring radar.
 これにより、レーダによる物標の監視結果を容易に理解することができる。 With this, you can easily understand the result of target monitoring by radar.
 前記の映像生成装置においては、前記物標情報に、自動船舶識別装置から得られた情報が含まれることが好ましい。 In the above-mentioned image generation device, it is preferable that the target information includes information obtained from the automatic ship identification device.
 これにより、自動船舶識別装置による物標の監視結果を容易に理解することができる。 With this, you can easily understand the result of target monitoring by the automatic ship identification device.
  前記の映像生成装置においては、以下の構成とすることが好ましい。即ち、この映像生成装置は、方位と、その方位について警戒すべき度合いを示す警戒度と、 の関係を、少なくとも前記気象情報に基づいて取得する警戒度取得部を備える。前記合成映像生成部は、前記撮影映像に対して、前記方位と前記警戒度の関係を 示す警戒度チャートを合成可能である。 The above-mentioned video generation device preferably has the following configuration. That is, this video generation device includes a caution degree acquisition unit that acquires the relationship between the azimuth and the caution degree indicating the degree of caution for the azimuth, based on at least the weather information. The synthetic video generation unit can synthesize a caution level chart showing a relationship between the orientation and the caution level with the captured video.
 これにより、ユーザは、特にどの方位について注意すべきかを容易に理解することができる。 With this, the user can easily understand which azimuth to pay attention to.
 前記の映像生成装置においては、以下の構成とすることが好ましい。即ち、前記合成映像生成部は、前記撮影映像に対して、方位を示す方位目盛りを合成可能である。前記方位目盛りが前記撮影映像に対して合成される上下方向の位置が自動的に変更される。 The above-mentioned video generation device preferably has the following configuration. That is, the synthetic image generation unit can synthesize an azimuth scale indicating an azimuth with the captured image. The vertical position at which the azimuth scale is combined with the captured image is automatically changed.
 これにより、ユーザは、方位目盛りによって方向を容易に把握できるとともに、当該方位目盛りの表示が状況の把握の邪魔になることを回避できる。 With this, the user can easily grasp the direction by the azimuth scale and avoid the display of the azimuth scale from interfering with the grasp of the situation.
According to a second aspect of the present invention, the following video generation method is provided. That is, a video captured by a camera is input. Weather information is acquired. When the weather information includes a plurality of pieces of three-dimensional position information, a weather information graphic is generated, which is a two-dimensional figure obtained by converting the outer shape of a three-dimensional shape based on that position information so as to geometrically match the captured video. A synthetic video in which the weather information graphic is combined with the captured video is generated.
 これにより、ユーザは、カメラ映像とともに気象情報を、高さ方向での情報を含めて容易に理解することができる。 With this, the user can easily understand the weather information including the information in the height direction together with the camera image.
FIG. 1 is a block diagram showing the overall configuration of a perimeter monitoring device according to one embodiment of the present invention.
FIG. 2 is a side view showing various devices provided in a port monitoring facility and the like.
FIG. 3 is a view showing an example of a captured video input from a camera.
FIG. 4 is a conceptual diagram illustrating three-dimensional scene data constructed by arranging virtual reality objects in a three-dimensional virtual space, and a projection screen arranged in the three-dimensional virtual space.
FIG. 5 is a view showing a synthetic video output by an augmented reality video generation unit.
FIG. 6 is a flowchart illustrating processing performed in the perimeter monitoring device.
 次に、図面を参照して本発明の実施の形態を説明する。図1は、本発明の一実施形態に係る周辺監視装置1の全体的な構成を示すブロック図である。図2は、港湾監視施設4等に備えられる各種の機器を示す側面図である。 Next, an embodiment of the present invention will be described with reference to the drawings. FIG. 1 is a block diagram showing the overall configuration of a perimeter monitoring device 1 according to an embodiment of the present invention. FIG. 2 is a side view showing various devices provided in the port monitoring facility 4 and the like.
  図1に示す周辺監視装置(映像生成装置)1は、例えば、図2に示す港湾監視施設4に設置される。周辺監視装置1は、カメラ(撮影装置)3で撮影した映像を ベースとして、気象状況の監視、及び、港湾の船舶の動静監視を支援する情報を重畳した映像を生成することができる。周辺監視装置1が生成した映像は、ディ スプレイ2に表示される。 The perimeter monitoring device (video generation device) 1 shown in FIG. 1 is installed in the port monitoring facility 4 shown in FIG. 2, for example. The surroundings monitoring device 1 can generate a video on which information for supporting the weather condition monitoring and the motion monitoring of a ship in a port is superimposed on the basis of the video captured by the camera (imaging device) 3. The video generated by the peripheral monitoring device 1 is displayed on the display 2.
 ディスプレイ2は、例えば、港湾監視施設4でオペレータが監視業務を行う監視デスクに配置され たディスプレイとして構成することができる。ただし、ディスプレイ2は上記に限定されず、例えば、オペレータ又はその補助者が携帯する携帯型コンピュータ のディスプレイ等とすることが可能である。 The display 2 can be configured, for example, as a display placed on a monitoring desk where an operator performs monitoring work in the port monitoring facility 4. However, the display 2 is not limited to the above, and may be, for example, a display of a portable computer carried by an operator or its assistant.
 周辺監視装置1は、港湾監視施設4に設置されたカメラ3によって撮影された周囲の映像と、当該周囲の状況に関する付加表示情報(後に詳述)を仮想現実的に表現する図形と、を合成することにより、ディスプレイ2への出力映像である合成映像を生成する。 The surroundings monitoring device 1 combines a surrounding image captured by the camera 3 installed in the harbor monitoring facility 4 and a figure that virtually represents additional display information (detailed later) regarding the surroundings. By doing so, a composite video that is an output video to the display 2 is generated.
 次に、主に図1を参照して、周辺監視装置1に電気的に接続されるカメラ3及び各種の機器について説明する。 Next, the camera 3 and various devices electrically connected to the peripheral monitoring device 1 will be described mainly with reference to FIG.
  カメラ3は、港湾監視施設4の周囲を撮影する可視光ビデオカメラとして構成されている。カメラ3は広角型のカメラとして構成されており、見通しの良い高い 場所にやや下向きで取り付けられている。このカメラ3はライブ出力機能を有しており、撮影結果としての動画データ(映像データ)をリアルタイムで生成して 周辺監視装置1に出力することができる。図3には、カメラ3から周辺監視装置1に入力される撮影映像の例が示されている。 The camera 3 is configured as a visible light video camera that takes a picture around the port surveillance facility 4. The camera 3 is configured as a wide-angle type camera, and is mounted slightly downward in a place with high visibility. This camera 3 has a live output function and can generate moving image data (image data) as a shooting result in real time and output it to the surroundings monitoring device 1. FIG. 3 shows an example of a captured image input from the camera 3 to the peripheral monitoring device 1.
The camera 3 performs fixed-point imaging in principle. However, the camera 3 is attached via a rotation mechanism (not shown), and its imaging direction can be changed when a signal instructing a pan/tilt operation is input from the perimeter monitoring device 1.
In addition to the camera 3 described above, the perimeter monitoring device 1 of the present embodiment is electrically connected to various devices such as a weather radar 15, omnidirectional cameras 16, a ship monitoring radar (target monitoring radar) 12, and an AIS receiver (automatic ship identification device) 9.
The weather radar 15 detects raindrops, rain clouds, and the like by transmitting and receiving radio waves from its antenna while changing the elevation angle and the azimuth angle. By performing known calculations, the weather radar 15 obtains the elevation angle and azimuth angle at which raindrops and rain clouds are present, the distance to those raindrops and rain clouds, and the amount of precipitation. The weather radar 15 outputs precipitation echo information, which is a three-dimensional distribution of water content according to latitude, longitude, and altitude, to the perimeter monitoring device 1. The weather radar 15 corresponds to a weather information generation device.
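As a rough illustration of how one radar sample (azimuth, elevation, range) can be mapped to latitude, longitude, and altitude around the radar site, a minimal Python sketch follows. The publication contains no code; the flat-earth approximation, the neglect of beam refraction, and all names are assumptions made for illustration only.

```python
import math

EARTH_RADIUS_M = 6371000.0  # mean earth radius used for a simple flat-earth offset

def echo_sample_to_geodetic(radar_lat, radar_lon, radar_alt_m,
                            azimuth_deg, elevation_deg, range_m):
    """Convert one weather-radar echo sample to (latitude, longitude, altitude).

    Illustrative only: ignores beam refraction and earth curvature, which a
    real weather radar product would account for.
    """
    az = math.radians(azimuth_deg)
    el = math.radians(elevation_deg)
    horizontal = range_m * math.cos(el)          # ground-projected distance
    north = horizontal * math.cos(az)            # northward offset in metres
    east = horizontal * math.sin(az)             # eastward offset in metres
    alt = radar_alt_m + range_m * math.sin(el)   # altitude of the echo sample

    lat = radar_lat + math.degrees(north / EARTH_RADIUS_M)
    lon = radar_lon + math.degrees(
        east / (EARTH_RADIUS_M * math.cos(math.radians(radar_lat))))
    return lat, lon, alt
```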
Each omnidirectional camera 16 is an upward-facing camera equipped with a fish-eye lens and observes the whole sky from a fixed point. The omnidirectional cameras 16 are installed at three or more locations, and by analyzing the cloud images captured by the respective omnidirectional cameras 16, a three-dimensional distribution of clouds according to latitude, longitude, and altitude is obtained. The three-dimensional distribution of clouds is output to the perimeter monitoring device 1. The omnidirectional cameras 16 correspond to weather information generation devices.
Various other weather information generation devices are conceivable. For example, the perimeter monitoring device 1 can receive information such as air temperature, atmospheric pressure, humidity, solar radiation, wind direction, and wind speed output from a weather observation station installed at an appropriate location in the area monitored by the camera 3. Information on wave height output from a wave height meter installed at an appropriate location in the sea can also be input to the perimeter monitoring device 1. Furthermore, information on the amount of water vapor measured by a microwave radiometer on a weather observation satellite can be input to the perimeter monitoring device 1, as can the position of lightning measured by a known lightning monitoring system that observes lightning discharges.
The ship monitoring radar 12 can detect targets such as ships in the monitored harbor by transmitting and receiving radio waves from its antenna. The ship monitoring radar 12 also has a known target tracking (TT) function capable of acquiring and tracking a target, and can obtain the position and velocity vector (TT information) of that target. The ship monitoring radar 12 corresponds to a target detection device.
 AIS受信機9は、船舶から送信されるAIS情報を受信する。 AIS情報には、監視対象の港湾を航行している船舶の位置(緯度・経度)、当該船舶の長さ及び幅、当該船舶の種類及び識別情報、当該船舶の船速、針路及び 目的地等の、様々な情報が含まれる。AIS受信機9は、物標検出装置に相当する。 The AIS receiver 9 receives the AIS information transmitted from the ship. The AIS information includes the position (latitude / longitude) of the ship navigating the monitored port, the length and width of the ship, the type and identification information of the ship, the ship speed, the course, and the destination. Various information is included. The AIS receiver 9 corresponds to a target detection device.
 周辺監視装置1は、ユーザが操作するキーボード36及びマウス37に接続されている。ユーザは、キーボード36及びマウス37を操作することで、映像の生成に関する各種の指示を行うことができる。この指示には、カメラ3のパン/チルト動作等が含まれる。 The peripheral monitoring device 1 is connected to a keyboard 36 and a mouse 37 operated by the user. By operating the keyboard 36 and the mouse 37, the user can give various instructions regarding image generation. This instruction includes a pan / tilt operation of the camera 3 and the like.
 次に、周辺監視装置1の構成について、主に図1を参照して詳細に説明する。 Next, the configuration of the peripheral monitoring device 1 will be described in detail mainly with reference to FIG.
As shown in FIG. 1, the perimeter monitoring device 1 includes a captured video input unit 21, a weather information input unit 22, a target information input unit 23, an additional display information acquisition unit 17, a camera position/orientation setting unit 25, a sun position acquisition unit 26, an image recognition unit 28, and an augmented reality video generation unit 30.
Specifically, the perimeter monitoring device 1 is configured as a known computer and includes a CPU, a ROM, a RAM, an HDD, and the like (not shown). The perimeter monitoring device 1 further includes a GPU for performing the three-dimensional image processing described later at high speed. The HDD, for example, stores software for executing the video generation method of the present invention. Through the cooperation of this hardware and software, the perimeter monitoring device 1 can function as the captured video input unit 21, the weather information input unit 22, the target information input unit 23, the additional display information acquisition unit 17, the camera position/orientation setting unit 25, the sun position acquisition unit 26, the image recognition unit 28, the augmented reality video generation unit 30, and so on.
 撮影映像入力部21は、カメラ3が出力する映像データを、例えば30フレーム毎秒で入力することができる。撮影映像入力部21は、入力された映像データを、画像認識部28及び拡張現実映像生成部30(後述のレンダリング部32)に出力する。 The captured video input unit 21 can input the video data output by the camera 3 at, for example, 30 frames per second. The captured video input unit 21 outputs the input video data to the image recognition unit 28 and the augmented reality video generation unit 30 (a rendering unit 32 described below).
 気象情報入力部22は、気象用レーダ15及び全天球カメラ16が出力した各種の気象情報を入力することができる。従って、気象情報入力部22は、気象情報取得部に相当する。気象情報入力部22は、入力された気象情報を、付加表示情報取得部17に出力する。 The weather information input unit 22 can input various weather information output by the weather radar 15 and the omnidirectional camera 16. Therefore, the weather information input unit 22 corresponds to a weather information acquisition unit. The weather information input unit 22 outputs the input weather information to the additional display information acquisition unit 17.
  物標情報入力部23は、船舶監視用レーダ12及びAIS受信機9が出力した船舶等の物標に関する情報(以下、物標情報という。)を入力することができる。 物標情報入力部23は、物標情報取得部に相当する。物標情報入力部23は、入力された物標情報を、付加表示情報取得部17に出力する。 The target information input unit 23 can input information (hereinafter referred to as target information) about targets such as ships output by the ship monitoring radar 12 and the AIS receiver 9. The target information input unit 23 corresponds to the target information acquisition unit. The target information input unit 23 outputs the input target information to the additional display information acquisition unit 17.
The additional display information acquisition unit 17 acquires information to be displayed in addition to the video captured by the camera 3 (additional display information, i.e., weather information and target information), based on the information input to the perimeter monitoring device 1 from the weather information input unit 22 and the target information input unit 23, the information acquired by the image recognition unit 28, and the like. Various kinds of additional display information are conceivable; examples include the three-dimensional distribution of precipitation obtained from the weather radar 15, the three-dimensional distribution of clouds obtained from the omnidirectional cameras 16, the position of the sun obtained from the sun position acquisition unit 26 described later, the positions and speeds of ships obtained from the AIS receiver 9, the positions and speeds of targets obtained from the ship monitoring radar 12, and the positions and speeds of targets obtained from the image recognition unit 28. The additional display information acquisition unit 17 outputs the acquired information to the augmented reality video generation unit 30. Details of the additional display information will be described later.
The camera position/orientation setting unit 25 can set the position (imaging position) and orientation of the camera 3 in the port monitoring facility 4. Specifically, the information on the position of the camera 3 indicates latitude, longitude, and altitude, and the information on the orientation of the camera 3 indicates an azimuth angle and a depression angle. These pieces of information can be obtained, for example, by performing measurements when the camera 3 is installed. Although the orientation of the camera 3 can be changed within a predetermined angle range as described above, when a pan/tilt operation of the camera 3 is performed, the latest orientation information is set in the camera position/orientation setting unit 25 accordingly. The camera position/orientation setting unit 25 outputs the set information to the sun position acquisition unit 26, the additional display information acquisition unit 17, and the augmented reality video generation unit 30.
 太陽位置取得部26は、太陽の位置(方位角及び仰 角)を、カメラ3が設置された位置と、現在時刻と、に基づいて、公知の式に基づいた計算により求める。太陽の位置は気象情報の一種であるので、太陽位置取 得部26は、気象情報取得部に相当する。太陽位置取得部26は、取得した太陽の位置を付加表示情報取得部17に出力する。 The sun position acquisition unit 26 calculates the position of the sun (azimuth and elevation) by calculation based on a known formula based on the position where the camera 3 is installed and the current time. Since the position of the sun is a type of weather information, the sun position acquisition unit 26 corresponds to the weather information acquisition unit. The sun position acquisition unit 26 outputs the acquired sun position to the additional display information acquisition unit 17.
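The publication only refers to known formulas for this calculation. A minimal Python sketch using simplified declination and hour-angle approximations (not the actual formulas used by the device, and ignoring the equation of time) is shown below to make the inputs and outputs concrete.

```python
import math
from datetime import datetime, timezone

def sun_position(lat_deg, lon_deg, when_utc):
    """Approximate solar azimuth/elevation (degrees) for a site and UTC time.

    Simplified formulas for illustration; a real system would use a
    higher-accuracy ephemeris. The structure matches the text: camera
    position plus current time in, azimuth and elevation out.
    """
    day_of_year = when_utc.timetuple().tm_yday
    # Solar declination (degrees), Cooper's approximation.
    decl = 23.44 * math.sin(math.radians(360.0 / 365.0 * (284 + day_of_year)))
    # Local mean solar time from UTC and longitude (equation of time ignored).
    solar_time = (when_utc.hour + when_utc.minute / 60.0
                  + when_utc.second / 3600.0 + lon_deg / 15.0) % 24.0
    hour_angle = math.radians(15.0 * (solar_time - 12.0))

    lat = math.radians(lat_deg)
    decl_r = math.radians(decl)
    sin_elev = (math.sin(lat) * math.sin(decl_r)
                + math.cos(lat) * math.cos(decl_r) * math.cos(hour_angle))
    elev = math.asin(sin_elev)
    cos_az = ((math.sin(decl_r) - math.sin(elev) * math.sin(lat))
              / (math.cos(elev) * math.cos(lat)))
    az = math.degrees(math.acos(max(-1.0, min(1.0, cos_az))))
    if hour_angle > 0:          # afternoon: the sun is west of due south
        az = 360.0 - az
    return az, math.degrees(elev)

# Example: approximate sun position above 34.7 N, 135.2 E at 03:00 UTC (noon JST).
print(sun_position(34.7, 135.2, datetime(2019, 6, 1, 3, 0, tzinfo=timezone.utc)))
```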
The image recognition unit 28 recognizes targets such as ships, divers, and drifting objects by cutting out, from the image acquired from the captured video input unit 21, portions considered to be ships or the like and collating them with a target image database registered in advance. Specifically, the image recognition unit 28 detects moving objects by an inter-frame difference method or the like, cuts out the regions where differences occur, and collates them with the image database. As the collation method, a known appropriate method such as template matching can be used. Image recognition can also be realized by another known method, for example, a neural network. For each recognized target, the image recognition unit 28 outputs information on the position recognized in the image to the additional display information acquisition unit 17. Like the target information input unit 23, the image recognition unit 28 corresponds to a target information acquisition unit.
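A minimal sketch of this recognition flow, assuming OpenCV, a hypothetical dictionary of template images, and invented threshold values (none of which are specified in the publication), could look like the following.

```python
import cv2
import numpy as np

def detect_moving_targets(prev_frame, curr_frame, templates, min_area=200):
    """Frame-difference candidate extraction followed by template matching.

    `templates` is a hypothetical dict such as {"ship": ship_img, "diver": ...}
    of grayscale reference images; all thresholds are illustrative.
    """
    prev_gray = cv2.cvtColor(prev_frame, cv2.COLOR_BGR2GRAY)
    curr_gray = cv2.cvtColor(curr_frame, cv2.COLOR_BGR2GRAY)
    diff = cv2.absdiff(curr_gray, prev_gray)
    _, mask = cv2.threshold(diff, 25, 255, cv2.THRESH_BINARY)
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)

    detections = []
    for contour in contours:
        x, y, w, h = cv2.boundingRect(contour)
        if w * h < min_area:
            continue                      # ignore tiny changed regions
        patch = curr_gray[y:y + h, x:x + w]
        for label, template in templates.items():
            if patch.shape[0] < template.shape[0] or patch.shape[1] < template.shape[1]:
                continue                  # template must fit inside the patch
            score = cv2.matchTemplate(patch, template, cv2.TM_CCOEFF_NORMED).max()
            if score > 0.7:
                detections.append((label, (x, y, w, h), float(score)))
    return detections
```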
 拡張現実映像生成部30は、ディスプレイ2に表示する合成映像を生成する。この拡張現実映像生成部30は、3次元シーン生成部31と、レンダリング部32と、を備える。 The augmented reality video generation unit 30 generates a composite video to be displayed on the display 2. The augmented reality video generation unit 30 includes a three-dimensional scene generation unit 31 and a rendering unit 32.
As shown in FIG. 4, the three-dimensional scene generation unit 31 constructs a virtual reality three-dimensional scene by arranging virtual reality objects 41v, 42v, ... corresponding to the additional display information in a three-dimensional virtual space 40. As a result, three-dimensional scene data (three-dimensional display data) 50, which is the data of the three-dimensional scene, is generated. Details of the three-dimensional scene will be described later.
The rendering unit 32 draws the three-dimensional scene data 50 generated by the three-dimensional scene generation unit 31, thereby generating figures 41f, 42f, ... in which the outer shapes of the virtual reality objects 41v, 42v, ... are converted into two dimensions. At the same time as it generates the two-dimensional figures 41f, 42f, ..., the rendering unit 32 generates a video in which those figures are combined with the video captured by the camera 3. The rendering unit 32 therefore functions as a weather information graphic generation unit and as a synthetic video generation unit. As shown in FIG. 5, in this synthetic video, the figures 41f, 42f, ... showing the additional display information are superimposed on the video captured by the camera 3. The rendering unit 32 (augmented reality video generation unit 30) outputs the generated synthetic video to the display 2. Details of the figure generation processing and the data synthesis processing will be described later.
 次に、前述の付加表示情報取得部17で取得される付加表示情報について詳細に説明する。 Next, the additional display information acquired by the above-mentioned additional display information acquisition unit 17 will be described in detail.
 付加表示情報は、カメラ3で撮影した映像に対して付加的に表示する情報であり、周辺監視装置1に接続される機器の目的及び機能に応じて様々なものが考えられる。 The additional display information is information that is additionally displayed on the video image captured by the camera 3, and various information can be considered depending on the purpose and function of the device connected to the peripheral monitoring device 1.
 気象用レーダ15に関しては、3次元での降水分布を付加表示情報とすることができる。全天球カメラ16に関しては、3次元での雲の分布を付加表示情報とすることができる。降水分布及び雲の分布に関する最新の情報は、それぞれの機器から周辺監視装置1に入力される。 Regarding the weather radar 15, the three-dimensional precipitation distribution can be used as additional display information. For the omnidirectional camera 16, the three-dimensional cloud distribution can be used as additional display information. The latest information regarding the precipitation distribution and the cloud distribution is input to the peripheral monitoring device 1 from each device.
  船舶監視用レーダ12に関しては、探知された物標の位置及び速度等を付加表示情報とすることができる。AIS受信機9に関しては、受信した上述のAIS情 報(例えば、船舶の位置及び向き等)を付加表示情報とすることができる。これらの情報は、それぞれの機器から周辺監視装置1にリアルタイムで入力される。 Regarding the ship monitoring radar 12, the position and speed of the detected target can be used as additional display information. Regarding the AIS receiver 9, the above-mentioned received AIS information (for example, the position and orientation of the ship) can be used as additional display information. These pieces of information are input from the respective devices to the peripheral monitoring device 1 in real time.
 また、本実施形態において、付加表示情報には、画像認識部28が画像認識を行うことにより識別された物標の位置及び速度等が含まれる。 Further, in the present embodiment, the additional display information includes the position and speed of the target identified by the image recognition unit 28 performing image recognition.
The position and velocity vector of a target included in the information obtained from the ship monitoring radar 12 are relative values based on the position and orientation of the antenna of the ship monitoring radar 12. The additional display information acquisition unit 17 converts the position and velocity vector of the target obtained from the ship monitoring radar 12 into earth-referenced values, based on the antenna position and orientation information preset in the perimeter monitoring device 1.
 同様に、画像認識部28から得られた物標の位置及び速度ベクトルは、 カメラ3の位置及び向きを基準とした相対的なものである。付加表示情報取得部17は、画像認識部28から得られた物標の位置及び速度ベクトルを、カメラ位 置/向き設定部25から得られた情報に基づいて、地球基準に変換する。 Similarly, the position and velocity vector of the target obtained from the image recognition unit 28 are relative to the position and orientation of the camera 3. The additional display information acquisition unit 17 converts the position and velocity vector of the target obtained from the image recognition unit 28 into the earth reference based on the information obtained from the camera position / orientation setting unit 25.
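For illustration, the conversion of a sensor-relative target observation into an earth-referenced position and course might be sketched as below; the flat-earth approximation and all parameter names are assumptions, not taken from the publication.

```python
import math

EARTH_RADIUS_M = 6371000.0

def to_earth_reference(sensor_lat, sensor_lon, sensor_heading_deg,
                       rel_bearing_deg, distance_m, rel_course_deg, speed_kn):
    """Turn a target observed relative to a fixed sensor (radar antenna or
    camera) into an earth-referenced latitude/longitude and course.

    Flat-earth small-distance approximation, adequate only for harbor scales.
    """
    bearing = math.radians(sensor_heading_deg + rel_bearing_deg)
    north = distance_m * math.cos(bearing)
    east = distance_m * math.sin(bearing)

    lat = sensor_lat + math.degrees(north / EARTH_RADIUS_M)
    lon = sensor_lon + math.degrees(
        east / (EARTH_RADIUS_M * math.cos(math.radians(sensor_lat))))
    course = (sensor_heading_deg + rel_course_deg) % 360.0
    return lat, lon, course, speed_kn
```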
 以下、付加表示情報の例を説明する。図3のカメラ 映像では、遠くの上空に多くの雲41rが現れているが、この雲41rは全天球カメラ16により検出されている。更には、上空の雲41rの一部から雨42r が降っているが、この雨42rに基づく降水は気象用レーダ15により検出されている。 Below, an example of additional display information is explained. In the camera image of FIG. 3, many clouds 41r appear in the distant sky, but the clouds 41r are detected by the spherical camera 16. Furthermore, rain 42r is falling from a part of the clouds 41r in the sky, and the precipitation based on the rain 42r is detected by the weather radar 15.
 気象に関する付加表示情報には、少 なくとも、それが配置される地点の位置(緯度及び経度)を示す情報が含まれている。また、気象に関する付加表示情報には、高度を示す情報が含まれる場合も ある。例えば、雲41rを示す付加表示情報には、当該雲を多数の点の集まりとして表現したときの各点の位置(緯度、経度及び高度)を示す情報が含まれてい る。雨42rを示す付加表示情報には、当該雨の分布の形状に含まれる複数の点の位置(緯度、経度及び高度)を示す情報が含まれている。 -At least the additional display information related to weather includes information indicating the position (latitude and longitude) of the point where it is located. Further, the additional display information related to weather may include information indicating altitude. For example, the additional display information indicating the cloud 41r includes information indicating the position (latitude, longitude, and altitude) of each point when the cloud is expressed as a set of many points. The additional display information indicating the rain 42r includes information indicating the positions (latitude, longitude, and altitude) of a plurality of points included in the shape of the rain distribution.
In the situation of FIG. 3, a large ship 46r is sailing in the harbor toward the back right of the camera image; this ship 46r is detected by the ship monitoring radar 12, the AIS receiver 9, and the image recognition unit 28.
 船舶等の物標に関する付加表示情報には、少なくとも、それが配置される海面(水面)における地点の位置(緯度及び経度)を示す情報が含まれている。例えば、船舶46rを示す付加表示情報には、当該船舶46の位置を示す情報が含まれている。 The additional display information related to the target such as a ship includes at least information indicating the position (latitude and longitude) of the point on the sea surface (water surface) on which it is placed. For example, the additional display information indicating the ship 46r includes information indicating the position of the ship 46.
Next, the construction of the three-dimensional scene by the three-dimensional scene generation unit 31 and the synthesis of the video by the rendering unit 32 will be described in detail with reference to FIG. 4. FIG. 4 is a conceptual diagram illustrating the three-dimensional scene data 50 generated by arranging the virtual reality objects 41v, 42v, ... in the three-dimensional virtual space 40, and the projection screen 51 arranged in the three-dimensional virtual space 40.
 3次元シーン生成部31によって仮想現実オブジェクト 41v,42v,・・・が配置される3次元仮想空間40は、図4に示すように直交座標系で構成される。この直交座標系の原点は、カメラ3の設置位置の真下 の高度ゼロの点に定められる。3次元仮想空間40において、原点を含む水平な面であるxz平面が海面(水面)又は地面を模擬するように設定される。図4の 例では、座標軸は、+z方向が常にカメラ3の方位角と一致し、+x方向が右方向、+y方向が上方向となるように定められる。この3次元仮想空間40内の各 地点(座標)は、カメラ3の周囲の現実の位置に対応するように設定される。 The three-dimensional virtual space 40 in which the virtual reality objects 41v, 42v, ... Are arranged by the three-dimensional scene generation unit 31 is configured by an orthogonal coordinate system as shown in FIG. The origin of this Cartesian coordinate system is set at a point of zero altitude just below the installation position of the camera 3. In the three-dimensional virtual space 40, the xz plane, which is a horizontal plane including the origin, is set so as to simulate the sea surface (water surface) or the ground. In the example of FIG. 4, the coordinate axes are determined so that the + z direction always matches the azimuth angle of the camera 3, the + x direction is the right direction, and the + y direction is the up direction. Each point (coordinate) in this three-dimensional virtual space 40 is set so as to correspond to the actual position around the camera 3.
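A minimal sketch of mapping a geodetic position into this virtual space (origin at zero altitude below the camera, +y up, +z along the camera azimuth, +x to the right) is given below, assuming a flat-earth approximation that is adequate only for harbor-scale distances; the publication specifies no code.

```python
import math

EARTH_RADIUS_M = 6371000.0

def to_scene_coords(lat, lon, alt_m, cam_lat, cam_lon, cam_azimuth_deg):
    """Map (latitude, longitude, altitude) to virtual-space (x, y, z)."""
    north = math.radians(lat - cam_lat) * EARTH_RADIUS_M
    east = (math.radians(lon - cam_lon) * EARTH_RADIUS_M
            * math.cos(math.radians(cam_lat)))

    # Rotate the north/east offsets so that +z points along the camera azimuth
    # and +x points to the right of it.
    az = math.radians(cam_azimuth_deg)
    z = north * math.cos(az) + east * math.sin(az)
    x = -north * math.sin(az) + east * math.cos(az)
    y = alt_m
    return x, y, z
```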
 図4には、図3に示す港湾の状況に対応して、仮想現実オブジェクト41v,42v,・・・を3次元仮想空間40内に配置した例が示されている。 FIG. 4 shows an example in which virtual reality objects 41v, 42v, ... Are arranged in the three-dimensional virtual space 40 corresponding to the situation of the port shown in FIG.
  本実施形態において、雲に関する仮想現実オブジェクト41vは、小さな球の集まりとして表現されている。それぞれの全天球カメラ16は、方位角及び仰角毎 に、雲の有無及び厚さを取得することができる。なお、雲の厚さは、撮影画像の雲の色の濃さによって推定することができる。複数の全天球カメラ16で同時に 撮影した画像を組み合わせて3次元的に処理することで、雲の3次元形状を取得することができる。 In the present embodiment, the virtual reality object 41v related to the cloud is represented as a collection of small spheres. Each spherical camera 16 can acquire the presence / absence of cloud and the thickness for each azimuth and elevation. The thickness of the cloud can be estimated by the color density of the cloud of the captured image. The three-dimensional shape of the cloud can be acquired by combining the images captured by the plurality of spherical cameras 16 at the same time and processing them three-dimensionally.
The virtual reality object 42v relating to rain is expressed as a three-dimensional shape whose contours are equal-intensity surfaces of precipitation at a plurality of levels. Based on the radar echoes, the weather radar 15 can acquire the precipitation intensity for each azimuth angle and elevation angle. The above three-dimensional shape can be obtained by converting this data into a distribution of precipitation intensity on a three-dimensional grid and performing three-dimensional interpolation as necessary. Furthermore, in the present embodiment, a figure indicating a region on the ground or on the water where precipitation is occurring is expressed as a polygon placed on the plane representing the ground or the water surface (the xz plane).
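One way to realize the equal-intensity-surface outline from a 3D precipitation grid is a marching-cubes extraction. The sketch below assumes scikit-image and invented intensity levels; the publication does not name a particular algorithm or library.

```python
import numpy as np
from skimage import measure

def precipitation_isosurfaces(rain_grid, levels=(1.0, 10.0, 30.0)):
    """Extract triangle meshes at several precipitation-intensity levels
    from a 3D grid (values e.g. in mm/h); the levels are illustrative."""
    meshes = []
    for level in levels:
        if rain_grid.max() <= level:
            continue                      # no surface exists at this level
        verts, faces, _, _ = measure.marching_cubes(rain_grid, level=level)
        meshes.append((level, verts, faces))
    return meshes

# Example with synthetic data: a block of "precipitation" in a 32^3 grid.
grid = np.zeros((32, 32, 32))
grid[10:20, 10:20, 0:15] = 40.0
print([(lvl, v.shape, f.shape) for lvl, v, f in precipitation_isosurfaces(grid)])
```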
 3次元仮想空間40には、太陽を示す仮想現実オブジェクト43vも配置されている。3次元 仮想空間40における原点から見た太陽の方位角及び仰角は、カメラ3の方位角と、太陽位置取得部26で取得した太陽の方位角及び仰角と、に基づいて簡単に 求めることができる。仮想現実オブジェクト43vは、後述の映写スクリーン51よりもすぐ手前の位置に配置される。これにより、太陽を示す後述の図形 43fがカメラ映像によって隠れないようにすることができる。 A virtual reality object 43v indicating the sun is also arranged in the three-dimensional virtual space 40. The azimuth and elevation of the sun viewed from the origin in the three-dimensional virtual space 40 can be easily obtained based on the azimuth of the camera 3 and the azimuth and elevation of the sun acquired by the sun position acquisition unit 26. . The virtual reality object 43v is arranged at a position immediately in front of a projection screen 51 described later. This makes it possible to prevent the later-described graphic 43f indicating the sun from being hidden by the camera image.
 太陽光が雲によって遮られて日陰になる地上又は水上の領域 に関する仮想現実オブジェクト44vは、地上又は水上を示す平面(xz平面)に置かれた多角形として表現されている。この多角形の領域は、雲の仮想現実オ ブジェクト41vを太陽光の方向でxz平面に投影することにより求めることができる。 The virtual reality object 44v relating to the area on the ground or above the water where the sunlight is shaded by the clouds is represented as a polygon placed on a plane (xz plane) showing the ground or above the water. This polygonal area can be obtained by projecting the virtual reality object 41v of the cloud on the xz plane in the direction of sunlight.
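A minimal sketch of this projection, assuming SciPy's convex hull as a simplification of the shadow outline and a sun above the horizon, is shown below; the coordinate conventions follow the virtual space described above, and the hull is an assumption rather than the publication's method.

```python
import numpy as np
from scipy.spatial import ConvexHull

def cloud_shadow_polygon(cloud_points_xyz, sun_azimuth_deg, sun_elevation_deg):
    """Project cloud sample points (virtual-space x, y, z; +y up) onto the
    ground plane y = 0 along the sunlight direction and return a bounding
    polygon in the xz plane.

    `sun_azimuth_deg` is measured in the virtual space relative to +z;
    the sun must be above the horizon (elevation > 0).
    """
    az = np.radians(sun_azimuth_deg)
    el = np.radians(sun_elevation_deg)
    # Unit vector pointing from the sun towards the scene.
    direction = np.array([-np.sin(az) * np.cos(el),
                          -np.sin(el),
                          -np.cos(az) * np.cos(el)])
    pts = np.asarray(cloud_points_xyz, dtype=float)
    t = pts[:, 1] / -direction[1]             # parameter until y reaches 0
    ground = pts + t[:, None] * direction     # projected points, y == 0
    hull = ConvexHull(ground[:, [0, 2]])      # polygon in the xz plane
    return ground[hull.vertices][:, [0, 2]]
```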
 風向きを表す仮想現実オブジェクト45vは、矢印の3次元図形として表現されている。仮想現実オブジェクト45vの位置は、上述の気象観測ステーションが備える風向風速計の位置に対応している。矢印の向きは風向きを示し、矢印の長さは風の強さを示している。 The virtual reality object 45v representing the wind direction is represented as a three-dimensional figure of an arrow. The position of the virtual reality object 45v corresponds to the position of the anemometer of the above-mentioned meteorological observation station. The direction of the arrow indicates the wind direction, and the length of the arrow indicates the strength of the wind.
In the present embodiment, the virtual reality object 46v relating to a monitored target includes a downward-pointing cone indicating the position of the recognized target (that is, the ship 46r) and an arrow indicating the velocity vector of that target. The downward cone expresses that the target is located directly below it; the direction of the arrow expresses the direction of the target's velocity, and the length of the arrow expresses its magnitude.
Each of the virtual reality objects 41v, 42v, ... is arranged, with the azimuth of the camera 3 as a reference, so as to reflect the position of the additional display information it represents relative to the camera 3. In determining the positions at which these virtual reality objects 41v, 42v, ... are arranged, calculations are performed using the position and orientation of the camera 3 set in the camera position/orientation setting unit 25 shown in FIG. 1.
The three-dimensional scene generation unit 31 generates the three-dimensional scene data 50 as described above. In the example of FIG. 4, the virtual reality objects 41v, 42v, ... are arranged on an azimuth basis with the origin directly below the camera 3, so when the azimuth of the camera 3 changes from the state of FIG. 3, a new three-dimensional scene in which the virtual reality objects 41v, 42v, ... are rearranged is constructed and the three-dimensional scene data 50 is updated. Also, when the content of the additional display information changes, for example because the shape of the clouds 41r changes or the ship 46r moves from the state of FIG. 3, the three-dimensional scene data 50 is updated so as to reflect the latest additional display information.
 更 に、レンダリング部32は、3次元仮想空間40に、カメラ3の撮影映像が映写される位置及び範囲を定める映写スクリーン51を配置する。この映写スクリー ン51と仮想現実オブジェクト41v,42v,・・・の両方が視野に含まれるように後述の視点カメラ55の位置及び向きを設定することで、映像の合成を実 現することができる。 Furthermore, the rendering unit 32 arranges a projection screen 51 in the three-dimensional virtual space 40, which defines a position and a range on which the image captured by the camera 3 is projected. By setting the position and orientation of the viewpoint camera 55, which will be described later, such that both the projection screen 51 and the virtual reality objects 41v, 42v, ... Are included in the field of view, it is possible to realize image synthesis. .
 レンダリング部32は、視点カメラ55を、現実空間におけるカメラ3の位置及び向きを3次元仮想空 間40においてシミュレートするように配置する。また、レンダリング部32は、映写スクリーン51を、当該視点カメラ55に正対するように配置する。カメ ラ3の位置のシミュレートに関し、カメラ3の位置は、図1に示すカメラ位置/向き設定部25の設定値に基づいて得ることができる。 The rendering unit 32 arranges the viewpoint camera 55 so as to simulate the position and orientation of the camera 3 in the real space in the three-dimensional virtual space 40. The rendering unit 32 also arranges the projection screen 51 so as to face the viewpoint camera 55. Regarding the simulation of the position of the camera 3, the position of the camera 3 can be obtained based on the set value of the camera position / orientation setting unit 25 shown in FIG.
The azimuth of the viewpoint camera 55 in the three-dimensional virtual space 40 does not change even when the azimuth of the camera 3 changes due to a pan operation. Instead, when the camera 3 is panned, the rendering unit 32 rearranges the virtual reality objects 41v, 42v, ... in the three-dimensional virtual space 40 by rotating them within the horizontal plane about the y axis by the amount of the change in azimuth.
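This rearrangement on a pan operation amounts to a rotation of the object vertices about the y axis. A small NumPy sketch follows; the sign convention (a pan to the right rotates the scene content towards -x) is an assumption for illustration.

```python
import numpy as np

def repose_for_pan(points_xyz, delta_azimuth_deg):
    """Rotate virtual-reality object vertices about the y axis by the change
    in camera azimuth caused by a pan operation."""
    a = np.radians(-delta_azimuth_deg)   # scene rotates opposite to the pan
    rot_y = np.array([[np.cos(a), 0.0, np.sin(a)],
                      [0.0,       1.0, 0.0      ],
                      [-np.sin(a), 0.0, np.cos(a)]])
    return np.asarray(points_xyz, dtype=float) @ rot_y.T
```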
The depression angle of the viewpoint camera 55 is controlled so as to always be equal to the depression angle of the camera 3. In conjunction with the change in depression angle caused by the tilt operation of the camera 3 (the change in the depression angle of the viewpoint camera 55), the rendering unit 32 changes the position and orientation of the projection screen 51 arranged in the three-dimensional virtual space 40 so that the screen always directly faces the viewpoint camera 55.
The rendering unit 32 then generates a two-dimensional image by performing known rendering processing on the three-dimensional scene data 50 and the projection screen 51. More specifically, the rendering unit 32 places the viewpoint camera 55 in the three-dimensional virtual space 40 and defines a view frustum 56, which determines the range subject to the rendering processing, with the viewpoint camera 55 as its apex and the line-of-sight direction as its central axis. Next, of the polygons constituting each object (the virtual reality objects 41v, 42v, ... and the projection screen 51), the rendering unit 32 converts, by perspective projection, the vertex coordinates of the polygons located inside the view frustum 56 into coordinates on a two-dimensional virtual screen corresponding to the display area of the synthetic video on the display 2. Then, based on the vertices arranged on this virtual screen, pixels are generated and processed at a predetermined resolution to produce the two-dimensional image.
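As a simplified, CPU-side illustration of this projection step (a real implementation would use the GPU's view and projection matrices), one vertex could be projected onto the virtual screen as follows, assuming the viewpoint camera looks along +z tilted down by a depression angle; all parameters are illustrative.

```python
import numpy as np

def project_vertex(vertex_xyz, cam_pos, depression_deg,
                   fov_y_deg, width_px, height_px):
    """Perspective-project one scene vertex to pixel coordinates on the
    two-dimensional virtual screen."""
    d = np.radians(depression_deg)
    p = np.asarray(vertex_xyz, dtype=float) - np.asarray(cam_pos, dtype=float)
    # Rotate the world so the camera's line of sight becomes the +z axis.
    rot_x = np.array([[1.0, 0.0, 0.0],
                      [0.0, np.cos(d), np.sin(d)],
                      [0.0, -np.sin(d), np.cos(d)]])
    x, y, z = rot_x @ p
    if z <= 0:
        return None                       # behind the viewpoint camera
    f = (height_px / 2.0) / np.tan(np.radians(fov_y_deg) / 2.0)
    u = width_px / 2.0 + f * x / z        # pixel column
    v = height_px / 2.0 - f * y / z       # pixel row (screen y grows downward)
    return u, v
```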
The two-dimensional image generated in this way contains the figures obtained by drawing the three-dimensional scene data 50 (in other words, the figures resulting from rendering the virtual reality objects 41v, 42v, ...). In the process of generating the two-dimensional image, the video captured by the camera 3 is placed so as to be pasted at the position corresponding to the projection screen 51. In this way, the rendering unit 32 realizes the video synthesis.
Since the projection screen 51 has a curved shape along a spherical shell centered on the viewpoint camera 55, distortion of the captured video due to perspective projection can be prevented. The camera 3 is a wide-angle camera, and its captured video has lens distortion as shown in FIG. 3; this lens distortion is removed when the captured video is pasted onto the projection screen 51. The method of removing the lens distortion is arbitrary; for example, a lookup table that associates pixel positions before correction with pixel positions after correction may be used. This allows the three-dimensional virtual space 40 shown in FIG. 4 and the captured video to be matched well.
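A minimal sketch of lookup-table-based distortion removal is shown below; nearest-neighbour sampling and the table layout are assumptions (a practical implementation would interpolate, for example with cv2.remap).

```python
import numpy as np

def undistort_with_lut(image, lut_x, lut_y):
    """Remove lens distortion with a precomputed lookup table.

    lut_x[i, j] / lut_y[i, j] give the source column / row in the distorted
    camera frame for each corrected output pixel (i, j).
    """
    src_cols = np.clip(np.rint(lut_x).astype(int), 0, image.shape[1] - 1)
    src_rows = np.clip(np.rint(lut_y).astype(int), 0, image.shape[0] - 1)
    return image[src_rows, src_cols]      # nearest-neighbour resampling
```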
 次に、カメラ3で撮影した映像と合成映像との関係を、例を参照して説明する。 Next, the relationship between the video captured by the camera 3 and the composite video will be described with reference to an example.
  図5には、図3に示す撮影映像に対し、図4の3次元シーンデータ50のレンダリングによる上記の2次元の画像を合成した結果が示されている。ただし、図5 では、カメラ3による撮影映像が現れている部分が、それ以外の部分と区別し易くなるように便宜的に破線で示されている。図5の合成映像では、付加表示情報 を表現する図形41f,42f,・・・が、撮影映像に重なるように配置されている。 FIG. 5 shows the result of combining the above-described two-dimensional image by rendering the three-dimensional scene data 50 of FIG. 4 with the captured video shown in FIG. However, in FIG. 5, the part in which the image captured by the camera 3 appears is shown by a broken line for convenience so that it can be easily distinguished from other parts. In the composite video image of FIG. 5, figures 41f, 42f, ... Representing additional display information are arranged so as to overlap the captured video image.
The figures 41f, 42f, ... are generated as the result of drawing the three-dimensional shapes of the virtual reality objects 41v, 42v, ... constituting the three-dimensional scene data 50 of FIG. 4 from a viewpoint having the same position and orientation as the camera 3. Accordingly, geometric consistency is maintained, so that even when the figures 41f, 42f, ... are superimposed on the realistic video from the camera 3, no sense of visual incongruity arises.
As a result, a natural and highly realistic augmented reality video can be obtained, in which the virtual reality clouds appear to drift in the sky, the precipitation distribution is formed between the rain clouds and the water surface (ground), and the mark indicating the ship appears to float in the air above the water surface. In addition, by looking at the display 2, the user sees the figures 41f, 42f, ... representing the virtual reality comprehensively within the field of view, and can therefore obtain the necessary information without missing anything.
 上述したように、カメラ3から入力された撮影映像は、3次元仮想空間40の映写スクリーン51に映写される 時点でレンズ歪みが除去されている。しかしながら、レンダリング部32は、レンダリング後の合成映像に対し、上述のルックアップテーブルを用いた逆変換に より、レンズ歪みを再び付与する。これにより、図5と図3とを比較すれば分かるように、合成前のカメラ映像との関係で違和感が生じにくい合成映像を得るこ とができる。ただし、レンズ歪みの付与は省略しても良い。 As described above, the lens distortion is removed from the captured image input from the camera 3 when it is projected on the projection screen 51 of the three-dimensional virtual space 40. However, the rendering unit 32 again adds lens distortion to the synthesized image after rendering by inverse conversion using the above-mentioned lookup table. As a result, as can be seen by comparing FIG. 5 and FIG. 3, it is possible to obtain a composite image that does not cause a sense of discomfort in relation to the camera image before composition. However, the application of lens distortion may be omitted.
As shown in FIG. 5, the rendering unit 32 further combines, at positions near the figures 41f, 42f, ... in the synthetic video, character information describing information useful for understanding the situation. The content of the character information can be arbitrary. Examples relating to weather include a scheduled change in the weather (for example, in how many minutes it will become sunny), the distinction between rain clouds and other clouds, areas with a large amount of water vapor, snowfall areas, areas where sunshine is blocked by clouds, and areas around a factory where the temperature is high. Examples relating to ships include information for identifying a ship (such as its name) and information indicating the size of the ship. This makes it possible to realize a monitoring screen rich in information.
As shown in FIG. 5, in the present embodiment, in addition to the weather information graphics 41f, 42f, ..., which are figures representing additional display information based on the weather detection results, a target information graphic 46f, which is a figure representing additional display information based on the detection results obtained by the ship monitoring radar 12, the AIS receiver 9, and the image recognition unit 28, is combined with the captured video. This makes it possible to integrate the display relating to weather and targets, and the user can monitor information obtained from diverse information sources in a single, easy-to-understand synthetic video. As a result, the monitoring burden can be reduced considerably.
 図5の合成映像には、方位目盛り48fが表示されている。方位目盛り48fは、画面の左端と右端と を繋ぐように円弧状に形成されている。方位目盛り48fには、映像に対応した方位を示す数値が記載されている。これにより、ユーザは、カメラ3が見ている 方位を直感的に把握することができる。 Azimuth scale 48f is displayed in the composite image of FIG. The azimuth scale 48f is formed in an arc shape so as to connect the left end and the right end of the screen. Numerical values indicating the azimuth corresponding to the image are written on the azimuth scale 48f. This allows the user to intuitively understand the direction in which the camera 3 is looking.
For example, when the camera 3 tilts or the clouds 41r move and the azimuth scale 48f is about to overlap the other figures 41f, 42f, ... in the synthetic video, the rendering unit 32 automatically moves the position at which the azimuth scale 48f is combined in the vertical direction of the video. This prevents the azimuth scale 48f from getting in the way of the other displayed content.
 図5の合成映像における方位目盛り48fは、図4に示す3次元シーンデータ50に方位目盛りの仮想現実オブジェクト48vが配置され、これがレンダリング部32によってレンダリングされることにより得られる。 The azimuth scale 48f in the composite image of FIG. 5 is obtained by arranging the virtual reality object 48v of the azimuth scale in the three-dimensional scene data 50 shown in FIG. 4 and rendering this by the rendering unit 32.
As shown in FIG. 5, immediately above the azimuth scale 48f, a caution level chart 49f is displayed, which indicates, for each azimuth, the degree of caution required from the standpoint of, for example, the safety of ship navigation. Specifically, the additional display information acquisition unit 17 obtains the caution level for each azimuth around the installation position of the camera 3, taking into account various factors affecting safety such as the strength of the waves and the distribution of wind strength. The three-dimensional scene generation unit 31 generates a virtual reality object 49v of the caution level chart based on the information output from the additional display information acquisition unit 17 and arranges it in the three-dimensional virtual space 40. By rendering this virtual reality object 49v, the rendering unit 32 can combine the caution level chart 49f with the camera video as shown in FIG. 5. The additional display information acquisition unit 17 therefore corresponds to a caution level acquisition unit. The caution level chart 49f allows the operator to easily notice situations requiring attention.
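The publication does not specify how the per-azimuth caution level is computed, only that factors such as wave and wind conditions are considered. The following sketch shows one invented weighting, purely to illustrate the shape of the data behind the caution level chart.

```python
import numpy as np

def caution_by_azimuth(wave_height_m, wind_speed_ms, n_sectors=36):
    """Combine per-sector wave height and wind speed into a 0..1 caution level.

    Inputs are arrays with one value per azimuth sector; the weights and
    normalization constants are illustrative assumptions.
    """
    wave = np.asarray(wave_height_m, dtype=float)
    wind = np.asarray(wind_speed_ms, dtype=float)
    score = 0.6 * np.clip(wave / 4.0, 0, 1) + 0.4 * np.clip(wind / 20.0, 0, 1)
    azimuths = np.arange(n_sectors) * (360.0 / n_sectors)
    return list(zip(azimuths, score))     # (azimuth_deg, caution 0..1) pairs
```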
The figures included in the synthetic video are not limited to the information described above and can be configured to show various kinds of information. Examples relating to weather include rain echoes at ground level, snow echoes, rain echoes inside clouds, wind direction, wind speed, air temperature, atmospheric pressure, water vapor amount, predicted solar radiation or precipitation, high-temperature regions around a factory, lightning positions, and snowfall areas. These pieces of information may be given as two-dimensional or three-dimensional distributions. Time-series data accumulating weather information may also be analyzed and the obtained information shown in the synthetic video. Examples relating to ships include an identification number such as the MMSI number included in the AIS information, and the length and width of the ship.
 合成映像において、上記の情報を付加的に表示するか否か を、情報の種別毎にユーザが設定できるようにすることが好ましい。これにより、ユーザは、状況に応じて、例えば降水分布の図形42fだけをカメラ映像に合 成した状態とすることができる。必要とする仮想現実の図形だけを表示する設定とすることで、表示が混み合うのを防止することができる。 It is preferable that the user can set whether or not to additionally display the above information in the composite video for each type of information. Thereby, the user can be in a state in which, for example, only the graphic 42f of the precipitation distribution is combined with the camera image depending on the situation. By setting to display only the required virtual reality figure, it is possible to prevent the display from being crowded.
 図6には、周辺監視装置1において行われる一連の処理がフローチャートによって示されている。 FIG. 6 is a flowchart showing a series of processes performed in the peripheral monitoring device 1.
When the flow of FIG. 6 starts, the perimeter monitoring device 1 inputs the video captured by the camera 3 from the captured video input unit 21 (step S101). Next, the perimeter monitoring device 1 inputs various weather information from the weather radar 15 and the like (step S102). The augmented reality video generation unit 30 then generates two-dimensional figures obtained by converting the outer shapes of the three-dimensional shapes of the weather information (step S103). At the same time, the augmented reality video generation unit 30 generates a synthetic video by combining the obtained two-dimensional figures (weather information graphics) with the captured video, and outputs the resulting synthetic video (step S104). The processing then returns to step S101, and steps S101 to S104 are repeated.
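A skeleton of the repeated S101 to S104 flow is sketched below; all of the objects are placeholders standing in for the units of FIG. 1 and are not part of the publication.

```python
import time

def monitoring_loop(camera, weather_sources, generator, display, period_s=1 / 30):
    """Skeleton of the repeated flow: input video (S101), input weather
    information (S102), build the weather information figures (S103),
    composite and output (S104)."""
    while True:
        frame = camera.read()                                 # S101
        weather = [src.read() for src in weather_sources]     # S102
        figures = generator.build_weather_figures(weather)    # S103
        display.show(generator.composite(frame, figures))     # S104
        time.sleep(period_s)
```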
As described above, the surroundings monitoring device 1 of this embodiment includes the captured image input unit 21, the weather information input unit 22, and the rendering unit 32. The captured image input unit 21 inputs the image captured by the camera 3. The weather information input unit 22 inputs weather information that includes position information at a plurality of three-dimensional points. The rendering unit 32 generates the weather information figures 41f, 42f, 43f, 44f, 45f, which are two-dimensional figures obtained by converting the outer shapes of the three-dimensional shapes (virtual reality objects 41v, 42v, 43v, 44v, 45v) based on that three-dimensional position information so as to be geometrically matched to the captured image. The rendering unit 32 generates a composite video in which the weather information figures 41f, 42f, ... are combined with the captured image.
This allows the operator to easily understand the weather information together with the camera image, including the shape information in the height direction.
A preferred embodiment of the present invention has been described above, but the above configuration can be modified as follows, for example.
The rendering unit 32 performs the generation of the two-dimensional figures based on the virtual reality objects 41v, 42v, ... and the combination of those two-dimensional figures with the camera image at the same time. However, these processes may be performed separately. Specifically, the augmented reality video generation unit 30 further includes a two-dimensional synthesis unit. The rendering unit 32 renders only the virtual reality objects 41v, 42v, ..., with the projection screen 51 omitted, to generate a two-dimensional image. This two-dimensional image contains the two-dimensional figures (weather information figures and target information figures) into which the virtual reality objects 41v, 42v, ... have been converted. The two-dimensional synthesis unit then superimposes the rendered image on the camera image. In this configuration, the rendering unit 32 corresponds to the weather information figure generation unit, and the two-dimensional synthesis unit corresponds to the synthetic video generation unit.
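One way to picture this two-stage variant, assuming the rendering result is an RGBA image with a transparent background, is that the two-dimensional synthesis unit alpha-blends it over the camera frame. This is only a sketch under that assumption, not the claimed implementation.

```python
import numpy as np

def composite_overlay(camera_frame_rgb, overlay_rgba):
    """Alpha-blend an RGBA rendering of the virtual reality objects onto the
    camera frame. Both images are assumed to share the same resolution and
    the same viewpoint."""
    overlay_rgb = overlay_rgba[..., :3].astype(np.float32)
    alpha = overlay_rgba[..., 3:4].astype(np.float32) / 255.0
    base = camera_frame_rgb.astype(np.float32)
    blended = alpha * overlay_rgb + (1.0 - alpha) * base
    return blended.astype(np.uint8)
```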
The augmented reality video generation unit 30 may place coastline vector data obtained, for example, from a nautical chart in the three-dimensional scene data 50 as a virtual reality object and render it. In this case, a coastline figure can be displayed in the composite video.
The augmented reality video generation unit 30 may place a three-dimensional shape representing a map in the three-dimensional scene data as virtual reality objects and render it. In this case, figures of roads and buildings, for example, can be displayed in the composite video. It is also possible to calculate and display figures showing the shadows cast by buildings based on the position of the sun described above.
When the weather information affects the surrounding situation (for example, vehicle traffic on a bridge or the operation of railway vehicles), a figure or text indicating a special warning may be displayed in the composite video to draw the operator's attention.
The display is not limited to the present; past or future weather information (for example, precipitation distribution or cloud distribution) may be displayed when the operator specifies a date and time. Past weather information can be obtained by accumulating the acquired current weather information in an appropriate storage unit. Future weather information can be obtained by the surroundings monitoring device 1 estimating it by calculation from past and present weather information. For example, the configuration may be such that, when the operator specifies a past or future date and time using the keyboard 36, the mouse 37, or the like, a weather information figure showing the past weather information or the estimated future weather information is displayed superimposed on the current camera image.
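A simple way to realize the storage side of this variant, purely as an illustrative sketch, is a time-indexed store from which the snapshot nearest to the operator-specified date and time is retrieved; the class and method names are hypothetical and not part of this disclosure.

```python
import bisect

class WeatherHistory:
    """Accumulates weather snapshots so past data can be recalled by date and time."""

    def __init__(self):
        self._times = []      # sorted acquisition times (e.g. datetime objects)
        self._snapshots = []  # weather information stored per acquisition time

    def add(self, time, snapshot):
        i = bisect.bisect(self._times, time)
        self._times.insert(i, time)
        self._snapshots.insert(i, snapshot)

    def nearest(self, time):
        """Return the stored snapshot whose acquisition time is closest to `time`."""
        if not self._times:
            return None
        i = bisect.bisect(self._times, time)
        candidates = [j for j in (i - 1, i) if 0 <= j < len(self._times)]
        best = min(candidates, key=lambda j: abs(self._times[j] - time))
        return self._snapshots[best]
```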
The pan/tilt function described above may be omitted from the camera 3 so that the shooting direction cannot be changed.
When the 3D scene generation unit 31 generates the 3D scene data 50, the virtual reality objects 41v, 42v, ... are, in the embodiment described above, arranged on a camera-azimuth basis with the position of the camera 3 as the origin, as described with reference to FIG. 4. However, the virtual reality objects 41v, 42v, ... may instead be arranged on a true-north basis, in which the +z direction always points to true north, rather than on a camera-azimuth basis. In this case, when the azimuth changes due to a pan operation of the camera 3, the virtual reality objects 41v, 42v, ... are not rearranged; instead, the azimuth of the viewpoint camera 55 is changed so as to simulate the change in the position and orientation of the camera 3 in the 3D virtual space 40 before rendering. This yields exactly the same rendering result as in the camera-azimuth-based case described above.
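The difference between the two reference conventions can be sketched as follows, under an assumed convention of azimuths measured in degrees clockwise from true north; neither function is taken from this disclosure.

```python
import math

def object_in_camera_frame(north_m, east_m, camera_azimuth_deg):
    """Camera-azimuth reference: every object offset (in metres north/east of
    the camera) is re-expressed in the camera frame, so all objects must be
    re-placed whenever the camera pans."""
    a = math.radians(camera_azimuth_deg)
    forward = north_m * math.cos(a) + east_m * math.sin(a)
    right = -north_m * math.sin(a) + east_m * math.cos(a)
    return forward, right

def viewpoint_azimuth_true_north(camera_azimuth_deg):
    """True-north reference: object positions never change; only the viewpoint
    camera takes over the physical camera's azimuth, which produces the same
    rendering result as the camera-azimuth-based placement above."""
    return camera_azimuth_deg % 360.0
```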
Furthermore, instead of using the position directly below the camera 3 as the origin, the coordinate system of the 3D virtual space 40 may use a fixed point appropriately defined on the earth as its origin, with, for example, the +z direction being true north and the +x direction being true east.
The figures representing the additional display information are not limited to those shown in FIG. 5 and can be changed to various other figures. For example, the virtual reality object 41v representing a cloud can be configured as a three-dimensional model with a complex contour imitating a cloud mass, instead of a collection of small spheres.
A figure obtained by rendering a three-dimensional model of a ship can also be displayed at the position where the ship 46r is detected. This makes a more realistic display possible. In the three-dimensional virtual space 40 of FIG. 4, the three-dimensional model of the ship is arranged in an orientation that matches the heading of the ship obtained from AIS information or the like. The size of the ship's three-dimensional model placed in the three-dimensional virtual space 40 may be changed according to the size of the ship obtained from AIS information or the like.
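For the size adjustment, one illustrative approach, not specified in this disclosure, is to derive per-axis scale factors from the ship length and beam reported by AIS relative to the dimensions of the stock 3D model.

```python
def ship_model_scale(model_length_m, model_beam_m, ais_length_m, ais_beam_m):
    """Return (length_scale, beam_scale) for stretching a stock ship model so
    its rendered size matches the dimensions reported by AIS."""
    return ais_length_m / model_length_m, ais_beam_m / model_beam_m

# Example: ship_model_scale(30.0, 6.0, ais_length_m=90.0, ais_beam_m=15.0)
# -> (3.0, 2.5): stretch the model 3x along its length and 2.5x across its
#    beam, then rotate it to the AIS heading before placing it in the scene.
```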
At least one of the ship monitoring radar 12 and the AIS receiver 9 need not be connected to the surroundings monitoring device 1. The image recognition unit 28 may also be omitted from the surroundings monitoring device 1.
The surroundings monitoring device 1 is not limited to a facility on the ground and may be provided on a moving body such as a ship.
1  Surroundings monitoring device (video generation device)
3  Camera
12  Ship monitoring radar (target monitoring radar)
15  Weather radar
21  Captured image input unit
22  Weather information input unit (weather information acquisition unit)
26  Sun position acquisition unit (weather information acquisition unit)
32  Rendering unit (weather information figure generation unit, synthetic video generation unit)
41f, 42f, 43f, 44f, 45f  Weather information figures
46f  Target information figure
Terms
Not all objects or advantages described herein can necessarily be achieved by any particular embodiment. Thus, for example, those skilled in the art will recognize that certain embodiments may be configured to operate in a manner that achieves or optimizes one or more of the advantages taught herein without necessarily achieving other objects or advantages that may be taught or suggested herein.
All of the processes described herein may be embodied in, and fully automated via, software code modules executed by a computing system that includes one or more computers or processors. The code modules may be stored in any type of non-transitory computer-readable medium or other computer storage device. Some or all of the methods may be embodied in dedicated computer hardware.
Many variations other than those described herein will be apparent from this disclosure. For example, depending on the embodiment, certain acts, events, or functions of any of the algorithms described herein can be performed in a different sequence, can be added, merged, or left out altogether (for example, not all described acts or events are necessary for the practice of the algorithm). Moreover, in certain embodiments, acts or events can be performed concurrently rather than sequentially, for example through multi-threaded processing, interrupt processing, or multiple processors or processor cores, or on other parallel architectures. In addition, different tasks or processes can be performed by different machines and/or computing systems that can function together.
The various illustrative logical blocks and modules described in connection with the embodiments disclosed herein can be implemented or performed by a machine such as a processor. A processor may be a microprocessor, but in the alternative, the processor may be a controller, a microcontroller, or a state machine, or a combination of these, among others. A processor can include electrical circuitry configured to process computer-executable instructions. In another embodiment, a processor includes an application-specific integrated circuit (ASIC), a field-programmable gate array (FPGA), or another programmable device that performs logic operations without processing computer-executable instructions. A processor can also be implemented as a combination of computing devices, for example a combination of a digital signal processor (DSP) and a microprocessor, a plurality of microprocessors, one or more microprocessors in conjunction with a DSP core, or any other such configuration. Although described herein primarily with respect to digital technology, a processor may also include primarily analog components. For example, some or all of the signal processing algorithms described herein may be implemented in analog circuitry or in mixed analog and digital circuitry. A computing environment can include any type of computer system, including, but not limited to, a computer system based on a microprocessor, a mainframe computer, a digital signal processor, a portable computing device, a device controller, or a computational engine within an apparatus.
Conditional language such as "can," "could," "might," or "may," unless specifically stated otherwise, is understood within the context as generally used to convey that certain embodiments include, while other embodiments do not include, certain features, elements, and/or steps. Thus, such conditional language is not generally intended to imply that features, elements, and/or steps are in any way required for one or more embodiments, or that one or more embodiments necessarily include logic for deciding whether these features, elements, and/or steps are included in or are to be performed in any particular embodiment.
Disjunctive language such as the phrase "at least one of X, Y, or Z," unless specifically stated otherwise, is understood within the context as generally used to indicate that an item, term, etc. may be X, Y, or Z, or any combination thereof (e.g., X, Y, and/or Z). Thus, such disjunctive language is not generally intended to, and should not, imply that certain embodiments require at least one of X, at least one of Y, and at least one of Z to each be present.
Any process descriptions, elements, or blocks in the flow diagrams described herein and/or depicted in the attached figures should be understood as potentially representing modules, segments, or portions of code that include one or more executable instructions for implementing specific logical functions or elements in the process. Alternative implementations are included within the scope of the embodiments described herein, in which elements or functions may be deleted, or executed out of order from that shown or discussed, including substantially concurrently or in reverse order, depending on the functionality involved, as would be understood by those skilled in the art.
Unless otherwise explicitly stated, articles such as "a" or "an" should generally be interpreted to include one or more described items. Accordingly, phrases such as "a device configured to" are intended to include one or more recited devices. Such one or more recited devices can also be collectively configured to carry out the stated recitations. For example, "a processor configured to carry out recitations A, B, and C" can include a first processor configured to carry out recitation A working in conjunction with a second processor configured to carry out recitations B and C. In addition, even if a specific number is explicitly recited for an introduced recitation, those skilled in the art will recognize that such a recitation should typically be interpreted to mean at least the recited number (e.g., the bare recitation of "two recitations," without other modifiers, typically means at least two recitations, or two or more recitations).
It will be understood by those within the art that, in general, terms used herein are intended as "open" terms (e.g., the term "including" should be interpreted as "including but not limited to," the term "having" should be interpreted as "having at least," and the term "includes" should be interpreted as "includes but is not limited to").
For purposes of this description, the term "horizontal," as used herein, is defined as a plane parallel to the plane or surface of the floor of the area in which the system being described is used, or the plane in which the method being described is performed, regardless of its orientation. The term "floor" can be interchanged with the terms "ground" or "water surface." The term "vertical" refers to a direction perpendicular to the horizontal as just defined. Terms such as "above," "below," "bottom," "top," "side," "higher," "lower," "upward," "over," and "under" are defined with respect to the horizontal plane.
As used herein, the terms "attached," "connected," "mated," and other such relational terms should be construed, unless otherwise noted, to include removable, movable, fixed, adjustable, and/or releasable connections or attachments. Connections or attachments include direct connections and/or connections having intermediate structure between the two components discussed.
Unless otherwise explicitly stated, numbers preceded by a term such as "approximately," "about," or "substantially" as used herein include the recited numbers and also represent an amount close to the stated amount that still performs a desired function or achieves a desired result. For example, "approximately," "about," and "substantially" may refer to an amount that is within less than 10% of the stated amount, unless otherwise explicitly stated. As used herein, features of embodiments preceded by terms such as "approximately," "about," and "substantially" represent features that have some variability while still performing a desired function or achieving a desired result for that feature.
Many variations and modifications may be made to the above-described embodiments, the elements of which are to be understood as being among other acceptable examples. All such modifications and variations are intended to be included within the scope of this disclosure and protected by the following claims.

Claims (15)

1.  A video generation device comprising:
     a captured image input unit that inputs an image captured by a camera;
     a weather information acquisition unit that acquires weather information;
     a weather information figure generation unit that, when the weather information includes position information at a plurality of three-dimensional points, generates a weather information figure which is a two-dimensional figure obtained by converting an outer shape of a three-dimensional shape based on the plurality of pieces of three-dimensional position information; and
     a synthetic video generation unit that generates a composite video in which the weather information figure is combined with the captured image.
2.  The video generation device according to claim 1, wherein
     the weather information figure generation unit generates the weather information figure by rendering the three-dimensional shape in two dimensions.
3.  The video generation device according to claim 2, wherein
     the weather information figure generation unit generates the weather information figure by rendering, in two dimensions, a virtual reality object serving as the three-dimensional shape placed in a three-dimensional space, from the position and orientation of a viewpoint corresponding to the captured image.
4.  The video generation device according to claim 1, wherein
     the weather information includes precipitation echo information acquired by a weather radar.
5.  The video generation device according to claim 4, wherein
     the weather information figure generation unit is capable of generating, based on the precipitation echo information, a figure showing a region on the ground or on the water where precipitation is occurring.
6.  The video generation device according to any one of claims 1 to 5, wherein
     the weather information includes the position of the sun.
7.  The video generation device according to any one of claims 1 to 6, wherein
     the weather information includes a three-dimensional shape of a cloud.
8.  The video generation device according to claim 7, wherein
     the weather information figure generation unit is capable of generating, based on the three-dimensional shape of the cloud and the position of the sun, a figure showing a region on the ground or on the water where sunlight is blocked by the cloud.
9.  The video generation device according to any one of claims 1 to 8, wherein
     the weather information includes at least one of wind direction distribution information, wind speed distribution information, temperature distribution information, atmospheric pressure distribution information, water vapor amount distribution information, lightning position information, predicted solar radiation distribution information, and predicted precipitation distribution information.
10.  The video generation device according to any one of claims 1 to 9, further comprising
     a target information input unit that inputs target information including a position of a target on the water, wherein
     the synthetic video generation unit is capable of generating a composite video in which the weather information figure is combined with the captured image and in which a target information figure, which is a figure indicating the position of the target converted so as to be geometrically matched to the captured image, is also combined with the captured image.
11.  The video generation device according to claim 10, wherein
     the target information includes information obtained by detecting the target with a target monitoring radar.
12.  The video generation device according to claim 10 or 11, wherein
     the target information includes information obtained from an automatic ship identification device.
13.  The video generation device according to any one of claims 1 to 12, further comprising
     a caution level acquisition unit that acquires, based at least on the weather information, a relationship between an azimuth and a caution level indicating a degree of caution required for that azimuth, wherein
     the synthetic video generation unit is capable of combining, with the captured image, a caution level chart showing the relationship between the azimuth and the caution level.
14.  The video generation device according to any one of claims 1 to 13, wherein
     the synthetic video generation unit is capable of combining an azimuth scale indicating azimuths with the captured image, and
     a vertical position at which the azimuth scale is combined with the captured image is changed automatically.
15.  A video generation method comprising:
     inputting an image captured by a camera;
     acquiring weather information;
     when the weather information includes position information at a plurality of three-dimensional points, generating a weather information figure which is a two-dimensional figure obtained by converting an outer shape of a three-dimensional shape based on the plurality of pieces of three-dimensional position information so as to be geometrically matched to the captured image; and
     generating a composite video in which the weather information figure is combined with the captured image.
PCT/JP2019/035309 2018-10-09 2019-09-09 Image generation device and image generation method WO2020075430A1 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
JP2020550233A JP7365355B2 (en) 2018-10-09 2019-09-09 Video generation device and video generation method

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
JP2018-191015 2018-10-09
JP2018191015 2018-10-09

Publications (1)

Publication Number Publication Date
WO2020075430A1 true WO2020075430A1 (en) 2020-04-16

Family

ID=70164258

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/JP2019/035309 WO2020075430A1 (en) 2018-10-09 2019-09-09 Image generation device and image generation method

Country Status (2)

Country Link
JP (1) JP7365355B2 (en)
WO (1) WO2020075430A1 (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2022244388A1 (en) * 2021-05-18 2022-11-24 古野電気株式会社 Weather observation device, weather observation system, and weather observation method

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2003173475A (en) * 2001-12-05 2003-06-20 Mitsubishi Heavy Ind Ltd Terminal, system, and method for supporting behavior
JP2010003274A (en) * 2008-06-23 2010-01-07 National Maritime Research Institute Visual recognition support device and visual recognition support method
JP2014240754A (en) * 2013-06-11 2014-12-25 株式会社島津ビジネスシステムズ Meteorological information provision device and meteorological information provision program
JP2016536712A (en) * 2014-08-12 2016-11-24 シャオミ・インコーポレイテッド Weather display method, apparatus, program, and recording medium
WO2017070121A1 (en) * 2015-10-20 2017-04-27 Magic Leap, Inc. Selecting virtual objects in a three-dimensional space

Also Published As

Publication number Publication date
JP7365355B2 (en) 2023-10-19
JPWO2020075430A1 (en) 2021-09-02

Similar Documents

Publication Publication Date Title
US11270458B2 (en) Image generating device
US20180259338A1 (en) Sonar sensor fusion and model based virtual and augmented reality systems and methods
JP6921191B2 (en) Video generator, video generation method
CN110998672B (en) Video generating device and video generating method
KR20190051704A (en) Method and system for acquiring three dimentional position coordinates in non-control points using stereo camera drone
US20110013016A1 (en) Visual Detection of Clear Air Turbulence
US20060220953A1 (en) Stereo display for position sensing systems
KR20160073025A (en) Object generation apparatus and method of based augmented reality using actual measured
CN115597659B (en) Intelligent safety management and control method for transformer substation
US20200089957A1 (en) Image generating device
US20230081665A1 (en) Predicted course display device and method
CA2781241C (en) Method for generating a representation of surroundings
WO2020075430A1 (en) Image generation device and image generation method
US11548598B2 (en) Image generating device and method of generating image
JP2000187723A (en) Picture processor and picture processing method
JP2000306084A (en) Three-dimensional image display method
JP7346436B2 (en) Surrounding monitoring device and surrounding monitoring method
JP2023071272A (en) Field of view determining system, field of view determining method, and program
JP2023169512A (en) Monitoring system, control method for the same, and program
CN112433219A (en) Underwater detection method, system and readable storage medium
CN114967750A (en) Multi-unmanned aerial vehicle cooperation method for fusing SLAM (Simultaneous localization and mapping) with multiple sensors for fire protection
KR20020014782A (en) A Algorithm for detecting position and distance of real time image target

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 19871865

Country of ref document: EP

Kind code of ref document: A1

ENP Entry into the national phase

Ref document number: 2020550233

Country of ref document: JP

Kind code of ref document: A

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 19871865

Country of ref document: EP

Kind code of ref document: A1