WO2016075954A1 - Display device and display method - Google Patents

Display device and display method

Info

Publication number
WO2016075954A1
Authority
WO
WIPO (PCT)
Prior art keywords
image
moving body
display
vehicle
shadow
Prior art date
Application number
PCT/JP2015/055957
Other languages
French (fr)
Japanese (ja)
Inventor
Takura Yanagi (柳 拓良)
Original Assignee
Nissan Motor Co., Ltd. (日産自動車株式会社)
Priority date
Filing date
Publication date
Application filed by Nissan Motor Co., Ltd.
Priority to JP2016558896A (granted as JP6500909B2)
Publication of WO2016075954A1

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T1/00: General purpose image data processing
    • B: PERFORMING OPERATIONS; TRANSPORTING
    • B60: VEHICLES IN GENERAL
    • B60R: VEHICLES, VEHICLE FITTINGS, OR VEHICLE PARTS, NOT OTHERWISE PROVIDED FOR
    • B60R21/00: Arrangements or fittings on vehicles for protecting or preventing injuries to occupants or pedestrians in case of accidents or other traffic risks
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T3/00: Geometric image transformations in the plane of the image
    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04N: PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N7/00: Television systems
    • H04N7/18: Closed-circuit television [CCTV] systems, i.e. systems in which the video signal is not broadcast

Definitions

  • the present invention relates to a display device and a display method for stereoscopically displaying an image around a moving object.
  • Patent Document 1 discloses a moving body display device that projects an image of objects around the moving body, together with an image of the moving body, onto a three-dimensional coordinate system.
  • the problem to be solved by the present invention is to display an image in which the position of the moving object can be easily grasped.
  • the present invention solves the above problem by displaying a video including a position display image indicating the position of the moving object.
  • since the position of the moving body can be expressed by displaying the position display image, it is possible to display a video in which the position of the moving body can be easily grasped.
  • FIGS. 2A and 2B are diagrams showing an example of the installation position of the camera of this embodiment. FIG. 3A is a diagram showing an example of a three-dimensional coordinate system, and FIG. 3B shows an example of the position display image of the host vehicle in the three-dimensional coordinate system of FIG. 3A. FIG. 4A is a diagram showing another example of a three-dimensional coordinate system, and FIG. 4B shows an example of the position display image of the host vehicle in the three-dimensional coordinate system of FIG. 4A. FIG. 5 is a diagram showing a display example of the position display images of the host vehicle and another vehicle. A further figure shows a first example of the first aspect of the position display image.
  • FIG. 10 is a flowchart of a subroutine showing the procedure of the reference point setting processing shown in FIG. 9. A further figure illustrates another display example of the position display image.
  • the display system 1 of the present embodiment displays a video for grasping the moving body and the surroundings of the moving body on a display viewed by an operator of the moving body.
  • FIG. 1 is a block configuration diagram of a display system 1 including a display device 100 according to the present embodiment.
  • the display system 1 of this embodiment includes a display device 100 and a mobile device 200.
  • Each device of the display device 100 and the mobile device 200 includes a wired or wireless communication device (not shown), and exchanges information with each other.
  • the moving body to which the display system 1 of the present embodiment is applied includes a vehicle, a helicopter, a submarine explorer, an airplane, an armored vehicle, a train, a forklift, and other devices having a moving function.
  • a case where the moving body is a vehicle will be described as an example.
  • the moving body of the present embodiment may be a manned machine on which a human can be boarded, or an unmanned machine on which a human is not boarding.
  • the display system 1 of the present embodiment may be configured as a device mounted on a moving body, or may be configured as a portable device that can be brought into the moving body.
  • a part of the configuration of the display system 1 according to the present embodiment may be mounted on a moving body, and another configuration may be mounted on a device physically different from the moving body, and the configuration may be distributed.
  • the mobile body and another device are configured to be able to exchange information.
  • the mobile device 200 of this embodiment includes a camera 40, a controller 50, a sensor 60, a navigation device 70, and a display 80.
  • each device is connected by a CAN (Controller Area Network) or other in-vehicle LAN and can exchange information with the others.
  • the camera 40 of the present embodiment is provided at a predetermined position of a vehicle (an example of a moving body; the same applies hereinafter).
  • the number of cameras 40 provided in the vehicle may be one or plural.
  • the camera 40 mounted on the vehicle images the vehicle and / or the surroundings of the vehicle, and sends the captured image to the display device 100.
  • the captured image in the present embodiment includes a part of the vehicle and a video around the vehicle.
  • the captured image data is used for calculation processing of the positional relationship with the ground surface around the vehicle and generation processing of the image of the vehicle or the surroundings of the vehicle.
  • FIGS. 2A and 2B are diagrams illustrating an example of the installation position of the camera 40 mounted on the host vehicle V.
  • six cameras 40 are installed on the host vehicle V: a right front camera 40R1, a right center camera 40R2, a right rear camera 40R3, a left front camera 40L1, a left center camera 40L2, and a left rear camera 40L3.
  • the arrangement position of the cameras 40 is not particularly limited, and the imaging direction can also be set arbitrarily.
  • each camera 40 sends a captured image to the display device 100 in response to a command from the control device 10 described later, or at a preset timing.
  • the captured image captured by the camera 40 is used for generating a video, and for detecting an object and measuring a distance to the object.
  • the captured image of the camera 40 of the present embodiment includes an image of an object around the vehicle.
  • the target object in the present embodiment includes other vehicles, pedestrians, road structures, parking lots, signs, facilities, and other objects existing around the host vehicle V.
  • the object in the present embodiment includes the ground surface around the moving body.
  • the “ground surface” is a term indicating a concept including the surface of the earth and the surface of the earth's crust (land).
  • the term “ground surface” of the present embodiment includes a land surface, a sea surface, a river or lake surface, a seabed surface, a road surface, a parking lot surface, a port surface, or a surface combining two or more of these.
  • the term “ground surface” in the present embodiment also includes the surface of a structure, such as a floor surface or a wall surface of a facility.
  • in short, the “ground surface” used in this description is the (tangible) surface exposed to the camera 40 during imaging.
  • the camera 40 of this embodiment includes an image processing device 401.
  • the image processing apparatus 401 extracts features such as an edge, a color, a shape, and a size from the captured image data of the camera 40, and identifies an attribute of the target object included in the captured image from the extracted features.
  • the image processing apparatus 401 stores in advance the characteristics of each target object, and identifies the target object included in the captured image by pattern matching processing.
  • the method for detecting the presence of an object using captured image data is not particularly limited, and a method known at the time of filing this application can be used as appropriate.
  • the image processing device 401 calculates the distance from the own vehicle to the object from the position of the feature point extracted from the data of the captured image of the camera 40 or the change over time of the position.
  • the image processing apparatus 401 uses imaging parameters such as the installation position of the camera 40, the optical axis direction, and imaging characteristics.
  • the method for measuring the distance to the object using the captured image data is not particularly limited, and a method known at the time of filing the present application can be used as appropriate.
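As a concrete illustration of such a distance calculation, the following is a minimal sketch assuming a flat road surface and a pinhole camera model; the focal length, mounting height, and horizon row are hypothetical parameters for illustration, not values taken from this publication.

```python
# Sketch: monocular ground-plane distance estimation (assumed approach;
# the publication only states that imaging parameters such as the
# installation position and optical axis direction are used).
def distance_to_ground_point(v_px: float,
                             focal_px: float,
                             cam_height_m: float,
                             horizon_v_px: float) -> float:
    """Distance to a point on a flat ground plane from its image row.

    v_px: image row of the object's ground contact point (larger = lower)
    focal_px: focal length expressed in pixels
    cam_height_m: camera mounting height above the road surface
    horizon_v_px: image row of the horizon line
    """
    dv = v_px - horizon_v_px
    if dv <= 0:
        raise ValueError("point is at or above the horizon")
    # Similar triangles: distance / cam_height = focal / dv
    return focal_px * cam_height_m / dv
```

For instance, with a focal length of 800 px, a camera 1.2 m above the road, and a ground contact point 200 px below the horizon, the sketch yields 4.8 m.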
  • the distance measuring device 41 may be provided as means for acquiring data for calculating the positional relationship with the host vehicle V.
  • the distance measuring device 41 may be used together with the camera 40 or may be used instead of the camera 40.
  • the distance measuring device 41 detects a target existing around the host vehicle V and measures the distance between the target and the host vehicle V. That is, the distance measuring device 41 has a function of detecting an object around the host vehicle V.
  • the distance measuring device 41 sends distance measurement data up to the measured object to the display device 100.
  • the distance measuring device 41 may be a radar distance measuring device or an ultrasonic distance measuring device. A ranging method known at the time of filing of the present application can be used.
  • the number of distance measuring devices 41 installed on the host vehicle V is not particularly limited.
  • the installation position of the distance measuring devices 41 on the host vehicle V is not particularly limited.
  • the distance measuring device 41 may be provided at or near a position corresponding to the installation position of the camera 40 shown in FIG. 2, or may be provided in front of or behind the host vehicle V. When the moving body is a helicopter, an airplane, a submarine explorer, or the like that moves in the height direction, the camera 40 and/or the distance measuring device 41 may be provided on the bottom side of the body.
  • the controller 50 of this embodiment controls the operation of the moving object including the host vehicle V.
  • the controller 50 centrally manages each piece of information related to the operation of the moving body, including detection information of the sensor 60 described later.
  • the sensor 60 of the present embodiment includes a speed sensor 61 and a longitudinal acceleration sensor 62.
  • the speed sensor 61 detects the moving speed of the host vehicle V.
  • the longitudinal acceleration sensor 62 detects the acceleration in the longitudinal direction of the host vehicle V.
  • the navigation device 70 of the present embodiment includes a position detection device 71 including a GPS (Global Positioning System) 711, map information 72, and road information 73.
  • the navigation device 70 obtains the current position of the host vehicle V using the GPS 711 and sends it to the display device 100.
  • the map information 72 of the present embodiment is information in which points are associated with roads, structures, facilities, and the like.
  • the navigation device 70 has a function of referring to the map information 72, obtaining a route from the current position of the host vehicle V detected by the position detection device 71 to the destination, and guiding the host vehicle V.
  • the road information 73 of this embodiment is information in which position information and road attribute information are associated with each other.
  • the road attribute information includes attributes such as whether each lane is an overtaking lane and whether it is an uphill lane.
  • the navigation device 70 can refer to the road information 73 and obtain, for the current position detected by the position detection device 71, information on whether the lane adjacent to the lane in which the host vehicle V is traveling is an overtaking lane (a lane with a relatively high traveling speed) or an uphill lane (a lane with a relatively low traveling speed).
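As a rough sketch, such a lookup could be modeled as a table keyed by road and lane; the table layout, field names, and the assumption that the adjacent lane is lane number plus one are all illustrative, since the data format of the road information 73 is not specified here.

```python
# Hypothetical layout of road attribute information keyed by
# (road_id, lane_number); values and field names are illustrative.
ROAD_INFO = {
    ("R1", 1): {"overtaking": False, "uphill": True},
    ("R1", 2): {"overtaking": True,  "uphill": False},
}

def adjacent_lane_attributes(road_id, own_lane_no):
    """Return the attribute record of the lane adjacent to the host
    vehicle's lane, or None if no such lane is recorded."""
    # Assumption: the adjacent lane carries the next lane number.
    return ROAD_INFO.get((road_id, own_lane_no + 1))
```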
  • the control device 10 can predict the vehicle speed of the other vehicle from the detected attribute information of the road on which the other vehicle travels.
  • the display 80 of the present embodiment displays an image of the host vehicle V and the surroundings of the host vehicle V generated from an arbitrary virtual viewpoint generated by the display device 100 described later.
  • the display system 1 in which the display 80 is mounted on a moving body will be described as an example.
  • the display 80 may be provided on the portable display device 100 side that can be brought into the moving body.
  • the display device 100 of this embodiment includes a control device 10.
  • the control device 10 of the display device 100 includes a ROM (Read Only Memory) 12 in which a program for displaying the moving body and its surrounding video is stored, a CPU (Central Processing Unit) 11 serving as an operation circuit that executes the program stored in the ROM 12 to realize the functions of the display device 100, and a RAM (Random Access Memory) 13 functioning as an accessible storage device.
  • the control device 10 may include a GPU (Graphics Processing Unit) that executes image processing.
  • the control device 10 of the display device 100 realizes an image acquisition function, an information acquisition function, an image generation function, and a display function.
  • the control device 10 of this embodiment executes each function through the cooperation of software for realizing the functions and the hardware described above.
  • the control device 10 acquires captured image data captured by the camera 40.
  • the display device 100 acquires captured image data from the mobile device 200 using a communication device (not shown).
  • the control device 10 acquires various types of information from the mobile device 200 using a communication device (not shown).
  • the control apparatus 10 acquires the current position information of the host vehicle V as a moving body.
  • the control device 10 acquires the current position detected by the GPS 711 of the navigation device 70.
  • the control device 10 acquires position information of an object existing around the host vehicle V as a moving body.
  • the acquired position information of the object is used for setting processing of the position of the virtual light source described later.
  • the control device 10 calculates the distance from the host vehicle V to the target object from the captured image of the camera 40 as position information of the target object with respect to the host vehicle V.
  • the control device 10 may use the imaging parameter of the camera 40 for the calculation process of the position information of the object.
  • the control device 10 may acquire the position information of the object calculated by the image processing device 401.
  • the control device 10 acquires the speed of the object.
  • the control device 10 calculates the speed of the object from the change with time of the position information of the object.
  • the control device 10 may calculate the speed of the object based on the captured image data.
  • the control device 10 may acquire the speed information of the object calculated by the image processing device 401.
  • the control device 10 acquires attribute information of the lane on which the target object travels.
  • the control device 10 refers to the road information 73 of the navigation device 70 and acquires the attribute information of the lane on which the object travels.
  • the control device 10 refers to the map information 72 or the road information 73 and identifies a road and a traveling lane that include the acquired position of the object.
  • the control device 10 refers to the road information 73 and acquires attribute information associated with the travel lane of the identified object.
  • the control device 10 calculates the positional relationship between the position of the host vehicle V detected by the GPS 711 and the target object, and, taking this positional relationship into consideration, determines the attribute of the traveling lane of the target object from the traveling-lane attribute of the host vehicle V.
  • for example, it can determine that the traveling lane of the other vehicle is an overtaking lane.
  • the control device 10 acquires the acceleration in the traveling direction of the host vehicle V that is a moving body.
  • the control device 10 acquires the acceleration in the traveling direction of the host vehicle V from the host vehicle V.
  • the control device 10 acquires the longitudinal acceleration detected by the longitudinal acceleration sensor 62.
  • the control device 10 may calculate the acceleration from the speed detected by the speed sensor 61.
  • the control device 10 may calculate acceleration from a change in position information of the host vehicle V detected by the GPS 711.
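Computing speed and then acceleration from successive position samples can be sketched with simple finite differences; this is a generic numerical approach, not a formula prescribed by this publication, and it assumes positions sampled at a fixed interval.

```python
def speeds_from_positions(positions, dt):
    """First difference: speed between consecutive position samples
    taken dt seconds apart (1-D positions for simplicity)."""
    return [(p1 - p0) / dt for p0, p1 in zip(positions, positions[1:])]

def acceleration_from_positions(positions, dt):
    """Second difference: acceleration from the speed sequence."""
    v = speeds_from_positions(positions, dt)
    return [(v1 - v0) / dt for v0, v1 in zip(v, v[1:])]
```

With positions [0, 1, 3, 6] m sampled every second, the sketch gives speeds [1.0, 2.0, 3.0] m/s and a constant acceleration of 1.0 m/s².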
  • the control device 10 uses the position information of the moving body to generate a video including a position display image indicating the position of the moving body in the captured image. Specifically, the control device 10 of the present embodiment sets a reference point using the position information of the moving body, and generates a video including the captured image and a position display image that indicates, in the captured image, the position of the moving body observed from the reference point. The position display image of the present embodiment is an image that represents the position where the moving body exists. The position of the moving body may be expressed positively by a diagram, a shadow, or the like, or passively by a lack of or change in the background.
  • the position display image of the present embodiment may include a diagram image indicating the position of the moving object in the captured image that is observed from a preset reference point.
  • the control device 10 uses the position information of the moving body to set, as a reference point, a viewpoint from which the moving body is viewed, and generates a position display image including a diagram image indicating the position of the moving body when viewed from this viewpoint.
  • the reference point is a viewpoint for observing the moving body.
  • the reference point may be set based on the position where the moving object exists.
  • the reference point may be set based on the position of the projection plane described later.
  • the diagram image of the position display image may be a graphic image imitating the outer shape of the moving body or an icon image reminiscent of the moving body.
  • the representation mode of the diagram image is not particularly limited.
  • the form of lines such as line thickness, color, broken line, and double line of the diagram image is not limited.
  • the hue, brightness, saturation, color tone (tone), pattern, gradation, and size (enlargement / reduction) of the diagram image are not particularly limited.
  • the control device 10 of this embodiment calculates the display position of the diagram image using the actual position information of the moving body.
  • the position display image of the present embodiment may include a background image indicating the position of the moving object observed from a preset reference point.
  • the control device 10 uses the position information of the moving body to set, as a reference point, a viewpoint from which the moving body is viewed, and generates a position display image including a background image indicating the position of the moving body when viewed from this viewpoint.
  • the position of the moving object is expressed by a background image.
  • the reference point is a viewpoint for observing the moving body.
  • the reference point may be set based on the position where the moving object exists.
  • the reference point may be set based on the position of the projection plane described later.
  • the representation mode of the background image is not particularly limited.
  • the mode of lines such as line thickness, color, broken line, and double line of the background image is not limited.
  • the hue, brightness, saturation, color tone (tone), pattern, gradation, and size (enlargement / reduction) of the background image are not particularly limited.
  • the control device 10 of this embodiment calculates the display position of the background image using the actual position information of the moving body.
  • the position display image of the present embodiment may include a shadow image indicating the position of the moving object observed from a preset reference point.
  • the position of the moving object is expressed by the shadow image of the moving object.
  • the reference point is a light source that irradiates light to the moving body.
  • the control device 10 sets a virtual light source as a reference point using the position information of the moving body, and generates a position display image including a shadow image that imitates the shadow produced when light is emitted from the virtual light source onto the moving body and that indicates the existence position of the moving body.
  • such a shadow image, imitating the shadow produced when light from the virtual light source strikes the moving body, indicates the position of the moving body observed from the preset reference point.
  • the expression form of the shadow image is not particularly limited.
  • the form of the line such as the thickness, color, broken line, double line, etc. of the shadow image is not limited.
  • the hue, brightness, saturation, color tone (tone), pattern, gradation, and size (enlargement / reduction) of the shadow image are not particularly limited.
  • the position display image is displayed at the position where the moving body exists in the coordinate system of the captured image, which is obtained using the actual position information of the moving body.
  • the control device 10 sets a virtual light source according to the position information of the host vehicle V, and generates a shadow image imitating a shadow that is generated when the host vehicle V is irradiated with light from the virtual light source.
  • the control device 10 displays a video obtained by projecting, onto the three-dimensional coordinate system, the captured image together with the position display image indicating the position of the moving body.
  • the displayed video includes part or all of the captured image and the position display image.
  • the control device 10 generates a position display image indicating the position where the moving object is present.
  • the position display image includes a diagram image, a background image, and a shadow image.
  • the control device 10 sets a viewpoint as a reference point according to the acquired position information of the host vehicle V, and generates a diagram image and a background image imitating the view when the moving body and the target object are seen from the reference point.
  • the control device 10 sets a virtual light source as a reference point according to the acquired position information of the host vehicle V, and generates a shadow image imitating a shadow generated when light is emitted from the virtual light source to the moving object or the object. Generate.
  • the shadow image may be an image approximated to the shadow itself, or may be an image obtained by deforming the shadow itself in order to display the position and orientation of the host vehicle V.
  • the position display image including the diagram image, the background image, and the shadow image need not strictly correspond to the current shape of the host vehicle V; it only needs to indicate the position where the host vehicle V exists. Since only the position needs to be conveyed, the image need not imitate the shape of the host vehicle V. However, a position display image imitating the shape of the host vehicle V may be generated so that the traveling direction of the host vehicle V can be understood.
  • the shadow image is an image that looks like a shadow, not an actual shadow. A pattern or color may be added to the position display image.
  • the hue, brightness, saturation, color tone (tone), pattern, and gradation of the position display image can be changed according to the illuminance outside the moving body, the brightness of the captured image, and the like.
  • the position display image including the diagram image, the background image, and the shadow image is not limited to the shape corresponding to the current shape of the host vehicle V, and may be an image showing the movable range of the host vehicle V.
  • the position display image of the present embodiment may be an image showing the movable range of the door when the door of the vehicle V that is currently closed is released.
  • the mode of the position display image is not limited, and is designed as appropriate according to the information to be shown to the user. A shadow mapping technique known at the time of filing may be used to generate the shadow image.
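One common way to imitate such a shadow is planar projection, in which each vertex of a vehicle model is projected along a ray from the virtual light source onto the ground plane. This is an illustrative sketch consistent with, but not mandated by, the description above; the ground plane is assumed to be z = 0 and the light source is treated as a point.

```python
def project_shadow(vertices, light):
    """Project each 3-D vertex onto the ground plane z = 0 along rays
    from a virtual point light source (the standard planar-shadow
    construction; this publication does not prescribe an algorithm).

    For a light L and vertex P, the shadow point S = L + t*(P - L)
    satisfies S.z = 0, giving t = L.z / (L.z - P.z).
    """
    lx, ly, lz = light
    shadow = []
    for (px, py, pz) in vertices:
        if pz >= lz:
            raise ValueError("vertex at or above the light source")
        t = lz / (lz - pz)
        shadow.append((lx + t * (px - lx), ly + t * (py - ly), 0.0))
    return shadow
```

A vertex at (1, 0, 5) lit from (0, 0, 10) casts its shadow at (2, 0, 0): the higher the vertex, the farther its shadow point falls from the point below the light.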
  • FIG. 3A is a diagram illustrating an example of a cylindrical three-dimensional coordinate system S1.
  • the host vehicle V is shown placed on the plane G0.
  • FIG. 3B is a diagram illustrating a display example of the position display image SH including the diagram image, the background image, and the shadow image in the three-dimensional coordinate system S1 illustrated in FIG. 3A.
  • the position display image SH including the diagram image, background image, or shadow image in this example is projected on the plane G0.
  • FIG. 4A is a diagram illustrating an example of a spherical three-dimensional coordinate system S2.
  • the host vehicle V is shown placed on the plane G0.
  • FIG. 4B is a diagram illustrating a display example of the position display image SH including the diagram image, the background image, or the shadow image of the host vehicle V in the three-dimensional coordinate system S2 illustrated in FIG. 4A.
  • the position display image SH is projected on the plane G0.
  • the position display image SH shown in FIGS. 3B and 4B is a shadow image SH imitating a shadow generated when light is emitted from the virtual light source LG set in the three-dimensional coordinate systems S1 and S2 to the host vehicle V.
  • the position display image SH shown in FIG. 3B and FIG. 4B is a diagram image showing the existence position of the host vehicle V when the host vehicle V is viewed from the viewpoint LG set in the three-dimensional coordinate systems S1 and S2.
  • the captured image is projected onto the three-dimensional coordinate system S (S1, S2) of FIGS. 3B and 4B.
  • the position display image SH may indicate the existence position of the host vehicle V viewed from the viewpoint LG by changing the background image of the captured image projected on the three-dimensional coordinate systems S1 and S2 (for example, by shifting the relative position of the background image). As shown in FIGS. 3B and 4B, the presence and the orientation of the host vehicle V can be expressed by the position display image SH of the host vehicle V. Thus, even when images of surrounding objects are projected onto the three-dimensional coordinate systems S1 and S2, a video in which the positional relationship between the host vehicle V and the objects can be easily grasped can be displayed.
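Projecting a captured image onto a cylindrical surface such as S1 amounts to intersecting each viewing ray with the cylinder. The following minimal sketch assumes rays emanating from the origin of the coordinate system and a vertical cylinder axis; these are illustrative choices rather than anything specified in this publication.

```python
import math

def ray_to_cylinder(direction, radius):
    """Intersect a viewing ray from the coordinate-system origin with a
    vertical cylinder x^2 + y^2 = radius^2 (sketch of mapping an image
    pixel's ray onto a cylindrical projection surface like S1).

    Solving |t * (dx, dy)| = radius gives t = radius / hypot(dx, dy).
    """
    dx, dy, dz = direction
    horiz = math.hypot(dx, dy)
    if horiz == 0:
        raise ValueError("vertical ray does not meet the cylinder wall")
    t = radius / horiz
    return (t * dx, t * dy, t * dz)
```

For example, a ray in direction (3, 4, 1) meets a cylinder of radius 10 at (6, 8, 2), since the horizontal component (3, 4) has length 5 and must be scaled by 2.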
  • the shape of the three-dimensional coordinate system S of the present embodiment is not particularly limited, and may be a bowl shape disclosed in Japanese Patent Application Laid-Open No. 2012-138660.
  • the control device 10 of the present embodiment sets a reference point according to the acquired position information of an object such as another vehicle, and generates a position display image indicating the position of the object observed from this reference point.
  • the position display image includes the above-described diagram image, background image, and shadow image.
  • the control device 10 according to the present embodiment uses the acquired position information of the target object, such as the other vehicle VX, to set a viewpoint for viewing the target object as a reference point, and generates a diagram image or background image indicating the position of the target object when viewed from this viewpoint.
  • the control device 10 of the present embodiment sets a virtual light source as a reference point according to the acquired position information of the target object, such as the other vehicle VX, and generates a shadow image imitating the shadow produced when the target object is irradiated with light from the virtual light source.
  • a position display image (a diagram image, a background image, a shadow image) indicating the position where the moving object is present is projected onto the three-dimensional coordinate systems S1 and S2.
  • the reference point for generating the diagram image or the background image may be set according to a reference position (center of gravity, driver's seat position, etc.) set in advance on the moving body, or may be set at a predetermined position in the three-dimensional coordinate systems S1 and S2.
  • the reference point is set according to the height of the headrest of the driver's seat of the moving body.
  • FIG. 5 shows a position display image (diagram image, background image, or shadow image) SH of the host vehicle V, a position display image SH1 of the other vehicle VX1 as a target object, and a position display image SH2 of the other vehicle VX2.
  • the control device 10 of the present embodiment displays a position display image SH as a position display image indicating the position of the moving body including the host vehicle V and / or other vehicles VX1, VX2 on the projection plane SQ.
  • the position display image SH is an image showing the position of the moving body when the moving body is observed from a reference point set at a position corresponding to the position of the moving body.
  • the position display image SH may be a shadow image imitating a shadow when a moving body is irradiated with light from a virtual light source.
  • the position of the reference point LG, including the viewpoint and the virtual light source, may be set according to a reference position of the moving body.
  • this reference position can be set arbitrarily, for example at the center of gravity or the center of the moving body.
  • the reference point (viewpoint, virtual light source) LG for the host vehicle V, the other vehicle VX1, and the other vehicle VX2 may be one point, or may be set for each of the host vehicle V, the other vehicle VX1, and the other vehicle VX2.
  • the position of the reference point (viewpoint, virtual light source) LG for the host vehicle V and the position of the reference point (viewpoint, virtual light source) LG for the other vehicle VX1 (VX2) may be the same or may be different.
  • the positional relationship between the host vehicle V and the target object is displayed by also displaying the position display images (line diagram image, background image, shadow image) of the target object such as the other vehicle VX as well as the host vehicle V.
• the control device 10 of the present embodiment sets a projection surface SQ for projecting a position display image (line image, background image, shadow image) SH along the direction in which the host vehicle V moves.
• the control device 10 of the present embodiment generates a position display image (diagram image, background image, shadow image) SH1 of the other vehicle VX1 traveling in the adjacent lane Ln1, which is adjacent to the traveling lane Ln2 in which the host vehicle V travels (moves), as observed from the reference point.
• by projecting the position display image (diagram image, background image, shadow image) SH1 of the other vehicle VX1 (target object) traveling in the adjacent lane Ln1 onto the common projection plane SQ together with the position display image (diagram image, background image, shadow image) SH of the host vehicle V, it is possible to present an image that makes it easy to grasp the positional relationship between the host vehicle V and the other vehicle VX1.
• by setting the projection plane SQ along the traveling direction of the host vehicle V, it is possible to present an image in which the distance between the host vehicle V and the other vehicle VX1 can be easily recognized.
• further, by setting the projection plane SQ so as to be substantially orthogonal (intersecting at 90 degrees) to the road surface of the lane Ln2 on which the host vehicle V is traveling, the position display image SH of the host vehicle V and the position display image SH1 of the other vehicle VX1 can be displayed at positions that the driver of the host vehicle V can easily see.
• the projection position when the diagram image SH (or background image SH) is projected onto the projection plane SQ is determined according to an arbitrary reference position ♂ of the host vehicle V (or other vehicles VX1, VX2) so as to indicate the position of the host vehicle V (or other vehicles VX1, VX2).
  • the position on the XZ coordinate of the arbitrary reference position ⁇ of the host vehicle V (or other vehicles VX1, VX2) shown in FIG. 5 is preferably the same as the position on the XZ coordinate of the projection plane SQ.
  • the diagram image SH, the background image SH, and the shadow image SH are displayed at substantially the same position on the projection plane SQ.
  • the diagram image SH2, the background image SH2, and the shadow image SH2 are displayed at substantially the same position on the projection plane SQ.
• the region Rd1 in each figure is the background image Rd1 corresponding to road structures such as the travel path of the moving body, the adjacent travel path, and guardrails and roadside zones.
  • a region Rd2 (region on the upper side of the region Rd1) in each figure is a background image Rd2 corresponding to a tree, a building, a sign, or the like on the road side of the traveling path of the moving body.
  • Background image Rd1 and background image Rd2 are displayed on projection plane SQ.
  • the projection plane SQ may include both the background images Rd1 and Rd2 or any one of them.
  • FIGS. 6A to 6F are views showing aspects of a position display image including a shadow image and a position display image including a diagram image SH imitating the shape of the vehicle.
  • the shadow image also approximates the shape of the vehicle.
  • a position display image including a shadow image and a position display image including a diagram image SH representing a shape of a vehicle will be described with reference to the same drawing.
  • FIG. 6A shows a position display image in which the shadow area of the shadow image SH6a is shown as an opaque area (transmittance is less than a predetermined value).
  • FIG. 6A shows a position display image in which the region in the figure of the diagram image SH6a is shown as an opaque region (low transmittance).
  • FIG. 6B shows a position display image in which the shadow area of the shadow image SH6b is shown as a semi-transparent area (transmittance is a predetermined value or more).
  • FIG. 6B shows a position display image in which the in-figure region of the diagram image SH6b is shown as a semi-transparent region (transmittance is a predetermined value or more).
• FIG. 6C shows a position display image in which the brightness of the shadow area (or diagram area) of the shadow image (or diagram image) SH6c is made higher than the brightness of the area other than the shadow image (or diagram image).
• when the brightness of the background images Rd1 and Rd2 is low, or when a captured image is included in the position display image, the drawing position of the shadow image (or diagram image) SH6a may become unclear if its brightness is lowered as shown in FIG. 6A.
• for this reason, in FIG. 6C, the brightness of the shadow area (or diagram area) of the shadow image (or diagram image) SH6c is made higher than that of its surroundings.
• FIG. 6D shows a position display image in which the shadow region (or diagram region) of the shadow image (or diagram image) SH6d is colored. Since colors cannot be expressed in the drawings attached to the application, the coloring of the shadow image (or diagram image) SH6d is indicated for convenience by hatching. The hue, brightness, and saturation of the color of the shadow area (or diagram area) are not limited.
• FIG. 6E shows a position display image including the outline of the shadow region (or diagram region) of the shadow image (or diagram image) SH6e. The inner region surrounded by the outline of the shadow area (or diagram area) shown in FIG. 6E is transparent or translucent.
  • FIG. 6F shows a position display image obtained by blurring a part of the shadow region (or diagram region) of the shadow image (or diagram image) SH6f.
  • gradation is given so that the brightness decreases from the inside toward the outside.
  • the outline of the shadow area (or diagram area) can be clearly shown.
• an object on the far side of the moving body can be shown by increasing the inner transparency. Thereby, the position of the moving body and the presence of an object on the far side of the moving body can be confirmed simultaneously.
  • FIG. 7 is a diagram illustrating an aspect of a position display image including a background image.
• FIG. 7 shows an example in which the shadow image (or diagram image) SH7a is made translucent or transparent and the height (position in the Z direction) of the road background image Rd1a transparently displayed inside the shadow image (or diagram image) SH7a is changed.
• specifically, the height in the Z-axis direction of the road background image Rd1a transparently displayed inside the shadow image (or diagram image) SH7a is lower than the height in the Z-axis direction of the other road background images Rd1b and Rd1c.
• FIG. 7 also shows an example in which the shadow image (or diagram image) SH7a is made translucent or transparent and the horizontal position (position in the X direction) of the utility pole background image Rp1a transparently displayed inside the shadow image (or diagram image) SH7a is changed.
• specifically, the position in the X-axis direction of the utility pole background image Rp1a, which is transparently displayed so as to overlap the shadow image (or diagram image) SH7a, is shifted to the right in the figure from the position in the X-axis direction of the other utility pole background images Rp1b.
• that is, the X-axis positions of the utility pole background images Rp1a and Rp1b, which should be continuous, change at the boundary of the shadow image (or diagram image) SH7a. From the change in the horizontal position of the utility pole background image Rp1 shown in the position display image, the user can recognize the position (position in the X-axis direction) of the moving body.
  • a region (outlined extension line) in which a part of the background image is missing may be formed along the extension of the shadow image (or diagram image) SH7a.
  • the outline region along the extension of the shadow image (or diagram image) SH7a may be added to the background image Rd1 and the background image Rd2.
  • FIGS. 8A to 8D are diagrams showing aspects of a position display image including a diagram.
  • FIGS. 6A to 6F show examples of graphics imitating the outline of the moving body, but FIGS. 8A to 8D show an aspect of a position display image including an axis indicating a coordinate position and a graphic.
  • FIG. 8A is an example in which the position of the moving object is indicated by a circular diagram image SH8a.
  • the position of the moving body on the projection plane SQ can be indicated by the displayed position of the circle.
  • the diagram image SH8a illustrated in FIG. 8A further includes a vertical line SH8az along the Z-axis direction of the projection surface SQ and a horizontal line SH8ax along the X-axis direction of the projection surface SQ.
  • the vertical line SH8az and the horizontal line SH8ax are substantially orthogonal at the center point SH8a0.
  • the center point SH8a0 corresponds to a reference position such as the center of gravity of the moving object.
  • the vertical line SH8az and the horizontal line SH8ax may be graduated.
• the XZ coordinates are graduated with reference to the center of the circular diagram image SH8a. Thereby, the user can grasp the position of the moving body on the projection plane SQ more precisely.
• FIG. 8B is an example showing the position of the moving body using coordinate axes SH8bz and SH8bx that are substantially orthogonal to each other at the reference point SH8b0.
  • the center point SH8b0 corresponds to a reference position such as the center of gravity of the moving body.
  • the position of the moving object on the projection plane SQ can be indicated by the position of the center point and the coordinate axis.
  • FIG. 8C is an example showing the position of the moving object using two vertical lines SH8c1 and SH8c2 along the Z direction of the projection plane SQ.
  • the position of the vertical line SH8c1 corresponds to the position of the front end (or rear end) of the moving body.
  • the position of the vertical line SH8c2 corresponds to the position of the rear end (or front end) of the moving body.
  • the position of the moving body on the projection plane SQ can be indicated by the positions of the two vertical lines SH8c1 and SH8c2 in the X-axis direction.
  • FIG. 8D is an example showing the location of the moving object using a rectangular area SH8d.
  • One end of the region SH8d in the X-axis direction is defined by a vertical line SH8d1, and the other end is defined by a vertical line SH8d2.
  • the height of the vertical line SH8d1 in the Z-axis direction is not limited. In this example, the height of the vertical line SH8d1 in the Z-axis direction is the same as the height of the projection surface SQ in the Z-axis direction.
  • the position of the vertical line SH8d1 in the X-axis direction corresponds to the position of the front end (or rear end) of the moving body.
  • the position of the vertical line SH8d2 in the X-axis direction corresponds to the position of the rear end (or front end) of the moving body.
  • the position of the moving body on the projection plane SQ can be indicated by the positions of the two vertical lines SH8d1 and SH8d2 in the X direction.
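The line-marker variants of FIGS. 8C and 8D reduce to mapping the front and rear ends of the moving body to X positions on the projection plane. A minimal sketch follows (the function name and the assumption that the reference position is the vehicle center are illustrative; the patent does not specify an API):

```python
def end_line_positions(ref_x, vehicle_length):
    """X positions of the two vertical lines (SH8c1/SH8c2 or SH8d1/SH8d2)
    marking the front and rear ends of the moving body, assuming ref_x is
    the X coordinate of the vehicle center on the projection plane."""
    half = vehicle_length / 2.0
    return ref_x + half, ref_x - half  # (front end, rear end)
```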
  • the control device 10 generates a video including a position display image indicating the position of the target in the captured image, using the position information of the target existing around the moving body acquired by the information acquisition function.
• a position display image of an object such as another vehicle includes a diagram image, a background image, and a shadow image, similarly to the position display image of the host vehicle (moving body). To avoid duplication, the description given above applies to these.
• the control device 10 of the present embodiment observes, from the reference point, the preceding other vehicle VX2 traveling ahead in the lane in which the host vehicle V travels (moves), and generates a position display image (diagram image, background image, shadow image) of the preceding other vehicle VX2 as observed from the reference point.
• the control device 10 of the present embodiment projects the position display image SH of the host vehicle V, the position display image SH1 of the other vehicle VX1, and the position display image SH2 of the other vehicle VX2 onto a common projection plane SQ.
• the driver of the host vehicle V can correctly grasp the positional relationship among the host vehicle V, the other vehicle VX1, and the other vehicle VX2 from the positional relationship among the position display images SH, SH1, and SH2.
  • the timing at which the host vehicle tries to change the lane from the lane Ln2 to the lane Ln1 is determined from the steering angle of the host vehicle, the blinker operation, the braking operation, and the like.
• the projection position of a position display image such as the shadow image SH is changed according to the acceleration or deceleration of the host vehicle V. Specifically, when the host vehicle V is accelerating, the control device 10 shifts the position of the position display image (diagram image, background image, shadow image) SH forward. On the other hand, when the host vehicle V is decelerating, the control device 10 shifts the position of the position display image (diagram image, background image, shadow image) SH backward. Thereby, a position display image SH from which the positional relationship can easily be grasped can be displayed according to the situation of the host vehicle V.
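The acceleration-dependent shift can be sketched as a signed offset added to the image's X position on the projection plane. This is a toy with an assumed linear gain and dead band; the patent does not specify the mapping:

```python
def longitudinal_offset(acceleration, gain=0.5, dead_band=0.1):
    """Forward (positive) offset while accelerating, rearward (negative)
    while decelerating, and no shift within the dead band."""
    if abs(acceleration) <= dead_band:
        return 0.0
    return gain * acceleration
```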
  • the projection position of the position display image (line image, background image, shadow image) is changed according to the difference between the speed of the lane in which the host vehicle V travels and the speed of the host vehicle V.
  • the speed of the lane flow may be an average speed of another vehicle VX traveling in the same lane as the host vehicle V is traveling, or a legal speed of the lane.
• when the difference (a positive value) between the vehicle speed of the host vehicle V and the flow speed of the lane is large, that is, when the host vehicle V is approaching or overtaking the preceding other vehicle, the control device 10 shifts the position of the position display image (diagram image, background image, shadow image) SH forward.
• when the difference between the vehicle speed of the host vehicle V and the flow speed of the lane is small, or when the difference is large in the negative direction, that is, when the host vehicle V is being approached or overtaken by the other vehicle behind, the control device 10 shifts the position display image SH backward.
• thereby, a position display image (diagram image, background image, shadow image) SH that makes it easy to grasp the positional relationship between the host vehicle V and the other vehicle VX can be displayed according to the relative situation between the host vehicle V and the other vehicle VX.
  • a shadow image SH, a diagram image SH, and a background image SH can be used as the position display image SH.
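The lane-flow comparison above can likewise be sketched as an offset proportional to the speed difference; as described, `flow_speed` may be the average speed of other vehicles in the lane or the legal speed (the gain value is an assumption for illustration):

```python
def flow_offset(own_speed, flow_speed, gain=0.2):
    """Positive (forward) offset when the host vehicle is faster than the
    lane flow, negative (rearward) when it is slower."""
    return gain * (own_speed - flow_speed)
```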
• the control device 10 generates the position display images SH1 and SH2 so that the area of the position display image (diagram image, background image, shadow image) SH of an object, including the other vehicle VX, whose speed is relatively high is larger than the area of the position display image SH of an object whose speed is relatively low.
  • the control device 10 of the present embodiment acquires the vehicle speed P1 of the other vehicle VX1 and the vehicle speed P2 of the other vehicle VX2, and compares the vehicle speeds P1 and P2.
• the control device 10 generates the position display image SH of the other vehicle VX with the higher vehicle speed so that its area is larger than that of the position display image SH of the other vehicle VX with the lower vehicle speed.
• for example, when the vehicle speed P1 of the other vehicle VX1 is higher than the vehicle speed P2 of the other vehicle VX2, the area of the position display image SH1 of the other vehicle VX1 is made larger than the area of the position display image SH2 of the other vehicle VX2.
  • a shadow image SH, a diagram image SH, and a background image SH can be used as the position display image SH.
• the control device 10 of the present embodiment may increase the area of the position display image (line image, background image, shadow image) SH of the other vehicle VX as the speed of the other vehicle VX increases.
  • a driver using this system can predict the speed of the other vehicle VX from the size of the position display image SH.
  • the speed of the other vehicle VX may be an absolute speed or may be a relative speed with respect to the vehicle speed of the host vehicle V.
• thereby, the position display image SH of another vehicle VX that is rapidly approaching the host vehicle V can be displayed in a large size.
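One way to realize the speed-dependent area described above is a clamped linear scale; the reference speed and the cap below are illustrative assumptions, not values from the disclosure:

```python
def display_area(base_area, speed, speed_ref=40.0, scale_max=2.0):
    """Grow the position display image area with object speed (absolute or
    relative), saturating at scale_max times the base area."""
    ratio = min(max(speed, 0.0), speed_ref) / speed_ref
    return base_area * (1.0 + (scale_max - 1.0) * ratio)
```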
• in addition to the area, the hue, brightness, saturation, color tone, pattern, gradation, and size (enlargement/reduction) of the position display image may be changed.
  • a shadow image SH, a diagram image SH, and a background image SH can be used as the position display image SH.
• when the control device 10 of the present embodiment acquires attribute information indicating that the lane Ln1 in which the other vehicle VX travels is an overtaking lane, it generates the position display image SH so that the area of the position display image (diagram image, background image, shadow image) SH of the other vehicle VX traveling in the overtaking lane is larger than that of another vehicle VX traveling in a lane Ln2 that is not an overtaking lane. This is because the speed of the other vehicle VX traveling in the overtaking lane can be predicted to be higher than the speed of the other vehicle VX traveling in the non-overtaking lane.
  • the attribute information that the lane is an overtaking lane or is not an overtaking lane is acquired from the map information 72 and / or road information 73 of the navigation device 70.
  • the method for identifying and acquiring the lane attribute information is not particularly limited, and a method known at the time of filing can be used as appropriate.
• the control device 10 may change the area of the position display image (diagram image, background image, shadow image) SH according to the driving skill of the operator of the moving body, for example, the driver of the host vehicle V. For example, when the skill of the operator is low, the control device 10 generates the position display image SH so that the area of the position display image SH is larger than when the skill of the operator is high. Thereby, the positions of the host vehicle V and the other vehicle VX, and their positional relationship, can be shown in an easy-to-understand manner to an operator with low skill.
• the operator's skill may be input by the operator himself or herself, or may be determined based on experience such as the number of operations and the distance traveled.
• the operator's skill may also be determined from the operator's past operation history.
• for example, the operation history of an operator with high skill is compared with the operation history of the individual operator; if the difference is large, the operation skill is judged to be low, and if the difference is small, the operation skill is judged to be high.
• in the case of driving a vehicle, the driving skill can be determined based on the acceleration operation, the timing of the steering operation, and the steering amount when the travel lane is changed.
  • a shadow image SH, a diagram image SH, and a background image SH can be used as the position display image SH.
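The history-comparison rule described above (a large deviation from a skilled operator's history means low skill) can be sketched as a mean-absolute-difference test over aligned operation samples; the metric and threshold are assumptions for illustration:

```python
def skill_is_low(expert_history, operator_history, threshold):
    """Judge skill as low when the mean absolute difference between an
    operator's samples (e.g. steering amounts) and a skilled reference
    history exceeds the threshold."""
    diffs = [abs(a - b) for a, b in zip(expert_history, operator_history)]
    return sum(diffs) / len(diffs) > threshold
```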
• the area of the position display image (diagram image, background image, shadow image) SH1 of the other vehicle VX1 traveling in the lane Ln1 is larger than the areas of the position display image (diagram image, background image, shadow image) SH2 of the other vehicle VX2 traveling in the lane Ln2 and of the position display image SH of the host vehicle V.
• by changing the size of the position display image SH according to the attribute of the lane in which the host vehicle V or the other vehicle VX travels, the size of the position display image SH can be controlled without measuring the actual vehicle speed.
  • a shadow image SH, a diagram image SH, and a background image SH can be used as the position display image SH.
• when the control device 10 of the present embodiment determines from the acquired acceleration that the host vehicle V is in an acceleration state, it shifts the position of the reference point LG to the side opposite to the traveling direction of the host vehicle V (the direction of arrow F in the figure), that is, in the direction of arrow F′ in the figure.
• thereby, the projection position of the position display image SH can be shifted in the traveling direction of the host vehicle V (the direction of arrow F in the figure).
• specifically, when the control device 10 determines from the acquired acceleration that the host vehicle V is in an acceleration state, it sets the reference point (viewpoint, virtual light source) LG to a rearward position, for example, shifting it to the position of the reference point (viewpoint, virtual light source) LG2 (indicated by a broken line).
• in this way, the position of the reference point (viewpoint, virtual light source) LG is shifted backward, and the projection position of the position display image (diagram image, background image, shadow image) SH is shifted forward.
  • a shadow image SH, a diagram image SH, and a background image SH can be used as the position display image SH.
• on the other hand, when it is determined that the host vehicle V is in a decelerating state, the control device 10 shifts the position of the reference point (viewpoint, virtual light source) LG toward the traveling direction of the host vehicle V (the direction of arrow F in the figure). Thereby, the projection position of the position display image SH can be shifted to the side opposite to the traveling direction of the host vehicle V (the direction of arrow F′ in the figure).
• specifically, when the control device 10 of the present embodiment determines from the acquired acceleration that the host vehicle V is in a decelerating state, it shifts the position of the virtual light source LG toward the traveling direction of the host vehicle V (the direction of arrow F in the figure).
• that is, the reference point (viewpoint, virtual light source) LG is set to a forward position, for example, the position of the reference point LG1 (indicated by a broken line).
• in this way, the position of the reference point LG is shifted forward and the projection position of the position display image (diagram image, background image, shadow image) SH is shifted backward, so that a position display image suited to the situation the driver must judge can be displayed.
  • a shadow image SH, a diagram image SH, and a background image SH can be used as the position display image SH.
  • the setting position of the reference point (viewpoint, virtual light source) LG is not limited, but may be the same position as the virtual viewpoint in the projection processing.
  • the position of the virtual viewpoint viewing the host vehicle V can be recognized from the shape of the position display image (line diagram image, background image, shadow image) SH, and the positional relationship between the host vehicle V and the surroundings can be easily understood.
  • the reference point LG may be arranged at infinity. In this case, since parallel projection can be performed, it becomes easy to grasp the positional relationship between the host vehicle V and the object (other vehicle VX, etc.) from the position display image SH.
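Placing the reference point LG at infinity turns the perspective projection into a parallel projection along a fixed direction, which preserves the spacing between projected markers. A toy sketch (names and coordinate conventions are assumptions; the direction must not be parallel to the plane):

```python
def parallel_project(point, direction, plane_y):
    """Parallel projection of `point` along `direction` onto the vertical
    plane y = plane_y (i.e., the reference point LG placed at infinity).
    Assumes direction[1] != 0."""
    px, py, pz = point
    dx, dy, dz = direction
    t = (plane_y - py) / dy  # distance along the projection direction
    return (px + t * dx, plane_y, pz + t * dz)
```

Because every point uses the same direction, two objects separated by a given distance along the road stay separated by that same distance on the plane, which is why the positional relationship between the host vehicle V and an object is easy to read.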
  • the setting position of the virtual viewpoint is not particularly limited. Further, the position of the virtual viewpoint may be changeable according to the user's designation. Further, the surface on which the position display image SH is projected (represented) may be a road surface of a road on which the host vehicle travels, or may be a projection surface set as shown in FIG. Further, when the moving body is a helicopter, a position display image (line diagram image, background image, shadow image) for displaying the position may be projected on the ground surface below the helicopter. When the moving body is a ship, the shadow information may be projected on the sea surface.
  • the control device 10 projects the captured image data acquired from the camera 40 onto the three-dimensional coordinate system S or the projection plane SQ, and generates images of the host vehicle V and surrounding objects from the set virtual viewpoint. Then, the control device 10 displays the generated video on the display 80.
  • the display 80 may be mounted on the host vehicle V and configured as the mobile device 200 or may be provided on the display device 100 side.
  • the display 80 may be a display for a two-dimensional image or a display that displays a three-dimensional image in which the positional relationship in the depth direction of the screen can be visually recognized.
  • the video to be displayed in the present embodiment includes a position display image (a diagram image, a background image, a shadow image) SH of the host vehicle V.
  • the display image may include position display images (line diagram image, background image, shadow image) SH1 and SH2 of the other vehicle VX as an object.
  • the displayed image may include both the position display image SH of the host vehicle V and the position display images SH1 and SH2 of the other vehicle VX.
  • a shadow image SH, a diagram image SH, and a background image SH can be used as the position display image SH.
• an icon image V′ (see FIGS. 3A, 3B, 4A, and 4B) indicating the host vehicle V, prepared in advance, may be superimposed on the video displayed by the display device 100 of the present embodiment.
  • the icon image V ′ of the vehicle may be created and stored in advance based on the design of the host vehicle V. In this manner, by superimposing the icon image V ′ of the host vehicle V on the video, the relationship between the position and orientation of the host vehicle V and the surrounding video can be shown in an easily understandable manner.
• in step S1, the control device 10 acquires a captured image captured by the camera 40.
• in step S2, the control device 10 acquires the current position of the host vehicle V and the position of the object including the other vehicle VX.
  • the host vehicle V is an example of a “moving body”
  • the other vehicle VX is an example of an “object”.
  • the control device 10 sets a reference point using the position information of the moving body (the host vehicle V, the other vehicle VX).
  • One reference point may be set based on the host vehicle V, or a plurality of reference points may be set for each of the host vehicle V and the other vehicle VX.
• the reference point includes a point (viewpoint) for observing a moving body that is the target of a diagram image, a point (viewpoint) for observing a moving body that is the target of a background image, and a point (light source) from which light is irradiated onto a moving body that is the target of a shadow image.
• an example of the method for setting the reference point (viewpoint, virtual light source) (step S3) will be described based on the flowchart of FIG.
• first, the control device 10 acquires the acceleration of the host vehicle V.
• in step S12, the control device 10 determines from the acceleration whether the host vehicle V is in an acceleration state or a deceleration state. If it is in an acceleration state, the process proceeds to step S13.
• in step S13, by shifting the reference point (viewpoint, virtual light source) rearward, the projection position of the position display image including the diagram image SH, the background image SH, and the shadow image SH is shifted forward in the traveling direction.
• as a result, the position display image projected onto the projection plane when the host vehicle V is in an acceleration state is located further forward in the traveling direction of the host vehicle V than the position display image projected onto the projection plane when the host vehicle V is not in an acceleration state.
• if the host vehicle V is in a deceleration state, the process proceeds to step S15.
• in step S15, by shifting the reference point (viewpoint, virtual light source) forward, the projection position of the position display image including the diagram image SH and the background image SH is shifted rearward in the traveling direction.
• that is, the position of the reference point set when the host vehicle V is in a deceleration state is further forward in the traveling direction of the host vehicle V than the position of the reference point set when the host vehicle V is not in a deceleration state.
• accordingly, the position display image projected onto the projection plane when the host vehicle V is in a deceleration state is located further rearward in the traveling direction than the position display image projected onto the projection plane when the host vehicle V is not in a deceleration state.
  • the projection position of the position display image is a position in the coordinate system of the projection plane set in a later process.
• next, the control device 10 sets a projection plane (step S4).
  • the projection plane may be a three-dimensional coordinate system shown in FIGS. 3A, 3B, 4A, and 4B, or may be a two-dimensional coordinate system like the projection plane SQ shown in FIG. Furthermore, as shown in FIG. 11 described later, a plurality of projection planes may be set.
• in step S5, the control device 10 generates the diagram image SH, background image SH, or shadow image SH of the host vehicle V.
• similarly, the diagram images SH1 and SH2, background images SH1 and SH2, or shadow images SH1 and SH2 of the other vehicles VX1 and VX2 are generated.
• in step S6, the control device 10 executes a process of projecting the captured image onto the set projection plane SQ, and generates a display image.
• in step S7, the control device 10 displays the generated video on the display 80.
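Steps S1 to S7 above can be condensed into a one-dimensional toy loop. All names, the 1-D coordinate model, and the direction of the reference-point shift rule (steps S13/S15) are illustrative assumptions, not the claimed implementation:

```python
def set_reference_point(base_x, acceleration, shift=1.0):
    # S3: reference point rearward when accelerating, forward when
    # decelerating (steps S13/S15), which moves the drawn markers
    # forward/rearward respectively.
    if acceleration > 0:
        return base_x - shift
    if acceleration < 0:
        return base_x + shift
    return base_x

def render_frame(captured_image, own_x, object_xs, acceleration):
    # S1 (captured image) and S2 (positions) are assumed already acquired.
    ref_x = set_reference_point(own_x, acceleration)          # S3
    plane_origin = own_x                                      # S4: plane anchored at the host vehicle
    markers = [x - ref_x + plane_origin
               for x in [own_x] + object_xs]                  # S5: 1-D "projection" of each body
    return {"image": captured_image, "markers": markers}      # S6/S7: compose, then display
```

Note how a rearward reference-point shift (positive acceleration) moves every marker forward, matching the behavior described for steps S13 and S15.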
  • the position display image described here includes a shadow image SH, a diagram image SH, and a background image SH.
  • a case where the position display image is a shadow image SH will be described as an example.
  • a diagram image SH and a background image SH may be used.
  • the shadow image SH (or diagram image SH) of the position display image of this example includes an image indicating the movable range of the movable member of the host vehicle V.
• the background image SH includes a contour line of the movable member of the host vehicle V and a diagram (background defect region) indicating the movable range of the movable member.
  • the projection plane showing the shadow image SH of the host vehicle V includes a first projection plane SQs along the vehicle length direction of the host vehicle V and a second projection plane SQb along the vehicle width direction.
• on the first projection plane SQs, a position display image (shadow image, diagram image, background image) SHs showing the movable member as observed from a reference point (viewpoint, virtual light source) LG set on the side of the vehicle is displayed.
  • the position display image includes a shadow image imitating the shadow of the movable member when light is emitted from the virtual light source LG.
  • a position display image (shadow image, diagram image, background image) SHb showing a movable member when observed from a reference point (viewpoint, virtual light source) LG set in front of the vehicle is projected.
  • the position display image projects a shadow image SHb simulating a shadow when light is emitted from the virtual light source LG.
  • the position display image (shadow image, diagram image, background image) SH of this example includes an image indicating the movable range of the movable member of the host vehicle V.
  • the host vehicle V of this example is a hatchback type vehicle, and has side doors and a back door (hatch door).
  • the side door of the host vehicle V opens and closes sideways, and the back door of the host vehicle V opens and closes rearward.
  • the side door and the back door of the host vehicle V are movable members of the host vehicle V that is a moving body.
  • the control device 10 assumes a case where the occupant opens the back door in order to carry luggage into or out of the rear loading platform, and generates a position display image (shadow image, diagram image, background image) indicating the movable range of the back door.
  • the position display image SHs projected on the first projection surface SQs includes a back door portion Vd3.
  • the back door portion Vd3 represents the rear extension (movable range) of the host vehicle V when the back door is opened.
  • the shadow image indicating the movable range of the back door is projected on the left and right sides of the host vehicle V or on the placement surface (parking surface / road surface) of the host vehicle V.
  • the shadow image may be projected on a wall surface or floor surface that actually exists.
  • the control device 10 assumes a case where a side door is opened so that an occupant can get on or off from the seat or carry luggage in or out, and generates a position display image (shadow image, diagram image, background image) indicating the movable range of the side door.
  • the position display image (shadow image, diagram image, background image) SHb projected on the second projection plane SQb includes side door portions Vd1 and Vd2, which express the lateral extension (movable range) of the side doors when they are opened.
  • the position display image indicating the movable range of the side door is projected forward and / or rearward of the host vehicle V.
  • the control device 10 may set a projection plane to project a position display image (shadow image, diagram image, background image), and project the position display image onto the projection plane.
  • by viewing the position display image SH, the driver of the host vehicle V can determine the parking position of the host vehicle V in consideration of the work to be done after the host vehicle V is parked. Furthermore, by superimposing the captured images of surrounding objects together with the position display image SH, it is possible to park at a position where the work after parking is not hindered while avoiding the surrounding objects.
  • the mode of the position display image (shadow image, diagram image, background image) SH has been described for the case where the moving body is the host vehicle V, that is, where the movable members are the doors of the host vehicle V.
  • the mode of the position display image SH is not limited to this.
  • when the moving body is a forklift, a position display image (shadow image, diagram image, background image) is generated in consideration of the movable range of the forklift body and the lift equipment.
  • when the moving body is a helicopter, a position display image (shadow image, diagram image, background image) is generated in consideration of the rotation range of the helicopter body and the rotor blades.
  • when the moving body is an airplane, a position display image (shadow image, diagram image, background image) is generated in consideration of the installation range of the airplane main body and ancillary equipment such as a passenger ramp.
  • when the moving body is a submarine explorer, the position display image is generated in consideration of the installation range of the platform provided as necessary.
  • the generated position display image is displayed on the display 80.
  • when the moving body is a forklift, a position display image (shadow image, diagram image, background image) indicating the operating range of the lift device is displayed, for example, on the surrounding ground surface (for example, the floor of a facility such as a warehouse or a factory).
  • when the moving body is a helicopter, the attitude of the helicopter can be confirmed from the sky.
  • by projecting a shadow image indicating the installation range of the airplane main body and its attached equipment on the ground surface, it is possible to search from the sky for a place having an area where the airplane can make an emergency landing.
  • for the position display image (shadow image, diagram image, background image) SH, a basic pattern prepared in advance may be stored in memory, and the control device 10 may read it from the memory as necessary.
  • as described above, the display device 100 sets a reference point using the position information of moving bodies such as the host vehicle V and the other vehicle VX, generates a position display image SH indicating the positions in the captured image at which the host vehicle V and the other vehicle VX, observed from the reference point, are present, and displays a video including the position display image SH and part or all of the captured image.
  • since a video including the position display image SH indicating the position at which the host vehicle V is present and the captured image can be displayed, a video in which the positional relationship of the host vehicle V to its surroundings is easy to understand can be displayed.
  • the display device 100 of the present embodiment generates a diagram image SH as a position display image indicating the position of the host vehicle V in the captured image, and displays a video including part or all of the diagram image SH and the captured image.
  • since the diagram image SH indicating the location of the host vehicle V and part or all of the captured image can be displayed together, a video in which the positional relationship of the host vehicle V to its surroundings is easy to understand can be displayed.
  • the display device 100 generates a background image SH as a position display image indicating the position of the host vehicle V in the captured image, and displays a video including the background image SH and part or all of the captured image.
  • since the background image SH indicating the position of the host vehicle V and part or all of the captured image can be displayed together, a video in which the positional relationship of the host vehicle V to its surroundings is easy to understand can be displayed.
  • the display device 100 of the present embodiment generates a shadow image SH as a position display image indicating the position of the host vehicle V in the captured image, and displays an image including the shadow image SH.
  • the display device 100 sets a virtual light source as a reference point using the position information of moving bodies such as the host vehicle V and the other vehicle VX, generates a shadow image SH imitating the shadow produced when the host vehicle V is irradiated with light from the virtual light source, and displays a video including part or all of the shadow image SH and a captured image of the surroundings of the host vehicle V.
  • since the presence and the direction of the host vehicle V can be expressed by the shadow image SH of the host vehicle V, a video in which the positional relationship between the host vehicle V and surrounding objects is easy to understand can be displayed.
  • the display device 100 generates a position display image SH indicating the position of an object including the other vehicle VX, and displays a video including the position display image SH and part or all of the captured image.
  • since the position display image SH indicating the position of the other vehicle VX as the object and part or all of the captured image can be displayed together, a video in which the positional relationship between the host vehicle V and the other vehicle VX is easy to understand can be displayed.
  • the display device 100 generates a diagram image SH as a position display image indicating the position of an object including the other vehicle VX, and displays a video including part or all of the diagram image SH and the captured image.
  • since the diagram image SH indicating the position of the other vehicle VX as the object and part or all of the captured image can be displayed together, a video in which the positional relationship between the host vehicle V and the other vehicle VX is easy to grasp can be displayed.
  • the display device 100 generates a background image SH as a position display image indicating the position at which an object including the other vehicle VX is present, and displays a video including part or all of the background image SH and the captured image.
  • since the background image SH indicating the position of the other vehicle VX as the object and part or all of the captured image can be displayed together, a video in which the positional relationship between the host vehicle V and the other vehicle VX is easy to grasp can be displayed.
  • the display device 100 of the present embodiment generates a shadow image SH as a position display image indicating the position of an object including the other vehicle VX in the captured image, and displays an image including the shadow image SH.
  • the display device 100 according to the present embodiment generates a shadow image SH imitating a shadow generated when light is irradiated to an object including the other vehicle VX, and displays an image including the shadow image SH and a part or all of the captured image. Display.
  • since the positional relationship between the other vehicles VX and the host vehicle V can be expressed by the shadow images SH1 and SH2 of the other vehicles VX, a video in which the positional relationship between the host vehicle V and surrounding objects such as the other vehicles VX is easy to understand can be displayed.
  • the display device 100 of the present embodiment sets a projection plane SQ for projecting position display images along the direction in which the host vehicle V moves, and generates a position display image indicating the position of the other vehicle VX1 traveling in the lane Ln1 adjacent to the travel lane Ln2 in which the host vehicle V travels (moves).
  • since the video includes part or all of the captured image of the surroundings of the host vehicle V, the positional relationship is easier to grasp.
  • by setting the projection plane SQ along the traveling direction of the host vehicle V, it is possible to present a video in which the distance between the host vehicle V and the other vehicle VX1 is easy to recognize.
  • the display device 100 generates the position display images (diagram images, background images, or shadow images) SH1 and SH2 of objects so that the area of the position display image SH of an object having a relatively high speed is larger than the area of the position display image SH of an object having a relatively low speed.
  • since the position display image (diagram image, background image, or shadow image) of a relatively high-speed object is displayed relatively large, a video that calls attention to the high-speed object can be displayed.
  • the display device 100 changes the size of the position display image (a diagram image, a background image, or a shadow image) SH according to the attribute of the lane on which the host vehicle V and the other vehicle VX travel.
  • in this way, the driver can be alerted to another vehicle VX traveling at high speed in the same manner as when the size of the position display image (diagram image, background image, or shadow image) SH is controlled based on the actual vehicle speed.
  • when the vehicle is in an accelerating state, the display device 100 shifts the position of the reference point for observing the moving body or object rearward, thereby shifting the projection position of the position display image (diagram image, background image, or shadow image) SH forward, and can display a video suited to the driver's judgment situation.
  • specifically, in the accelerating state the display device 100 of the present embodiment shifts the position of the virtual light source LG of the shadow image SH rearward; as a result, the projection position of the shadow image SH can be shifted forward, and a shadow image suited to the driver's judgment situation can be displayed.
  • conversely, by shifting the position of the reference point for observing the moving body or object forward, the projection position of the position display image (diagram image, background image, or shadow image) SH can be shifted rearward to display a video suited to the driver's judgment situation.
  • specifically, the display device 100 according to the present embodiment shifts the position of the virtual light source LG forward when the vehicle is in a decelerating state, thereby shifting the projection position of the shadow image SH rearward, so that a shadow image suited to the driver's judgment situation can be displayed.
  • the display device 100 generates and displays a position display image (shadow image SH, diagram image SH, or background image SH) indicating the movable range of the movable members of the host vehicle V, so that the driver of the host vehicle V can determine the parking position of the host vehicle V in consideration of the work to be done after the host vehicle V is parked.
  • since the display device 100 executes the display method of the present embodiment, the display method of the present embodiment achieves the same effects as described above.
  • the display system 1 including the display device 100 as one embodiment of the display device according to the present invention will be described as an example, but the present invention is not limited to this.
  • the display device 100 including the control device 10 including the CPU 11, the ROM 12, and the RAM 13 is described as an embodiment of the display device according to the present invention, but the present invention is not limited to this.
  • the display device 100 including the control device 10 that executes an image acquisition function, an information acquisition function, an image generation function, and a display function will be described as an example, but the present invention is not limited to this.
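The shadow-image generation from a virtual light source LG and the acceleration-dependent shift of that light source described above can be pictured with simple similar-triangle geometry. The following is a minimal sketch, not the patented implementation; the function names, the flat ground-plane assumption (z = 0), and the shift gain are all hypothetical.

```python
# Sketch: cast a shadow image of the vehicle onto the ground plane (z = 0)
# from a virtual light source LG, and shift LG rearward under acceleration
# so the projected shadow moves forward (names/parameters are illustrative).

def project_to_ground(light, point):
    """Shadow of a 3D point on the plane z = 0, cast from `light`.

    The ray from the light through the point is extended until z = 0
    (similar triangles).
    """
    lx, ly, lz = light
    px, py, pz = point
    if lz <= pz:
        raise ValueError("virtual light source must be above the point")
    t = lz / (lz - pz)  # scale factor from light to ground, >= 1
    return (lx + (px - lx) * t, ly + (py - ly) * t)

def shadow_outline(light, outline, height):
    """Shadow polygon of a vehicle outline given at a constant height."""
    return [project_to_ground(light, (x, y, height)) for x, y in outline]

def shift_light_for_acceleration(light, longitudinal_accel, gain=0.5):
    """Move the virtual light source rearward (negative x) in proportion
    to forward acceleration; the cast shadow then shifts forward."""
    lx, ly, lz = light
    return (lx - gain * longitudinal_accel, ly, lz)
```

With a light source at (0, 0, 10) and a roof-height point at (2, 0, 1.5), the shadow lands slightly beyond the point; shifting the light rearward moves that shadow further forward, matching the accelerating-state behavior the bullets describe.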

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Mechanical Engineering (AREA)
  • Closed-Circuit Television Systems (AREA)
  • Traffic Control Systems (AREA)
  • Image Processing (AREA)
  • Image Analysis (AREA)
  • Processing Or Creating Images (AREA)

Abstract

Provided is a display device 100 comprising a control device 10 that executes: an image acquisition function that obtains a captured image captured by a camera 40 mounted in a vehicle V; an information acquisition function that obtains position information for the vehicle V; an image generation function that uses the position information for the vehicle, generates a position display image indicating the position at which a traveling body is present in the captured image, and generates a video including this position display image and the captured image; and a display function that displays the generated video.

Description

Display device and display method
 The present invention relates to a display device and a display method for stereoscopically displaying video around a moving body.
 Regarding this type of device, there is known a moving body display device that projects an image of an object around a moving body and an image of the moving body onto a three-dimensional coordinate system (Patent Document 1).
JP 2012-138660 A
 However, simply displaying the image of the moving body as it is can make it difficult to grasp the positional relationship between the moving body and an object.
 The problem to be solved by the present invention is to display a video in which the position of the moving body can be easily grasped.
 The present invention solves the above problem by displaying a video including a position display image indicating the position of the moving body.
 According to the present invention, since the position of the moving body can be expressed by displaying the position display image, it is possible to display a video in which the position of the moving body can be easily grasped.
A block diagram of a display system including a display device according to the present invention.
FIGS. 2(A) and 2(B) are diagrams showing an example of the installation positions of the cameras of this embodiment.
A diagram showing an example of a three-dimensional coordinate system.
A diagram showing an example of the position display image of the host vehicle in the three-dimensional coordinate system shown in FIG. 3A.
A diagram showing another example of a three-dimensional coordinate system.
A diagram showing another example of the position display image of the host vehicle in the three-dimensional coordinate system shown in FIG. 4A.
A diagram showing a display example of the position display images of the host vehicle and another vehicle.
A diagram showing a first example of the first aspect of the position display image.
A diagram showing a second example of the first aspect of the position display image.
A diagram showing a third example of the first aspect of the position display image.
A diagram showing a fourth example of the first aspect of the position display image.
A diagram showing a fifth example of the first aspect of the position display image.
A diagram showing a sixth example of the first aspect of the position display image.
A diagram showing an example of the second aspect of the position display image.
A diagram showing a first example of the third aspect of the position display image.
A diagram showing a second example of the third aspect of the position display image.
A diagram showing a third example of the third aspect of the position display image.
A diagram showing a fourth example of the third aspect of the position display image.
A flowchart showing the control procedure of the display device according to this embodiment.
FIG. 10 is a flowchart showing a subroutine of the reference point setting process shown in FIG. 9.
A diagram for explaining another display example of the position display image.
 Hereinafter, embodiments of the present invention will be described with reference to the drawings. In the present embodiment, a case where the display device according to the present invention is applied to a display system 1 mounted on a moving body will be described as an example. The display system 1 of the present embodiment displays, on a display viewed by the operator of the moving body, video for grasping the moving body and the situation around it.
 FIG. 1 is a block diagram of a display system 1 including a display device 100 according to the present embodiment. As shown in FIG. 1, the display system 1 of this embodiment includes a display device 100 and a mobile device 200. The display device 100 and the mobile device 200 each include a wired or wireless communication device (not shown) and exchange information with each other.
 The moving bodies to which the display system 1 of the present embodiment is applied include vehicles, helicopters, submarine explorers, airplanes, armored vehicles, trains, forklifts, and other machines having a moving function. In the present embodiment, a case where the moving body is a vehicle will be described as an example. Note that the moving body of the present embodiment may be a manned machine on which a human rides, or an unmanned machine on which no human rides.
 Note that the display system 1 of the present embodiment may be configured as a device mounted on a moving body, or as a portable device that can be brought into the moving body. Alternatively, part of the configuration of the display system 1 may be mounted on the moving body and the rest mounted on a device physically separate from the moving body, distributing the configuration. In this case, the moving body and the separate device are configured to be able to exchange information.
 As shown in FIG. 1, the mobile device 200 of this embodiment includes a camera 40, a controller 50, a sensor 60, a navigation device 70, and a display 80. These devices are connected by a CAN (Controller Area Network) or another LAN mounted on the moving body, and can exchange information with each other.
 The camera 40 of the present embodiment is provided at a predetermined position of the vehicle (an example of a moving body; the same applies hereinafter). One or more cameras 40 may be provided in the vehicle. The camera 40 mounted on the vehicle images the vehicle and/or the surroundings of the vehicle and sends the captured image to the display device 100. The captured image in the present embodiment includes part of the vehicle and video of the vehicle's surroundings. The captured image data is used for calculating the positional relationship with the ground surface around the vehicle and for generating video of the vehicle and its surroundings.
 FIGS. 2(A) and 2(B) are diagrams illustrating an example of the installation positions of the cameras 40 mounted on the host vehicle V. In the example shown in the figures, six cameras 40 are installed on the host vehicle V: a right front camera 40R1, a right center camera 40R2, a right rear camera 40R3, a left front camera 40L1, a left center camera 40L2, and a left rear camera 40L3. The arrangement positions of the cameras 40 are not particularly limited, and their imaging directions can be set arbitrarily. Each camera 40 sends a captured image to the display device 100 in response to a command from the control device 10 described later or at preset timing.
 In the present embodiment, the captured image captured by the camera 40 is used not only for generating video but also for detecting objects and for measuring the distance to objects. The captured image of the camera 40 of the present embodiment includes images of objects around the vehicle.
 The objects in the present embodiment include other vehicles, pedestrians, road structures, parking lots, signs, facilities, and other objects existing around the host vehicle V. The objects in the present embodiment also include the ground surface around the moving body. In the present embodiment, "ground surface" is a term indicating a concept including the surface of the earth and the surface of the earth's crust (land). The term "ground surface" in the present embodiment includes a land surface, a sea surface, a river surface, a lake surface, a seabed surface, a road surface, a parking lot surface, a port surface, or a surface including two or more of these. When the moving body is indoors in a facility (building) such as a warehouse or a factory, the term "ground surface" in the present embodiment also includes the surface of structures such as the floor and walls of the facility. The term "surface" used in this description means a (tangible) surface exposed to the camera 40 at the time of imaging.
 The camera 40 of this embodiment includes an image processing device 401. The image processing device 401 extracts features such as edges, colors, shapes, and sizes from the captured image data of the camera 40, and identifies the attributes of objects included in the captured image from the extracted features. The image processing device 401 stores the features of each object in advance and identifies objects included in the captured image by pattern matching. The method of detecting the presence of an object from captured image data is not particularly limited, and any method known at the time of filing of this application can be used as appropriate. The image processing device 401 calculates the distance from the host vehicle to an object from the positions of feature points extracted from the captured image data, or from changes in those positions over time. When calculating the distance, the image processing device 401 uses imaging parameters such as the installation position of the camera 40, the optical axis direction, and the imaging characteristics. The method of measuring the distance to an object from captured image data is not particularly limited, and any method known at the time of filing of this application can be used as appropriate.
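 As one concrete illustration of computing distance from a feature point and imaging parameters, a flat-ground pinhole-camera model gives the distance to an object's ground-contact point from its image row. This is only a sketch under stated assumptions (focal length known in pixels, known camera mounting height, level road); the patent does not specify that the image processing device 401 uses exactly this method, and the function name is hypothetical.

```python
def ground_contact_distance(focal_px, camera_height_m, row_px, horizon_row_px):
    """Distance to an object's ground-contact point under a flat-ground,
    pinhole-camera model.

    focal_px        -- focal length expressed in pixels
    camera_height_m -- camera mounting height above the road surface
    row_px          -- image row of the object's ground-contact point
    horizon_row_px  -- image row of the horizon (optical axis level)
    """
    pixels_below_horizon = row_px - horizon_row_px
    if pixels_below_horizon <= 0:
        raise ValueError("ground-contact point must lie below the horizon")
    # Similar triangles: distance / height = focal / pixels_below_horizon
    return focal_px * camera_height_m / pixels_below_horizon
```

 For example, with a 1000-pixel focal length, a camera 1.2 m above the road, and a contact point 200 pixels below the horizon, the estimated distance is 6 m.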
 In the present embodiment, a distance measuring device 41 may be provided as means for acquiring data for calculating the positional relationship with the host vehicle V. The distance measuring device 41 may be used together with the camera 40 or instead of the camera 40. The distance measuring device 41 detects objects existing around the host vehicle V and measures the distance between each object and the host vehicle V. That is, the distance measuring device 41 has a function of detecting objects around the host vehicle V. The distance measuring device 41 sends the measured distance data to the display device 100.
 The distance measuring device 41 may be a radar distance measuring device or an ultrasonic distance measuring device, and any ranging method known at the time of filing of this application can be used. The number of distance measuring devices 41 installed on the host vehicle V is not particularly limited, and neither are their installation positions. A distance measuring device 41 may be provided at or near a position corresponding to an installation position of a camera 40 shown in FIG. 2, or in front of or behind the host vehicle V. When the moving body is a helicopter, an airplane, a submarine explorer, or the like that moves in the height direction, the camera 40 and/or the distance measuring device 41 may be provided on the bottom side of the body.
 The controller 50 of this embodiment controls the operation of the moving body including the host vehicle V. The controller 50 centrally manages information related to the operation of the moving body, including detection information from the sensor 60 described later.
 The sensor 60 of the present embodiment includes a speed sensor 61 and a longitudinal acceleration sensor 62. The speed sensor 61 detects the moving speed of the host vehicle V. The longitudinal acceleration sensor 62 detects the acceleration of the host vehicle V in the longitudinal direction.
 The navigation device 70 of the present embodiment includes a position detection device 71 having a GPS (Global Positioning System) 711, map information 72, and road information 73. The navigation device 70 obtains the current position of the host vehicle V using the GPS 711 and sends it to the display device 100. The map information 72 of the present embodiment is information in which points are associated with roads, structures, facilities, and the like. The navigation device 70 has a function of referring to the map information 72, obtaining a route from the current position of the host vehicle V detected by the position detection device 71 to a destination, and guiding the host vehicle V.
 The road information 73 of the present embodiment associates position information with road attribute information. The road attribute information includes attributes of each road, such as whether or not it is an overtaking lane and whether or not it is an uphill lane. By referring to the road information 73, the navigation device 70 can obtain, at the current position detected by the position detection device 71, information on whether a lane adjacent to the road on which the host vehicle V travels is an overtaking lane (a lane with a relatively high traveling speed) or an uphill lane (a lane with a relatively low traveling speed). The control device 10 can predict the vehicle speed of a detected other vehicle from the attribute information of the road on which that other vehicle travels.
 The display 80 of the present embodiment displays video, generated by the display device 100 described later, of the host vehicle V and its surroundings as seen from an arbitrary virtual viewpoint. In the present embodiment, the display system 1 in which the display 80 is mounted on the moving body is described as an example; however, the display 80 may instead be provided on the side of a portable display device 100 that can be brought into the moving body.
 Next, the display device 100 of the present embodiment will be described. The display device 100 of the present embodiment includes a control device 10.
 As shown in FIG. 1, the control device 10 of the display device 100 according to the present embodiment includes a ROM (Read Only Memory) 12 that stores a program for displaying the moving body and video of its surroundings, a CPU (Central Processing Unit) 11 serving as an operation circuit that realizes the functions of the display device 100 by executing the program stored in the ROM 12, and a RAM (Random Access Memory) 13 that functions as an accessible storage device. The control device 10 may further include a GPU (Graphics Processing Unit) that executes image processing.
 The control device 10 of the display device 100 according to the present embodiment realizes an image acquisition function, an information acquisition function, an image generation function, and a display function. The control device 10 of the present embodiment executes each function through cooperation between software for realizing that function and the hardware described above.
 Each function realized by the control device 10 is described below.
 First, the image acquisition function of the control device 10 will be described. The control device 10 of the present embodiment acquires data of captured images taken by the cameras 40. The display device 100 acquires the captured image data from the mobile device 200 using a communication device (not shown).
 Next, the information acquisition function of the control device 10 will be described. The control device 10 acquires various types of information from the mobile device 200 using a communication device (not shown). The control device 10 acquires the current position information of the host vehicle V as the moving body, that is, the current position detected by the GPS 711 of the navigation device 70.
 The control device 10 acquires position information of objects existing around the host vehicle V as the moving body. The acquired object position information is used in the setting processing for the position of the virtual light source described later. The control device 10 calculates the distance from the host vehicle V to an object from the captured images of the cameras 40 as the position information of the object relative to the host vehicle V. The control device 10 may use the imaging parameters of the cameras 40 in calculating the object position information, or may acquire object position information calculated by the image processing device 401.
 The control device 10 acquires the speed of an object. The control device 10 calculates the speed of the object from the change of its position information over time. The control device 10 may instead calculate the speed of the object based on the captured image data, or may acquire object speed information calculated by the image processing device 401.
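As a concrete illustration of deriving an object's speed from the change of its position information over time, the following minimal sketch computes speed from two successive planar position fixes. The function name, the 2-D coordinates in metres, and the sampling interval are illustrative assumptions, not part of the original disclosure.

```python
import math

def estimate_speed(p_prev, p_curr, dt):
    """Estimate an object's speed (m/s) from two successive position
    fixes (x, y) in metres taken dt seconds apart, i.e. from the change
    of the position information over time."""
    dx = p_curr[0] - p_prev[0]
    dy = p_curr[1] - p_prev[1]
    return math.hypot(dx, dy) / dt

# An object that moved 5 m ahead in 0.5 s is travelling at 10 m/s:
print(estimate_speed((0.0, 0.0), (5.0, 0.0), 0.5))  # 10.0
```

In practice the position fixes would come from the ranging or image-processing stages described above; any smoothing of noisy fixes is omitted here.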
 The control device 10 acquires attribute information of the lane in which an object travels. The control device 10 refers to the road information 73 of the navigation device 70 and acquires the attribute information of that lane. Specifically, the control device 10 refers to the map information 72 or the road information 73 and identifies the road and the travel lane that contain the acquired position of the object. The control device 10 then refers to the road information 73 and acquires the attribute information associated with the identified travel lane of the object. The control device 10 may also calculate the positional relationship between the object and the position of the host vehicle V detected by the GPS 711 and, taking that positional relationship into account, determine the attribute of the object's travel lane from the attribute of the travel lane of the host vehicle V. For example, where a rule exists that the relatively right-hand lane is the overtaking lane, if the other vehicle that is the object travels in the lane to the right of the host vehicle V and the travel lane of the host vehicle is a non-overtaking lane, it can be determined that the travel lane of the other vehicle is an overtaking lane.
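The rule-based judgment in the example above can be sketched as follows. The helper function, its parameters, and the three-valued result are hypothetical illustrations of the described reasoning, not the patent's actual logic.

```python
def judge_adjacent_lane(own_lane_is_overtaking, other_is_on_right,
                        right_lane_is_overtaking=True):
    """Judge whether another vehicle's lane is an overtaking lane from
    its positional relationship to the host vehicle, under the rule
    that the relatively right-hand lane is the overtaking lane.
    Returns True/False when the rule decides the question, else None."""
    if not right_lane_is_overtaking:
        return None  # the assumed rule does not apply
    if other_is_on_right and not own_lane_is_overtaking:
        return True   # other vehicle travels in the overtaking lane
    if (not other_is_on_right) and own_lane_is_overtaking:
        return False  # other vehicle travels in the slower lane
    return None       # cannot be decided from this rule alone

# Example from the text: host vehicle in a non-overtaking lane, other
# vehicle on its right, so the other lane is judged an overtaking lane.
print(judge_adjacent_lane(own_lane_is_overtaking=False, other_is_on_right=True))
```

The decided attribute could then feed the speed prediction for the other vehicle mentioned earlier.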
 The control device 10 acquires the acceleration, in the traveling direction, of the host vehicle V as the moving body, and it acquires this acceleration from the host vehicle V. The control device 10 acquires the longitudinal acceleration detected by the longitudinal acceleration sensor 62. Alternatively, the control device 10 may calculate the acceleration from the speed detected by the speed sensor 61, or from changes in the position information of the host vehicle V detected by the GPS 711.
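Where the acceleration is derived from the speed sensor 61 rather than read from the longitudinal acceleration sensor 62 directly, a backward finite difference of successive speed samples suffices. A minimal sketch, with names chosen for illustration:

```python
def acceleration_from_speeds(v_prev, v_curr, dt):
    """Approximate the traveling-direction acceleration (m/s^2) from
    two speed samples (m/s) taken dt seconds apart, as an alternative
    to reading a longitudinal acceleration sensor."""
    return (v_curr - v_prev) / dt

# Speed rising from 20 m/s to 22 m/s over one second:
print(acceleration_from_speeds(20.0, 22.0, 1.0))  # 2.0
```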
 Next, the image generation function of the control device 10 will be described.
 The control device 10 of the present embodiment uses the position information of the moving body to generate video including a position display image that indicates the position of the moving body within the captured image. Specifically, the control device 10 of the present embodiment sets a reference point using the position information of the moving body, and generates video containing the captured image together with a position display image indicating the position of the moving body as observed from that reference point. The position display image of the present embodiment is an image that represents the position where the moving body exists. The position of the moving body may be expressed actively, by a line drawing, a shadow, or the like, or passively, by a gap or change in the background.
 The position display image of the present embodiment may include a diagram image indicating the position of the moving body in the captured image as observed from a preset reference point. The control device 10 uses the position information of the moving body to set a viewpoint from which the moving body is viewed as the reference point, and generates a position display image including a diagram image that indicates the position of the moving body when viewed from that viewpoint. In this example, the reference point is the viewpoint from which the moving body is observed. The reference point may be set based on the position of the moving body, or based on the position of the projection plane described later. The diagram image of the position display image may be a graphic imitating the outer shape of the moving body or an icon suggestive of the moving body. The manner of representing the diagram image is not particularly limited: the line thickness, color, and line style (broken line, double line, and so on) are not limited, and neither are the hue, brightness, saturation, tone, pattern, gradation, or size (enlargement/reduction). The control device 10 of the present embodiment obtains the position of the moving body in the captured image using the position information of the moving body, and generates a position display image including a diagram image that indicates that position.
 The position display image of the present embodiment may include a background image indicating the position of the moving body as observed from a preset reference point. The control device 10 uses the position information of the moving body to set a viewpoint from which the moving body is viewed as the reference point, and generates a position display image including a background image that indicates the position of the moving body when viewed from that viewpoint. In this embodiment, the position of the moving body is expressed by an image of the background. In this example, the reference point is the viewpoint from which the moving body is observed; it may be set based on the position of the moving body, or based on the position of the projection plane described later. The manner of representing the background image is not particularly limited. The background image may contain an image-defect region that indicates the position of the moving body, or a line drawing that indicates that position. The line thickness, color, and line style (broken line, double line, and so on) of the background image are not limited, and neither are its hue, brightness, saturation, tone, pattern, gradation, or size (enlargement/reduction). The control device 10 of the present embodiment obtains the position of the moving body in the captured image using the position information of the moving body, and generates a position display image including a background image that indicates that position.
 The position display image of the present embodiment may include a shadow image indicating the position of the moving body as observed from a preset reference point. In this embodiment, the position of the moving body is expressed by an image of the moving body's shadow. In this example, the reference point is a light source that irradiates the moving body with light. Specifically, the control device 10 of the present embodiment sets a virtual light source as the reference point using the position information of the moving body, and generates a position display image including a shadow image that imitates the shadow produced when the virtual light source irradiates the moving body with light and thereby indicates the position of the moving body. The manner of representing the shadow image is not particularly limited: the line thickness, color, and line style (broken line, double line, and so on) are not limited, and neither are the hue, brightness, saturation, tone, pattern, gradation, or size (enlargement/reduction). The position display image is displayed at the position of the moving body in the coordinate system of the captured image, obtained using the actual position information of the moving body. The control device 10 of the present embodiment sets a virtual light source according to the position information of the host vehicle V, and generates a shadow image imitating the shadow produced when the virtual light source irradiates the host vehicle V with light.
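The geometry behind such a shadow image can be sketched by projecting points of the moving body onto the ground plane along rays from the virtual light source. The following is a minimal sketch under the assumption of a point light source above a flat ground plane z = 0; it illustrates the idea only and is not the patent's actual rendering method.

```python
def project_shadow_point(light, point):
    """Project a 3-D point onto the ground plane z = 0 along the ray
    from a virtual point light source, giving one vertex of a shadow
    image such as SH. Assumes light[2] > point[2] >= 0."""
    lx, ly, lz = light
    px, py, pz = point
    t = lz / (lz - pz)          # ray parameter where the ray meets z = 0
    return (lx + t * (px - lx), ly + t * (py - ly), 0.0)

# A light 10 m directly above the origin casts the vehicle corner at
# (2, 1, 1) onto the ground slightly outside the corner itself:
print(project_shadow_point((0.0, 0.0, 10.0), (2.0, 1.0, 1.0)))
```

Projecting every vertex of an outline of the moving body in this way, and filling the resulting polygon, yields a shadow-like shape that tracks the body's position and orientation.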
 The control device 10 of the present embodiment displays the video obtained by projecting the captured image onto a three-dimensional coordinate system, including in that video the position display image indicating the position of the moving body. The displayed video contains part or all of the captured image together with the position display image.
 The control device 10 generates a position display image indicating the position of the moving body. The position display image includes diagram images, background images, and shadow images.
 The control device 10 sets a viewpoint as the reference point according to the acquired position information of the host vehicle V, and generates diagram images and background images imitating the view when the moving body and objects are seen from that reference point. The control device 10 also sets a virtual light source as the reference point according to the acquired position information of the host vehicle V, and generates shadow images imitating the shadows produced when the virtual light source irradiates the moving body and objects with light. A shadow image may be an image approximating the shadow itself, or an image in which the shadow is deformed in order to display the position, orientation, and so on of the host vehicle V.
 The position display image, whether a diagram image, a background image, or a shadow image, does not strictly correspond to the current shape of the host vehicle V; it is an image capable of indicating the position where the host vehicle V exists. Since only the position needs to be conveyed, the shape of the host vehicle V or the like need not be imitated. However, a position display image imitating the shape of the host vehicle V or the like may be generated so that, for example, the traveling direction of the host vehicle V can be understood. The shadow image is an image that looks like a shadow, not an actual shadow. A pattern or color may be added to the position display image, and its hue, brightness, saturation, tone, pattern, and gradation can be changed according to the illuminance outside the moving body, the brightness of the captured image, and the like.
 The position display image, including the diagram image, background image, and shadow image, is not limited to a shape corresponding to the current shape of the host vehicle V itself, and may be an image showing a movable range of the host vehicle V. For example, the position display image of the present embodiment may be an image showing the movable range of a door of the host vehicle V, currently closed, if that door were opened. The form of the position display image is not limited, and is designed as appropriate according to the information to be shown to the user. Shadow mapping techniques known at the time of filing may be used to generate the position display images (diagram images, background images, shadow images).
 FIG. 3A shows an example of a cylindrical three-dimensional coordinate system S1, in which the host vehicle V is shown placed on a plane G0. FIG. 3B shows a display example of a position display image SH, comprising a diagram image, background image, or shadow image, in the three-dimensional coordinate system S1 of FIG. 3A; the position display image SH in this example is projected onto the plane G0. Similarly, FIG. 4A shows an example of a spherical three-dimensional coordinate system S2, in which the host vehicle V is shown placed on a plane G0, and FIG. 4B shows a display example of a position display image SH of the host vehicle V, comprising a diagram image, background image, or shadow image, in the three-dimensional coordinate system S2 of FIG. 4A. The position display image SH is projected onto the plane G0.
 The position display image SH shown in FIGS. 3B and 4B is a shadow image SH imitating the shadow produced when the host vehicle V is irradiated with light from a virtual light source LG set in the three-dimensional coordinate system S1 or S2. Alternatively, the position display image SH shown in FIGS. 3B and 4B is a diagram image showing the position of the host vehicle V when the host vehicle V is viewed from a viewpoint LG set in the three-dimensional coordinate system S1 or S2. Although not shown, the captured image is projected onto the three-dimensional coordinate system S (S1, S2) of FIGS. 3B and 4B. The position display image SH may also indicate the position of the host vehicle V as seen from the viewpoint LG by modifying the background image of the captured image projected onto the three-dimensional coordinate system S1 or S2, for example by shifting the relative position of the background image. As shown in FIGS. 3B and 4B, the position display image SH of the host vehicle V can express the presence and the orientation of the host vehicle V. As a result, even when video of surrounding objects is projected onto the three-dimensional coordinate system S1 or S2, video can be displayed in which the positional relationship between the host vehicle V and the objects is easy to grasp.
 Note that the shape of the three-dimensional coordinate system S of the present embodiment is not particularly limited, and may be the bowl shape disclosed in JP 2012-138660 A.
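To make the idea of a cylindrical projection surface such as S1 concrete, the sketch below radially maps a world point onto a cylinder of a chosen radius centred on the moving body, preserving the point's height. The radius value and function name are illustrative assumptions; spherical or bowl-shaped surfaces would use analogous mappings.

```python
import math

def to_cylinder(point, radius):
    """Map a world point (x, y, z) onto a cylindrical projection
    surface of the given radius centred on the vehicle origin, keeping
    the point's azimuth and height. A minimal sketch of projecting
    imagery onto a cylindrical three-dimensional coordinate system."""
    x, y, z = point
    r = math.hypot(x, y)
    if r == 0:
        raise ValueError("a point on the cylinder axis has no azimuth")
    s = radius / r
    return (x * s, y * s, z)

# A point 5 m from the axis is pushed out to the radius-10 surface:
print(to_cylinder((3.0, 4.0, 1.5), 10.0))  # (6.0, 8.0, 1.5)
```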
 The control device 10 of the present embodiment sets a reference point according to the acquired position information of an object such as another vehicle, and generates a position display image indicating the position of the object as observed from that reference point. The position display image includes the diagram images, background images, and shadow images described above. The control device 10 of the present embodiment uses the acquired position information of an object such as another vehicle VX to set a viewpoint from which the object is viewed as the reference point, and generates a diagram image or background image indicating the position of the object when viewed from that viewpoint. The control device 10 of the present embodiment also sets a virtual light source as the reference point according to the acquired position information of an object such as another vehicle VX, and generates a shadow image imitating the shadow produced when the virtual light source irradiates the object with light. In the present embodiment, the position display images (diagram images, background images, shadow images) indicating the positions of the moving bodies are projected onto the three-dimensional coordinate system S1 or S2. The reference point for generating a diagram image or background image may be set according to a reference position set in advance on the moving body (the center of gravity, the driver's seat position, and so on), or may be set at a predetermined position in the three-dimensional coordinate system S1 or S2. In this embodiment, the reference point is set according to the height of the headrest of the driver's seat of the moving body.
 FIG. 5 shows an example of video including a position display image (diagram image, background image, or shadow image) SH of the host vehicle V, a position display image SH1 of another vehicle VX1 as an object, and a position display image SH2 of another vehicle VX2. The control device 10 of the present embodiment displays, on a projection plane SQ, the position display images SH indicating the positions of the moving bodies, including the host vehicle V and/or the other vehicles VX1 and VX2. The position display image SH is an image showing the position of a moving body when the moving body is observed from a reference point set at a position corresponding to the moving body's position. The position display image SH may be a shadow image imitating the shadow produced when the moving body is irradiated with light from a virtual light source. The position of the reference point LG, serving as the viewpoint or the virtual light source, may be set according to a reference position θ of the moving body. The reference position θ can be set arbitrarily according to the position of the center of gravity, the center, or the like of the moving body. The reference point (viewpoint or virtual light source) LG may be a single point shared by the host vehicle V, the other vehicle VX1, and the other vehicle VX2, or may be set for each of them. The reference point (viewpoint or virtual light source) LG for the host vehicle V and that for the other vehicle VX1 (VX2) may stand in the same positional relationship or in different positional relationships. By displaying, in this way, the position display images (diagram images, background images, shadow images) not only of the host vehicle V but also of objects such as the other vehicles VX, video can be displayed in which the positional relationship between the host vehicle V and the objects is easy to grasp.
 As shown in FIG. 5, the control device 10 of the present embodiment sets the projection plane SQ, onto which the position display images (diagram images, background images, shadow images) SH are projected, along the direction in which the host vehicle V moves. The control device 10 of the present embodiment generates the position display image (diagram image, background image, or shadow image) SH1 obtained when the other vehicle VX1, traveling in the adjacent lane Ln1 next to the travel lane Ln2 in which the host vehicle V travels (moves), is observed from the reference point.
 By thus projecting the position display image (diagram image, background image, or shadow image) SH1 of the other vehicle VX1 (the object) traveling in the adjacent lane Ln1 onto the common projection plane SQ together with the position display image (diagram image, background image, or shadow image) SH of the host vehicle V, video can be presented in which the positional relationship between the host vehicle V and the other vehicle VX1 is easy to grasp. Moreover, by setting the projection plane SQ along the traveling direction of the host vehicle V, video can be presented in which the distance between the host vehicle V and the other vehicle VX1 is easily recognized. The captured image, the position display image (diagram image, background image, shadow image) SH of the host vehicle V, and the position display image (diagram image, background image, shadow image) SH1 of the other vehicle VX1 (the object) are projected onto the projection plane SQ.
 Furthermore, by setting the projection plane SQ so as to be substantially orthogonal to (intersecting at 90 degrees with) the road surface of the lane Ln2 in which the host vehicle V travels, the position display image SH of the host vehicle V and the position display image SH1 of the other vehicle VX1 can be displayed at positions easy for the driver of the host vehicle V to see. In other words, video can be presented from which the driver of the host vehicle V can accurately recognize, from the driver's seat, the positional relationship between the host vehicle V and the other vehicle VX1.
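The role of a projection plane like SQ, set along the traveling direction and roughly orthogonal to the road surface, can be illustrated by orthogonally projecting positions onto a vertical plane. This is a geometric sketch only; the plane parameters, function names, and the choice of orthogonal projection are assumptions for illustration.

```python
def project_onto_plane(point, plane_point, unit_normal):
    """Orthogonally project a 3-D point onto the plane that passes
    through plane_point with the given unit normal. Choosing the
    normal horizontal and perpendicular to the host vehicle's heading
    makes the plane run along the traveling direction and stand
    orthogonal to the road surface, like the projection plane SQ."""
    d = sum((p - q) * n for p, q, n in zip(point, plane_point, unit_normal))
    return tuple(p - d * n for p, n in zip(point, unit_normal))

# Host vehicle heading along the y-axis; the plane is the vertical
# plane x = 2. A point at (5, 3, 1) lands at (2, 3, 1) on the plane.
print(project_onto_plane((5.0, 3.0, 1.0), (2.0, 0.0, 0.0), (1.0, 0.0, 0.0)))
```

Projecting both the host vehicle's and another vehicle's positions onto the same plane in this way preserves their along-road separation, which is why distances read off the plane are easy to judge.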
 The projection position when a diagram image SH (or background image SH) is projected onto the projection plane SQ is determined according to an arbitrary reference position θ of the host vehicle V (or of the other vehicles VX1, VX2) so as to indicate where the host vehicle V (or the other vehicle VX1, VX2) exists. The position on the XZ coordinates of the arbitrary reference position θ of the host vehicle V (or other vehicles VX1, VX2) shown in FIG. 5 is preferably shared with the position on the XZ coordinates of the projection plane SQ.
When the position of the host vehicle V shown in FIG. 5 is the same, the diagram image SH, the background image SH, and the shadow image SH are displayed at substantially the same position on the projection plane SQ. Similarly, when the position of the other vehicle VX2 is the same, the diagram image SH2, the background image SH2, and the shadow image SH2 are displayed at substantially the same position on the projection plane SQ.
Hereinafter, aspects of the position display image including the shadow image SH, the diagram image SH, and the background image SH will be described with reference to FIGS. 6A-6F, FIG. 7, and FIGS. 8A-8D. A region Rd1 in each figure is a background image Rd1 corresponding to road structures such as the travel path of the moving body, the travel path adjacent to it, and a guardrail or roadside strip. A region Rd2 in each figure (the region above the region Rd1) is a background image Rd2 corresponding to trees, buildings, signs, and the like along the travel path of the moving body. The background image Rd1 and the background image Rd2 are displayed on the projection plane SQ. The projection plane SQ may include both of the background images Rd1 and Rd2, or may include either one of them.
FIGS. 6A-6F are views showing aspects of a position display image including a shadow image and of a position display image including a diagram image SH imitating the shape of a vehicle. When the moving body is a vehicle, the shadow image also approximates the shape of the vehicle. Here, in order to avoid duplicated description, the position display image including the shadow image and the position display image including the diagram image SH of a figure imitating the shape of the vehicle are described with reference to the same drawings.
FIG. 6A shows a position display image in which the shadow region of the shadow image SH6a is shown as an opaque region (transmittance less than a predetermined value). FIG. 6A likewise shows a position display image in which the in-figure region of the diagram image SH6a is shown as an opaque (low-transmittance) region.
FIG. 6B shows a position display image in which the shadow region of the shadow image SH6b is shown as a semi-transparent region (transmittance equal to or greater than a predetermined value). FIG. 6B likewise shows a position display image in which the in-figure region of the diagram image SH6b is shown as a semi-transparent region (transmittance equal to or greater than a predetermined value).
FIG. 6C shows a position display image in which the brightness of the shadow region (or diagram region) of the shadow image (or diagram image) SH6c is expressed higher than the brightness of the region other than the shadow image (or diagram image). When the surroundings of the moving body are dark, the brightness of the background images Rd1 and Rd2 becomes low. When the captured image is included in the position display image, if the brightness of the shadow image (or diagram image) SH6a is made low as in FIG. 6A, the drawing position of the shadow image (or diagram image) SH6a may become unclear. For this reason, in the present embodiment, when the brightness around the moving body is less than a predetermined value, the brightness of the shadow region (or diagram region) of the shadow image (or diagram image) SH6c is adjusted so as to be higher than the brightness of the region other than the shadow image (or diagram image), and a position display image including the brightness-adjusted shadow image (or diagram image) SH6c is displayed.
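The brightness adjustment described for FIG. 6C can be sketched as follows. This is a minimal illustration in Python with brightness normalized to [0, 1]; the function name, threshold, and margin are assumptions for the example, since the embodiment specifies only that the shadow (or diagram) region be drawn brighter than its surroundings when the surroundings are dark.

```python
def adjust_shadow_brightness(ambient_brightness, background_brightness,
                             dark_threshold=0.3, margin=0.2):
    """Choose a brightness for the shadow (or diagram) region.

    When the surroundings of the moving body are dark (ambient brightness
    below a threshold), draw the shadow region brighter than the background
    so its drawing position stays clear; otherwise draw it darker, like an
    ordinary shadow. All values are normalized to [0.0, 1.0].
    """
    if ambient_brightness < dark_threshold:
        # Dark scene: shadow region brighter than its surroundings.
        return min(1.0, background_brightness + margin)
    # Bright scene: conventional dark shadow.
    return max(0.0, background_brightness - margin)


# Night driving: dim background, shadow region drawn brighter than it.
print(adjust_shadow_brightness(0.1, 0.15))
# Daytime: bright background, shadow region drawn darker than it.
print(adjust_shadow_brightness(0.9, 0.8))
```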
FIG. 6D shows a position display image in which the shadow region (or diagram region) of the shadow image (or diagram image) SH6d is colored. Since colors cannot be expressed in the drawings attached to the application, hatching is used for convenience to indicate that the shadow image (or diagram image) SH6d is colored. The hue of the color of the shadow region (or diagram region) is not limited, and neither are its brightness and saturation.
FIG. 6E shows a position display image including the outline of the shadow region (or diagram region) of the shadow image (or diagram image) SH6e. The inside surrounded by the outline of the shadow region (or diagram region) shown in FIG. 6E is transparent or semi-transparent. Of course, the inside surrounded by the outline may instead be made opaque, or may be given a color or a pattern.
FIG. 6F shows a position display image in which part of the shadow region (or diagram region) of the shadow image (or diagram image) SH6f is blurred. In the shadow region (or diagram region) shown in FIG. 6F, a gradation is applied so that the brightness decreases from the inside toward the outside. This makes it possible to clearly show the outline of the shadow region (or diagram region). In addition, by increasing the transparency of the inside, an object on the far side of the moving body can be shown. As a result, the position of the moving body and the presence of an object on the far side (to the side) of the moving body can be confirmed at the same time.
FIG. 7 is a diagram showing an aspect of a position display image including a background image.
FIG. 7 is an example in which the shadow image (or diagram image) SH7a is shown as semi-transparent or transparent, and the height (position in the Z direction) of the road background image Rd1a transparently displayed inside the shadow image (or diagram image) SH7a is changed. The height in the Z-axis direction of the road background image Rd1a transparently displayed inside the shadow image (or diagram image) SH7a is lower than the height in the Z-axis direction of the other road background images Rd1b and Rd1c. As a result, the height in the Z-axis direction of the road background images Rd1a, Rd1b, and Rd1c, which should be continuous, changes at the front end and the rear end of the shadow image (or diagram image) SH7a. From the position at which the height of the road background image Rd1 changes in the position display image, the user can recognize the position (position in the X direction) of the moving body.
FIG. 7 is also an example in which the shadow image (or diagram image) SH7a is shown as semi-transparent or transparent, and the lateral position (position in the X direction) of the utility pole background image Rp1a transparently displayed inside the shadow image (or diagram image) SH7a is changed. The position in the X-axis direction of the utility pole background image Rp1a transparently displayed so as to overlap the shadow image (or diagram image) SH7a is shifted to the right in the figure relative to the position in the X-axis direction of the other utility pole background image Rp1b. As a result, the position in the X-axis direction of the utility pole background images Rp1a and Rp1b, which should be continuous, changes at the boundary of the shadow image (or diagram image) SH7a. From the position at which the lateral position of the utility pole background image Rp1 changes in the position display image, the user can recognize the position (position in the X-axis direction) of the moving body.
Although not shown, a region in which part of the background image is cut out (a blank outline) may be formed along the outer edge of the shadow image (or diagram image) SH7a. In other words, a contour region along the outer edge of the shadow image (or diagram image) SH7a may be added to the background image Rd1 and the background image Rd2.
FIGS. 8A-8D are diagrams showing aspects of a position display image including a diagram. While FIGS. 6A-6F show examples of figures imitating the outline of the moving body, FIGS. 8A-8D show aspects of a position display image including axes and figures that indicate coordinate positions.
FIG. 8A is an example in which the position of the moving body is indicated by a circular diagram image SH8a. The position at which the circle is displayed can indicate the position of the moving body on the projection plane SQ. The diagram image SH8a shown in FIG. 8A further has a vertical line SH8az along the Z-axis direction of the projection plane SQ and a horizontal line SH8ax along the X-axis direction of the projection plane SQ. The vertical line SH8az and the horizontal line SH8ax are substantially orthogonal at a center point SH8a0. The center point SH8a0 corresponds to a reference position such as the center of gravity of the moving body. The vertical line SH8az and the horizontal line SH8ax may be graduated. In the example shown in FIG. 8A, the XZ coordinates are graduated with the middle of the radius of the circular diagram image SH8a as a reference. This allows the user to grasp concretely the positional relationship between the external world shown on the projection plane SQ and the moving body.
FIG. 8B is an example in which the position of the moving body is indicated using coordinate axes SH8bx and SH8bz that are substantially orthogonal at a reference point SH8b0. The reference point SH8b0 corresponds to a reference position such as the center of gravity of the moving body. The positions of the reference point and the coordinate axes can indicate the position of the moving body on the projection plane SQ.
FIG. 8C is an example in which the position of the moving body is indicated using two vertical lines SH8c1 and SH8c2 along the Z direction of the projection plane SQ. The position of the vertical line SH8c1 corresponds to the position of the front end (or rear end) of the moving body. The position of the vertical line SH8c2 corresponds to the position of the rear end (or front end) of the moving body. The positions of the two vertical lines SH8c1 and SH8c2 in the X-axis direction can indicate the position of the moving body on the projection plane SQ.
FIG. 8D is an example in which the position of the moving body is indicated using a rectangular region SH8d. One end of the region SH8d in the X-axis direction is defined by a vertical line SH8d1, and the other end is defined by a vertical line SH8d2. The height of the vertical line SH8d1 in the Z-axis direction is not limited; in this example, the height of the vertical line SH8d1 in the Z-axis direction is the same as the height of the projection plane SQ in the Z-axis direction. The position of the vertical line SH8d1 in the X-axis direction corresponds to the position of the front end (or rear end) of the moving body. The position of the vertical line SH8d2 in the X-axis direction corresponds to the position of the rear end (or front end) of the moving body. The positions of the two vertical lines SH8d1 and SH8d2 in the X direction can indicate the position of the moving body on the projection plane SQ.
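The two bounding vertical lines of FIGS. 8C and 8D follow from the moving body's reference position and length by simple arithmetic. The Python sketch below is illustrative only; the function and parameter names are assumptions, not terms from the embodiment.

```python
def vehicle_extent_on_projection_plane(center_x, vehicle_length):
    """Return the X positions of the two vertical lines (as in FIGS. 8C/8D)
    that bound the moving body on the projection plane SQ.

    center_x is the X coordinate of the vehicle's reference position
    (e.g. its center of gravity) projected onto the plane; vehicle_length
    is its length along the X axis.
    """
    half = vehicle_length / 2.0
    front_line_x = center_x + half  # vertical line at the front end
    rear_line_x = center_x - half   # vertical line at the rear end
    return front_line_x, rear_line_x


front, rear = vehicle_extent_on_projection_plane(center_x=12.0, vehicle_length=4.5)
print(front, rear)  # 14.25 9.75
```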
The control device 10 of the present embodiment generates a video including a position display image indicating the position of a target object in the captured image, using position information of target objects existing around the moving body acquired by the information acquisition function. Like the position display image of the host vehicle (moving body), a position display image of a target object such as another vehicle includes a diagram image, a background image, and a shadow image. For these, the above description is incorporated here to avoid duplicated explanation.
As in the position display image shown in FIG. 5, the control device 10 of the present embodiment observes, from the reference point, the preceding other vehicle VX2 traveling ahead in the travel lane in which the host vehicle V travels (moves), and generates a position display image (diagram image, background image, shadow image) of the preceding other vehicle VX2 as observed from the reference point.
Incidentally, as shown in FIG. 5, when the host vehicle V is about to change lanes from the lane Ln2 to the lane Ln1 along the route indicated by the arrow Rn, the driver of the host vehicle V pays attention to the behavior of the other vehicle VX1 traveling behind in the adjacent lane Ln1 and of the other vehicle VX2 traveling ahead in the travel lane Ln2 of the host vehicle V. In such a scene, the control device 10 of the present embodiment projects the position display image SH of the host vehicle V, the position display image SH1 of the other vehicle VX1, and the position display image SH2 of the other vehicle VX2 onto the common projection plane SQ. This allows the driver of the host vehicle V to correctly grasp the positional relationship among the host vehicle V, the other vehicle VX1, and the other vehicle VX2 from the positional relationship among the position display images SH, SH1, and SH2. The timing at which the host vehicle is about to change lanes from the lane Ln2 to the lane Ln1 is determined from the steering angle of the host vehicle, turn-signal operation, braking operation, and the like.
In this case, the projection position of the position display image such as the shadow image SH is changed according to the acceleration or deceleration of the host vehicle V. Specifically, when the host vehicle V is accelerating, the control device 10 shifts the position of the position display image (diagram image, background image, shadow image) SH forward. On the other hand, when the host vehicle V is decelerating, the control device 10 shifts the position of the position display image (diagram image, background image, shadow image) SH backward. As a result, a position display image SH that makes the positional relationship between the host vehicle V and the other vehicles VX easy to grasp can be displayed according to the situation of the host vehicle V. In this processing, the shadow image SH, the diagram image SH, and the background image SH can be used as the position display image SH.
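A minimal sketch of this acceleration-dependent shift, assuming a linear relation between longitudinal acceleration and shift amount (the embodiment specifies only the direction of the shift, so the gain is a made-up value):

```python
def shift_projection_position(base_x, longitudinal_accel, gain=0.5):
    """Shift the X position of the position display image SH on the
    projection plane SQ according to the host vehicle's acceleration.

    Accelerating (positive value) shifts the image forward; decelerating
    (negative value) shifts it backward.
    """
    return base_x + gain * longitudinal_accel


print(shift_projection_position(10.0, 2.0))   # accelerating: shifted forward
print(shift_projection_position(10.0, -2.0))  # decelerating: shifted backward
```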
Further, the projection position of the position display image (diagram image, background image, shadow image) is changed according to the difference between the flow speed of the lane in which the host vehicle V travels and the speed of the host vehicle V. The flow speed of the lane may be the average speed of the other vehicles VX traveling in the same lane as the host vehicle V, or may be the legal speed of that lane. Specifically, when the difference (a positive value) between the vehicle speed of the host vehicle V and the flow speed of the lane is large, that is, when the host vehicle V is approaching or overtaking a preceding other vehicle, the control device 10 shifts the position of the position display image (diagram image, background image, shadow image) SH forward. On the other hand, when the difference between the vehicle speed of the host vehicle V and the flow speed of the lane is small, or the difference (a negative value) is large, that is, when the host vehicle V is being approached or overtaken by a following other vehicle, the control device 10 shifts the position of the position display image SH backward. As a result, a position display image (diagram image, background image, shadow image) SH that makes the positional relationship between the host vehicle V and the other vehicles VX easy to grasp can be displayed according to the relative situation of the host vehicle V and the other vehicles VX. In this processing, the shadow image SH, the diagram image SH, and the background image SH can be used as the position display image SH.
The control device 10 of the present embodiment generates the position display images SH1 and SH2 so that the area of the position display image (diagram image, background image, shadow image) of a target object whose speed is relatively high, including another vehicle VX, is larger than the area of the position display image of a target object whose speed is relatively low. The control device 10 of the present embodiment acquires the vehicle speed P1 of the other vehicle VX1 and the vehicle speed P2 of the other vehicle VX2, and compares the vehicle speeds P1 and P2. The control device 10 generates the position display image SH of the other vehicle VX with the higher vehicle speed so that its area is larger than that of the position display image SH of the other vehicle VX with the lower vehicle speed. In this example, when the vehicle speed P1 of the other vehicle VX1 is higher than the vehicle speed P2 of the other vehicle VX2, the area of the position display image SH1 of the other vehicle VX1 is made larger than the area of the position display image SH2 of the other vehicle VX2, as shown in FIG. 5. By displaying the position display image of the faster vehicle with a larger area, the driver's attention can be drawn to the other vehicle VX traveling at high speed. In this processing, the shadow image SH, the diagram image SH, and the background image SH can be used as the position display image SH.
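The speed-dependent sizing can be sketched as follows, assuming a linear scaling law with respect to the relative speed; the scaling factor and function names are assumptions for illustration, since the embodiment requires only that a faster target object's image have a larger area.

```python
def display_area(base_area, other_speed, own_speed, scale=0.02):
    """Compute the display area of another vehicle's position display image.

    Vehicles faster than the host vehicle get a larger image so the
    driver's attention is drawn to them. Speeds are in km/h.
    """
    relative = max(0.0, other_speed - own_speed)
    return base_area * (1.0 + scale * relative)


# VX1 at 110 km/h is drawn larger than VX2 at 90 km/h (host at 80 km/h).
area_vx1 = display_area(1.0, 110.0, 80.0)
area_vx2 = display_area(1.0, 90.0, 80.0)
print(area_vx1 > area_vx2)  # True
```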
Although not particularly limited, the control device 10 of the present embodiment may enlarge the area of the position display image (diagram image, background image, shadow image) SH of another vehicle VX as the speed of the other vehicle VX increases. A driver using the present system can then predict the speed of the other vehicle VX from the size of its position display image SH. The speed of the other vehicle VX may be an absolute speed, or may be a speed relative to the vehicle speed of the host vehicle V. In the latter case, the position display image SH of another vehicle VX that is approaching the host vehicle V at a high rate can be displayed large. Of course, the hue, brightness, saturation, tone, pattern, gradation, and size (enlargement/reduction) of the position display image may also be changed according to the speed of the other vehicle VX relative to the host vehicle V. In this processing, the shadow image SH, the diagram image SH, and the background image SH can be used as the position display image SH.
Further, when the control device 10 of the present embodiment acquires attribute information indicating that the lane Ln1 in which another vehicle VX travels is an overtaking lane, it generates the position display image SH so that the area of the position display image (diagram image, background image, shadow image) SH of the other vehicle VX traveling in the overtaking lane is larger than when it acquires attribute information indicating that the lane (such as Ln2) in which the other vehicle VX travels is not an overtaking lane. This is because the speed of another vehicle VX traveling in an overtaking lane can be predicted to be higher than the speed of another vehicle VX traveling in a non-overtaking lane. The attribute information indicating whether a lane is an overtaking lane is acquired from the map information 72 and/or the road information 73 of the navigation device 70. The method of identifying and acquiring the lane attribute information is not particularly limited, and any method known at the time of filing can be used as appropriate.
The control device 10 of the present embodiment may also change the area of the position display image (diagram image, background image, shadow image) SH according to the operating skill of the operator of the moving body, for example the driver of the host vehicle V. For example, when the operator's skill is low, the control device 10 generates the position display image SH so that its area is larger than when the operator's skill is high. This makes it possible to show the positions of the host vehicle V and the other vehicles VX, and their positional relationship, in an easy-to-understand manner to an operator with low skill. The operator's skill may be input by the operator, or may be determined based on experience such as the number of times of operation and the distance traveled. The operator's skill may also be determined from the operator's past operation history. For example, the operation history of a highly skilled operator is compared with the operation history of the individual operator; when the difference is large, the operating skill is determined to be low, and when the difference is small, the operating skill is determined to be high. Taking the driving of a vehicle as an example, the driving skill can be determined based on the acceleration operation, the timing of the steering operation, and the steering amount when changing travel lanes. In this processing, the shadow image SH, the diagram image SH, and the background image SH can be used as the position display image SH.
In the example shown in FIG. 5, the lane Ln1 is an overtaking lane, and the lane Ln2 is not an overtaking lane (it is a climbing lane). For this reason, the area of the position display image (diagram image, background image, shadow image) SH1 of the other vehicle VX1 traveling in the lane Ln1 is larger than that of the position display image (diagram image, background image, shadow image) SH2 of the other vehicle VX2 traveling in the lane Ln2 and that of the position display image SH of the host vehicle V. By changing the size of the position display image SH according to the attribute of the lane in which the host vehicle V and the other vehicles VX travel in this way, the driver's attention can be drawn to another vehicle VX traveling at high speed, just as when the size of the position display image SH is controlled by the actual vehicle speed. In this processing, the shadow image SH, the diagram image SH, and the background image SH can be used as the position display image SH.
Furthermore, when the control device 10 of the present embodiment determines from the acquired acceleration of the host vehicle V that the host vehicle V is in an accelerating state, it shifts the position of the reference point LG to the side opposite to the traveling direction of the host vehicle V (the direction of arrow F′ in the figure, opposite to arrow F). This shifts the projection position of the position display image SH toward the traveling direction of the host vehicle V (the direction of arrow F in the figure). Also, when the shadow image SH is used as the position display image and the control device 10 of the present embodiment determines from the acquired acceleration of the host vehicle V that the host vehicle V is in an accelerating state, it shifts the position of the virtual light source LG to the side opposite to the traveling direction of the host vehicle V (the direction of arrow F′ in the figure). In the example shown in FIG. 5, when it is determined that the host vehicle V is in an accelerating state, the reference point (viewpoint, virtual light source) LG is shifted to a rearward position, for example the position of the reference point (viewpoint, virtual light source) LG2 (shown with a broken line).
In this way, in the accelerating state, the position of the reference point (viewpoint, virtual light source) LG is shifted backward and the projection position of the position display image (diagram image, background image, shadow image) SH is shifted forward, so that a position display image SH suited to the driver's judgment situation can be displayed. In this processing, the shadow image SH, the diagram image SH, and the background image SH can be used as the position display image SH.
When the control device 10 of the present embodiment determines from the acquired acceleration that the host vehicle V is in a decelerating state, it shifts the position of the reference point (viewpoint, virtual light source) LG toward the traveling direction of the host vehicle V (the direction of arrow F in the figure). As a result, when it is determined that the host vehicle V is in a decelerating state, the projection position of the position display image SH can be shifted to the side opposite to the traveling direction of the host vehicle V (the direction of arrow F′ in the figure). Also, when the shadow image SH is used as the position display image and the control device 10 of the present embodiment determines from the acquired acceleration that the host vehicle V is in a decelerating state, it shifts the position of the virtual light source LG toward the traveling direction of the host vehicle V (the direction of arrow F in the figure). In the example shown in FIG. 5, when it is determined that the host vehicle V is in a decelerating state, the reference point (viewpoint, virtual light source) LG is shifted to a forward position, for example the position of the reference point LG1 (shown with a broken line).
In this way, in the decelerating state, the position of the reference point LG is shifted forward and the projection position of the position display image (diagram image, background image, shadow image) SH is shifted backward, so that a position display image suited to the driver's judgment situation can be displayed. In this processing, the shadow image SH, the diagram image SH, and the background image SH can be used as the position display image SH.
The setting position of the reference point (viewpoint, virtual light source) LG is not limited; it may be the same position as the virtual viewpoint used in the projection processing. In that case, the position of the virtual viewpoint from which the host vehicle V is viewed can be recognized from the shape of the position display image (diagram image, background image, shadow image) SH, making it easier to grasp the positional relationship between the host vehicle V and its surroundings. The reference point LG may also be placed at infinity. In this case, parallel projection becomes possible, so the positional relationship between the host vehicle V and objects (such as other vehicles VX) is easy to grasp from the position display image SH.
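The effect of the reference point position on the projected image can be illustrated with a minimal geometric sketch. This is an illustration only; the coordinate conventions and function names below are assumptions of ours, not part of the embodiment:

```python
def project_point(light, p, ground_z=0.0):
    """Centrally project point p onto the plane z = ground_z along the
    ray from a finite light source / viewpoint `light` through `p`."""
    lx, ly, lz = light
    px, py, pz = p
    t = (lz - ground_z) / (lz - pz)   # ray parameter where the ray meets the plane
    return (lx + t * (px - lx), ly + t * (py - ly), ground_z)

def project_point_parallel(direction, p, ground_z=0.0):
    """Project p onto z = ground_z along a fixed direction, i.e. with the
    reference point LG placed at infinity (parallel projection)."""
    dx, dy, dz = direction
    t = (ground_z - p[2]) / dz
    return (p[0] + t * dx, p[1] + t * dy, ground_z)

corner = (2.0, 0.0, 1.5)                        # a roof corner 1.5 m above the ground
print(project_point((0.0, 0.0, 10.0), corner))  # finite source: shadow pushed outward
print(project_point_parallel((0.0, 0.0, -1.0), corner))  # source at infinity
```

With the reference point at infinity the projected point keeps the same X-Y coordinates as the body itself, which is why a parallel projection makes distances between the host vehicle V and objects easy to read off the position display image SH.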
The setting position of the virtual viewpoint is not particularly limited, and the position of the virtual viewpoint may be changeable according to the user's designation. The surface on which the position display image SH is projected (rendered) may be the road surface on which the host vehicle travels, or a projection plane set as shown in FIG. 5. When the moving body is a helicopter, a position display image (diagram image, background image, shadow image) indicating its position may be projected onto the ground surface below the helicopter. When the moving body is a ship, its shadow image may be projected onto the sea surface.
Finally, the display function of the control device 10 of the present embodiment will be described. The control device 10 projects the captured image data acquired from the camera 40 onto the three-dimensional coordinate system S or the projection plane SQ, and generates a video of the host vehicle V and surrounding objects seen from the set virtual viewpoint. The control device 10 then displays the generated video on the display 80. The display 80 may be mounted on the host vehicle V and configured as part of the mobile device 200, or it may be provided on the display device 100 side. The display 80 may be a display for two-dimensional images, or a display that presents a three-dimensional image in which the positional relationship in the depth direction of the screen can be visually recognized.
The video displayed in the present embodiment includes the position display image (diagram image, background image, shadow image) SH of the host vehicle V. The video may also include the position display images (diagram images, background images, shadow images) SH1 and SH2 of other vehicles VX as objects. Of course, the displayed video may include both the position display image SH of the host vehicle V and the position display images SH1 and SH2 of the other vehicles VX. In this process, a shadow image SH, a diagram image SH, or a background image SH can be used as the position display image SH.
An icon image V′ representing the host vehicle V, prepared in advance (see FIGS. 3A, 3B, 4A, and 4B), may be superimposed on the video displayed by the display device 100 of the present embodiment. The icon image V′ of the vehicle may be created and stored in advance based on the design of the host vehicle V. By superimposing the icon image V′ of the host vehicle V on the video in this way, the relationship between the position and orientation of the host vehicle V and the surrounding video can be shown in an easily understandable manner.
The first operation of the control device 10 of the present embodiment will now be described with reference to the flowchart of FIG. 9.
In step S1, the control device 10 acquires a captured image captured by the camera 40. In step S2, the control device 10 acquires the current position of the host vehicle V and the positions of objects including other vehicles VX. In this process, the host vehicle V is an example of a "moving body", and the other vehicle VX is an example of an "object".
In step S3, the control device 10 sets a reference point using the position information of the moving bodies (the host vehicle V and the other vehicles VX). A single reference point may be set based on the host vehicle V, or a plurality of reference points may be set for each of the host vehicle V and the other vehicles VX. The reference point includes a point (viewpoint) from which the moving body that is the subject of a diagram image is observed, a point (viewpoint) from which the moving body that is the subject of a background image is observed, and a point (light source) from which light is cast on the moving body that is the subject of a shadow image.
An example of the reference point (viewpoint, virtual light source) setting process (step S3) will be described with reference to the flowchart of FIG. 10.
As shown in FIG. 10, in step S11 the control device acquires the acceleration of the host vehicle V. In step S12, the control device 10 determines, based on the acceleration, whether the host vehicle V is in an accelerating state or a decelerating state. If the vehicle is accelerating, the process proceeds to step S13. In step S13, shifting the reference point (viewpoint, virtual light source) rearward shifts the projection position of the position display image, including the diagram image SH, the background image SH, and the shadow image SH, forward in the traveling direction. In other words, the position of the position display image projected on the projection plane when the host vehicle V is accelerating lies further forward in the traveling direction of the host vehicle V than the position of the position display image projected when the host vehicle V is not accelerating. On the other hand, if it is determined in step S14 that the host vehicle V is decelerating, the process proceeds to step S15. In step S15, shifting the reference point (viewpoint, virtual light source) forward shifts the projection position of the position display image, including the diagram image SH and the background image SH, rearward in the traveling direction. That is, the position of the reference point set when the host vehicle V is decelerating lies further forward in the traveling direction of the host vehicle V than the position of the reference point set when it is not decelerating, and the position of the position display image projected on the projection plane when the host vehicle V is decelerating lies further rearward in the traveling direction than when it is not decelerating. The projection position of the position display image is a position in the coordinate system of the projection plane set in a later process.
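The branch in steps S11 to S15 can be summarized in a short sketch. The dead band and shift amount are illustrative assumptions; the embodiment does not specify numeric values:

```python
def shift_reference_point(ref_x, accel, dead_band=0.2, shift=2.0):
    """Shift the reference point (viewpoint / virtual light source) along
    the travel axis based on longitudinal acceleration [m/s^2].

    Accelerating -> move the reference point rearward, so the projected
                    position display image shifts forward (step S13).
    Decelerating -> move the reference point forward, so the projected
                    image shifts rearward (step S15).
    """
    if accel > dead_band:        # step S12: accelerating
        return ref_x - shift     # step S13: rearward shift of the reference point
    if accel < -dead_band:       # step S14: decelerating
        return ref_x + shift     # step S15: forward shift of the reference point
    return ref_x                 # steady speed: leave the reference point as set

print(shift_reference_point(0.0,  1.5))  # accelerating -> -2.0
print(shift_reference_point(0.0, -1.5))  # decelerating -> 2.0
print(shift_reference_point(0.0,  0.0))  # steady speed -> 0.0
```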
Returning to FIG. 9, in step S4 the control device 10 sets a projection plane. The projection plane may be the three-dimensional coordinate system shown in FIGS. 3A, 3B, 4A, and 4B, or a two-dimensional coordinate system such as the projection plane SQ shown in FIG. 5. Furthermore, a plurality of projection planes may be set, as shown in FIG. 11 described later.
In the following step S5, the control device 10 generates the diagram image SH, background image SH, or shadow image SH of the host vehicle V. When other vehicles VX exist around the host vehicle V, the diagram images SH1 and SH2, background images SH1 and SH2, or shadow images SH1 and SH2 of the other vehicles VX1 and VX2 are generated.
In step S6, the control device 10 executes a process of projecting the captured image onto the set projection plane SQ and generates a video for display.
Finally, in step S7, the control device 10 displays the generated video on the display 80.
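Steps S1 to S7 can be outlined as one processing loop. The sketch below is schematic only: `capture`, `get_positions`, and `show` are injected stand-ins of ours for the camera, positioning, and display stages, and the projection is reduced to a toy central projection onto the ground plane:

```python
def run_display_cycle(capture, get_positions, show):
    """One pass through steps S1-S7 of FIG. 9 (schematic)."""
    frame = capture()                              # S1: captured image
    ego, objects = get_positions()                 # S2: host vehicle / object positions
    ref = (ego[0] - 5.0, ego[1], 8.0)              # S3: reference point behind and
                                                   #     above ego (assumed offsets)
    def project(p):                                # S4-S5: mark on the plane z = 0
        t = ref[2] / (ref[2] - p[2])
        return (ref[0] + t * (p[0] - ref[0]), ref[1] + t * (p[1] - ref[1]))

    marks = [project(b) for b in [ego, *objects]]  # S5: one mark per moving body
    video = {"frame": frame, "marks": marks}       # S6: compose the display video
    show(video)                                    # S7: display

shown = []
run_display_cycle(lambda: "img",
                  lambda: ((0.0, 0.0, 1.5), [(3.0, 3.5, 1.5)]),
                  shown.append)
print(shown[0]["marks"])
```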
Another example of the display mode of the position display image of the host vehicle V, which is a moving body, will be described below with reference to FIG. 11. The position display image described here includes a shadow image SH, a diagram image SH, and a background image SH. In this example, the case where the position display image is a shadow image SH is described; a diagram image SH or a background image SH may be used instead of the shadow image SH.
The shadow image SH (or diagram image SH) of the position display image of this example includes an image indicating the movable range of a movable member of the host vehicle V. The background image SH includes the contour line of the movable member of the host vehicle V and a diagram (a missing region of the background) indicating the movable range of the movable member. In the example shown in FIG. 11, the projection planes on which the shadow image SH of the host vehicle V is shown include a first projection plane SQs along the vehicle length direction of the host vehicle V and a second projection plane SQb along the vehicle width direction. On the first projection plane SQs, a position display image (shadow image, diagram image, background image) SHs showing the movable member as observed from a reference point (viewpoint, virtual light source) LG set to the side of the vehicle is projected; this position display image includes a shadow image imitating the shadow of the movable member when light is cast from the virtual light source LG. On the second projection plane SQb, a position display image (shadow image, diagram image, background image) SHb showing the movable member as observed from a reference point (viewpoint, virtual light source) LG set in front of the vehicle is projected; this likewise is a shadow image SHb imitating the shadow cast when light is emitted from the virtual light source LG.
The position display image (shadow image, diagram image, background image) SH of this example includes an image indicating the movable range of a movable member of the host vehicle V. The host vehicle V of this example is a hatchback-type vehicle having side doors and a back door (hatch door). The side doors of the host vehicle V open and close sideways, and the back door opens and closes rearward. The side doors and the back door are movable members of the host vehicle V, which is a moving body.
Assuming the case where an occupant opens the back door to load or unload luggage from the rear cargo area, the control device 10 generates a position display image (shadow image, diagram image, background image) indicating the movable range of the back door. As shown in FIG. 11, the position display image SHs projected on the first projection plane SQs includes a back door portion Vd3. The back door portion Vd3 represents the rearward extension (movable range) of the host vehicle V when the back door is opened. The shadow image indicating the movable range of the back door is projected to the left and right sides of the host vehicle V or onto the surface on which the host vehicle V rests (parking surface or road surface). The shadow image may also be projected onto an actually existing wall surface or floor surface.
Assuming the case where the side doors are opened so that occupants can enter and exit the seats or luggage can be loaded and unloaded, the control device 10 generates a position display image (shadow image, diagram image, background image) indicating the movable range of the side doors. As shown in FIG. 11, the position display image (shadow image, diagram image, background image) SHb projected on the second projection plane SQb includes side door portions Vd1 and Vd2, which represent the lateral extension (movable range) of the side doors when they are opened. The position display image indicating the movable range of the side doors is projected in front of and/or behind the host vehicle V.
Note that the control device 10 may set a projection plane for projecting the position display image (shadow image, diagram image, background image) and project the position display image onto that projection plane.
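As a rough illustration of how a movable range becomes a projected region, the sketch below computes the rearward extension swept by an opening back door and the outer edge of the strip swept by a side door. All dimensions and the quarter-circle door model are assumptions for illustration; the embodiment gives no numeric values:

```python
import math

def back_door_extent(rear_x, door_len):
    """Rearward extension (movable range) represented by the back door
    portion Vd3: the body's rear edge plus the opened door's reach."""
    return rear_x + door_len

def side_door_outer_edge(side_y, door_len, opening=1.0):
    """Outer edge of the strip swept by a side door opened to `opening`
    (fraction of a quarter circle), as in portions Vd1 and Vd2."""
    return side_y + door_len * math.sin(opening * math.pi / 2.0)

print(back_door_extent(2.3, 1.0))      # rear reach with a 1.0 m back door
print(side_door_outer_edge(0.9, 1.1))  # lateral reach with a fully opened side door
```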
By generating and displaying a position display image SH that includes an image indicating the movable range of the movable members of the host vehicle V in this way, the driver of the host vehicle V can decide on a parking position for the host vehicle V in consideration of the work to be done after parking. Furthermore, if captured images of surrounding objects are superimposed together with the position display image SH, the vehicle can be parked in a position that avoids the surrounding objects and does not interfere with the work after parking.
In this example, the mode of the position display image (shadow image, diagram image, background image) SH has been described for the case where the moving body is the host vehicle V, that is, where the doors of the host vehicle V are movable, but the mode of the position display image SH is not limited to this. For example, when the moving body is a forklift, the position display image (shadow image, diagram image, background image) is generated in consideration of the movable range of the forklift body and its lift equipment. When the moving body is a helicopter, the position display image (shadow image, diagram image, background image) is generated in consideration of the rotation range of the helicopter body and its rotor blades. When the moving body is an airplane, the position display image (shadow image, diagram image, background image) is generated in consideration of the installation range of the airplane body and ancillary equipment such as boarding ramps. When the moving body is a seabed probe, the image is generated in consideration of the installation range of a platform provided as necessary. The generated position display image is displayed on the display 80.
Incidentally, when the moving body is a forklift, displaying a position display image (shadow image, diagram image, background image) indicating the operating range of the lift device on, for example, the surrounding ground surface (such as the floor or wall of a facility like a warehouse or factory) makes it possible to confirm in advance whether the forklift can operate without interfering with other devices and equipment. Displaying a position display image (shadow image, diagram image, background image) indicating the rotation range of a helicopter's rotor blades on, for example, the surrounding ground surface makes it possible to search from the air for a space large enough to land the helicopter. Displaying a shadow image indicating the rotation range of the rotor blades on, for example, the ground surface around the helicopter (such as a cliff face) makes it possible to confirm the helicopter's attitude from the air. Displaying a shadow image indicating the installation range of an airplane's body and ancillary equipment on the ground surface below the airplane makes it possible to search from the air for a space large enough for the airplane to make an emergency landing.
For the position display image (shadow image, diagram image, background image) SH, basic patterns prepared in advance may be stored, and the control device 10 may read them from memory as necessary.
Since the present invention is configured and operates as described above, it provides the following effects.
[1] The display device 100 of the present embodiment sets a reference point using the position information of moving bodies such as the host vehicle V and other vehicles VX, generates a position display image SH indicating the position, observed from the reference point, at which the host vehicle V or another vehicle VX exists in the captured image, and displays a video including the position display image SH and part or all of the captured image. Since a video including the position display image SH indicating the position of the host vehicle V and the captured image can be displayed in this way, a video in which the positional relationship of the host vehicle V with its surroundings is easy to grasp can be displayed.
[2] The display device 100 of the present embodiment generates a diagram image SH as a position display image indicating the position of the host vehicle V in the captured image, and displays a video including the diagram image SH and part or all of the captured image. Since a video including the diagram image SH indicating the position of the host vehicle V and part or all of the captured image can be displayed in this way, a video in which the positional relationship of the host vehicle V with its surroundings is easy to grasp can be displayed.
[3] The display device 100 of the present embodiment generates a background image SH as a position display image indicating the position of the host vehicle V in the captured image, and displays a video including the background image SH and part or all of the captured image. Since a video including the background image SH indicating the position of the host vehicle V and part or all of the captured image can be displayed in this way, a video in which the positional relationship of the host vehicle V with its surroundings is easy to grasp can be displayed.
[4] The display device 100 of the present embodiment generates a shadow image SH as a position display image indicating the position of the host vehicle V in the captured image, and displays a video including the shadow image SH. The display device 100 of the present embodiment sets a virtual light source as the reference point using the position information of moving bodies such as the host vehicle V and other vehicles VX, generates a shadow image SH imitating the shadow produced when the host vehicle V is irradiated with light from the virtual light source, and displays a video including the shadow image SH and part or all of the captured images around the host vehicle V. Since the shadow image SH of the host vehicle V can thus express the presence and orientation of the host vehicle V, a video in which the positional relationship between the host vehicle V and surrounding objects is easy to grasp can be displayed.
[5] The display device 100 of the present embodiment generates a position display image SH indicating the position of an object including another vehicle VX, and displays a video including the position display image SH and part or all of the captured image. Since the position display image SH indicating the position of the other vehicle VX as an object can thus be displayed together with part or all of the captured image, a video in which the positional relationship between the host vehicle V and the other vehicle VX is easy to grasp can be displayed.
[6] The display device 100 of the present embodiment generates a diagram image SH as a position display image indicating the position of an object including another vehicle VX, and displays a video including the diagram image SH and part or all of the captured image. Since the diagram image SH indicating the position of the other vehicle VX as an object can thus be displayed together with part or all of the captured image, a video in which the positional relationship between the host vehicle V and the other vehicle VX is easy to grasp can be displayed.
[7] The display device 100 of the present embodiment generates a background image SH as a position display image indicating the position of an object including another vehicle VX, and displays a video including the background image SH and part or all of the captured image. Since the background image SH indicating the position of the other vehicle VX as an object can thus be displayed together with part or all of the captured image, a video in which the positional relationship between the host vehicle V and the other vehicle VX is easy to grasp can be displayed.
[8] The display device 100 of the present embodiment generates a shadow image SH as a position display image indicating the position of an object including another vehicle VX in the captured image, and displays a video including the shadow image SH. The display device 100 of the present embodiment generates a shadow image SH imitating the shadow produced when light is cast on an object including another vehicle VX, and displays a video including the shadow image SH and part or all of the captured image. Since the shadow images SH1 and SH2 of the other vehicles VX can thus express the positional relationship between the other vehicles VX and the host vehicle V, a video in which the positional relationship between the host vehicle V and surrounding objects such as other vehicles VX is easy to grasp can be displayed.
[9] The display device 100 of the present embodiment sets a projection plane SQ on which the position display image is projected along the direction in which the host vehicle V moves, and generates a position display image indicating the position of another vehicle VX1 traveling in the adjacent lane Ln1 next to the travel lane Ln2 in which the host vehicle V travels (moves). By projecting the position display image SH1 of the other vehicle VX1 (object) traveling in the adjacent lane Ln1 onto the common projection plane SQ together with the position display image SH of the host vehicle V, a video in which the positional relationship between the host vehicle V and the other vehicle VX1 is easy to grasp can be presented. Moreover, since the video includes part or all of the captured images around the host vehicle V, the positional relationship is even easier to grasp. In addition, setting the projection plane SQ along the traveling direction of the host vehicle V makes it possible to present a video in which the distance between the host vehicle V and the other vehicle VX1 is easy to recognize.
[10] The display device 100 of the present embodiment generates the position display images (diagram images, background images, or shadow images) SH1 and SH2 of objects so that the area of the position display image of an object with a relatively high speed is larger than the area of the position display image of an object with a relatively low speed. Since the position display image of a relatively fast object is displayed relatively large in this way, a video that calls attention to fast objects can be displayed.
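One possible realization of this effect is to scale the position display image by the object's speed. The gain and the clamp below are illustrative assumptions, not values from the embodiment:

```python
def display_image_scale(speed_kmh, base=1.0, gain=0.01, max_scale=2.0):
    """Area scale factor for an object's position display image:
    faster objects get a larger, more attention-drawing image."""
    return min(base + gain * max(speed_kmh, 0.0), max_scale)

print(display_image_scale(0))    # slow object: base size 1.0
print(display_image_scale(100))  # fast object: enlarged to 2.0
print(display_image_scale(300))  # clamped at max_scale 2.0
```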
[11] The display device 100 of the present embodiment changes the size of the position display image (diagram image, background image, or shadow image) SH according to the attributes of the lanes in which the host vehicle V and the other vehicles VX travel, and can thereby call the driver's attention to another vehicle VX traveling at high speed, just as when the size of the position display image SH is controlled according to the actual vehicle speed.
[12] When in an accelerating state, the display device 100 of the present embodiment shifts the position of the reference point from which the moving body or object is observed rearward, thereby shifting the projection position of the position display image (diagram image, background image, or shadow image) SH forward, and can display a video suited to the driver's judgment situation.
In particular, when the position display image is a shadow image, the display device 100 of the present embodiment shifts the position of the virtual light source LG of the shadow image SH rearward in the accelerating state, which in turn shifts the projection position of the shadow image SH forward, so that a shadow image suited to the driver's judgment situation can be displayed.
 [13] When the vehicle is decelerating, the display device 100 of the present embodiment shifts the reference point from which the moving body or object is observed forward, thereby shifting the projection position of the position display image (diagram image, background image, or shadow image) SH rearward and displaying video suited to the driver's decision making.
 In particular, when the position display image is a shadow image, the display device 100 of the present embodiment shifts the position of the virtual light source LG forward during deceleration. As a result, the projection position of the shadow image SH shifts rearward, and a shadow image suited to the driver's decision making is displayed.
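Paragraphs [12] and [13] can be pictured with a simple similar-triangles model: moving the virtual light source rearward projects the shadow farther ahead of the vehicle, and moving it forward projects the shadow behind. The sketch below is a geometric illustration under assumed names, gains, and dimensions, not the patent's implementation.

```python
def light_source_offset(longitudinal_accel, gain=0.5, limit=2.0):
    """Longitudinal offset (m) of the virtual light source from its default
    position: rearward (negative) under acceleration, forward under braking."""
    return max(-limit, min(limit, -gain * longitudinal_accel))

def shadow_tip_shift(light_height, body_height, light_offset):
    """Horizontal shift of the shadow tip for a point light at light_height
    displaced light_offset along the travel axis (similar triangles)."""
    return -light_offset * body_height / (light_height - body_height)
```

For example, with a light 10 m up over a 1.5 m body, accelerating at 2 m/s^2 moves the light 1 m rearward and the shadow tip about 0.18 m forward; braking reverses both signs.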
 [14] The display device 100 of the present embodiment generates and displays a position display image including a shadow image SH, a diagram image SH, or a background image SH indicating the movable range of a movable member of the host vehicle V. The driver of the host vehicle V can thereby decide where to park the host vehicle V in consideration of the work to be done after parking.
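For example, the movable range of a hinged door might be summarized by the lateral clearance it sweeps, which the driver can check against the displayed parking spot. A toy calculation under assumed dimensions (not from the specification):

```python
import math

def door_clearance(door_length_m, max_open_deg=65.0):
    """Lateral clearance swept by a front-hinged door opened to max_open_deg.

    For opening angles up to 90 degrees the widest point is reached at full
    opening, so the clearance is simply L * sin(theta_max).
    """
    return door_length_m * math.sin(math.radians(max_open_deg))
```

Under these assumptions a 1.0 m door opened to 65 degrees needs roughly 0.91 m of lateral clearance.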
 [15] The above effects are obtained by causing the display device 100 of the present embodiment to execute the display method of the present embodiment.
 The embodiments described above are provided to facilitate understanding of the present invention and are not intended to limit it. Accordingly, each element disclosed in the above embodiments is intended to cover all design changes and equivalents falling within the technical scope of the present invention.
 That is, although this specification describes, as an example, the display system 1 including the display device 100 as one embodiment of the display device according to the present invention, the present invention is not limited to this.
 Likewise, although this specification describes the display device 100 including the control device 10 with the CPU 11, the ROM 12, and the RAM 13 as one embodiment of the display device according to the present invention, the present invention is not limited to this.
 Further, although this specification describes, as one embodiment of a display device having image acquisition means, information acquisition means, image generation means, and display means according to the present invention, the display device 100 including the control device 10 that executes an image acquisition function, an information acquisition function, an image generation function, and a display function, the present invention is not limited to this.
DESCRIPTION OF SYMBOLS
1 ... display system
100 ... display device
 10 ... control device
  11 ... CPU
  12 ... ROM
  13 ... RAM
200 ... moving body apparatus
 40 ... camera
  41 ... distance measuring device
 50 ... controller
 60 ... sensor
  61 ... speed sensor
  62 ... longitudinal acceleration sensor
 70 ... navigation device
  71 ... position detection device
   711 ... GPS
  72 ... map information
  73 ... road information
 80 ... display

Claims (15)

  1.  A display device comprising:
     image acquisition means for acquiring a captured image captured by a camera mounted on a moving body;
     information acquisition means for acquiring position information of the moving body;
     image generation means for setting a reference point using the position information of the moving body, generating a position display image indicating the position of the moving body as observed from the reference point in the captured image, and generating a video including the position display image and the captured image; and
     display means for displaying the video generated by the image generation means.
  2.  The display device according to claim 1, wherein the image generation means uses the position information of the moving body to set, as the reference point, a viewpoint from which the moving body is viewed, and generates the position display image including a diagram image indicating the position of the moving body as viewed from the viewpoint.
  3.  The display device according to claim 1, wherein the image generation means uses the position information of the moving body to set, as the reference point, a viewpoint from which the moving body is viewed, and generates the position display image including a background image indicating the position of the moving body as viewed from the viewpoint.
  4.  The display device according to claim 1, wherein the image generation means uses the position information of the moving body to set a virtual light source as the reference point, and generates the position display image including a shadow image indicating the position of the moving body, the shadow image imitating a shadow cast when the moving body is illuminated by the virtual light source.
  5.  The display device according to any one of claims 1 to 4, wherein
     the information acquisition means acquires position information of an object existing around the moving body, and
     the image generation means sets a reference point using the position information of the object, generates a position display image indicating the position of the object as observed from the reference point in the captured image, and generates a video including the position display image and the captured image.
  6.  The display device according to claim 5, wherein the image generation means uses the position information of the object to set, as the reference point, a viewpoint from which the object is viewed, and generates the position display image including a diagram image indicating the position of the object as viewed from the viewpoint.
  7.  The display device according to claim 5, wherein the image generation means uses the position information of the object to set, as the reference point, a viewpoint from which the object is viewed, and generates the position display image including a background image indicating the position of the object as viewed from the viewpoint.
  8.  The display device according to claim 5, wherein the image generation means uses the acquired position information of the object to set a virtual light source as the reference point, and generates the position display image including a shadow image indicating the position of the object, the shadow image imitating a shadow cast when the object is illuminated by the virtual light source.
  9.  The display device according to any one of claims 5 to 8, wherein the image generation means sets a projection plane onto which the position display image is projected along the direction in which the moving body moves, and generates a position display image indicating the position of the object moving in an adjacent lane next to the lane in which the moving body moves.
  10.  The display device according to any one of claims 5 to 9, wherein
     the information acquisition means acquires the speed of the object, and
     the image generation means generates the position display image such that the area of the region of the position display image indicating the position of an object whose acquired speed is relatively high is larger than the area of the region of the position display image indicating the position of an object whose acquired speed is relatively low.
  11.  The display device according to any one of claims 5 to 9, wherein
     the information acquisition means acquires attribute information of the lane in which the object travels, and
     the image generation means, when it acquires attribute information indicating that the lane in which the object travels is an overtaking lane, generates the position display image such that the area of the region of the position display image indicating the position of the object traveling in the overtaking lane is larger than when it acquires attribute information indicating that the lane in which the object travels is not an overtaking lane.
  12.  The display device according to any one of claims 1 to 11, wherein
     the information acquisition means acquires acceleration in the traveling direction of the moving body, and
     the image generation means, when it determines from the acquired acceleration that the moving body is accelerating, shifts the position of the reference point toward the side opposite to the traveling direction of the moving body.
  13.  The display device according to any one of claims 1 to 12, wherein
     the information acquisition means acquires acceleration in the traveling direction of the moving body, and
     the image generation means, when it determines from the acquired acceleration that the moving body is decelerating, shifts the position of the reference point toward the traveling direction of the moving body.
  14.  The display device according to any one of claims 1 to 13, wherein the position display image of the moving body includes an image indicating a movable range of a movable member of the moving body.
  15.  A display method executed by a display device used in a moving body, the display device having image acquisition means, information acquisition means, image generation means, and display means, wherein
     the image acquisition means executes a step of acquiring a captured image captured by a camera mounted on the moving body,
     the information acquisition means executes a step of acquiring position information of the moving body,
     the image generation means executes a step of setting a reference point using the acquired position information of the moving body, generating a position display image indicating the position of the moving body as observed from the reference point in the captured image, and generating a video including the position display image and the captured image, and
     the display means executes a step of displaying the video.
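The four claimed steps map naturally onto a per-frame loop. Below is a minimal, dependency-free sketch with the capture, localization, rendering, and display stages passed in as callables; all names are hypothetical, not from the specification.

```python
def display_cycle(capture, get_position, make_position_image, compose, show):
    """One iteration of the claimed method, with each step stubbed out."""
    frame = capture()                                # image acquisition step
    position = get_position()                        # information acquisition step
    overlay = make_position_image(frame, position)   # position display image
    video = compose(frame, overlay)                  # video generation step
    show(video)                                      # display step
    return video
```

In a real system `capture` would read the vehicle camera, `get_position` the GPS/odometry, and `show` the in-vehicle display; here they can be any callables with matching shapes.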
PCT/JP2015/055957 2014-11-14 2015-02-27 Display device and display method WO2016075954A1 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
JP2016558896A JP6500909B2 (en) 2014-11-14 2015-02-27 Display device and display method

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
PCT/JP2014/080177 WO2016075810A1 (en) 2014-11-14 2014-11-14 Display device and display method
JPPCT/JP2014/080177 2014-11-14

Publications (1)

Publication Number Publication Date
WO2016075954A1 true WO2016075954A1 (en) 2016-05-19

Family

ID=55953923

Family Applications (2)

Application Number Title Priority Date Filing Date
PCT/JP2014/080177 WO2016075810A1 (en) 2014-11-14 2014-11-14 Display device and display method
PCT/JP2015/055957 WO2016075954A1 (en) 2014-11-14 2015-02-27 Display device and display method

Family Applications Before (1)

Application Number Title Priority Date Filing Date
PCT/JP2014/080177 WO2016075810A1 (en) 2014-11-14 2014-11-14 Display device and display method

Country Status (2)

Country Link
JP (1) JP6500909B2 (en)
WO (2) WO2016075810A1 (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106851193A (en) * 2016-12-22 2017-06-13 安徽保腾网络科技有限公司 New device for shooting accident vehicle chassis
US12002359B2 (en) 2018-06-20 2024-06-04 Nissan Motor Co., Ltd. Communication method for vehicle dispatch system, vehicle dispatch system, and communication device

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP7028609B2 (en) * 2017-11-08 2022-03-02 フォルシアクラリオン・エレクトロニクス株式会社 Image display device and image display system

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2007118762A (en) * 2005-10-27 2007-05-17 Aisin Seiki Co Ltd Circumference monitoring system
JP2007210458A (en) * 2006-02-09 2007-08-23 Nissan Motor Co Ltd Display device for vehicle and image display control method for vehicle
JP2007282098A (en) * 2006-04-11 2007-10-25 Denso Corp Image processing apparatus and image processing program
JP2009230225A (en) * 2008-03-19 2009-10-08 Mazda Motor Corp Periphery monitoring device for vehicle
JP2011028634A (en) * 2009-07-28 2011-02-10 Toshiba Alpine Automotive Technology Corp Image display device for vehicle

Family Cites Families (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JPH10176931A (en) * 1996-12-18 1998-06-30 Nissan Motor Co Ltd Navigator for vehicle
WO2007083494A1 (en) * 2006-01-17 2007-07-26 Nec Corporation Graphic recognition device, graphic recognition method, and graphic recognition program
JP5715778B2 (en) * 2010-07-16 2015-05-13 東芝アルパイン・オートモティブテクノロジー株式会社 Image display device for vehicle



Also Published As

Publication number Publication date
JP6500909B2 (en) 2019-04-17
JPWO2016075954A1 (en) 2017-09-28
WO2016075810A1 (en) 2016-05-19

Similar Documents

Publication Publication Date Title
US10488218B2 (en) Vehicle user interface apparatus and vehicle
US10053001B1 (en) System and method for visual communication of an operational status
CN104883554B (en) The method and system of live video is shown by virtually having an X-rayed instrument cluster
CN111788102B (en) Odometer system and method for tracking traffic lights
CN109636924B (en) Vehicle-mounted multi-mode augmented reality system based on real road condition information three-dimensional modeling
US10908604B2 (en) Remote operation of vehicles in close-quarter environments
CN109863513A (en) Nerve network system for autonomous vehicle control
EP3235684A1 (en) Apparatus that presents result of recognition of recognition target
CN115039129A (en) Surface profile estimation and bump detection for autonomous machine applications
CN109690634A (en) Augmented reality display
US20170262710A1 (en) Apparatus that presents result of recognition of recognition target
WO2020031812A1 (en) Information processing device, information processing method, information processing program, and moving body
JPWO2019092846A1 (en) Display system, display method, and program
JP2022510450A (en) User assistance methods for remote control of automobiles, computer program products, remote control devices and driving assistance systems for automobiles
JP6380550B2 (en) Display device and display method
JP2022008854A (en) Control unit
JP6500909B2 (en) Display device and display method
CN110271487A (en) Vehicle display with augmented reality
CN113602282A (en) Vehicle driving and monitoring system and method for maintaining situational awareness at a sufficient level
JP2022129175A (en) Vehicle evaluation method and vehicle evaluation device
CN115857169A (en) Collision early warning information display method, head-up display device, carrier and medium
US10134182B1 (en) Large scale dense mapping
JP2007072224A (en) Driving simulator
CN117784768A (en) Vehicle obstacle avoidance planning method, device, computer equipment and storage medium
JP2017090189A (en) Travelling guide system

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 15859227

Country of ref document: EP

Kind code of ref document: A1

ENP Entry into the national phase

Ref document number: 2016558896

Country of ref document: JP

Kind code of ref document: A

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 15859227

Country of ref document: EP

Kind code of ref document: A1