WO2017168953A1 - Vehicle device, vehicle program, and filter design program - Google Patents

Vehicle device, vehicle program, and filter design program

Info

Publication number: WO2017168953A1
Authority: WO (WIPO/PCT)
Application number: PCT/JP2017/001217
Prior art keywords: image, driver, viewpoint position, unit, viewpoint
Other languages: French (fr), Japanese (ja)
Inventors: 鈴木 孝光, 真之 近藤, 祐司 楠瀬
Original assignee: 株式会社デンソー (DENSO Corporation)
Priority claimed from: JP2016218084A (JP6493361B2)
Priority to: CN201780022100.6A (CN108883702B), US16/089,273 (US10703272B2), DE112017001724.6T (DE112017001724T5)
Publication: WO2017168953A1

Classifications

    • B: PERFORMING OPERATIONS; TRANSPORTING
    • B60: VEHICLES IN GENERAL
    • B60K: ARRANGEMENT OR MOUNTING OF PROPULSION UNITS OR OF TRANSMISSIONS IN VEHICLES; ARRANGEMENT OR MOUNTING OF PLURAL DIVERSE PRIME-MOVERS IN VEHICLES; AUXILIARY DRIVES FOR VEHICLES; INSTRUMENTATION OR DASHBOARDS FOR VEHICLES; ARRANGEMENTS IN CONNECTION WITH COOLING, AIR INTAKE, GAS EXHAUST OR FUEL SUPPLY OF PROPULSION UNITS IN VEHICLES
    • B60K35/00: Instruments specially adapted for vehicles; Arrangement of instruments in or on vehicles
    • G: PHYSICS
    • G01: MEASURING; TESTING
    • G01C: MEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
    • G01C21/00: Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00
    • G01C21/26: Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00 specially adapted for navigation in a road network
    • G: PHYSICS
    • G08: SIGNALLING
    • G08G: TRAFFIC CONTROL SYSTEMS
    • G08G1/00: Traffic control systems for road vehicles
    • G08G1/16: Anti-collision systems
    • G: PHYSICS
    • G09: EDUCATION; CRYPTOGRAPHY; DISPLAY; ADVERTISING; SEALS
    • G09G: ARRANGEMENTS OR CIRCUITS FOR CONTROL OF INDICATING DEVICES USING STATIC MEANS TO PRESENT VARIABLE INFORMATION
    • G09G5/00: Control arrangements or circuits for visual indicators common to cathode-ray tube indicators and other visual indicators
    • G: PHYSICS
    • G09: EDUCATION; CRYPTOGRAPHY; DISPLAY; ADVERTISING; SEALS
    • G09G: ARRANGEMENTS OR CIRCUITS FOR CONTROL OF INDICATING DEVICES USING STATIC MEANS TO PRESENT VARIABLE INFORMATION
    • G09G5/00: Control arrangements or circuits for visual indicators common to cathode-ray tube indicators and other visual indicators
    • G09G5/36: Control arrangements or circuits for visual indicators common to cathode-ray tube indicators and other visual indicators characterised by the display of a graphic pattern, e.g. using an all-points-addressable [APA] memory
    • G: PHYSICS
    • G09: EDUCATION; CRYPTOGRAPHY; DISPLAY; ADVERTISING; SEALS
    • G09G: ARRANGEMENTS OR CIRCUITS FOR CONTROL OF INDICATING DEVICES USING STATIC MEANS TO PRESENT VARIABLE INFORMATION
    • G09G5/00: Control arrangements or circuits for visual indicators common to cathode-ray tube indicators and other visual indicators
    • G09G5/36: Control arrangements or circuits for visual indicators common to cathode-ray tube indicators and other visual indicators characterised by the display of a graphic pattern, e.g. using an all-points-addressable [APA] memory
    • G09G5/38: Control arrangements or circuits for visual indicators common to cathode-ray tube indicators and other visual indicators characterised by the display of a graphic pattern, e.g. using an all-points-addressable [APA] memory, with means for controlling the display position
    • G: PHYSICS
    • G02: OPTICS
    • G02B: OPTICAL ELEMENTS, SYSTEMS OR APPARATUS
    • G02B27/00: Optical systems or apparatus not provided for by any of the groups G02B1/00 - G02B26/00, G02B30/00
    • G02B27/01: Head-up displays

Definitions

  • The present disclosure relates to a vehicle device, a vehicle program, and a filter design program.
  • There are projection devices that display an image transparently on a projection surface such as a windshield, thereby superimposing the image on the landscape the driver is viewing.
  • In recent years, vehicle devices that support driving by using such a projection device have appeared; for example, they display the route direction at a complicated intersection as an arrow aligned with the road the vehicle actually travels, or alert the driver by drawing a frame around a road sign.
  • Patent Document 1 proposes devising the image display method so as to reduce the shift of the image that depends on the driver's viewpoint position.
  • However, the size of the projection surface is limited, and when the driver's face shifts up and down, back and forth, or left and right, the position of the driver's eyes shifts as well, changing the positional relationship between the eyes and the projection surface. For these reasons, there has been a problem that the fundamental display position misalignment cannot be solved.
  • A vehicle device according to one aspect of the present disclosure includes: an image generation unit that generates an image to notify the driver in a predetermined reference coordinate system; a viewpoint position specifying unit that specifies the viewpoint position indicating the position of the driver's eyes in the passenger compartment, based on the eye position detected by a viewpoint detection unit; a superimposed position specifying unit that specifies the display position when the generated image is displayed in the driver's field of view; an image conversion unit that generates a converted image, which is the generated image converted into a coordinate system based on the driver's viewpoint position; and a notification unit that notifies the driver of information by using a projection device to display the converted image so that it overlaps the driver's field of view.
  • FIG. 1 is a diagram schematically illustrating an electrical configuration of the ECU according to the first embodiment.
  • FIG. 2 is a diagram schematically showing an example of the relationship between the viewpoint position of the camera and the viewpoint position of the driver.
  • FIG. 3 is a diagram schematically illustrating an example of the relationship between the position of the driver's face and the visual field.
  • FIG. 4 is a diagram showing a flow of processing of the vehicle program by the ECU.
  • FIG. 5 is a first diagram schematically showing an example of a mode in which images are superimposed and displayed.
  • FIG. 6 is a second diagram schematically showing an example of a mode in which images are superimposed and displayed.
  • FIG. 7 is a diagram illustrating a flow of coordinate conversion filter generation processing by the ECU according to another embodiment.
  • FIG. 8 is a first diagram illustrating a flow of a vehicle program by the ECU according to another embodiment.
  • FIG. 9 is a second diagram illustrating the flow of the vehicle program by the ECU according to another embodiment.
  • As shown in FIG. 1, a vehicle system 1 includes an ECU 2 (Electronic Control Unit) as a vehicle device, a camera 3 as an imaging unit, a millimeter wave radar 4, sensors 5, a viewpoint detection unit 6, a projection device 7, a speaker 8, a microphone 9, and the like.
  • The ECU 2 is provided in the vehicle 20 (see FIG. 2).
  • The ECU 2 may be fixed to the vehicle 20 or may be detachable from it.
  • The ECU 2 includes a control unit 10, a storage unit 11, an operation switch 12, and the like.
  • The control unit 10 is a microcomputer including a CPU, a ROM, a RAM, and the like (not shown). The control unit 10 controls the ECU 2 by executing, for example, a control program stored in the storage unit 11.
  • The storage unit 11 is a recording medium that can read and write data, and stores the control program described above, a vehicle program described later, programs for image processing, and various data. That is, the storage unit 11 functions as a recording medium that stores the vehicle program.
  • The storage unit 11 also functions as a recording medium that stores a filter design program described later.
  • The recording medium is not limited to the storage unit 11; a recording medium detachable from the ECU 2 can also be used.
  • The storage unit 11 also stores data such as the mounting position of the camera 3, the three-dimensional shape of the projection surface (C, see FIG. 2), and the driver's dominant eye.
  • The operation switch 12 inputs the user's various operations on the ECU 2.
  • The control unit 10 includes an image generation unit 10a, a viewpoint position specifying unit 10b, a superimposed position specifying unit 10c, an image conversion unit 10d, a notification unit 10e, a prediction unit 10f, and the like.
  • In the present embodiment, each of these units (10a to 10e) is realized in software by the control unit 10 executing a program.
  • However, each unit (10a to 10e) can also be provided by hardware, or by a combination of hardware and software.
  • The image generation unit 10a generates an image (G1, see FIG. 2) to notify the driver in a coordinate system based on the viewpoint position of the camera 3, which captures the traveling direction of the vehicle 20. That is, in the present embodiment, the reference coordinate system is set as a coordinate system based on the mounting position of the camera 3. The image generation unit 10a generates the image as it would appear when displayed in the field of view of the camera 3, based on the camera's mounting position.
  • The image (G1) shown in FIG. 2 is actually projected onto the projection plane (C); in FIG. 2 it is drawn at a position away from the vehicle 20 to schematically show that, to the driver, the image appears to exist at some distance ahead.
  • FIG. 2 shows the image (G1) as a simple two-dimensional image for simplicity of explanation, but it may of course be an image with a complicated shape that appears three-dimensional to the driver. In that case, a three-dimensional stereoscopic image with depth, like so-called 3D video, is displayed in the driver's field of view without a sense of incongruity.
  • The viewpoint position specifying unit 10b specifies the viewpoint position indicating the position of the driver's eyes in the passenger compartment, based on the eye position detected by the viewpoint detection unit 6. In other words, the viewpoint position specifying unit 10b specifies the positional relationship between the driver's eyes and the projection plane (C).
  • Based on the viewpoint position of the camera 3 and the driver's viewpoint position, the superimposed position specifying unit 10c specifies the display position when displaying the image on the projection plane (C), that is, the display position when displaying the image with the driver's viewpoint position as the reference.
  • The image conversion unit 10d converts the image into a coordinate system based on the driver's viewpoint position, thereby generating the converted image (G2, see FIG. 2), that is, the image actually projected onto the projection plane (C).
  • The notification unit 10e notifies the driver by using the projection device 7 to display the converted image (G2) so that it overlaps the driver's field of view. As described later, the notification unit 10e displays the image in real time, following changes in the driver's viewpoint position. Character information or the like may also be displayed together.
  • The prediction unit 10f predicts the driver's viewpoint position based on a movement history indicating the time-series change of the driver's viewpoint position.
  • The camera 3 is a CCD camera or a CMOS camera, and captures the landscape in the traveling direction of the vehicle 20.
  • The millimeter wave radar 4 radiates radio waves and detects the distance to an object based on the wave reflected by the object.
  • The sensors 5 consist of, for example, infrared sensors or proximity sensors, and detect surrounding objects.
  • The viewpoint detection unit 6 detects the position of the driver's eyes. Since various methods for this are well known, the viewpoint detection unit 6 may use any of them; a method that detects the eye position by image processing alone, without requiring the driver to wear special equipment, is desirable.
  • The projection device 7 projects a virtual image onto the transparent windshield 31 (see FIG. 2) or onto a combiner provided in the driver's field of view; such a device is also called a head-up display.
  • In the present embodiment, a part of the windshield 31 serves as the projection plane (C).
  • The speaker 8 announces, by voice, response sounds to operations on the ECU 2 and messages to the driver. The notifications described below may also be given through the speaker 8.
  • The microphone 9 inputs operations on the ECU 2, for example by voice.
  • The ECU 2 is also connected to a position detection unit 13.
  • The position detection unit 13 includes a GPS unit, a navigation device, and the like, and detects the current position of the host vehicle. It also has map data 13a for realizing a navigation function.
  • The map data 13a may instead be stored in the storage unit 11.
  • Next, the difference between the viewpoint position of the camera 3 and the driver's viewpoint position, and why the superimposed position must be obtained, will be described.
  • The camera 3 is mounted near the ceiling of the cabin of the vehicle 20, at a position from which the front of the vehicle 20 can be imaged.
  • The viewpoint position of the camera 3, that is, the center position of the camera 3's visual field (A1), can be obtained from the camera's mounting position.
  • The visual field (A2) of the driver (M) is specified by detecting the eye position of the driver (M) with the viewpoint detection unit 6 provided in the cabin.
  • When the camera 3 captures the actual landscape and the image generation unit 10a generates an image (G1) to superimpose on that landscape, the image generation unit 10a generates the image (G1) so that it overlaps the landscape within the camera 3's visual field (A1).
  • The position of the upper end of the image (G1) in the camera 3's visual field (A1) is referred to as the upper end position (Lc0) for convenience.
  • For example, suppose the image (G1) generated with the camera 3's viewpoint position as reference is positioned a distance (L1) below the center line (Lc1) of the camera 3.
  • That is, the image (G1) is located below the center in the coordinate system of the camera 3's visual field (A1).
  • From the driver's (M) visual field (A2), however, this image (G1) is positioned a distance (L2) above its center line (Lc2). That is, the image (G1) is located above the center in the driver's (M) coordinate system.
  • Therefore, if the image is displayed in the coordinate system based on the camera 3's viewpoint position as it is, then, since the projection plane (C) lies between the driver (M) and the actual landscape, the image (G2) is displayed below the center line (Lc2) as seen from the driver (M).
  • As a result, the image (G2) does not overlap the scenery that the driver (M) is viewing. A numerical sketch of this parallax is given below.
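  • As an illustration, the offset can be reproduced with simple similar-triangle geometry. The following is a minimal sketch, not taken from the patent: the flat projection plane, the function name, and all coordinates are assumptions chosen only to show that a point placed correctly for the camera's viewpoint lands at a different height on the projection plane when viewed from the driver's eyes.

```python
def screen_height(eye_x: float, eye_y: float, plane_x: float, target) -> float:
    """Height at which the ray from the eye to `target` crosses a vertical
    projection plane at x = plane_x (2D side view, similar triangles)."""
    tx, ty = target
    t = (plane_x - eye_x) / (tx - eye_x)   # fraction of the ray at the plane
    return eye_y + t * (ty - eye_y)

# Illustrative numbers only: camera near the ceiling, driver's eyes lower,
# projection plane ~0.8 m ahead, annotated landscape point 20 m ahead.
camera_eye = (0.0, 1.45)
driver_eye = (0.3, 1.15)
target = (20.0, 1.0)

print(screen_height(*camera_eye, 0.8, target))  # on-plane height for the camera
print(screen_height(*driver_eye, 0.8, target))  # differs for the driver
```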
  • The same applies when the position of the driver's (M) face shifts left or right, as shown in FIG. 3. The visual field when the driver's face is at the central position (A20, corresponding to A2 above), the visual field when the face shifts to the right (A21), and the visual field when it shifts to the left (A22) each see a different landscape through the projection plane (C), so the image (G1) may not properly overlap the landscape. Hereinafter, the central position is also referred to as the normal position for convenience.
  • Moreover, a relatively tall driver and a relatively short driver have different face positions to begin with, and even for the same driver, the face position can change because the driving posture varies during driving or because the seat is reclined.
  • Therefore, the ECU 2 makes it possible, as follows, to display the image projected from the projection device 7 superimposed on the actual scenery the driver is viewing. More specifically, the ECU 2 generates, converts, and displays the image in real time so that it overlaps the driver's field of view, following changes in the scenery caused by the movement of the vehicle 20 and changes in the driver's viewpoint position.
  • The ECU 2 executes each process shown in FIG. 4. These processes are performed by the control unit 10 executing the vehicle program. Although each process is performed by the corresponding unit (10a to 10e), for simplicity the ECU 2 is treated as the subject below.
  • The ECU 2 first acquires three-dimensional information within the camera 3's visual field (A1) (S10). That is, the ECU 2 captures the landscape in front of the vehicle 20 with the camera 3 and detects or extracts objects contained in the image. The ECU 2 then generates the image (G1) to notify the driver in the coordinate system based on the camera 3's viewpoint position (S11).
  • Next, the ECU 2 specifies the driver's viewpoint position (S12) and generates the converted image (G2, see FIG. 2) by converting the generated image (G1) into the coordinate system based on the driver's viewpoint position (S13). The conversion is applied not to the entire display screen but only to the image to be notified, which reduces the processing load.
  • As the coordinate conversion method for converting to the coordinate system based on the driver's viewpoint position, a so-called view transformation may be used, for example. Since view transformation is a technique commonly used in the graphics field for handling three-dimensional images, only an outline is given here.
  • View transformation is a projection method for representing a figure, generated as seen from one viewpoint position, as seen from another viewpoint position in three-dimensional space. Specifically, view transformation converts an image by combining a coordinate transformation that uses a translation matrix to move the image's position in the reference coordinate system in which the figure was generated so that it matches the coordinate system based on the viewer's viewpoint position, and a coordinate transformation that uses a rotation matrix to rotate the translated image toward the viewpoint position. A sketch of this composition follows.
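  • The following is a minimal sketch of such a translation-plus-rotation view matrix using numpy. The look-at construction and all coordinates are illustrative assumptions, not values or code from the patent.

```python
import numpy as np

def view_matrix(eye, target, up=(0.0, 1.0, 0.0)):
    """4x4 view matrix (rotation @ translation) that re-expresses world-space
    points in a frame centered on `eye` and looking toward `target`."""
    eye, target, up = (np.asarray(v, float) for v in (eye, target, up))
    fwd = target - eye
    fwd /= np.linalg.norm(fwd)
    right = np.cross(fwd, up)
    right /= np.linalg.norm(right)
    true_up = np.cross(right, fwd)

    rot = np.eye(4)                  # rotation: align world axes to view axes
    rot[:3, :3] = np.stack([right, true_up, -fwd])
    trans = np.eye(4)                # translation: move the eye to the origin
    trans[:3, 3] = -eye
    return rot @ trans

# Re-express one world point in the camera's frame and in the driver's frame:
cam = view_matrix(eye=[0.0, 1.45, 0.0], target=[0.0, 1.0, 20.0])
drv = view_matrix(eye=[0.3, 1.15, -0.2], target=[0.0, 1.0, 20.0])
point = np.array([0.0, 1.0, 20.0, 1.0])      # homogeneous world coordinates
print(cam @ point)                           # point in camera coordinates
print(drv @ point)                           # point in driver coordinates
```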
  • In step S13 described above, the ECU 2 uses this view transformation to convert the image (G1), generated in the reference coordinate system based on the camera 3, into the converted image (G2) in the coordinate system based on the viewpoint position of the driver who will actually see the image.
  • If the image is stereoscopic, the conversion is performed so that the image retains its three-dimensional appearance with depth.
  • When projecting the converted image (G2), the shape of the projection plane (C) must be considered. For example, if the converted image is a square and is projected onto the projection plane (C) as it is, the image may be distorted by the surface shape of the projection plane (C) and appear as a trapezoid or a figure formed of more complex curves. The image may also be distorted depending on the driver's viewpoint position, that is, on the positional relationship between the driver's face and the projection plane (C).
  • Therefore, the ECU 2 acquires the three-dimensional shape of the projection plane (C), that is, its surface shape (S14), and corrects the converted image according to the surface shape and the driver's viewpoint position (S15). As a result, a corrected converted image (G2), that is, an image without distortion as seen from the driver, is generated. One way to realize such a correction is sketched below.
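  • The correction in S15 can be thought of as an inverse warp: each output pixel samples the source pixel that, after the curved surface and the current eye position distort the projection, ends up where a straight-on view expects it. The sketch below assumes the warp map has already been computed from the stored surface shape and viewpoint; the function and array names are hypothetical.

```python
import numpy as np

def predistort(image: np.ndarray, warp_map: np.ndarray) -> np.ndarray:
    """Inverse-warp `image` so it looks rectilinear after projection.
    warp_map[v, u] = (src_y, src_x): source pixel for each output pixel."""
    ys = np.clip(warp_map[..., 0].round().astype(int), 0, image.shape[0] - 1)
    xs = np.clip(warp_map[..., 1].round().astype(int), 0, image.shape[1] - 1)
    return image[ys, xs]                     # nearest-neighbour resampling

# Sanity check: an identity warp map leaves the image unchanged.
img = np.arange(12, dtype=np.uint8).reshape(3, 4)
idy, idx = np.meshgrid(np.arange(3), np.arange(4), indexing="ij")
assert (predistort(img, np.stack([idy, idx], axis=-1)) == img).all()
```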
  • Next, the ECU 2 generates a frame image (G10, see FIG. 5; this corresponds to a boundary image) indicating the display range of the projection plane (C) (S16), and transmits the frame image (G10) and the corrected converted image (G2), more precisely, the display image (G11) that fits within the converted image (G2), to the projection device 7 (S17). The transmitted frame image (G10) and display image (G11) are projected onto the projection plane (C) by the projection device 7, as shown in FIG. 5.
  • As a result, the display image (G11) is displayed without distortion in the driver's visual field and accurately overlapping it.
  • However, when part of the converted image falls outside the projection plane (C), the entire display image (G11) cannot be displayed.
  • In FIG. 5, a display image (G11) prompting a course change is displayed, but the portion indicated by the broken line, located outside the frame image (G10) on the left, is not displayed.
  • Then, the ECU 2 determines whether to end the display (S18).
  • Here, ending the display means the following: when, for example, the display image has been superimposed on a traffic guide sign, then once the vehicle passes the sign, or once it is confirmed that the driver has recognized the notified information, there is no longer any point in displaying the image, so its display is ended to avoid obstructing the driver's visual field.
  • If the display is not to be ended, the ECU 2 returns to step S12 and repeats the processing from the specification of the driver's viewpoint position. In particular, when the driver's viewpoint position changes, the processing returns to step S12 so that the display image (G11) follows the change. Thus, a display image (G11) matching the change in the driver's viewpoint position is presented. The overall per-frame loop is sketched below.
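  • The control flow of steps S10 through S18 can be summarized as the following schematic loop. The `ecu` object and its method names are hypothetical stand-ins for units 10a to 10f, not an API from the patent.

```python
def run_display_loop(ecu):
    scene = ecu.acquire_3d_scene()                    # S10: camera (+ radar) input
    image_g1 = ecu.generate_image(scene)              # S11: camera-frame image
    while not ecu.should_end_display():               # S18: passed sign? acknowledged?
        eye = ecu.detect_viewpoint()                  # S12: driver eye position
        g2 = ecu.view_transform(image_g1, eye)        # S13: camera -> driver frame
        surface = ecu.projection_surface_shape()      # S14: stored shape of (C)
        g2 = ecu.correct_distortion(g2, surface, eye) # S15: pre-distort for (C)
        frame = ecu.make_frame_image()                # S16: boundary image (G10)
        ecu.project(frame, ecu.clip_to_frame(g2))     # S17: display what fits (G11)
```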
  • Incidentally, when part of the display image (G11) is cut off, the driver behaves in a characteristic way: to find out what lies on the hidden left side of the display image (G11), the driver shifts the face to the right, as shown in FIG. 6. That is, to see the left side of the display image (G11), the driver moves the face in the opposite direction.
  • This is easy to picture if the frame image (G10) is thought of as a window frame: if a building has a window and you want to see a part on the left that is not visible, you shift your face to the right. In other words, when the driver's face shifts, the image to be displayed lies in the direction opposite to the direction in which the face moved.
  • For this reason, the ECU 2 displays the frame image (G10) so that the driver intuitively understands in which direction to move the face to see a part that is not displayed. The same applies when the face moves vertically as well as horizontally.
  • The frame image (G10) is not limited to the rectangular frame illustrated in FIG. 5; it may be an L-shaped or cross-shaped figure indicating the four corners, and broken or double lines may be used instead of solid lines. Any image that can indicate the boundary of the projection plane (C), that is, the limit of the display range, may be used.
  • In this manner, the ECU 2 displays the image projected from the projection device 7 so that it accurately overlaps the actual landscape the driver is viewing. The embodiment described above provides the following effects.
  • The ECU 2 includes: the image generation unit 10a, which generates an image (G1) in a coordinate system based on the viewpoint position of the camera 3 that captures the traveling direction of the vehicle 20; the viewpoint position specifying unit 10b, which specifies the driver's viewpoint position; the superimposed position specifying unit 10c, which specifies the display position; the image conversion unit 10d, which generates the converted image (G2) by converting the image (G1) into a coordinate system based on the driver's viewpoint position; and the notification unit 10e, which uses the projection device 7 to display the display image (G11) that fits within the converted image so as to overlap the driver's field of view.
  • Thereby, the image (G1), generated so as to overlap the landscape in the camera 3's visual field, is converted into the converted image (G2), which overlaps the landscape in the driver's visual field, and the display image (G11) that fits within the converted image is displayed so as to overlap the driver's visual field. The image projected from the projection device 7 can therefore be displayed accurately superimposed on the actual landscape the driver is viewing.
  • The ECU 2 corrects the converted image according to the surface shape of the projection plane (C). Therefore, even when the projection plane (C) is curved along the windshield 31, an image without distortion as seen from the driver can be displayed superimposed on the landscape.
  • The ECU 2 also corrects the converted image according to the driver's viewpoint position, so distortion caused by the positional relationship between the driver's face and the projection plane (C) can likewise be suppressed.
  • The ECU 2 converts the image (G1) according to the movement of the driver's viewpoint position. As shown in FIGS. 5 and 6 above, when the face position shifts, the appearance changes as well; regenerating the image (G1) therefore allows a display image (G11) matching the face position to be displayed. In this case, the generated image (G1) may be stored and reconverted according to the movement of the viewpoint position.
  • The ECU 2 ends the display of the image when, for example, the display image superimposed on a traffic guide sign is no longer needed because the vehicle has passed the sign, or when it is confirmed that the driver has recognized the notified information.
  • This reduces the possibility of obstructing the driver's field of view.
  • The ECU 2 generates and displays the frame image (G10), a boundary image indicating the display range of the projection plane (C). This allows the driver to intuitively understand, for example, that the face should be shifted to the right when the left side of the display image (G11) is cut off.
  • The same effects as those of the ECU 2 can also be obtained with a vehicle program for executing: a process of generating the image (G1) in a coordinate system based on the viewpoint position of the camera 3; a process of specifying the driver's viewpoint position; a process of specifying the display position when displaying the image (G1) in the driver's field of view; a process of generating the converted image (G2) by converting the image (G1) into a coordinate system based on the driver's viewpoint position; and a process of displaying the display image (G11) that fits within the converted image (G2) so as to overlap the driver's field of view. With this program, too, the image projected from the projection device 7 can be accurately superimposed on the actual scenery the driver views.
  • The embodiment shows an example in which an image to be superimposed on an object in the field of view is generated and then converted.
  • Alternatively, an image that surrounds the object, that is, an image whose interior is not filled, may be displayed.
  • In that case, a contour image indicating the contour of the object may be generated in step S10 of FIG. 4, the contour image converted in step S13, and the converted image corrected in step S15.
  • Likewise, when displaying an image whose interior is simply filled, a contour image showing the outline of the image may first be generated in step S10, converted in step S13, and corrected in step S15; the interior of the converted image is then filled when it is displayed in step S17. As a result, the conversion process need not be performed on the interior of the image, which only has to be filled at display time, so a reduction in processing load can be expected. A sketch of this contour-only optimization follows below.
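  • The following is a minimal sketch of the contour-only idea: transform just the outline vertices, then rasterize the filled interior at display time. All names and the toy projection are assumptions for illustration.

```python
def display_filled(contour_pts, transform, fill_and_draw):
    """Transform only the outline (S13/S15), fill the interior at display
    time (S17) instead of converting every interior pixel."""
    screen_poly = [transform(p) for p in contour_pts]  # few points -> cheap
    fill_and_draw(screen_poly)                         # rasterize filled polygon

# Usage sketch with stand-in callables:
square = [(0, 0, 10), (1, 0, 10), (1, 1, 10), (0, 1, 10)]
display_filled(square,
               transform=lambda p: (p[0] / p[2], p[1] / p[2]),  # toy projection
               fill_and_draw=print)
```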
  • The embodiment shows an example in which the image (G1) is generated in a coordinate system based on the viewpoint position of the camera 3; the image (G1) may instead be generated based on the driver's viewpoint position shown in FIG. 2 or FIG. 3.
  • The embodiment also shows an example in which the image is displayed based on the driver's current viewpoint position.
  • In addition, a time-series change of the driver's viewpoint position may be stored as a movement history in the storage unit 11 or the like, the position of the driver's face may be predicted based on the movement history, and the displayed image may be corrected based on the prediction. That is, the image conversion unit 10d may perform the image conversion and the correction according to the shape of the projection plane (C) so as to correspond to the viewpoint position predicted by the prediction unit 10f.
  • The movement of the driver's viewpoint position may be predicted not only from the movement history but also from whether the undisplayed portion of the display image (G11) lies at the top, bottom, left, or right of the projection plane (C). This is because, when the image extends beyond the display range and is cut off, the driver can be expected to move the face to see the cut-off portion. A simple history-based predictor is sketched below.
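  • The following is a minimal sketch of one possible realization of such a predictor, using constant-velocity extrapolation over a short history. The class, its parameters, and the extrapolation rule are assumptions; the patent does not prescribe a prediction method.

```python
from collections import deque

class ViewpointPredictor:
    """Predicts the driver's viewpoint from a short movement history."""
    def __init__(self, horizon_s: float = 0.1, maxlen: int = 10):
        self.history = deque(maxlen=maxlen)   # (t, x, y, z) samples
        self.horizon = horizon_s              # how far ahead to predict [s]

    def observe(self, t, xyz):
        self.history.append((t, *xyz))

    def predict(self):
        if len(self.history) < 2:
            return self.history[-1][1:] if self.history else None
        t0, *p0 = self.history[0]
        t1, *p1 = self.history[-1]
        dt = max(t1 - t0, 1e-6)
        # extrapolate from the latest sample at the mean velocity
        return tuple(b + (b - a) / dt * self.horizon
                     for a, b in zip(p0, p1))
```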
  • In the embodiment, the converted image is corrected according to both the surface shape of the projection plane (C) and the driver's viewpoint position; depending on the surface shape of the projection plane (C) and the positional relationship of the driver's viewpoint, the correction may instead be made according to only one of them.
  • In the embodiment, the ECU 2 and the projection device 7 are provided separately, but they may be provided integrally; that is, the function of the ECU 2 may be mounted on the projection device 7. Alternatively, the function of the ECU 2 may be implemented in a drive recorder device or a navigation device that is already connectable to the camera 3, or in a tablet terminal or smartphone owned by the driver.
  • In the embodiment, the converted image is regenerated according to the movement of the driver's viewpoint position.
  • Alternatively, the converted image may be stored and re-corrected according to the movement of the viewpoint position.
  • The display image (G11) may be either a still image or a moving image.
  • With the configuration of the embodiment described above, the landscape seen from the position of the driver's eyes and the image projected on the projection plane (C) can be superimposed. However, because factors such as the structure of the human eye and the way the brain recognizes figures are still not fully explained, such factors may conceivably make the image displayed on the projection plane (C) feel shifted from the landscape or distorted.
  • Therefore, a coordinate conversion filter may be designed by executing the filter design program shown in FIG. 7. In FIG. 7, steps that perform substantially the same processing as in FIG. 4 of the embodiment are not described in detail.
  • The ECU 2 first acquires three-dimensional information (S20) and generates a reference image (S21). This reference image is a relatively simple figure, such as a lattice figure composed of right angles.
  • Next, the ECU 2 specifies the viewpoint position of a tester sitting in the driver's seat (S22) and generates a converted image by converting the reference image into a coordinate system based on the specified viewpoint position (S23).
  • The ECU 2 then acquires the shape of the projection plane (C) (S24), corrects the converted image according to the shape of the projection plane (C) and the viewpoint position (S25), and generates a frame image (S26). The corrected converted image and the frame image are transmitted to the projection device (S27), and the image is displayed on the projection plane (C).
  • Next, the ECU 2 corrects distortion of the displayed image (S28). More precisely, in step S28, the distortion of the image being displayed is corrected according to the tester's operations until the image looks correct to the tester.
  • When the image has been corrected, the ECU 2 acquires the correction amount used to correct the distortion (S29), associates it as the correction amount for the current viewpoint position, and creates or updates the coordinate conversion filter (S30).
  • When this flow is executed for the first time, a coordinate conversion filter is newly created in step S30; from the second execution onward, the coordinate conversion filter is updated in step S30. That is, a coordinate conversion filter that can specify a correction amount according to the viewpoint position is designed.
  • By repeating this while changing the viewpoint position, the coordinate conversion filter in which the driver's viewpoint position and the correction amount at that viewpoint position are associated with each other is built up.
  • The coordinate conversion filter may be designed in advance, or may be generated after the driver starts using the device. One possible data structure for such a filter is sketched below.
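  • The following is a minimal sketch of one way to store and query such a filter: calibrated (viewpoint, correction) samples with inverse-distance interpolation between them. The class name, the storage scheme, and the interpolation are assumptions; the patent only requires that a correction amount can be specified for a viewpoint position.

```python
import numpy as np

class CoordinateConversionFilter:
    """Maps a viewpoint position to a display-correction amount."""
    def __init__(self):
        self.samples = []        # list of (viewpoint xyz, correction vector)

    def add(self, viewpoint, correction):
        self.samples.append((np.asarray(viewpoint, float),
                             np.asarray(correction, float)))

    def correction_for(self, viewpoint):
        vp = np.asarray(viewpoint, float)
        pts = np.array([p for p, _ in self.samples])
        cors = np.array([c for _, c in self.samples])
        d = np.linalg.norm(pts - vp, axis=1)
        if d.min() < 1e-9:                   # exactly at a calibrated position
            return cors[d.argmin()]
        w = 1.0 / d**2                       # inverse-distance weights
        return (w[:, None] * cors).sum(0) / w.sum()

flt = CoordinateConversionFilter()
flt.add((0.0, 1.15, 0.0), (0.0, -2.0))       # correction measured at S29
flt.add((0.1, 1.20, 0.0), (1.0, -1.0))
print(flt.correction_for((0.05, 1.17, 0.0))) # interpolated in between
```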
  • This coordinate conversion filter is used during actual operation as follows.
  • In this case, the ECU 2 executes the vehicle program shown in FIG. 8.
  • In FIG. 8, the same steps as in FIG. 4 of the embodiment carry the same step numbers, and their detailed description is omitted.
  • The ECU 2 executes the processing from step S10 in the same manner as in the embodiment, and when the correction of the converted image (S15) and the generation of the frame image (S16) are complete, it refers to the coordinate conversion filter described above and performs a final correction of the image (S40). In step S40, the correction amount associated with the driver's viewpoint position is applied. The processing of step S40 is performed by the image conversion unit 10d.
  • Then, after transmitting and displaying the corrected image and the frame image (S17), the ECU 2 determines whether the driver's line of sight is directed at the displayed image (S41). If the line of sight is directed at the displayed image (S41: YES), the ECU 2 detects fluctuation of the line of sight (S42). The fluctuation of the line of sight is detected as the amount of deviation between the center of the driver's line of sight and the display position of the image.
  • The direction of the line of sight can be identified when the viewpoint position is specified by the viewpoint position specifying unit.
  • If the detected fluctuation exceeds a specified value (S43: YES), the ECU 2 judges that the display position of the image deviates from where the driver perceives it, and reflects the detected fluctuation in the coordinate conversion filter (S44).
  • In this way, the correction amounts for the driver, that is, the coordinate conversion filter, are learned. If the line of sight is not directed at the image (S41: NO), or if the fluctuation is below the specified value (S43: NO), the processing ends. This learning step is sketched below.
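  • The following is a minimal sketch of the S41 to S44 feedback, reusing the hypothetical CoordinateConversionFilter above. The threshold, learning rate, and update rule are assumptions for illustration.

```python
def learn_from_gaze(flt, viewpoint, gaze_center, display_pos,
                    threshold=5.0, rate=0.5):
    """Feed a gaze/display deviation back into the filter (S41-S44)."""
    dev = tuple(g - d for g, d in zip(gaze_center, display_pos))
    if max(abs(c) for c in dev) < threshold:   # S43: below the specified value
        return                                 # nothing to learn
    old = flt.correction_for(viewpoint)
    new = tuple(o + rate * c for o, c in zip(old, dev))
    flt.add(viewpoint, new)                    # S44: update the filter
```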
  • The vehicle used when the coordinate conversion filter is designed in advance may differ from the vehicle in which the ECU 2 is actually installed.
  • To deal with different drivers, the coordinate conversion filter may be newly created each time the vehicle is used; to deal with the same driver, the past state may be saved and reused. Of course, a new filter may also be created when the driver requests it.

Landscapes

  • Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Physics & Mathematics (AREA)
  • Remote Sensing (AREA)
  • Theoretical Computer Science (AREA)
  • Radar, Positioning & Navigation (AREA)
  • Computer Hardware Design (AREA)
  • Automation & Control Theory (AREA)
  • Chemical & Material Sciences (AREA)
  • Combustion & Propulsion (AREA)
  • Transportation (AREA)
  • Mechanical Engineering (AREA)
  • Controls And Circuits For Display Device (AREA)
  • Traffic Control Systems (AREA)
  • Closed-Circuit Television Systems (AREA)

Abstract

An ECU 2 serving as a vehicle device is provided with: an image generation unit 10a that generates an image (G1) in a predetermined reference coordinate system; a viewpoint position identification unit 10b that identifies the viewpoint position of the driver; a superimposed position identification unit 10c that identifies a display position for displaying the image (G1) in the driver's field of view; an image conversion unit 10d that generates a converted image (G2), which results from converting the image (G1) to a coordinate system based on the driver's viewpoint position; and a notification unit 10e that uses a projection device 7 to display a display image (G11), which fits into the converted image, so as to be overlaid in the driver's field of view.

Description

Vehicle device, vehicle program, and filter design program

Cross-reference of related applications

This application is based on Japanese Patent Application No. 2016-074383 filed on April 1, 2016, and Japanese Patent Application No. 2016-218084 filed on November 8, 2016, the contents of which are incorporated herein by reference.

The present disclosure relates to a vehicle device, a vehicle program, and a filter design program.

There are projection devices that display an image transparently on a projection surface such as a windshield, thereby superimposing the image on the landscape the driver is viewing. In recent years, vehicle devices that support driving by using such a projection device have appeared; for example, they display the route direction at a complicated intersection as an arrow aligned with the road the vehicle actually travels, or alert the driver by drawing a frame around a road sign.

Now, the position of the driver's face is not necessarily fixed. Depending on the position of the driver's face, the superimposed image may be displayed shifted from the actual landscape, losing the benefit of a projection device that superimposes images on the real scenery, and an image displayed at a shifted position may even mislead the driver. For this reason, Patent Document 1, for example, proposes devising the image display method to reduce the image shift that depends on the driver's viewpoint position.

Patent Document 1: JP 2005-69800 A

However, as a result of the inventors' detailed study, the following problem was found: the size of the projection surface is limited, and when the driver's face shifts up and down, back and forth, or left and right, the driver's eye position also shifts, so the positional relationship between the driver's eyes and the projection surface changes; for these reasons, the fundamental display position misalignment cannot be solved.

An object of the present disclosure is to provide a vehicle device, a vehicle program, and a filter design program capable of displaying an image projected from a projection device so that it accurately overlaps the actual scenery the driver is viewing.

A vehicle device according to one aspect of the present disclosure includes: an image generation unit that generates an image to notify the driver in a predetermined reference coordinate system; a viewpoint position specifying unit that specifies the viewpoint position indicating the position of the driver's eyes in the passenger compartment, based on the driver's eye position detected by a viewpoint detection unit; a superimposed position specifying unit that specifies the display position when the generated image is displayed in the driver's field of view; an image conversion unit that generates a converted image, which is the generated image converted into a coordinate system based on the driver's viewpoint position; and a notification unit that notifies the driver of information by displaying the converted image with a projection device so that it overlaps the driver's field of view.

The above and other objects, features, and advantages of the present disclosure will become clearer from the following detailed description with reference to the accompanying drawings. In the drawings:

FIG. 1 schematically shows the electrical configuration of the ECU of a first embodiment; FIG. 2 schematically shows an example of the relationship between the camera's viewpoint position and the driver's viewpoint position; FIG. 3 schematically shows an example of the relationship between the position of the driver's face and the visual field; FIG. 4 shows the flow of processing of the vehicle program by the ECU; FIG. 5 is a first diagram schematically showing an example of a mode in which images are superimposed and displayed; FIG. 6 is a second diagram schematically showing an example of a mode in which images are superimposed and displayed; FIG. 7 shows the flow of coordinate conversion filter generation processing by the ECU of another embodiment; FIG. 8 is a first diagram showing the flow of the vehicle program by the ECU of another embodiment; and FIG. 9 is a second diagram showing the flow of the vehicle program by the ECU of another embodiment.
Embodiments will be described below with reference to FIGS. 1 to 6.

As shown in FIG. 1, a vehicle system 1 includes an ECU 2 (Electronic Control Unit) as a vehicle device, a camera 3 as an imaging unit, a millimeter wave radar 4, sensors 5, a viewpoint detection unit 6, a projection device 7, a speaker 8, a microphone 9, and the like.

The ECU 2 is provided in the vehicle 20 (see FIG. 2). The ECU 2 may be fixed to the vehicle 20 or may be detachable from it. The ECU 2 includes a control unit 10, a storage unit 11, an operation switch 12, and the like. The control unit 10 is a microcomputer including a CPU, a ROM, a RAM, and the like (not shown). The control unit 10 controls the ECU 2 by executing, for example, a control program stored in the storage unit 11.

The storage unit 11 is a recording medium that can read and write data, and stores the control program described above, a vehicle program described later, programs for image processing, and various data. That is, the storage unit 11 functions as a recording medium that stores the vehicle program. The storage unit 11 also functions as a recording medium that stores a filter design program described later. The recording medium is not limited to the storage unit 11; a recording medium detachable from the ECU 2 can also be used.

The storage unit 11 also stores data such as the mounting position of the camera 3, the three-dimensional shape of the projection surface (C, see FIG. 2), and the driver's dominant eye. The operation switch 12 inputs the user's various operations on the ECU 2.

The control unit 10 includes an image generation unit 10a, a viewpoint position specifying unit 10b, a superimposed position specifying unit 10c, an image conversion unit 10d, a notification unit 10e, a prediction unit 10f, and the like. In the present embodiment, each of these units (10a to 10e) is realized in software by the control unit 10 executing a program. However, each unit (10a to 10e) can also be provided by hardware, or by a combination of hardware and software.
The image generation unit 10a generates an image (G1, see FIG. 2) to notify the driver in a coordinate system based on the viewpoint position of the camera 3, which captures the traveling direction of the vehicle 20. That is, in the present embodiment, the reference coordinate system is set as a coordinate system based on the camera 3's mounting position, and the image generation unit 10a generates the image as it would appear when displayed in the camera 3's field of view.

The image (G1) shown in FIG. 2 is actually projected onto the projection plane (C); in FIG. 2 it is drawn at a position away from the vehicle 20 to schematically show that, to the driver, the image appears to exist at some distance ahead. Also, although FIG. 2 shows the image (G1) as a simple two-dimensional image for simplicity of explanation, it may of course be an image with a complicated shape that appears three-dimensional to the driver; in that case, a three-dimensional stereoscopic image with depth, like so-called 3D video, is displayed in the driver's field of view without a sense of incongruity.

The viewpoint position specifying unit 10b specifies the viewpoint position indicating the position of the driver's eyes in the passenger compartment, based on the eye position detected by the viewpoint detection unit 6. In other words, the viewpoint position specifying unit 10b specifies the positional relationship between the driver's eyes and the projection plane (C).

Based on the viewpoint position of the camera 3 and the driver's viewpoint position, the superimposed position specifying unit 10c specifies the display position when displaying the image on the projection plane (C), that is, the display position when displaying the image with the driver's viewpoint position as the reference.

The image conversion unit 10d converts the image into a coordinate system based on the driver's viewpoint position, thereby generating the converted image (G2, see FIG. 2), that is, the image actually projected onto the projection plane (C).

The notification unit 10e notifies the driver by using the projection device 7 to display the converted image (G2) so that it overlaps the driver's field of view. As described later, the notification unit 10e displays the image in real time, following changes in the driver's viewpoint position. Character information or the like may also be displayed together.
The prediction unit 10f predicts the driver's viewpoint position based on a movement history indicating the time-series change of the driver's viewpoint position.

The camera 3 is a CCD camera or a CMOS camera, and captures the landscape in the traveling direction of the vehicle 20. The millimeter wave radar 4 radiates radio waves and detects the distance to an object based on the wave reflected by the object. The sensors 5 consist of, for example, infrared sensors or proximity sensors, and detect surrounding objects.

The viewpoint detection unit 6 detects the position of the driver's eyes. Since various methods for this are well known, the viewpoint detection unit 6 may use any of them; a method that detects the eye position by image processing alone, without requiring the driver to wear special equipment, is desirable.

The projection device 7 projects a virtual image onto the transparent windshield 31 (see FIG. 2) or onto a combiner provided in the driver's field of view; such a device is also called a head-up display. In the present embodiment, a part of the windshield 31 serves as the projection plane (C).

The speaker 8 announces, by voice, response sounds to operations on the ECU 2 and messages to the driver; the notifications described below may also be given through the speaker 8. The microphone 9 inputs operations on the ECU 2, for example by voice.

The ECU 2 is also connected to a position detection unit 13. The position detection unit 13 includes a GPS unit, a navigation device, and the like, and detects the current position of the host vehicle. It also has map data 13a for realizing a navigation function; the map data 13a may instead be stored in the storage unit 11.
Next, the operation of the above configuration will be described.

As mentioned above, using the projection device 7 to superimpose various kinds of information on the actual landscape has the merit of, for example, alerting the user to dangers the user may not have noticed. On the other hand, if a positional shift arises between the actual landscape and the superimposed image, the merit is lost, and incorrect information may even be conveyed.

Here, the difference between the viewpoint position of the camera 3 and the driver's viewpoint position, and why the superimposed position must be obtained, will be explained. As shown in FIG. 2, the camera 3 is assumed to be mounted near the ceiling of the cabin of the vehicle 20, at a position from which the front of the vehicle 20 can be imaged. The viewpoint position of the camera 3, that is, the center position of the camera 3's visual field (A1), can be obtained from the camera's mounting position.

The visual field (A2) of the driver (M) is specified by detecting the eye position of the driver (M) with the viewpoint detection unit 6 provided in the cabin.

When the camera 3 captures the actual landscape and the image generation unit 10a generates an image (G1) to superimpose on that landscape, the image generation unit 10a generates the image (G1) so that it overlaps the landscape within the camera 3's visual field (A1). The position of the upper end of the image (G1) in the camera 3's visual field (A1) is referred to as the upper end position (Lc0) for convenience.
For example, suppose the image (G1) generated with the camera 3's viewpoint position as reference lies a distance (L1) below the center line (Lc1) of the camera 3; that is, the image (G1) lies below the center in the coordinate system of the camera 3's visual field (A1).

From the driver's (M) visual field (A2), however, this image (G1) lies a distance (L2) above its center line (Lc2); that is, the image (G1) lies above the center in the driver's (M) coordinate system.

Therefore, if the image (G2) is displayed in the coordinate system based on the camera 3's viewpoint position as it is, then, since the projection plane (C) lies between the driver (M) and the actual landscape, the image (G2) is displayed below the center line (Lc2) as seen from the driver (M). As a result, the image (G2) no longer overlaps the scenery the driver (M) is viewing.

The same applies when the position of the driver's (M) face shifts left or right, as shown in FIG. 3. The visual field when the driver's face is at the central position (A20, corresponding to A2 above), the visual field when the face shifts to the right (A21), and the visual field when it shifts to the left (A22) each see a different landscape through the projection plane (C), so the image (G1) may not properly overlap the landscape. Hereinafter, the central position is also called the normal position for convenience. Moreover, a relatively tall driver and a relatively short driver have different face positions to begin with, and even for the same driver, the face position can change because the driving posture varies during driving or because the seat is reclined.

Therefore, the ECU 2 makes it possible, as follows, to display the image projected from the projection device 7 accurately superimposed on the actual scenery the driver is viewing. More specifically, the ECU 2 generates, converts, and displays the image in real time so that it overlaps the driver's field of view, following changes in the scenery caused by the movement of the vehicle 20 and changes in the driver's viewpoint position.
 ECU2は、図3に示す各処理を実行する。これらの処理は、制御部10において車両用プログラムを実行することによって行われる。なお各処理は、上記した各部(10a~10e)によって行われるものの、説明の簡略化のために、以下ではECU2を主体として説明する。 The ECU 2 executes each process shown in FIG. These processes are performed by executing the vehicle program in the control unit 10. Each process is performed by each of the above-described units (10a to 10e), but for the sake of simplification of description, the ECU 2 will be mainly described below.
The ECU 2 first acquires three-dimensional information within the field of view (A1) of the camera 3 (S10). That is, the ECU 2 captures the scenery ahead of the vehicle 20 with the camera 3 and detects or extracts objects and the like contained in the captured image. The ECU 2 then generates the image (G1) to be presented to the driver, in a coordinate system referenced to the viewpoint position of the camera 3 (S11).
Subsequently, the ECU 2 identifies the driver's viewpoint position (S12) and generates the converted image (G2; see FIG. 2), which is the generated image (G1) converted into a coordinate system referenced to the driver's viewpoint position (S13). In other words, the ECU 2 performs a conversion process that turns the generated image (G1) into the converted image (G2). The ECU 2 applies this conversion not to the entire display screen but only to the image to be presented to the driver, which reduces the processing load.
As the coordinate conversion technique for converting into the coordinate system referenced to the driver's viewpoint position, a so-called view transformation may be used, for example. Since the view transformation is a technique commonly used in the field of three-dimensional computer graphics, only an outline is given here.
The view transformation is a method for representing a figure, generated as seen from one viewpoint position, as it would appear from another viewpoint position in three-dimensional space. Specifically, the view transformation converts the image by combining two coordinate transformations: a translation, using a translation matrix, that moves the image from its position in the reference coordinate system in which the figure was generated so as to align it with the coordinate system referenced to the viewer's viewpoint position, and a rotation, using a rotation matrix, that rotates the translated image toward that viewpoint position.
In step S13 above, the ECU 2 uses this view transformation to convert the image (G1), generated in the reference coordinate system referenced to the camera 3, into the converted image (G2) in the coordinate system referenced to the viewpoint position of the driver who will actually see it. If the image is three-dimensional, the conversion preserves its three-dimensional appearance, including depth.
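The disclosure contains no source code; as an editorial illustration only, the translation-plus-rotation view transformation described above can be sketched in Python with NumPy as follows. The function name, the look-at formulation, and the axis conventions are assumptions made for the sketch, not part of the patent.

```python
import numpy as np

def view_matrix(eye, target, up=(0.0, 1.0, 0.0)):
    """Look-at view transform: translate by -eye, then rotate the world
    axes into the viewer's basis (right, up, backward)."""
    eye, target, up = (np.asarray(v, dtype=float) for v in (eye, target, up))
    f = target - eye
    f /= np.linalg.norm(f)                  # forward axis
    r = np.cross(f, up)
    r /= np.linalg.norm(r)                  # right axis
    u = np.cross(r, f)                      # true up axis
    rot = np.eye(4)
    rot[:3, :3] = np.vstack([r, u, -f])     # rotation part
    trans = np.eye(4)
    trans[:3, 3] = -eye                     # translation part
    return rot @ trans

# A point known in the camera-3 frame can be re-expressed in the driver's
# frame by going through world coordinates:
#   p_world  = np.linalg.inv(V_camera) @ p_camera
#   p_driver = V_driver @ p_world
```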
When the converted image (G2) is projected, the shape of the projection surface (C) must be taken into account. If, for example, the converted image were a square and were projected onto the projection surface (C) as is, the image could be distorted by the surface shape of the projection surface (C) and appear as a trapezoid or even a figure bounded by complex curves. The image may also be distorted depending on the driver's viewpoint position, that is, on the positional relationship between the position of the driver's face and the projection surface (C).
The ECU 2 therefore acquires the three-dimensional shape, that is, the surface shape, of the projection surface (C) (S14) and corrects the converted image (G2) according to that surface shape and the driver's viewpoint position (S15). This yields the corrected converted image (G2), that is, an image free of distortion as seen from the driver.
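As an editorial sketch of the simplest case of this correction, assuming OpenCV and hypothetical calibration values: a single homography handles only the planar (keystone) component, and a curved windshield would need a denser per-pixel remap (e.g., cv2.remap), but the principle of pre-applying the inverse of the measured distortion is the same.

```python
import numpy as np
import cv2  # OpenCV, assumed available

img = cv2.imread("g2.png")        # converted image (G2); hypothetical file
h, w = img.shape[:2]

# Corners as they should be perceived (an undistorted rectangle) ...
desired = np.float32([[0, 0], [w - 1, 0], [w - 1, h - 1], [0, h - 1]])
# ... and where they are actually perceived for the current viewpoint and
# surface shape (hypothetical values from the S14 measurement).
observed = np.float32([[14, 22], [w - 20, 8], [w - 38, h - 10], [30, h - 22]])

D = cv2.getPerspectiveTransform(desired, observed)              # measured distortion
prewarped = cv2.warpPerspective(img, np.linalg.inv(D), (w, h))  # pre-apply D^-1
```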
The ECU 2 then generates the frame image (G10; see FIG. 5; corresponding to the boundary image), which indicates the display range of the projection surface (C) (S16), and transmits to the projection device 7 the frame image (G10) together with the corrected converted image (G2), or more precisely the display image (G11) that fits within the converted image (G2) (S17). The transmitted frame image (G10) and display image (G11) are projected onto the projection surface (C) by the projection device 7, as shown in FIG. 5.
As a result, the display image (G11) appears in the driver's field of view without distortion and accurately overlapping it.

However, when the size of the projection surface (C) is limited, the information to be presented to the driver, that is, the whole of the display image (G11), may not fit. In the case of FIG. 5, for example, although the display image (G11) prompting a course change is displayed, the portion of the display image (G11) indicated by the broken line to the left of the frame image (G10) is not displayed.
The ECU 2 therefore determines whether to end the display (S18). Ending the display means ending the notification to the driver. For example, when the vehicle has passed a traffic guide sign for which a display image was being shown, or when it has been confirmed that the driver has recognized the presented information, there is no longer any point in displaying the image, so the display is ended so as not to obstruct the driver's field of view.
When the ECU 2 determines that the display should end (S18: YES), it terminates the processing. When it determines that the display should not end (S18: NO), it returns to step S12 and repeats the processing from the identification of the driver's viewpoint position onward. The processing returns to step S12 so that, when the driver's viewpoint position changes, a display image (G11) that follows the change can be displayed. In this way, the display image (G11) is presented in accordance with changes in the driver's viewpoint position.
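Putting steps S10 through S18 together, the real-time loop of FIG. 4 can be summarized by the following editorial sketch; the ecu object and all of its method names are hypothetical stand-ins for the units 10a to 10e, not an API defined by the disclosure.

```python
def run_display_loop(ecu):
    scene = ecu.acquire_3d_information()                # S10
    g1 = ecu.generate_image(scene)                      # S11: camera-3 frame
    while True:
        eye = ecu.identify_driver_viewpoint()           # S12
        g2 = ecu.view_transform(g1, eye)                # S13
        surface = ecu.acquire_surface_shape()           # S14
        g2 = ecu.correct(g2, surface, eye)              # S15
        g10 = ecu.generate_frame_image(surface)         # S16
        ecu.send_to_projector(g10, g2)                  # S17
        if ecu.should_end_display():                    # S18
            break
```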
Specifically, when the left side of the display image (G11) is cut off, the driver moves his or her face to the right, as shown in FIG. 6, in order to see what lies on the cut-off left side. In other words, to see the left side of the display image (G11), the driver moves the face in the opposite direction, that is, to the right.
This is easy to understand if the frame image (G10) is imagined as a window frame. When a building has a window and a person wants to see something to the left that the window does not show, the person shifts his or her face to the right. That is, when the position of the driver's face shifts, the image content that becomes visible lies in the direction opposite to the direction in which the face moved.
The ECU 2 therefore displays the frame image (G10) to let the driver know intuitively in which direction to move the face in order to see the portion that is not displayed. The same applies when the face moves vertically as well as horizontally.
Note that the frame image (G10) is not limited to the rectangular frame illustrated in FIG. 5; it may be L-shaped or cross-shaped figures indicating the four corners, or a broken or double line instead of a solid line. Any image may be used as long as it can indicate the boundary of the projection surface (C), that is, the limit of the display range.
In this way, the ECU 2 superimposes the image projected from the projection device 7 accurately on the actual scenery the driver is viewing.

The embodiment described above provides the following effects.
The ECU 2 includes: an image generation unit 10a that generates an image in a coordinate system referenced to the viewpoint position of the camera 3, which captures the traveling direction of the vehicle 20; a viewpoint position identification unit 10b that identifies the driver's viewpoint position; a superimposition position identification unit 10c that identifies the display position at which the image (G1) is to be displayed in the driver's field of view; an image conversion unit 10d that generates the converted image (G2), which is the image (G1) converted into a coordinate system referenced to the driver's viewpoint position; and a notification unit 10e that uses the projection device 7 to display the display image (G11), which fits within the converted image, so as to overlap the driver's field of view.
With this configuration, the image (G1), generated so as to overlap the scenery in the field of view of the camera 3, is converted into the converted image (G2), which overlaps the scenery in the driver's field of view. The display image (G11), which fits within the converted image, is then displayed so as to overlap the driver's field of view. The image projected from the projection device 7 can therefore be superimposed accurately on the actual scenery the driver is viewing.
That is, even when the scenery changes as the vehicle 20 moves, or the viewpoint position changes because the driver shifts in the seat, the image is generated, converted, and displayed in real time following the change. This suppresses misalignment of the display position, prevents a misaligned image from remaining on display, and keeps the image accurately overlapping the actual scenery the driver is viewing. The driver is thus less likely to experience a sense of incongruity.
Furthermore, by generating an image that can be superimposed on what is actually being viewed even when the viewpoint is not at a fixed position, the benefit of superimposed display on the actual scenery can be provided stably.

In addition, the conversion from the viewpoint position of the camera 3 to the viewpoint position of the driver is applied only to the figure to be superimposed, not to the entire display surface ultimately presented, so the processing load can be reduced.
The ECU 2 corrects the converted image (G2) according to the shape of the projection surface (C) and displays the corrected converted image (G2) so as to overlap the driver's field of view. Thus, even if the projection surface (C) is formed as a curved surface along the windshield 31, for example, an image free of distortion as seen from the driver can be displayed superimposed on the scenery.
The ECU 2 corrects the converted image (G2) according to the driver's viewpoint position and displays the corrected converted image (G2) so as to overlap the driver's field of view. Thus, even if the position of the driver's face changes, an image free of distortion as seen from the driver can be displayed superimposed on the scenery.
The ECU 2 converts the image (G1) in accordance with movement of the driver's viewpoint position. As shown in FIGS. 5 and 6 above, when the face position shifts, the appearance differs as well. Regenerating the image (G1) therefore allows a display image (G11) corresponding to the face position to be displayed. Alternatively, the generated image (G1) may be stored and reconverted in accordance with the movement of the viewpoint position.
Further, when, for example, the vehicle has passed a traffic guide sign for which a display image was being shown, or when it has been confirmed that the driver has recognized the presented information, ending the display of the image reduces the risk of obstructing the driver's field of view.
The ECU 2 generates the frame image (G10; boundary image) indicating the display range of the projection surface (C). This lets the driver grasp intuitively that, for example, when the left side of the display image (G11) is cut off, the face should be shifted to the right.
The same effects as with the ECU 2, including the ability to superimpose the image projected from the projection device 7 accurately on the actual scenery the driver is viewing, can also be obtained with a vehicle program that causes the ECU 2 to execute: a process of generating the image (G1) in a coordinate system referenced to the viewpoint position of the camera 3; a process of identifying the driver's viewpoint position; a process of identifying the display position at which the image (G1) is to be displayed in the driver's field of view; a process of generating the converted image (G2), which is the image (G1) converted into a coordinate system referenced to the driver's viewpoint position; and a process of displaying the display image (G11), which fits within the converted image (G2), so as to overlap the driver's field of view.
(Other embodiments)

The present disclosure is not limited to what has been exemplified in the embodiment described above and may be modified or extended arbitrarily without departing from its gist.
In the embodiment, an example was shown in which an image superimposed on an object in the field of view is generated and converted. Instead of an image superimposed over the whole object, that is, an image whose interior is filled in, an image showing the outline of the object may be displayed. In this case, an outline image showing the contour of the object may be generated in step S10 of FIG. 4, converted in step S13, and the converted image corrected in step S15. For example, when an image is displayed over a traffic sign, the sign itself is easier to see if the image does not cover it; showing only the outline of the traffic sign therefore achieves both notification to the driver and good visibility for the driver.
Also, when an image with a simply filled interior is to be displayed, an outline image showing the outer shape of the image may first be generated in step S10, converted in step S13, and corrected in step S15, and the interior of the converted image may then be filled in when it is displayed in step S17. This makes the conversion process unnecessary for the interior of the image, which merely needs to be filled at display time, so a reduction in processing load can be expected.
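As an editorial sketch of this outline-only variant (the helper project_to_pixels, the data layout, and the fill color are assumptions): only the vertices pass through the S13 conversion, and the fill happens at display time.

```python
import numpy as np
import cv2

def render_filled(outline_xyz, cam_to_driver, project_to_pixels, canvas):
    """Convert only the outline vertices, then fill the polygon cheaply."""
    pts = np.hstack([outline_xyz, np.ones((len(outline_xyz), 1))])  # Nx4 homogeneous
    pts = (cam_to_driver @ pts.T).T              # S13: per-vertex conversion only
    px = project_to_pixels(pts)                  # S15: hypothetical, returns Nx2 pixels
    cv2.fillPoly(canvas, [px.astype(np.int32)], (0, 255, 255))  # S17: plain fill
    return canvas
```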
In the embodiment, an example was shown in which the image (G1) is generated with the driver's viewpoint position as the reference. More precisely, the image (G1) may be generated with the driver's dominant eye as the origin, for example taking the driver's right eye in FIG. 2 or FIG. 3 as the reference. This raises the rate at which the superimposed image coincides with what is actually being viewed, so an image free of incongruity can be displayed. In this case, the dominant eye can also be determined from where the image is cut off at the edge of the display range of the projection surface and from the face position, identified in step S12 of FIG. 4, relative to that cut-off.
In the embodiment, an example was shown in which the image is displayed based on the driver's viewpoint position. Alternatively, the time-series changes of the driver's viewpoint position may be stored as a movement history in the storage unit 11 or the like, the position of the driver's face predicted from that movement history, and the displayed image corrected based on the prediction. That is, the image conversion unit 10d may perform the image conversion and the correction according to the shape of the projection surface (C) so as to correspond to the viewpoint position predicted by the prediction unit 10f.
By predicting the change in the viewpoint position in this way and converting or correcting the image ahead of time based on the prediction, the processing delay is reduced compared with converting or correcting the image only after a change in the face position has been detected, and the display delay of the image can be improved.
In this case, the movement of the driver's face position may be predicted not only from the movement history of the driver's viewpoint position but also from, for example, whether the undisplayed portion of the display image (G11) lies above, below, to the left of, or to the right of the projection surface (C). This is because, when the image is cut off beyond the display range, the user can be expected to move the face to look at the cut-off portion.
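The disclosure does not fix a prediction method; as an editorial sketch, a constant-velocity extrapolation over the stored movement history could look as follows (the sampling rate and prediction horizon are assumed values).

```python
from collections import deque
import numpy as np

history = deque(maxlen=30)   # recent viewpoint positions, assumed sampled at 30 Hz

def predict_viewpoint(history, horizon_s=0.1, dt=1.0 / 30.0):
    """Extrapolate the viewpoint 'horizon_s' seconds ahead at the mean
    velocity observed over the movement history."""
    p = np.asarray(history, dtype=float)
    if len(p) < 2:
        return p[-1]                               # not enough history yet
    v = (p[-1] - p[0]) / ((len(p) - 1) * dt)       # mean velocity
    return p[-1] + v * horizon_s
```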
In the embodiment, the converted image (G2) is corrected according to both the surface shape of the projection surface (C) and the driver's viewpoint position. Depending on the surface shape of the projection surface (C) and the positional relationship with the driver's viewpoint position, however, the correction may be performed according to only one of them.
In the embodiment, the ECU 2 and the projection device 7 are provided as separate bodies, but they may be provided integrally. That is, the functions of the ECU 2 may be implemented in the projection device 7. Alternatively, the functions of the ECU 2 may be implemented in a drive recorder device or navigation device that can already be connected to the camera 3, or in a tablet terminal or smartphone owned by the driver.
In the embodiment, the converted image is regenerated in accordance with movement of the driver's viewpoint position. Alternatively, the converted image may be stored and re-corrected in accordance with the movement of the viewpoint position.
In the embodiment, only graphics are displayed. However, characters and the like can also be displayed without distortion by converting their shapes in the same manner as the conversion from the image (G1) to the display image (G11).
Although the embodiment uses graphics, the display image (G11) may be either a still image or a moving image.

With the configuration of the embodiment described above, the scenery seen from the position of the driver's eyes and the image projected on the projection surface (C) can be made to overlap. Nevertheless, since factors that have not yet been fully explained remain, such as the structure of the human eye and the way the brain recognizes figures, such factors may still cause the driver to feel that the image displayed on the projection surface (C) is offset from the scenery or distorted.
This can be addressed by designing in advance, at a stage before actual use by a driver such as during testing of the ECU 2, a coordinate conversion filter in which correction amounts are set so that, for example, a straight line is correctly perceived as a straight line.
Specifically, the coordinate conversion filter is designed by executing the filter design program shown in FIG. 7. Since parts of FIG. 7 perform substantially the same processing as FIG. 4 of the embodiment, their detailed description is omitted.
When the ECU 2 acquires three-dimensional information (S20), it generates a reference image (S21). The reference image is a relatively simple figure, for example a grid figure composed of right angles. Subsequently, the ECU 2 identifies the viewpoint position of the tester sitting in the driver's seat (S22) and generates a converted image obtained by converting the reference image into a coordinate system referenced to the identified viewpoint position (S23).
After generating the converted image, the ECU 2 acquires the shape of the projection surface (C) (S24), corrects the converted image according to the shape of the projection surface (C) and the viewpoint position (S25), generates a frame image (S26), transmits the corrected converted image and the frame image to the projection device (S27), and causes the image to be displayed on the projection surface (C).
The ECU 2 then corrects the distortion of the displayed image (S28). More precisely, in step S28 the distortion of the image being displayed is corrected in accordance with the tester's operations to adjust it until it appears correct.
Subsequently, the ECU 2 acquires the correction amount applied when the distortion of the image was corrected (S29), associates that correction amount with the current viewpoint position, and creates or updates the coordinate conversion filter (S30). When the processing shown in FIG. 7 is executed for the first time, a coordinate conversion filter is newly created in step S30; from the second execution onward, the coordinate conversion filter is updated in step S30. In other words, a coordinate conversion filter from which the correction amount corresponding to a viewpoint position can be determined is designed.
By repeating the processing of FIG. 7 while varying the viewpoint position, the coordinate conversion filter, in which the driver's viewpoint positions and the correction amounts at those viewpoint positions are associated with each other, is updated.
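As an editorial sketch of such a filter (the storage layout, the nearest-neighbor lookup, and the class name are assumptions; interpolating between recorded viewpoints would be a natural refinement):

```python
import numpy as np

class CoordinateConversionFilter:
    """Associates measured viewpoint positions with correction amounts
    (FIG. 7, S29-S30) and looks a correction up at run time (S40)."""

    def __init__(self):
        self.entries = []   # list of (viewpoint xyz, correction parameters)

    def update(self, viewpoint, correction):
        # S30: create on the first call, extend on later calls
        self.entries.append((np.asarray(viewpoint, dtype=float), correction))

    def lookup(self, viewpoint):
        # S40: return the correction recorded for the nearest viewpoint
        viewpoint = np.asarray(viewpoint, dtype=float)
        dists = [np.linalg.norm(viewpoint - v) for v, _ in self.entries]
        return self.entries[int(np.argmin(dists))][1]
```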
This makes it possible to address the factors described above that have not yet been fully explained, and to display an image that the driver finds easier to view. The coordinate conversion filter may be designed in advance, or it may be generated after the driver has started using the system.
The coordinate conversion filter is used during actual driving as follows.

The ECU 2 executes the vehicle program shown in FIG. 8. In FIG. 8, processes identical to those in FIG. 4 of the embodiment carry the same step numbers, and their detailed description is omitted.
The ECU 2 executes the processing from step S10 as in the embodiment and, once the correction of the converted image (S15) and the generation of the frame image (S16) are complete, refers to the coordinate conversion filter described above and applies a final correction to the image (S40). In step S40, the correction amount associated with the driver's viewpoint position is applied. The processing of step S40 is performed by the image conversion unit 10d.
Subsequently, after transmitting and displaying the corrected image and the frame image (S17), the ECU 2 determines whether the driver's line of sight is directed at the displayed image (S41). If it is (S41: YES), the ECU 2 detects sway of the line of sight (S42). The sway of the line of sight is detected as the amount of deviation between the center of the driver's line of sight and the display position of the image. The line of sight can be identified, for example, by detecting its direction together with the viewpoint position when the viewpoint position identification unit identifies the viewpoint position.
When the detected sway is equal to or greater than a specified value (S43: YES), the ECU 2 judges that the display position of the image is out of line with the driver's perception and reflects the detected sway in the coordinate conversion filter (S44). In this way, the correction amount for this driver, that is, the coordinate conversion filter, is learned. When the line of sight is not directed at the image (S41: NO), or when the sway is less than the specified value (S43: NO), the processing ends as is.
Thus, when sway of the line of sight is detected while the image is displayed, it can be judged that the display position does not match the driver's perception. By reflecting that sway in the coordinate conversion filter, not only the general factors described above that have not yet been fully explained, but also factors specific to each driver, can be addressed.
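Continuing the editorial sketch above: the threshold value and the idea of storing the gaze/display offset itself as the correction are assumptions layered on the CoordinateConversionFilter sketch; the disclosure only specifies comparing the sway against a specified value and reflecting it in the filter.

```python
import numpy as np

GAZE_SWAY_THRESHOLD = 12.0   # pixels; hypothetical "specified value" of S43

def learn_from_gaze(filt, viewpoint, gaze_center, image_center):
    """S41-S44: if the gaze/display deviation is large enough, record it
    as a per-driver correction for the current viewpoint."""
    gaze_center = np.asarray(gaze_center, dtype=float)
    image_center = np.asarray(image_center, dtype=float)
    sway = np.linalg.norm(gaze_center - image_center)        # S42
    if sway >= GAZE_SWAY_THRESHOLD:                          # S43: YES
        filt.update(viewpoint, image_center - gaze_center)   # S44: learn
```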
Although it is naturally to be expected that drivers may differ, it is equally to be expected that the driver will basically remain the same. Moreover, the vehicle used when the coordinate conversion filter was designed in advance may differ from the vehicle in which the ECU 2 is actually installed. The coordinate conversion filter may therefore be created anew each time the vehicle is used, so as to accommodate different drivers, or the state from past driving may be saved, so as to accommodate the same driver. It may, of course, also be created anew in response to an operation by the driver.
In the embodiment, an example was shown in which the image is generated with reference to a three-dimensional space identified from the image captured by the camera 3. Alternatively, high-precision data from which the scenery and the like at the current position can be identified may be stored as the map data 13a, and a three-dimensional space referenced to the absolute space identified from the map data 13a may be used as the reference. That is, the reference coordinate system may be set based on the map data 13a. In this case, as shown in FIG. 9, this can be realized by acquiring the three-dimensional information based on the map data 13a in step S50 and then performing the processing from step S12 onward as in the embodiment. With this configuration too, as in the embodiment, the image projected from the projection device 7 can be superimposed accurately on the actual scenery the driver is viewing.
Although the present disclosure has been described in accordance with the embodiment, it is to be understood that the disclosure is not limited to that embodiment or its structure. The present disclosure also encompasses various modifications and variations within an equivalent range. In addition, various combinations and forms, as well as other combinations and forms including more, fewer, or only a single element thereof, fall within the scope and spirit of the present disclosure.

Claims (12)

  1.  A vehicle device (2) for notifying a driver of information using a projection device (7) that displays an image on a projection surface (C), the vehicle device comprising:
     an image generation unit (10a) that generates an image to be presented to the driver in a predetermined reference coordinate system;
     a viewpoint position identification unit (10b) that identifies a viewpoint position indicating the position of the driver's eyes in the vehicle cabin, based on the position of the driver's eyes detected by a viewpoint detection unit (6) that detects the position of the driver's eyes;
     a superimposition position identification unit (10c) that identifies, based on the driver's viewpoint position, a display position at which the image generated by the image generation unit (10a) is to be displayed in the driver's field of view;
     an image conversion unit (10d) that generates, based on the driver's viewpoint position, a converted image that is the image generated by the image generation unit (10a) converted into a coordinate system referenced to the driver's viewpoint position; and
     a notification unit (10e) that notifies the driver of information by using the projection device (7) to display the converted image so as to overlap the driver's field of view.
  2.  The vehicle device according to claim 1, wherein the image conversion unit (10d) corrects the converted image according to the shape of the projection surface (C), and
     the notification unit (10e) displays the corrected converted image so as to overlap the driver's field of view.
  3.  The vehicle device according to claim 1 or 2, wherein the image conversion unit (10d) corrects the converted image according to the driver's viewpoint position, and
     the notification unit (10e) displays the corrected converted image so as to overlap the driver's field of view.
  4.  The vehicle device according to any one of claims 1 to 3, wherein the image conversion unit (10d) reconverts the image generated by the image generation unit (10a) in accordance with movement of the driver's viewpoint position.
  5.  The vehicle device according to any one of claims 1 to 4, wherein the image generation unit (10a) generates a boundary image that is an image to be displayed on the projection surface (C) and that indicates the display range of the projection surface (C), and
     the notification unit (10e) displays the boundary image together with the converted image so as to overlap the driver's field of view.
  6.  The vehicle device according to any one of claims 1 to 5, wherein the image conversion unit (10d) converts the image generated by the image generation unit (10a) with the driver's dominant eye as the origin.
  7.  The vehicle device according to any one of claims 1 to 6, further comprising a prediction unit (10f) that predicts the driver's viewpoint position based on a movement history indicating changes in the driver's viewpoint position in time series,
     wherein the image conversion unit (10d) converts the image generated by the image generation unit (10a) so as to correspond to the viewpoint position corresponding to the predicted face position.
  8.  The vehicle device according to any one of claims 1 to 7, wherein the image generation unit (10a) generates the image to be presented to the driver using, as the reference coordinate system, a coordinate system referenced to a three-dimensional space identified based on the mounting position of an imaging unit.
  9.  The vehicle device according to any one of claims 1 to 7, wherein the image generation unit (10a) generates the image to be presented to the driver using, as the reference coordinate system, a coordinate system referenced to a three-dimensional space identified based on map data.
  10.  The vehicle device according to any one of claims 1 to 9, wherein the image conversion unit (10d) converts the image generated by the image generation unit (10a), or corrects the converted image, using a coordinate conversion filter designed based on how an actual person perceives the image when viewing it from the driver's seat.
  11.  A vehicle program that causes a control unit (10) of a vehicle device (1), which notifies a driver of information using a projection device (7) that displays an image on a projection surface (C), to execute:
     a process of generating an image to be presented to the driver in a predetermined reference coordinate system;
     a process of identifying a viewpoint position indicating the position of the driver's eyes in the vehicle cabin, based on the position of the driver's eyes detected by a viewpoint detection unit (6) that detects the position of the driver's eyes;
     a process of identifying, based on the driver's viewpoint position, a display position at which the image generated by an image generation unit (10a) is to be displayed in the driver's field of view;
     a process of generating, based on the driver's viewpoint position, a converted image that is the image generated by the image generation unit (10a) converted into a coordinate system referenced to the driver's viewpoint position; and
     a process of notifying the driver of information by using the projection device (7) to display the converted image so as to overlap the driver's field of view.
  12.  A filter design program that causes a control unit (10) of a vehicle device (1), which notifies a driver of information using a projection device (7) that displays an image on a projection surface (C), to execute:
     a process of generating a reference image to be presented, in a predetermined reference coordinate system;
     a process of identifying a viewpoint position indicating the position of the eyes of a person seated in the driver's seat, based on the eye position detected by a viewpoint detection unit (6) that detects the position of the eyes of the person seated in the driver's seat;
     a process of identifying, based on the identified viewpoint position, a display position at which the image generated by an image generation unit (10a) is to be displayed in the field of view;
     a process of generating, based on the identified viewpoint position, a converted image that is the image generated by the image generation unit (10a) converted into a coordinate system referenced to the viewpoint position of the person seated in the driver's seat;
     a process of displaying the converted image so as to overlap the driver's field of view using the projection device (7);
     a process of correcting distortion of the displayed image in accordance with operations by the person seated in the driver's seat;
     a process of acquiring a correction amount applied when the distortion of the image is corrected; and
     a process of designing a coordinate conversion filter from which the correction amount at a given viewpoint position can be determined, by associating the acquired correction amount and the viewpoint position with each other.
PCT/JP2017/001217 2016-04-01 2017-01-16 Vehicle device, vehicle program, and filter design program WO2017168953A1 (en)

Priority Applications (3)

Application Number Priority Date Filing Date Title
CN201780022100.6A CN108883702B (en) 2016-04-01 2017-01-16 Vehicle device and continuous tangible computer-readable medium
US16/089,273 US10703272B2 (en) 2016-04-01 2017-01-16 Vehicle device, and non-transitory tangible computer readable medium
DE112017001724.6T DE112017001724T5 (en) 2016-04-01 2017-01-16 VEHICLE DEVICE, VEHICLE PROGRAM AND FILTER DESIGN PROGRAM

Applications Claiming Priority (4)

Application Number Priority Date Filing Date Title
JP2016074383 2016-04-01
JP2016-074383 2016-04-01
JP2016-218084 2016-11-08
JP2016218084A JP6493361B2 (en) 2016-04-01 2016-11-08 Vehicle device, vehicle program, filter design program

Publications (1)

Publication Number Publication Date
WO2017168953A1 true WO2017168953A1 (en) 2017-10-05

Family

ID=59964006

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/JP2017/001217 WO2017168953A1 (en) 2016-04-01 2017-01-16 Vehicle device, vehicle program, and filter design program

Country Status (1)

Country Link
WO (1) WO2017168953A1 (en)


Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2003039983A (en) * 2001-07-30 2003-02-13 Nippon Seiki Co Ltd Display device for vehicle
JP2005148973A (en) * 2003-11-13 2005-06-09 Nissan Motor Co Ltd Information presenting device
JP2011105306A (en) * 2010-12-28 2011-06-02 Yazaki Corp Display device for vehicle
JP2015087619A (en) * 2013-10-31 2015-05-07 日本精機株式会社 Vehicle information projection system and projection device
JP2015226304A (en) * 2014-05-30 2015-12-14 日本精機株式会社 Projection device for vehicle and head-up display system
JP2016025394A (en) * 2014-07-16 2016-02-08 株式会社デンソー Display device for vehicle

Cited By (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2020144798A1 (en) * 2019-01-10 2020-07-16 三菱電機株式会社 Information display control device and method, and program and recording medium
JPWO2020144798A1 (en) * 2019-01-10 2021-03-18 三菱電機株式会社 Information display control devices and methods, as well as programs and recording media
CN113302661A (en) * 2019-01-10 2021-08-24 三菱电机株式会社 Information display control device and method, program, and recording medium
US11580689B2 (en) 2019-01-10 2023-02-14 Mitsubishi Electric Corporation Information display control device, method, and non-transitory computer-readable recording medium
CN113993747A (en) * 2019-09-13 2022-01-28 马瑞利株式会社 Display device and display method
CN113993747B (en) * 2019-09-13 2023-06-06 马瑞利株式会社 Display device and display method
CN112606834A (en) * 2020-12-21 2021-04-06 福建工程学院 Driving auxiliary system for displaying blind area in front of vehicle

Similar Documents

Publication Publication Date Title
JP6493361B2 (en) Vehicle device, vehicle program, filter design program
US11528413B2 (en) Image processing apparatus and image processing method to generate and display an image based on a vehicle movement
JP5874920B2 (en) Monitoring device for vehicle surroundings
JP2017185988A5 (en)
JP6445607B2 (en) Vehicle display system and method for controlling vehicle display system
JP2009071790A (en) Vehicle surroundings monitoring apparatus
JP5051263B2 (en) Vehicle rear view system
JP4367212B2 (en) Virtual image display device and program
JP2013168063A (en) Image processing device, image display system, and image processing method
WO2017168953A1 (en) Vehicle device, vehicle program, and filter design program
WO2018100377A1 (en) Multi-dimensional display
US11794667B2 (en) Image processing apparatus, image processing method, and image processing system
US11813988B2 (en) Image processing apparatus, image processing method, and image processing system
US20190166357A1 (en) Display device, electronic mirror and method for controlling display device
US20220001803A1 (en) Image processing apparatus, image processing method, and image processing system
US20190166358A1 (en) Display device, electronic mirror and method for controlling display device
US20220030178A1 (en) Image processing apparatus, image processing method, and image processing system
JP2013132976A (en) Obstacle alarm device
US20200231099A1 (en) Image processing apparatus
JP2016215726A (en) Vehicle periphery display apparatus and vehicle periphery display method
US20240214510A1 (en) Image processing device and image processing method
JP2016141303A (en) Visual field support device
JP3199380U (en) Automobile pillar visualization device
JP5825091B2 (en) Imaging area display device
JP5765575B2 (en) Imaging area display device

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 17773503

Country of ref document: EP

Kind code of ref document: A1

122 Ep: pct application non-entry in european phase

Ref document number: 17773503

Country of ref document: EP

Kind code of ref document: A1