WO2017158829A1 - Display control device and display control method - Google Patents


Info

Publication number
WO2017158829A1
Authority
WO
WIPO (PCT)
Prior art keywords
image
distance
display
vehicle
generation unit
Prior art date
Application number
PCT/JP2016/058749
Other languages
French (fr)
Japanese (ja)
Inventor
聖崇 加藤
篤史 前田
Original Assignee
三菱電機株式会社 (Mitsubishi Electric Corporation)
Priority date
Filing date
Publication date
Application filed by 三菱電機株式会社 (Mitsubishi Electric Corporation)
Priority to PCT/JP2016/058749
Publication of WO2017158829A1

Classifications

    • B: PERFORMING OPERATIONS; TRANSPORTING
    • B60: VEHICLES IN GENERAL
    • B60R: VEHICLES, VEHICLE FITTINGS, OR VEHICLE PARTS, NOT OTHERWISE PROVIDED FOR
    • B60R1/00: Optical viewing arrangements; Real-time viewing arrangements for drivers or passengers using optical image capturing systems, e.g. cameras or video systems specially adapted for use in or on vehicles
    • B60R1/20: Real-time viewing arrangements for drivers or passengers using optical image capturing systems, e.g. cameras or video systems specially adapted for use in or on vehicles
    • B60R1/22: Real-time viewing arrangements for viewing an area outside the vehicle, e.g. the exterior of the vehicle
    • B60R1/23: Real-time viewing arrangements for viewing an area outside the vehicle with a predetermined field of view
    • B60R1/25: Real-time viewing arrangements for viewing an area outside the vehicle with a predetermined field of view to the sides of the vehicle
    • B: PERFORMING OPERATIONS; TRANSPORTING
    • B60: VEHICLES IN GENERAL
    • B60R: VEHICLES, VEHICLE FITTINGS, OR VEHICLE PARTS, NOT OTHERWISE PROVIDED FOR
    • B60R21/00: Arrangements or fittings on vehicles for protecting or preventing injuries to occupants or pedestrians in case of accidents or other traffic risks
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T1/00: General purpose image data processing
    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04N: PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N7/00: Television systems
    • H04N7/18: Closed-circuit television [CCTV] systems, i.e. systems in which the video signal is not broadcast

Definitions

  • The present invention relates to a technique for displaying an image of an area where the driver's view is blocked.
  • The vehicle blind-spot image display device described in Patent Document 1 defines a virtual projection sphere whose radius is the distance from the driver's viewpoint position to the subject. For each pixel of the display, it obtains the intersection of the virtual projection sphere with the straight line that passes through that pixel from the driver's viewpoint position, identifies the pixel of the camera's captured image corresponding to the obtained intersection, and generates the image to be shown on the display.
  • The present invention was made to solve the problems described above. Its purpose is to display an image capturing the area where the driver's field of view is blocked continuously with the scenery outside the vehicle seen from the driver's viewpoint position, even when there are a plurality of subjects in that area.
  • The display control device according to the present invention includes: a distance image generation unit that, from a captured image of a blind spot area in the peripheral area of the vehicle where the driver's field of view is blocked by a structure of the vehicle, generates a distance image whose pixel values are the distances from the driver's viewpoint position to objects located in the blind spot area; a coordinate conversion unit that, using the distance image generated by the distance image generation unit, sets in the area around the vehicle a plurality of virtual projection spheres, each defined as the outer surface of a sphere centered on the driver's viewpoint position, and converts coordinates on the display screen that displays the captured image into coordinates on the imaging surface of the captured image using the plurality of virtual projection spheres; and a display image generation unit that generates the image to be displayed on the display screen using the coordinates on the imaging surface converted by the coordinate conversion unit.
  • According to the present invention, an image capturing the area where the driver's view is blocked can be displayed continuously with the scenery outside the vehicle seen from the driver's viewpoint position.
  • FIG. 1 is a block diagram illustrating the configuration of the display control device according to Embodiment 1.
  • FIG. 2 is a diagram illustrating a hardware configuration of the display control device according to Embodiment 1.
  • FIG. 3 is a view, from above, of a vehicle equipped with the display control device according to Embodiment 1.
  • FIG. 4 is a view, from the rear seat, of part of the vehicle equipped with the display control device according to Embodiment 1.
  • FIG. 5 is a diagram showing the scenery that the driver sees.
  • FIG. 6 is a flowchart showing the operation of the display control device according to Embodiment 1.
  • FIG. 7 is a diagram illustrating an example of a captured image received by the image input unit of the display control device according to Embodiment 1.
  • FIG. 8 is a diagram illustrating an example of a distance image generated by the distance image generation unit of the display control device according to Embodiment 1.
  • FIGS. 9 to 12 are diagrams illustrating display examples of display images generated by the display image generation unit of the display control device according to Embodiment 1.
  • FIG. 13 is a flowchart illustrating the operation of the coordinate conversion unit of the display control device according to Embodiment 1.
  • FIG. 14 is a diagram schematically illustrating the coordinate conversion processing of the coordinate conversion unit of the display control device according to Embodiment 1.
  • FIG. 15 is a flowchart illustrating the operation of the display image generation unit of the display control device according to Embodiment 1.
  • FIG. 16 is a diagram showing a display image generated by the display image generation unit of the display control device according to Embodiment 1.
  • FIG. 17 is a diagram showing a display result of the display control device according to Embodiment 1.
  • FIG. 18 is a block diagram illustrating the configuration of a display control device according to Embodiment 2.
  • FIG. 19 is a flowchart illustrating the operation of the display image generation unit of the display control device according to Embodiment 2.
  • FIGS. 20A and 20B are diagrams illustrating an example of setting conditions of the display control device according to Embodiment 2.
  • FIG. 1 is a block diagram illustrating a configuration of a display control apparatus 10 according to the first embodiment.
  • The display control device 10 includes an image input unit 1, a distance information acquisition unit 2, a viewpoint information acquisition unit 3, an image processing unit 4, and a display control unit 8. The image processing unit 4 includes a distance image generation unit 5, a coordinate conversion unit 6, and a display image generation unit 7.
  • The display control device 10 is mounted on, for example, a vehicle 20 described later.
  • The image input unit 1 receives a captured image of the area around the vehicle 20 taken by imaging means such as a camera.
  • The captured image is an image of at least the part of the area around the vehicle 20 where the driver's field of view is blocked by a structure of the vehicle 20 such as a pillar (hereinafter referred to as the blind spot area).
  • The distance information acquisition unit 2 acquires distance information, that is, the result of measuring the distances from a sensor or the like mounted on the vehicle 20 to objects in at least the blind spot area by scanning that area.
  • The viewpoint information acquisition unit 3 acquires viewpoint information indicating the driver's viewpoint position.
  • The driver's viewpoint position is, for example, the position of the driver's eyes or head.
  • The captured image received by the image input unit 1 and the information acquired by the distance information acquisition unit 2 and the viewpoint information acquisition unit 3 are output to the image processing unit 4.
  • In the image processing unit 4, position information indicating the arrangement positions of the imaging means, the distance measuring means, and the display described later is set in advance.
  • The distance image generation unit 5 of the image processing unit 4 calculates the distance from the driver's viewpoint position to objects located in at least the blind spot area, using the distance information acquired by the distance information acquisition unit 2 and the viewpoint information acquired by the viewpoint information acquisition unit 3.
  • Referring further to the position information of the imaging means and of the distance measuring means, the distance image generation unit 5 generates a distance image in which, for each pixel of the captured image received by the image input unit 1, the distance from the driver's viewpoint position (hereinafter referred to as the distance value) is associated as the pixel value.
  • The distance image generation unit 5 also combines, among the areas where distance was measured, areas at the same distance from the driver's viewpoint position into a single "same-target area", and sets subject areas and background areas according to the distance of each same-target area. For example, when the distance value of a generated same-target area is less than a threshold, the distance image generation unit 5 sets that area as a subject area; when it is greater than or equal to the threshold, it sets the area as a background area.
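As a concrete illustration, the threshold rule above can be sketched in a few lines of Python (a minimal sketch; the function name is illustrative, and the 30 m default follows the example threshold given later in the text):

```python
# Minimal sketch of the subject/background split: a same-target area
# nearer than the threshold becomes a subject area, otherwise background.
# The 30 m default follows the example threshold mentioned in the text.
DEFAULT_THRESHOLD_M = 30.0

def classify_area(distance_value_m, threshold_m=DEFAULT_THRESHOLD_M):
    """Return 'subject' if the area's distance value is below the
    threshold, 'background' if it is greater than or equal to it."""
    return "subject" if distance_value_m < threshold_m else "background"
```

With the 30 m threshold, an area 5 m away would be classified as a subject area, while one 50 m away would be classified as background.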
  • Referring to the same-target areas generated by the distance image generation unit 5 and to the distance values of the distance image, the coordinate conversion unit 6 sets, for each set subject area, a virtual projection sphere defined as the outer surface of a sphere centered on the driver's viewpoint position whose radius is the distance from the viewpoint position to that subject area.
  • Likewise, the coordinate conversion unit 6 sets a virtual projection sphere defined as the outer surface of a sphere centered on the driver's viewpoint position whose radius is the distance from the viewpoint position to the set background area.
  • The coordinate conversion unit 6 then converts coordinates on the in-vehicle display (display screen), as viewed from the driver's viewpoint position, into coordinates on the imaging surface of the captured image using the set virtual projection spheres. The image data to be displayed on the in-vehicle display is thereby specified by coordinates on the imaging surface, that is, by pixels.
  • The in-vehicle display is a display arranged on a structure of the vehicle 20 that creates a blind spot area, such as a pillar, and shows an image of that blind spot area. Details of the processing of the coordinate conversion unit 6 are described later.
  • The display image generation unit 7 generates image data for each virtual projection sphere set by the coordinate conversion unit 6, using the converted coordinates, that is, the coordinates on the imaging surface of the captured image.
  • The display image generation unit 7 refers to the distance values of the distance image generated by the distance image generation unit 5 and selects, for each subject area and for the background area, the image data of the virtual projection sphere to be used.
  • The display image generation unit 7 generates display image data from the selected image data based on a preset display position and display size. The display image generation unit 7 may also provide functions such as processing, composition, and graphics superimposition of the generated display image data; for example, it may convert the display image data so that a menu screen or an alert screen is superimposed on the display image.
  • The display control unit 8 generates display control information for showing an image based on the display image data generated by the display image generation unit 7 on a display or the like, and outputs the display control information to the display or the like.
  • FIG. 2 is a diagram illustrating a hardware configuration example of the display control apparatus 10.
  • The image input unit 1, distance information acquisition unit 2, viewpoint information acquisition unit 3, distance image generation unit 5, coordinate conversion unit 6, display image generation unit 7, and display control unit 8 of the display control device 10 are realized by a processing circuit.
  • When the processing circuit is dedicated hardware, it is, for example, a single circuit, a composite circuit, a programmed processor, a parallel-programmed processor, an ASIC (Application Specific Integrated Circuit), an FPGA (Field-Programmable Gate Array), or a combination of these.
  • The function of each of the above units may be realized by an individual processing circuit, or the functions of the units may be combined and realized by one processing circuit.
  • When the processing circuit is a CPU (Central Processing Unit), the processing circuit is the CPU 12 that executes a program stored in the memory 13 shown in FIG. 2.
  • In this case, the functions of the image input unit 1, distance information acquisition unit 2, viewpoint information acquisition unit 3, distance image generation unit 5, coordinate conversion unit 6, display image generation unit 7, and display control unit 8 are realized by software, firmware, or a combination of software and firmware.
  • The software and firmware are described as programs and stored in the memory 13.
  • The CPU 12 realizes the function of each unit by reading and executing the programs stored in the memory 13.
  • These programs can also be said to cause a computer to execute the procedures or methods of the respective units.
  • The CPU 12 is, for example, a central processing unit, processing unit, arithmetic unit, processor, microprocessor, microcomputer, or DSP (Digital Signal Processor).
  • The memory 13 may be, for example, a nonvolatile or volatile semiconductor memory such as a RAM (Random Access Memory), ROM (Read Only Memory), flash memory, EPROM (Erasable Programmable ROM), or EEPROM (Electrically Erasable Programmable ROM). It may also be a magnetic disk such as a hard disk or flexible disk, or an optical disc such as a MiniDisc, CD (Compact Disc), or DVD (Digital Versatile Disc).
  • FIG. 3 is a view of the vehicle 20 including the display control device 10 according to the first embodiment as viewed from above.
  • FIG. 4 is a view of a part of the vehicle 20 provided with the display control device 10 according to the first embodiment as seen from the rear seat.
  • The field of view of driver A of the vehicle 20 is blocked by, for example, the front pillar (the so-called A-pillar) 21 on the left side of the vehicle 20.
  • The viewpoint information acquired by the viewpoint information acquisition unit 3 is calculated using a captured image of the vehicle interior taken by, for example, the in-vehicle camera 33 mounted on the vehicle 20.
  • For example, the in-vehicle camera 33 arranged at the front of the vehicle 20 captures the interior of the vehicle, and a calculation unit (not illustrated) calculates the viewpoint position 105 by analyzing the face image of driver A included in the captured image.
  • The viewpoint information can be acquired using a known detection technique such as triangulation using images captured by a plurality of cameras, or TOF (Time of Flight) using a monocular camera.
  • Alternatively, the viewpoint information may be calculated based on pre-registered physical information of driver A, or calculated and acquired based on the seat position of the driver's seat and the angles of the rearview mirror and side mirrors.
  • The viewpoint position 105 indicated by the viewpoint information is used as the reference position for calculating the distance from driver A to each object.
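The triangulation option mentioned above can be illustrated with the standard two-camera pinhole relation. This is a sketch under the usual rectified-stereo assumptions; the parameter names are illustrative and the patent does not prescribe this formula:

```python
# Rectified stereo: a point seen with disparity d (pixels) by two
# cameras with focal length f (pixels) and baseline B (meters) lies
# at depth z = f * B / d in front of the cameras.
def depth_from_disparity(focal_px, baseline_m, disparity_px):
    if disparity_px <= 0:
        raise ValueError("disparity must be positive")
    return focal_px * baseline_m / disparity_px
```

For example, two cameras 10 cm apart with a 1000-pixel focal length observing the driver's eye with a 50-pixel disparity would place the viewpoint about 2 m away.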
  • The captured image input to the image input unit 1 is captured by the vehicle exterior camera 31a mounted on the vehicle 20.
  • The vehicle exterior camera 31a is imaging means that images at least the blind spot area, in the peripheral area of the vehicle 20, where the driver's field of view is blocked by a structure of the vehicle 20 such as a pillar.
  • The vehicle exterior camera 31a is installed facing the left front of the vehicle 20, for example near the side mirror 22.
  • The vehicle exterior camera 31a images objects and the like existing in the imaging range 102, an area around the vehicle 20 that includes at least the blind spot area 101.
  • Besides the vicinity of the side mirror 22, the vehicle exterior camera 31a may be installed at the root of the front pillar 21, on the roof, at a vehicle interior window, and so on (none illustrated); the installation position is not limited.
  • A plurality of vehicle exterior cameras 31a may also be arranged.
  • Alternatively, the vehicle exterior camera 31a may be a single camera with a wide-angle lens, used to image both the left and right blind spot areas of the vehicle 20.
  • The vehicle exterior camera 31a is a camera that captures color images and includes an optical mechanism of lenses and mirrors and an image sensor such as a CCD (Charge-Coupled Device) or CMOS (Complementary Metal-Oxide-Semiconductor) sensor.
  • The vehicle exterior camera 31a may also include an infrared sensor or a light-emitting element to enable imaging at night.
  • The raw image data generated by the image sensor of the vehicle exterior camera 31a is subjected, as necessary, to preprocessing such as color conversion, format conversion, and filtering, and is then input to the image input unit 1 as the captured image.
  • The distance information acquired by the distance information acquisition unit 2 is generated by the distance measuring sensor 32a mounted on the vehicle 20.
  • The distance measuring sensor 32a is arranged facing the left front of the vehicle 20, for example near the side mirror 22.
  • The distance measuring sensor 32a measures the distance 103 to object B and the distance 104 to object C.
  • The distance measuring sensor 32a is not limited to the distances to objects B and C; it has a resolution range that can obtain the distance to every object existing in the blind spot area 101 and to each part constituting those objects, and it generates the distance information accordingly.
  • As the distance measuring sensor 32a, a well-known vehicle distance measuring technique such as millimeter-wave radar, laser radar, or an ultrasonic sensor can be applied. The vehicle exterior camera 31a may also be used as the distance measuring means; in that case, the distance information is generated from distances measured using a technique such as triangulation with a plurality of cameras or TOF with a monocular camera. The distance information acquired from the distance measuring sensor 32a is updated at a constant cycle, for example 10 to 240 Hz.
  • The display control information generated by the display control unit 8 is output to the display 34a.
  • The display 34a is arranged on the surface of the front pillar 21 of the vehicle 20 visible to driver A, so as to overlap the blind spot area 101 that is hidden from the viewpoint position 105 of driver A.
  • The display 34a is configured using a display device such as a liquid crystal display (LCD), an organic electroluminescence (OLED) display, or a projector. The display 34a may also consist of a plurality of small displays arranged side by side.
  • The display 34a may include an illuminance sensor or the like for adjusting brightness, and may display an image adjusted according to the amount of sunlight in the vehicle interior.
  • The display image generation unit 7 holds information on the arrangement position and display size of the display 34a.
  • Similarly, a vehicle exterior camera 31b and a distance measuring sensor 32b can be arranged facing the right front of the vehicle 20 to acquire a captured image and distance information of the blind spot area on that side.
  • In that case, the display control device 10 generates a display image showing the blind spot area at the right front of the vehicle 20, and the image is displayed on the display 34b arranged on the front pillar 23.
  • For pillars other than the front pillars 21 and 23 (the so-called B-pillars, C-pillars, and so on), the display control device 10 can likewise generate a display image from an image capturing the corresponding blind spot area and display it on a display provided on the pillar.
  • FIG. 5 is a diagram showing what driver A of the vehicle 20 sees in the state shown in FIGS. 3 and 4 when looking from the viewpoint position 105 toward the area where the two objects B and C, which are persons, are present.
  • As shown in FIG. 5, the front pillar 21 lies between the front window 24 of the vehicle 20 and the left side window 25 of the vehicle 20.
  • The field of view of driver A is blocked by the front pillar 21, and part of object B and part of object C cannot be seen.
  • The region where the field of view is blocked by the front pillar 21 is the blind spot region 101.
  • The display control device 10 generates a display image of the blind spot region 101 that driver A cannot see, and performs display control to show the display image on the display 34a.
  • FIG. 6 is a flowchart showing the operation of the display control apparatus 10 according to the first embodiment.
  • First, the image input unit 1, the distance information acquisition unit 2, and the viewpoint information acquisition unit 3 acquire various types of information (step ST1). Specifically, the image input unit 1 receives the captured image taken by the vehicle exterior camera 31a, the distance information acquisition unit 2 acquires the distance information from the distance measuring sensor 32a to the objects existing in the blind spot area 101, and the viewpoint information acquisition unit 3 acquires the viewpoint information indicating the viewpoint position 105 of driver A.
  • Next, the distance image generation unit 5 calculates the distance from the driver's viewpoint position to each object, from the distance information acquired by the distance information acquisition unit 2 in step ST1, the position information of the distance measuring means preset in the image processing unit 4, and the viewpoint information acquired by the viewpoint information acquisition unit 3 in step ST1 (step ST2).
  • For example, the distance image generation unit 5 calculates the distance value 201d from the viewpoint position 105 of driver A to object B, from the distance 103 from the distance measuring sensor 32a to object B, the preset position information of the distance measuring sensor 32a, and the viewpoint position 105.
  • Similarly, the distance image generation unit 5 calculates the distance value 202d from the viewpoint position 105 to object C, from the distance 104 from the distance measuring sensor 32a to object C, the preset position information of the distance measuring sensor 32a, and the viewpoint position 105.
  • Although one distance is shown per object as a representative, distance information is also generated for at least the distances to the individual parts constituting each object existing in the blind spot area.
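The conversion from a sensor-relative measurement to a viewpoint-relative distance value is, in essence, a change of reference point. A minimal sketch follows (coordinates and names are illustrative assumptions; the patent does not specify the arithmetic):

```python
import math

# The sensor reports a vector from its own position to the object; the
# distance value is then taken from the driver's viewpoint position.
def distance_value(sensor_pos, measured_offset, viewpoint_pos):
    """All arguments are (x, y, z) tuples in a common vehicle frame;
    measured_offset is the sensor-to-object vector."""
    obj = tuple(s + o for s, o in zip(sensor_pos, measured_offset))
    return math.dist(obj, viewpoint_pos)
```

When the sensor and the viewpoint coincide, the result reduces to the length of the measured vector; otherwise the offset between the two positions is accounted for automatically.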
  • Next, referring to the distance from the driver's viewpoint position to each object calculated in step ST2, the distance image generation unit 5 divides the area measured by the distance measuring sensor 32a into subject areas and a background area (step ST3).
  • Specifically, the distance image generation unit 5 groups adjacent areas at the same distance from the viewpoint position 105 of driver A into one same-target area.
  • Here the distance values need not be exactly equal to count as "the same distance": two distance values are treated as the same when they are approximately equal (for example, 5 m ± 30 cm) and can be determined to be measurements of the same object.
  • When the distance value of a same-target area is less than a threshold, the distance image generation unit 5 determines that the area is an object located near the vehicle 20 and sets it as a subject area.
  • The threshold is set to, for example, 30 m.
  • When the distance value of a same-target area is greater than or equal to the threshold, the distance image generation unit 5 determines that the area is an object located far from the vehicle 20 and sets it as a background area.
  • The number of subject areas is not limited to one; every area satisfying the condition is set as a subject area.
  • Even when the distance value of a same-target area is less than the threshold, the distance image generation unit 5 may additionally determine the area to be background if, for example, its size is below a certain value and it is small enough to be ignored.
  • The distance image generation unit 5 associates a distance value with each set subject area and with the background area.
  • For example, the distance image generation unit 5 associates the average or median of the distance values within a subject area as that subject area's distance value.
  • As the distance value of the background area, the distance image generation unit 5 associates a distance value farther than the distance values associated with the subject areas; when there are two subject areas, the median of the distances associated with the two subject areas, or the same distance value as one of the subject areas, may be associated.
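The grouping and distance-value assignment described above can be sketched as follows. This is an illustrative one-dimensional simplification: real measurements are grouped over adjacent areas in two dimensions, and the tolerance value is an assumption modeled on the ±30 cm example:

```python
from statistics import median

# Sort the measured distances and merge values that lie within a
# tolerance of their neighbor into one "same-target" group; each
# group's distance value is the median of its members.
def group_distance_values(distances_m, tol_m=0.3):
    groups = []
    for d in sorted(distances_m):
        if groups and d - groups[-1][-1] <= tol_m:
            groups[-1].append(d)
        else:
            groups.append([d])
    return [median(g) for g in groups]
```

Measurements of 4.9 m, 5.0 m, and 5.1 m would collapse into one group with a 5.0 m distance value, while a 12 m measurement would remain its own group.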
  • Next, taking into account the difference between the arrangement positions of the vehicle exterior camera 31a and the distance measuring sensor 32a based on their position information preset in the image processing unit 4, the distance image generation unit 5 generates a distance image by associating, as the pixel value of each pixel of the captured image received by the image input unit 1 in step ST1, the distance value of the subject area or background area set in step ST3 (step ST4).
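Step ST4 amounts to painting each pixel with the distance value of the area it belongs to. A toy sketch (the region labels Ba, Ca, and D follow the running example; the per-pixel label map and the numeric values are illustrative assumptions):

```python
# Build a distance image from a per-pixel region-label map and the
# distance value associated with each region in step ST3.
def build_distance_image(label_map, area_distance):
    return [[area_distance[label] for label in row] for row in label_map]
```

For a 2x2 label map with areas Ba, Ca, and D, each pixel simply receives its area's distance value.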
  • FIG. 7 is a diagram illustrating a captured image received by the image input unit 1; objects B and C, which are persons, appear in the captured image.
  • FIG. 8 is a diagram illustrating a distance image generated by the distance image generation unit 5, consisting of a subject area Ba, a subject area Ca, and a background area D.
  • The subject area Ba is the area associated with the distance value 201d, the subject area Ca is the area associated with the distance value 202d, and the background area D is the area associated with the distance value 203d.
  • the coordinate conversion unit 6 refers to the distance image generated in step ST4, and in the area around the vehicle 20, the driver's viewpoint position is the center, and the distance value set in each subject area is the radius.
  • a projection spherical surface is set (step ST5).
  • the coordinate conversion unit 6 uses the driver's viewpoint position as the center in the area around the vehicle 20 and sets the distance value set in the background area as the radius.
  • a virtual projection spherical surface is set (step ST6).
  • by performing step ST5 and step ST6, the coordinate conversion unit 6 sets the virtual projection spherical surface 201 having as its radius the distance value 201d set in the subject area Ba, the virtual projection spherical surface 202 having as its radius the distance value 202d set in the subject area Ca, and the virtual projection spherical surface 203 having as its radius the distance value 203d set in the background region D.
  • the setting of the virtual projection spherical surfaces is performed virtually in the calculation processing within the display control apparatus 10.
  • the virtual projection spherical surfaces 201, 202, and 203 in FIG. 3 are merely depictions of these virtual spherical surfaces.
  • the coordinate conversion unit 6 converts the coordinate value on the display viewed from the viewpoint position into the coordinate value on the imaging surface of the imaging means using the virtual projection spherical surface set in step ST5 and step ST6 (step ST7).
  • in step ST7, the coordinate conversion unit 6 uses the virtual projection spherical surfaces 201, 202, and 203 to convert the coordinate values on the surface of the display 34a viewed from the viewpoint position 105 into coordinate values on the imaging surface (not shown) of the vehicle exterior camera 31a. Details of the coordinate conversion processing of the coordinate conversion unit 6 will be described later.
  • the display image generation unit 7 generates a display image for each virtual projection spherical surface using the coordinate values on the imaging surface obtained by converting the coordinate values on the display surface of the display in step ST7 (step ST8).
  • the display image generation unit 7 uses each of the virtual projection spherical surfaces 201, 202, and 203 to generate a display image from the coordinate values on the imaging surface obtained by converting the coordinates on the surface of the display 34a.
  • the display image generation unit 7 generates a display image to be displayed on the display using each display image generated in step ST8 (step ST9).
  • the display image generation unit 7 generates the display image of the subject area Ba from the display image generated by coordinate conversion using the virtual projection spherical surface 201 and the subject area information indicating the object B.
  • the display image of the subject area Ca is generated from the display image generated using the virtual projection spherical surface 202 and the subject area information indicating the object C.
  • the display image of the background area D is generated from the display image generated using the virtual projection spherical surface 203 and the subject area information indicating the background.
  • the display image generation unit 7 generates an overall display image to be displayed on the display by integrating the display images.
  • the display control unit 8 performs display control for displaying the display image generated in step ST9 on the display (step ST10), and returns to the process of step ST1.
  • FIGS. 9 to 11 are diagrams showing the respective display images generated by the display image generation unit 7.
  • FIG. 9 shows a display image configured using coordinate values on the imaging screen converted using the virtual projection spherical surface 201, and the size of the subject region Bb matches the size of the object B as viewed from the viewpoint position 105.
  • FIG. 10 shows a display image configured using coordinate values on the imaging screen converted using the virtual projection spherical surface 202, and the size of the subject region Cb matches the size of the object C as viewed from the viewpoint position 105.
  • FIG. 11 shows a display image configured using coordinate values on the imaging screen converted using the virtual projection spherical surface 203.
  • the display image generated in step ST9 is an image in which the object B, the object C, and the background coincide with their sizes as viewed from the viewpoint position 105, as shown in FIG.
  • FIG. 12 is a flowchart showing the operation of the coordinate conversion unit 6 of the display control apparatus 10 according to the first embodiment.
  • FIG. 13 is a diagram schematically illustrating a coordinate conversion process of the coordinate conversion unit 6 of the display control apparatus 10 according to the first embodiment.
  • the elements shown in FIG. 13 are the same as those shown in FIG. and correspond to each other. Further, the subject areas Ba and Ca and the background area D shown in FIG. 13 correspond to the respective areas on the distance image shown in FIG. Moreover, FIG. 13 shows the display surface 301 of the display 34a and the imaging surface 302 of the vehicle exterior camera 31a.
  • the coordinate conversion unit 6 calculates the coordinate in of the intersection between the virtual projection spherical surface N and the half line extending from the viewpoint position 105 through the coordinate pn on the display surface 301 of the display 34a (step ST21).
  • the coordinate pn is a coordinate in a coordinate system set so that the position of each point on the display surface 301 can be specified.
  • the coordinates on the virtual projection spherical surface N may be any coordinates as long as the position on the virtual projection spherical surface N can be specified.
  • the coordinate conversion unit 6 calculates, as the coordinate after conversion, a coordinate cn indicating the intersection between the imaging surface 302 and the straight line connecting the vehicle exterior camera 31a and the coordinate in calculated in step ST21 (step ST22).
  • the coordinates on the imaging surface 302 are coordinates in a coordinate system set so that the position of each point on the imaging surface 302 can be specified.
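Steps ST21 and ST22 reduce to two standard geometric operations: intersecting a half line with a sphere, and intersecting a straight line with the plane of the imaging surface. The sketch below illustrates this with hypothetical positions for the viewpoint 105, the camera 31a, the display surface 301, and the imaging surface 302; none of the numbers come from the embodiment.

```python
import math

def ray_sphere_intersection(origin, direction, center, radius):
    """Coordinate i_n (step ST21): nearest forward intersection of the half
    line from `origin` along `direction` with the sphere (center, radius)."""
    ox, oy, oz = (origin[i] - center[i] for i in range(3))
    dx, dy, dz = direction
    a = dx * dx + dy * dy + dz * dz
    b = 2.0 * (ox * dx + oy * dy + oz * dz)
    c = ox * ox + oy * oy + oz * oz - radius * radius
    disc = b * b - 4.0 * a * c
    if disc < 0:
        return None
    for t in sorted([(-b - math.sqrt(disc)) / (2 * a), (-b + math.sqrt(disc)) / (2 * a)]):
        if t >= 0:  # half line: only intersections in the forward direction
            return tuple(origin[i] + t * direction[i] for i in range(3))
    return None

def line_plane_intersection(p0, p1, plane_point, plane_normal):
    """Coordinate c_n (step ST22): intersection of the straight line through
    p0 and p1 with the plane representing the imaging surface."""
    d = tuple(p1[i] - p0[i] for i in range(3))
    denom = sum(d[i] * plane_normal[i] for i in range(3))
    if abs(denom) < 1e-12:
        return None  # line parallel to the plane
    t = sum((plane_point[i] - p0[i]) * plane_normal[i] for i in range(3)) / denom
    return tuple(p0[i] + t * d[i] for i in range(3))

# Hypothetical layout: viewpoint 105 at the origin, display pixel p_n ahead of it.
viewpoint = (0.0, 0.0, 0.0)
pn = (0.0, 0.0, 1.0)  # a coordinate on the display surface 301
direction = tuple(pn[i] - viewpoint[i] for i in range(3))
i_n = ray_sphere_intersection(viewpoint, direction, viewpoint, 5.0)  # sphere radius = distance value
camera = (0.5, 0.0, 0.5)  # stand-in position of the vehicle exterior camera 31a
c_n = line_plane_intersection(camera, i_n, (0.0, 0.0, 1.5), (0.0, 0.0, 1.0))
print(i_n, c_n)
```

Repeating the two functions over every display pixel and every virtual projection spherical surface corresponds to the loops of steps ST23 to ST26.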
  • the coordinate conversion unit 6 determines whether all necessary coordinates on the display surface 301 have been converted (step ST23). If all necessary coordinates on the display surface 301 have not been converted (step ST23; NO), the coordinate conversion unit 6 sets the next coordinates on the display surface 301 (step ST24), and returns to the processing of step ST21.
  • the necessary coordinates are, for example, the coordinates respectively corresponding to all the pixels on the display 34a.
  • in step ST23, when the coordinates of all necessary points on the display surface 301 have been converted (step ST23; YES), the coordinate conversion unit 6 determines whether coordinate conversion has been performed for all the virtual projection spherical surfaces (step ST25). When coordinates have not been converted for all virtual projection spherical surfaces (step ST25; NO), the coordinate conversion unit 6 sets the next virtual projection spherical surface (step ST26) and returns to the processing of step ST21. On the other hand, when coordinates have been converted for all virtual projection spherical surfaces (step ST25; YES), the process proceeds to step ST8 of the flowchart of FIG.
  • the coordinate conversion unit 6 performs the processing from step ST21 to step ST26 in order for the virtual projection spherical surface 201, the virtual projection spherical surface 202, and the virtual projection spherical surface 203. Further, the coordinate conversion unit 6 repeats the processing from step ST21 to step ST24 for each of the virtual projection spherical surfaces 201, 202, and 203.
  • in step ST21, the coordinate conversion unit 6 calculates the coordinate i1 of the intersection between the virtual projection spherical surface 201 and the half line passing through the viewpoint position 105 and the coordinate p1 on the display surface 301.
  • in step ST22, the coordinate conversion unit 6 calculates the coordinate c1 of the intersection between the imaging surface 302 and the straight line connecting the coordinate i1 and the vehicle exterior camera 31a.
  • in step ST21, the coordinate conversion unit 6 likewise calculates the coordinate i2 of the intersection between the virtual projection spherical surface 201 and the half line passing through the viewpoint position 105 and the coordinate p2 on the display surface 301.
  • in step ST22, the coordinate conversion unit 6 calculates the coordinate c2 of the intersection between the imaging surface 302 and the straight line connecting the coordinate i2 and the vehicle exterior camera 31a.
  • in this manner, the coordinate conversion unit 6 performs coordinate conversion for all necessary coordinates on the display surface 301.
  • next, in step ST21, the coordinate conversion unit 6 calculates the coordinate i5 of the intersection between the virtual projection spherical surface 202 and the half line passing through the viewpoint position 105 and the coordinate p5 on the display surface 301.
  • in step ST22, the coordinate conversion unit 6 calculates the coordinate c5 of the intersection between the imaging surface 302 and the straight line connecting the coordinate i5 and the vehicle exterior camera 31a.
  • in step ST21, the coordinate conversion unit 6 calculates the coordinate i6 of the intersection between the virtual projection spherical surface 202 and the half line passing through the viewpoint position 105 and the coordinate p6 on the display surface 301.
  • in step ST22, the coordinate conversion unit 6 calculates the coordinate c6 of the intersection between the imaging surface 302 and the straight line connecting the coordinate i6 and the vehicle exterior camera 31a.
  • in this manner, the coordinate conversion unit 6 performs coordinate conversion for all necessary coordinates on the display surface 301.
  • further, in step ST21, the coordinate conversion unit 6 calculates the coordinate i3 of the intersection between the virtual projection spherical surface 203 and the half line passing through the viewpoint position 105 and the coordinate p3 on the display surface 301.
  • in step ST22, the coordinate conversion unit 6 calculates the coordinate c3 of the intersection between the imaging surface 302 and the straight line connecting the coordinate i3 and the vehicle exterior camera 31a.
  • in step ST21, the coordinate conversion unit 6 calculates the coordinate i4 of the intersection between the virtual projection spherical surface 203 and the half line passing through the viewpoint position 105 and the coordinate p4 on the display surface 301.
  • in step ST22, the coordinate conversion unit 6 calculates the coordinate c4 of the intersection between the imaging surface 302 and the straight line connecting the coordinate i4 and the vehicle exterior camera 31a.
  • FIG. 14 is a flowchart showing the operation of the display image generation unit 7 of the display control apparatus 10 according to the first embodiment.
  • the display image generation unit 7 sets the image area of the display image displayed on the display from the position information such as the installation position of the display (step ST31).
  • the display image generation unit 7 acquires the distance value at each pixel position of the distance image generated in step ST4 for the image region set in step ST31 (step ST32).
  • the display image generation unit 7 selects a display image generated using the virtual projection spherical surface corresponding to the distance value acquired in step ST32 from the display image generated in step ST8 (step ST33).
  • the display image generation unit 7 determines whether display images have been selected at all pixel positions of the distance image (step ST34). When the display image is not selected for all pixel positions (step ST34; NO), the process returns to step ST33.
  • in step ST34, when display images have been selected for all pixel positions (step ST34; YES), the display image generation unit 7 integrates all the display images selected in step ST33 to generate a display image (step ST35). The display image generation unit 7 outputs the display image generated in step ST35 to the display control unit 8 (step ST36). Thereafter, the process proceeds to step ST10 shown in the flowchart of FIG.
  • FIG. 15 is a diagram illustrating a display image generated by the display image generation unit 7 of the display control apparatus 10 according to the first embodiment.
  • the display image generation unit 7 sets the size of the image area 401 of the display image that can be displayed on the display 34a shown in FIG.
  • the display image generation unit 7 sets, for example, the distance value 201d at the pixel position 402, the distance value 202d at the pixel position 403, and the distance value 203d at the pixel position 404 for the image area 401 whose size is set. Set.
  • only the pixel positions 402, 403, and 404 are shown, but distance values are set at all the pixel positions in the image area 401.
  • the display image generation unit 7 selects a display image generated using the virtual projection spherical surface 201 corresponding to the distance value 201d acquired at the pixel position 402, for example. Similarly, as step ST33, the display image generation unit 7 selects a display image generated using the virtual projection spherical surface 202 corresponding to the distance value 202d acquired at the pixel position 403. In step ST33, the display image generation unit 7 selects a display image generated using the virtual projection spherical surface 203 corresponding to the distance value 203d acquired at the pixel position 404.
  • in step ST34, when the display image generation unit 7 has selected display images for all the pixel positions in the image area 401 (step ST34; YES), it generates a display image as shown in the image area 401 of FIG. In step ST36, the display image generation unit 7 outputs data indicating the generated display image to the display control unit 8.
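The per-pixel selection and integration of steps ST32 to ST35 can be sketched as a lookup: the distance value at each pixel position selects the display image generated with the matching virtual projection spherical surface. The pixel grid, distance values, and image contents below are hypothetical stand-ins, not data from the embodiment.

```python
# Sketch of steps ST32-ST35: for each pixel of the image area, the distance
# value from the distance image selects which per-sphere display image to
# sample; combining the selections yields the integrated display image.

def integrate_display_images(distance_image, per_sphere_images):
    """distance_image: dict (x, y) -> distance value.
    per_sphere_images: dict distance value -> dict (x, y) -> pixel value.
    Returns the integrated display image as a dict (x, y) -> pixel value."""
    display = {}
    for pos, distance in distance_image.items():
        source = per_sphere_images[distance]  # image made with the matching sphere
        display[pos] = source[pos]
    return display

# Three pixels whose distance values correspond to spheres 201, 202, and 203.
dist_img = {(0, 0): 2.0, (1, 0): 5.0, (2, 0): 30.0}
images = {
    2.0: {p: "B" for p in dist_img},   # stand-in image from sphere 201
    5.0: {p: "C" for p in dist_img},   # stand-in image from sphere 202
    30.0: {p: "D" for p in dist_img},  # stand-in image from sphere 203
}
final = integrate_display_images(dist_img, images)
print(final)  # {(0, 0): 'B', (1, 0): 'C', (2, 0): 'D'}
```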
  • FIG. 16 is a diagram illustrating a display result of the display control apparatus 10 according to the first embodiment.
  • the display control unit 8 performs display control for displaying the display image indicated by the image area 401 in FIG. 15 on the display 34a arranged in front of the front pillar 21.
  • by displaying the display image indicated by the image area 401 on the display 34a, the scenery outside the vehicle that the driver A actually sees from the viewpoint position 105 through the front window 24 and the side window 25, as shown in FIG. 16, and the display image of the blind spot area displayed on the display 34a on the front pillar 21 are displayed continuously. Thereby, the driver A does not feel discomfort in the continuity between the scenery outside the vehicle actually viewed from the viewpoint position 105 and the display image on the display 34a.
  • as described above, the display control apparatus according to the first embodiment includes: the distance image generation unit 5 that generates, using a captured image in which a blind spot area where the driver's field of view is blocked by the vehicle structure is imaged, a distance image whose pixel values are the values of the distance from the driver's viewpoint position to objects located in the blind spot area; the coordinate conversion unit 6 that, using the distance image, sets in the area around the vehicle a plurality of virtual projection spherical surfaces defined as the outer peripheral surfaces of spheres centered on the driver's viewpoint position, and converts coordinates on the display screen for displaying the captured image into coordinates on the imaging surface using the set plurality of virtual projection spherical surfaces; and the display image generation unit 7 that generates an image to be displayed on the display screen using the converted coordinates on the imaging surface. Therefore, the image of the blind spot area can be displayed continuously with the actual scenery outside the vehicle as seen from the viewpoint position.
  • in the first embodiment described above, the distance image generation unit 5 sets a distance image having three target regions based on three distance values; however, the number of distance values to be used is not limited to three.
  • the distance image generation unit 5 may generate the same target area in units of pixels.
  • the coordinate conversion unit 6 sets a virtual projection spherical surface in units of pixels, and performs coordinate conversion using the virtual projection spherical surface of each pixel.
  • the distance image generation unit 5 may be configured to set conditions for generation of the distance image and reduce the number of virtual projection spherical surfaces set by the coordinate conversion unit 6.
  • for example, the distance image generation unit 5 generates a distance image based on the condition that the number of virtual projection spherical surfaces is three. Further, the distance image generation unit 5 generates a distance image based on the condition that the three objects with the smallest distance values, in ascending order, are the subjects for which the same target region is generated. Further, the distance image generation unit 5 generates a distance image based on the condition that distance values within ±50 cm are regarded as the same distance value. In addition, the distance image generation unit 5 generates a distance image based on the condition that all objects at positions with a distance value of 30 m or more are regarded as the background. Further, the distance image generation unit 5 refers to the appearance frequency of the distance values and generates a distance image based on the condition that a certain number of distance values are used in descending order of appearance frequency.
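The reduction conditions listed above (a bounded sphere count, merging distances within ±50 cm, treating everything at 30 m or more as background, and preferring frequent distance values) can be sketched as a small filter over measured distances. The sample distance list is hypothetical, and the combination of all conditions in one function is an illustration, not the embodiment's algorithm.

```python
from collections import Counter

def reduce_distance_values(distances, merge_tol=0.5, background_from=30.0, max_spheres=3):
    """Sketch of the distance-image generation conditions: values within
    +/-50 cm are merged, everything at 30 m or more becomes background, and
    at most `max_spheres` values are kept, most frequent first."""
    merged = []
    for d in distances:
        if d >= background_from:
            merged.append(background_from)  # all far objects -> background
            continue
        for seen in merged:
            if seen != background_from and abs(seen - d) <= merge_tol:
                merged.append(seen)  # regarded as the same distance value
                break
        else:
            merged.append(d)
    # Keep only the most frequent values so the sphere count stays bounded.
    return [v for v, _ in Counter(merged).most_common(max_spheres)]

vals = [2.0, 2.3, 5.0, 5.4, 41.0, 35.0, 2.1, 80.0]
keep = reduce_distance_values(vals)
print(keep)
```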
  • FIG. 17 is a block diagram illustrating a configuration of the display control apparatus 10a according to the second embodiment.
  • the display control apparatus 10a according to the second embodiment includes a vehicle information acquisition unit 9 in addition to the configuration of the display control apparatus 10 shown in the first embodiment, and includes an image processing unit 4a instead of the image processing unit 4.
  • the image processing unit 4a includes a distance image generation unit 5, a coordinate conversion unit 6a, and a display image generation unit 7a.
  • the same or corresponding parts as the components of the display control apparatus 10 according to the first embodiment are denoted by the same reference numerals as those used in the first embodiment, and the description thereof is omitted or simplified.
  • the vehicle information acquisition unit 9 acquires vehicle information of a vehicle on which the display control device 10 is mounted via an in-vehicle network (not shown).
  • the vehicle information is information indicating, for example, own vehicle position information, traveling direction, vehicle speed, acceleration, steering angle, and the like.
  • information acquired by the vehicle information acquisition unit 9 is input to the image processing unit 4a.
  • the coordinate conversion unit 6a of the image processing unit 4a determines the number of virtual projection spherical surfaces to be set according to the distance image generated by the distance image generation unit 5 and the vehicle information acquired by the vehicle information acquisition unit 9. For example, when the coordinate conversion unit 6a refers to the vehicle speed in the vehicle information and determines from the vehicle speed acquired by the vehicle information acquisition unit 9 that the vehicle is traveling at a high speed, it reduces the number of virtual projection spherical surfaces. By changing the set number of virtual projection spherical surfaces according to the vehicle information in this way, the coordinate conversion unit 6a can suppress the processing load of the image processing.
  • the display image generation unit 7a changes the image data of the display image to be generated according to the vehicle information acquired by the vehicle information acquisition unit 9.
  • the case where the display image generation unit 7a refers to the vehicle speed of the vehicle information will be described.
  • the processing amount that the display image generation unit 7a can handle is determined by the data amount (size) of one frame of the display image to be generated and the number of frames to be displayed per second.
  • when the processing capability of the display image generation unit 7a is constant and the vehicle is traveling at a high speed, improvement in the update speed of the display image is required rather than the definition of one frame of the display image.
  • therefore, when the vehicle is traveling at a high speed, the display image generation unit 7a reduces the definition of the display image to be generated and conversely improves the update speed of the display image. Thereby, the display image generation unit 7a can suppress the processing load of the image processing.
  • the display image generation unit 7a when the processing capability of the display image generation unit 7a is constant and the vehicle is traveling at a low speed, improvement in the definition of one frame of the display image is required rather than the update speed of the display image. Therefore, when the vehicle is traveling at a low speed, the display image generation unit 7a decreases the update speed of the display image to be generated and improves the definition of the display image. Thereby, the display image generation unit 7a can generate a display image with improved visibility.
  • the display image generation unit 7a may be configured to change the number of colors of the display image according to the vehicle information acquired by the vehicle information acquisition unit 9, thereby adjusting the processing amount and the image quality of the display image. Moreover, the processing of the coordinate conversion unit 6a described above and the processing of the display image generation unit 7a may be performed simultaneously, or only one of them may be performed.
  • the vehicle information acquisition unit 9 in the display control device 10a is realized by the input device 11 in FIG. 2 that inputs information from the outside. Further, the coordinate conversion unit 6a and the display image generation unit 7a in the display control device 10a are realized by a processing circuit.
  • each of these functions may be executed by dedicated hardware or by software.
  • when the functions are executed by software, the processing circuit is the CPU 12 that executes a program stored in the memory 13 shown in FIG.
  • FIG. 18 is a flowchart showing the operation of the coordinate conversion unit 6a of the display control apparatus 10a according to the second embodiment.
  • in FIG. 18, the same steps as those in the flowchart of the first embodiment shown in FIG. are denoted by the same step numbers, and their description is omitted.
  • FIG. 19 is a diagram illustrating an example of data referred to by the coordinate conversion unit 6a of the display control apparatus 10a according to the second embodiment.
  • the coordinate conversion unit 6a refers to the vehicle information acquired by the vehicle information acquisition unit 9 and sets the number of virtual projection spherical surfaces according to the vehicle information (step ST41).
  • the coordinate conversion unit 6a refers to a database (not shown) in which the conditions shown in FIG. 19 are stored, and sets the number of virtual projection spherical surfaces according to the vehicle speed of the vehicle information, for example.
  • the coordinate conversion unit 6a performs the processing of step ST5 and step ST6 so that three virtual projection spherical surfaces are set. Note that which distance values the coordinate conversion unit 6a uses to set the virtual projection spherical surfaces is determined, for example, so that distance values with a high appearance frequency are used in order, as described in the first embodiment.
  • FIGS. 20A and 20B are diagrams illustrating an example of data referred to by the display image generation unit 7a of the display control apparatus 10a according to Embodiment 2.
  • the display image generation unit 7a refers to a database (not shown) in which the setting conditions shown in FIG. 20A or 20B are stored, and determines the setting conditions for the display image according to the vehicle information.
  • the setting conditions of FIGS. 20A and 20B divide the vehicle speed into three stages of low-speed driving, medium-speed driving, and high-speed driving, and show, for each driving speed, the resolution (definition) of the display image set by the display image generation unit 7a, the frame rate (update speed) of the display image, and the number of colors of the display image.
  • FIG. 20A shows a case where the number of colors of the display image is a fixed value.
  • the display image generation unit 7a refers to the setting conditions of FIG. 20A.
  • when the vehicle is traveling at a low speed, the display image generation unit 7a generates a display image based on the conditions of resolution 1920×960, frame rate 30 fps, and RGB 24-bit color. When the vehicle is traveling at a medium speed, it generates a display image based on the conditions of resolution 1280×720, frame rate 60 fps, and RGB 24-bit color. When the vehicle is traveling at a high speed, it generates a display image based on the conditions of resolution 960×480, frame rate 120 fps, and RGB 24-bit color.
  • FIG. 20B shows a case where the resolution of the display image is a fixed value.
  • the display image generation unit 7a refers to the database of FIG. 20B and, when the vehicle is traveling at a low speed, generates a display image based on the conditions of resolution 1280×720, frame rate 30 fps, and RGB 48-bit color. When the vehicle is traveling at a medium speed, it generates a display image based on the conditions of resolution 1280×720, frame rate 60 fps, and RGB 24-bit color. When the vehicle is traveling at a high speed, it generates a display image based on the conditions of resolution 1280×720, frame rate 120 fps, and YUV 16-bit color.
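The lookups in FIGS. 20A and 20B amount to a table indexed by driving stage. The sketch below uses the FIG. 20A values; the km/h thresholds separating the three stages are hypothetical, since the embodiment only names the stages without specifying speed boundaries.

```python
# Sketch of the FIG. 20A lookup: the vehicle speed selects the resolution,
# frame rate, and color depth for the generated display image.
# The speed thresholds (km/h) below are hypothetical stand-ins.

SETTINGS_FIXED_COLORS = {  # FIG. 20A: number of colors is a fixed value
    "low": ((1920, 960), 30, "RGB24"),
    "medium": ((1280, 720), 60, "RGB24"),
    "high": ((960, 480), 120, "RGB24"),
}

def driving_stage(speed_kmh, low_max=30.0, medium_max=80.0):
    """Map a vehicle speed to one of the three driving stages."""
    if speed_kmh <= low_max:
        return "low"
    return "medium" if speed_kmh <= medium_max else "high"

def display_settings(speed_kmh):
    """Return (resolution, frame rate, color format) for the current speed."""
    return SETTINGS_FIXED_COLORS[driving_stage(speed_kmh)]

print(display_settings(20.0))   # ((1920, 960), 30, 'RGB24')
print(display_settings(100.0))  # ((960, 480), 120, 'RGB24')
```

The FIG. 20B variant would keep the resolution fixed and vary the frame rate and color format in the same table-driven way.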
  • in the second embodiment described above, the coordinate conversion unit 6a and the display image generation unit 7a are configured to refer to the vehicle speed in the vehicle information; however, they may refer to other vehicle information, such as the traveling direction, acceleration, or steering angle, to set the number of virtual projection spherical surfaces and to determine the definition of the display image, the update speed of the display image, and the like.
  • as described above, according to the second embodiment, the vehicle information acquisition unit 9 that acquires the vehicle information indicating the traveling state of the vehicle is provided, and the coordinate conversion unit 6a refers to the vehicle information and sets the number of virtual projection spherical surfaces to be generated according to the vehicle information. Therefore, the load of the coordinate conversion processing can be suppressed.
  • further, according to the second embodiment, the vehicle information acquisition unit 9 that acquires the vehicle information indicating the traveling state of the vehicle is provided, and the display image generation unit 7a refers to the vehicle information and determines, according to the traveling state of the vehicle, at least one of the definition of the display image to be generated, the update speed of the display image, and the number of colors of the display image. Therefore, the load when generating the display image can be suppressed.
  • in the above description, it is assumed that the distance from the distance measuring sensor 32a to an object in the area, the position information of the vehicle exterior camera 31a, the position information of the distance measuring sensor 32a, the driver's viewpoint information, and the coordinates on the display 34a are information expressed using three-dimensional spatial coordinates.
  • note that, within the scope of the present invention, the embodiments can be freely combined, any component of each embodiment can be modified, and any component of each embodiment can be omitted.
  • since the display control device according to the present invention can display an image obtained by imaging an area that becomes a blind spot for the driver continuously with the actual scenery viewed from the driver's viewpoint position, it is suitable for application to displaying an image on a display provided in a structure of a vehicle, and can be used to improve driver visibility.
  • 1 image input unit, 2 distance information acquisition unit, 3 viewpoint information acquisition unit, 4, 4a image processing unit, 5 distance image generation unit, 6, 6a coordinate conversion unit, 7, 7a display image generation unit, 8 display control unit, 9 vehicle information acquisition unit, 10, 10a display control device.


Abstract

The present invention is provided with: a distance image generation unit (5) for using a captured image in which a blind spot is captured where the field of vision of a driver is obstructed by a structure of a vehicle to generate a distance image the pixel value of which is represented by a value of the distance from the position of the viewpoint of the driver to an object located in the blind spot; a coordinate conversion unit (6) for using the generated distance image to set a plurality of virtual projection spheres in an area around the vehicle that are defined as the outer circumferential surfaces of a globe centering around the position of the viewpoint of the driver, and using the plurality of virtual projection spheres to convert coordinates on a display screen for displaying the captured image into coordinates on the image-capture surface of the captured image; and a display image generation unit (7) for using the converted coordinates on the image-capture surface to generate an image displayed on the display screen.

Description

表示制御装置および表示制御方法Display control apparatus and display control method
 この発明は、運転者の視界が遮られる領域の映像を表示する技術に関するものである。 The present invention relates to a technique for displaying an image of an area where a driver's view is blocked.
 従来、車両構造部により運転者の視界が遮られる領域を撮像した画像を、当該車両構造部に設置されたディスプレイなどに表示することにより、運転者の視界が遮られる領域の状況を、運転者が目視で確認できるように支援する技術が提案されている。また、撮像した画像をディスプレイなどに表示する際に、撮像した画像を、運転者の視点から見た実際の車外の景色とある程度連続させて表示する技術も提案されている。例えば、特許文献1に記載された車両用死角映像表示装置は、運転者の視点位置から被写体までの距離を半径とした仮想投影球面を定義し、運転者の視点位置を視点としてディスプレイの各画素を通る直線と仮想投影球面との交点を求め、求めた交点に対応するカメラの撮像した映像の画素を特定してディスプレイに表示する画像を生成している。 Conventionally, by displaying an image obtained by capturing an area where the driver's view is blocked by the vehicle structure on a display or the like installed in the vehicle structure, the situation of the area where the driver's view is blocked is displayed. Has been proposed to assist in the visual confirmation. Also, a technique has been proposed in which when a captured image is displayed on a display or the like, the captured image is displayed to some extent continuously with the actual scenery outside the vehicle viewed from the viewpoint of the driver. For example, a blind spot image display device for a vehicle described in Patent Document 1 defines a virtual projection spherical surface having a radius from a driver's viewpoint position to a subject as a radius, and each pixel of the display using the driver's viewpoint position as a viewpoint. The intersection of the straight line passing through the virtual projection sphere and the virtual projection spherical surface is obtained, the image of the image captured by the camera corresponding to the obtained intersection is specified, and an image to be displayed on the display is generated.
特開2007-104538号公報JP 2007-104538 A
 上記特許文献1に記載された装置では、運転者の視点位置から1つの被写体までの距離を用いて仮想投影球面の半径を定義しているため、車両構造部により運転者の視界が遮られる領域に複数の被写体が存在する場合に、定義された仮想投影球面の半径と、運転者の視点位置からの距離が一致しない被写体が存在することとなる。仮想投影球面の半径と運転者の視点位置からの距離が一致しない被写体を撮像した画像は、運転者の視点位置から見た実際の車外の景色との連続性が低下するという課題があった。 In the device described in Patent Document 1, since the radius of the virtual projection spherical surface is defined using the distance from the driver's viewpoint position to one subject, the area in which the driver's field of view is blocked by the vehicle structure unit When there are a plurality of subjects, there exists a subject whose radius from the defined virtual projection spherical surface does not match the distance from the viewpoint position of the driver. An image obtained by capturing an object in which the radius of the virtual projection spherical surface and the distance from the driver's viewpoint position do not match has a problem that the continuity with the actual scenery outside the vehicle viewed from the driver's viewpoint position is reduced.
The present invention has been made to solve the above problem, and its object is to display an image of the area blocked from the driver's view so that it is continuous with the actual scenery outside the vehicle as seen from the driver's viewpoint position, even when multiple subjects exist in that area.
A display control device according to the present invention includes: a distance image generation unit that, using a captured image of a blind-spot area of the vehicle's surroundings where the driver's view is blocked by a structure of the vehicle, generates a distance image whose pixel values are the distances from the driver's viewpoint position to objects located in the blind-spot area; a coordinate conversion unit that, using the distance image generated by the distance image generation unit, sets multiple virtual projection spheres in the area around the vehicle, each defined as the outer surface of a sphere centered on the driver's viewpoint position, and uses those spheres to convert coordinates on a display screen for showing the captured image into coordinates on the imaging plane of the captured image; and a display image generation unit that generates the image to be shown on the display screen using the imaging-plane coordinates produced by the coordinate conversion unit.
According to the present invention, an image of the area blocked from the driver's view can be displayed so that it is continuous with the actual scenery outside the vehicle as seen from the driver's viewpoint position.
FIG. 1 is a block diagram showing the configuration of the display control device according to Embodiment 1.
FIG. 2 is a diagram showing the hardware configuration of the display control device according to Embodiment 1.
FIG. 3 is a top view of a vehicle equipped with the display control device according to Embodiment 1.
FIG. 4 is a view, from the rear seat, of part of the interior of a vehicle equipped with the display control device according to Embodiment 1.
FIG. 5 is a diagram showing the scenery the driver sees when looking directly outside the vehicle from the driver's viewpoint position.
FIG. 6 is a flowchart showing the operation of the display control device according to Embodiment 1.
FIG. 7 is a diagram showing an example of a captured image received by the image input unit of the display control device according to Embodiment 1.
FIG. 8 is a diagram showing an example of a distance image generated by the distance image generation unit of the display control device according to Embodiment 1.
FIGS. 9 to 11 are diagrams showing display examples of display images generated by the display image generation unit of the display control device according to Embodiment 1.
FIG. 12 is a flowchart showing the operation of the coordinate conversion unit of the display control device according to Embodiment 1.
FIG. 13 is a diagram schematically showing the coordinate conversion processing of the coordinate conversion unit of the display control device according to Embodiment 1.
FIG. 14 is a flowchart showing the operation of the display image generation unit of the display control device according to Embodiment 1.
FIG. 15 is a diagram showing a display image generated by the display image generation unit of the display control device according to Embodiment 1.
FIG. 16 is a diagram showing a display result of the display control device according to Embodiment 1.
FIG. 17 is a block diagram showing the configuration of the display control device according to Embodiment 2.
FIG. 18 is a flowchart showing the operation of the display image generation unit of the display control device according to Embodiment 2.
FIG. 19 is a diagram showing an example of setting conditions of the display control device according to Embodiment 2.
FIGS. 20A and 20B are diagrams showing an example of setting conditions of the display control device according to Embodiment 2.
Hereinafter, in order to describe the present invention in more detail, embodiments for carrying out the invention are described with reference to the accompanying drawings.
Embodiment 1.
FIG. 1 is a block diagram showing the configuration of a display control device 10 according to Embodiment 1.
The display control device 10 includes an image input unit 1, a distance information acquisition unit 2, a viewpoint information acquisition unit 3, an image processing unit 4, and a display control unit 8. The image processing unit 4 in turn includes a distance image generation unit 5, a coordinate conversion unit 6, and a display image generation unit 7. The display control device 10 is mounted on, for example, a vehicle 20 described later.
The image input unit 1 receives a captured image of the area around the vehicle 20 taken by imaging means such as a camera. The captured image covers at least the part of the vehicle's surroundings where the driver's view is blocked by a structure of the vehicle 20 such as a pillar (hereinafter, the blind-spot area).
The distance information acquisition unit 2 acquires distance information, that is, the result obtained when a sensor or the like mounted on the vehicle 20 scans at least the blind-spot area and measures the distances from the sensor to objects in that area.
The viewpoint information acquisition unit 3 acquires viewpoint information indicating the driver's viewpoint position, for example the position of the driver's eyes or head.
The captured image received by the image input unit 1 and the information acquired by the distance information acquisition unit 2 and the viewpoint information acquisition unit 3 are output to the image processing unit 4.
The image processing unit 4 is configured in advance with position information indicating where the imaging means is installed, where the distance measuring means is installed, and where the display described later is installed.
The distance image generation unit 5 of the image processing unit 4 uses the distance information acquired by the distance information acquisition unit 2 and the driver's viewpoint information acquired by the viewpoint information acquisition unit 3 to calculate the distance from the driver's viewpoint position to each object located at least within the blind-spot area. Referring further to the position information of the imaging means and of the distance measuring means, the distance image generation unit 5 generates a distance image in which each pixel of the captured image received by the image input unit 1 is assigned, as its pixel value, the distance from the driver's viewpoint position to the corresponding object (hereinafter, the distance value).
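As a rough illustration of this step, a distance image can be thought of as an array aligned with the captured image in which each pixel stores a viewpoint-to-object distance instead of a color. The sketch below is a minimal, hypothetical rendering of that idea in Python with NumPy; the mapping from sensor samples to image pixels and the re-basing to the driver's viewpoint are assumed to be done upstream and are not part of the patent's described implementation.

```python
import numpy as np

def build_distance_image(height, width, samples):
    """Build a distance image: one viewpoint-to-object distance per pixel.

    `samples` is an iterable of (row, col, distance_from_viewpoint) tuples,
    i.e. range measurements already re-projected onto the captured image
    and re-based to the driver's viewpoint (both assumed done elsewhere).
    Pixels with no measurement keep a sentinel value of +inf.
    """
    dist_img = np.full((height, width), np.inf, dtype=np.float32)
    for row, col, d in samples:
        dist_img[row, col] = d
    return dist_img

# Toy example: two measured objects at 5 m and 30 m in a 4x4 image.
img = build_distance_image(4, 4, [(1, 1, 5.0), (2, 3, 30.0)])
```

In a real system the array would have the captured image's full resolution, and unmeasured pixels would typically be interpolated from neighboring measurements rather than left at the sentinel value.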
The distance image generation unit 5 merges, among the regions whose distances have been measured, those at the same distance from the driver's viewpoint position into a single same-target region, and sets subject regions and background regions according to the distance of each same-target region. For example, the distance image generation unit 5 sets a same-target region as a subject region when its distance value is below a threshold, and as a background region when its distance value is at or above the threshold.
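The near/far classification described above reduces to a single threshold comparison per same-target region. The sketch below shows this in Python; the threshold value is a hypothetical placeholder, since the patent does not specify a concrete number.

```python
SUBJECT_THRESHOLD_M = 10.0  # hypothetical threshold; the patent leaves it unspecified

def classify_regions(region_distances):
    """Label each same-target region 'subject' (near) or 'background' (far).

    `region_distances` maps a region identifier to its representative
    distance from the driver's viewpoint, in meters.
    """
    return {
        region_id: ('subject' if d < SUBJECT_THRESHOLD_M else 'background')
        for region_id, d in region_distances.items()
    }

# Illustrative regions only: two nearby pedestrians and a distant backdrop.
labels = classify_regions({'person_B': 5.0, 'person_C': 8.0, 'backdrop': 40.0})
```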
The coordinate conversion unit 6 refers to the same-target regions generated by the distance image generation unit 5 and to the distance information of the distance image, and sets a virtual projection sphere defined as the outer surface of a sphere centered on the driver's viewpoint position whose radius is the distance from that viewpoint position to the subject region. Likewise, the coordinate conversion unit 6 sets a virtual projection sphere defined as the outer surface of a sphere centered on the driver's viewpoint position whose radius is the distance from that viewpoint position to the background region.
Using the virtual projection spheres it has set, the coordinate conversion unit 6 converts coordinates on the in-vehicle display (display screen), as seen from the driver's viewpoint position, into coordinates on the imaging plane of the captured image. The image data to be shown on the in-vehicle display is thereby identified by imaging-plane coordinates, that is, by pixels. Here, the in-vehicle display is a display mounted on the structure of the vehicle 20 that creates the blind-spot area, such as a pillar, and is used to show the image of the blind-spot area.
Details of the processing of the coordinate conversion unit 6 are described later.
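The geometry behind this conversion can be sketched as: cast a ray from the viewpoint through a display pixel, intersect it with the virtual projection sphere, and then project the intersection point back through the camera to find the corresponding imaging-plane pixel. The Python sketch below illustrates only the ray-sphere intersection; because the sphere is centered on the viewpoint, the intersection collapses to scaling the unit ray direction by the sphere radius. The camera back-projection and all calibration details are omitted assumptions, not the patent's full procedure.

```python
import numpy as np

def project_onto_sphere(viewpoint, display_point, radius):
    """Intersect the ray viewpoint -> display_point with a sphere of the
    given radius centered on the viewpoint.

    With the sphere centered on the viewpoint, the intersection is simply
    the unit ray direction scaled by the radius.
    """
    viewpoint = np.asarray(viewpoint, dtype=float)
    direction = np.asarray(display_point, dtype=float) - viewpoint
    direction /= np.linalg.norm(direction)
    return viewpoint + radius * direction

# A display pixel 0.5 m in front of the viewpoint, projected onto the
# 5 m "subject" sphere: the point lands 5 m away along the same ray.
p = project_onto_sphere([0.0, 0.0, 0.0], [0.0, 0.0, 0.5], 5.0)
```

Performing this per display pixel, once with the subject-sphere radius and once with the background-sphere radius, yields the two candidate imaging-plane lookups from which the display image generation unit later selects.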
The display image generation unit 7 generates image data for each virtual projection sphere set by the coordinate conversion unit 6, using the coordinates produced by the coordinate conversion unit 6, that is, the coordinates on the imaging plane of the captured image.
The display image generation unit 7 refers to the distance values of the distance image generated by the distance image generation unit 5 and selects the virtual-projection-sphere image data to be used for the subject region and for the background region, respectively.
The display image generation unit 7 generates display image data from the selected image data based on the preconfigured display position and display size.
The display image generation unit 7 may also provide functions such as processing and compositing the generated display image data and superimposing graphics on it. For example, the display image generation unit 7 may transform the display image data so that a menu screen, an alert screen, or the like is superimposed on the displayed image.
The display control unit 8 generates display control information for showing an image based on the display image data generated by the display image generation unit 7 on a display or the like, and outputs it to that display.
Next, an example hardware configuration of the display control device 10 is described.
FIG. 2 is a diagram showing an example hardware configuration of the display control device 10.
The image input unit 1, distance information acquisition unit 2, viewpoint information acquisition unit 3, distance image generation unit 5, coordinate conversion unit 6, display image generation unit 7, and display control unit 8 of the display control device 10 are realized by processing circuitry. That is, the display control device 10 includes processing circuitry that performs image processing of the captured image using the input captured image, distance information, and viewpoint information, and performs display control of the generated display image.
When the processing circuitry is dedicated hardware, it may be, for example, a single circuit, a composite circuit, a programmed processor, a parallel-programmed processor, an ASIC (Application Specific Integrated Circuit), an FPGA (Field-Programmable Gate Array), or a combination of these. The function of each of the image input unit 1, distance information acquisition unit 2, viewpoint information acquisition unit 3, distance image generation unit 5, coordinate conversion unit 6, display image generation unit 7, and display control unit 8 may be realized by its own processing circuit, or the functions may be realized collectively by a single processing circuit.
When the processing circuitry is a CPU (Central Processing Unit), the processing circuitry is the CPU 12 that executes programs stored in the memory 13 shown in FIG. 2.
The functions of the image input unit 1, distance information acquisition unit 2, viewpoint information acquisition unit 3, distance image generation unit 5, coordinate conversion unit 6, display image generation unit 7, and display control unit 8 are realized by software, firmware, or a combination of software and firmware. The software or firmware is written as programs and stored in the memory 13. The CPU 12 reads and executes the programs stored in the memory 13 and thereby realizes each of those functions. In other words, the display control device 10 includes the memory 13 for storing programs which, when executed by the CPU 12, result in the execution of each step shown in FIG. 6 described later. These programs can also be said to cause a computer to execute the procedures or methods of the image input unit 1, distance information acquisition unit 2, viewpoint information acquisition unit 3, distance image generation unit 5, coordinate conversion unit 6, display image generation unit 7, and display control unit 8.
Here, the CPU 12 may be, for example, a central processing unit, a processing unit, an arithmetic unit, a processor, a microprocessor, a microcomputer, or a DSP (Digital Signal Processor).
The memory 13 may be, for example, a nonvolatile or volatile semiconductor memory such as a RAM (Random Access Memory), ROM (Read Only Memory), flash memory, EPROM (Erasable Programmable ROM), or EEPROM (Electrically Erasable Programmable ROM); a magnetic disk such as a hard disk or flexible disk; or an optical disc such as a MiniDisc, CD (Compact Disc), or DVD (Digital Versatile Disc).
Next, the information input to the image input unit 1, the distance information acquisition unit 2, and the viewpoint information acquisition unit 3, and the information output by the display control unit 8, are described with reference to the explanatory diagrams of FIGS. 3 and 4.
FIG. 3 is a top view of a vehicle 20 equipped with the display control device 10 according to Embodiment 1. FIG. 4 is a view, from the rear seat, of part of the interior of the vehicle 20 equipped with the display control device 10 according to Embodiment 1.
The view of driver A of the vehicle 20 is blocked, for example, by the front pillar (the so-called A-pillar) 21 on the left side of the vehicle 20. Therefore, when driver A looks toward the front pillar 21 from the seated position, driver A cannot see the area located behind the front pillar 21 as seen from driver A, and a blind-spot area 101 arises.
The information input to the display control device 10 in this case is described below.
The viewpoint information acquired by the viewpoint information acquisition unit 3 is calculated using an image of the vehicle interior captured by an in-vehicle camera 33 or the like mounted on the vehicle 20. In the example of FIGS. 3 and 4, the in-vehicle camera 33 placed at the front of the vehicle 20 captures the interior, and calculation means (not shown) analyzes the face image of driver A contained in the captured image to calculate the viewpoint position 105. The viewpoint information can also be acquired using known detection techniques such as triangulation from images captured by multiple cameras or TOF (Time of Flight) with a monocular camera. Alternatively, the viewpoint information may be calculated from pre-registered body information of driver A, or calculated from the driver's seat position and the angles of the rear-view mirror and side mirrors.
The viewpoint position 105 indicated by the viewpoint information is set as the reference position for calculating the distance from driver A to an object.
The captured image input to the image input unit 1 is taken by a vehicle-exterior camera 31a or the like mounted on the vehicle 20. The exterior camera 31a is imaging means that captures at least the blind-spot area of the vehicle's surroundings where the driver's view is blocked by a structure of the vehicle 20 such as a pillar.
In FIG. 3, the exterior camera 31a faces the left front of the vehicle 20 and is installed, for example, near the side mirror 22. The exterior camera 31a captures objects and the like present within an imaging range 102, an area around the vehicle 20 that includes at least the blind-spot area 101. As long as it can capture at least the blind-spot area 101, the exterior camera 31a may instead be installed at the base of the front pillar 21, on the roof, at an interior window, or elsewhere (none shown); its installation position is not limited. Multiple exterior cameras 31a may also be arranged. Furthermore, the exterior camera 31a may consist of a single camera with a wide-angle lens, with that one camera used to capture the blind-spot areas on both the left and right sides of the vehicle 20.
The exterior camera 31a is a camera that captures color images, and includes an optical mechanism composed of lenses, mirrors, and the like, and an image sensor such as a CCD (Charge Coupled Device) or CMOS (Complementary Metal Oxide Semiconductor) sensor. The exterior camera 31a may also include an infrared sensor or a light-emitting element to enable imaging at night. The raw image data generated by the image sensor of the exterior camera 31a undergoes preprocessing as needed, such as color conversion, format conversion, and filtering, and is then input to the image input unit 1 as the captured image.
The distance information acquired by the distance information acquisition unit 2 is generated by a ranging sensor 32a or the like mounted on the vehicle 20. In FIG. 3, the ranging sensor 32a faces the left front of the vehicle 20 and is placed, for example, near the side mirror 22. The example of FIG. 3 shows the ranging sensor 32a measuring a distance 103 to object B and a distance 104 to object C. In practice, the ranging sensor 32a measures, at the resolution available, not only the distances to objects B and C but also the distances to all objects present in the blind-spot area 101 and to the parts composing those objects, and generates the distance information accordingly.
As the ranging sensor 32a, known vehicle ranging technologies such as millimeter-wave radar, laser radar, or ultrasonic sensors can be applied. The exterior camera 31a may also double as the ranging means; in that case, the distance information is generated from distances measured using techniques such as triangulation with multiple cameras or TOF with a monocular camera. The distance information acquired from the ranging sensor 32a is updated at a fixed rate, for example 10 to 240 Hz.
The display control information generated by the display control unit 8 is output to a display 34a. The display 34a is placed on the surface of the front pillar 21 of the vehicle 20 visible to driver A, so as to overlap the blind-spot area 101 that is hidden from the viewpoint position 105 of driver A. The display 34a is implemented using any of various display devices, such as a liquid crystal display (LCD), an organic electroluminescence (OLED) display, or a projector. The display 34a may also be built from multiple small displays arranged side by side to form a single display. The display 34a may further include an illuminance sensor or the like for brightness adjustment and show an image adjusted according to the amount of sunlight in the cabin.
The display image generation unit 7 is assumed to hold information on the placement position and display size of the display 34a.
Although a detailed description is omitted, the view of driver A on the right side of the vehicle 20 is likewise blocked by the front pillar 23 (also an A-pillar), producing a blind-spot area. Therefore, as shown in FIG. 3, an exterior camera 31b and a ranging sensor 32b can be placed at the right front of the vehicle 20 to acquire a captured image and distance information for that blind-spot area. A display image showing the blind-spot area at the right front of the vehicle 20 is generated by the display control device 10 and shown on a display 34b placed on the front face of the front pillar 23. Similarly, for pillars other than the front pillars 21 and 23 (the so-called B-pillars, C-pillars, and so on), the display control device 10 can generate a display image from an image capturing the corresponding blind-spot area and show it on a display provided on that pillar.
Next, the operation of the display control device 10 is described with reference to the explanatory diagram of FIG. 5 and the flowchart of FIG. 6.
The placement positions and functions of the exterior camera 31a, ranging sensor 32a, in-vehicle camera 33, display 34a, and so on mounted on the vehicle 20 are the same as those shown in FIGS. 3 and 4, so their description is omitted. FIG. 5 shows what driver A of the vehicle 20 sees when, in the state shown in FIGS. 3 and 4, driver A looks from the viewpoint position 105 toward the area where two objects B and C, both persons, are present.
As shown in FIG. 5, when driver A looks toward the area where objects B and C are present, the front pillar 21 stands between the front window 24 and the left side window 25 of the vehicle 20. Driver A's view is blocked by the front pillar 21, so part of object B and part of object C cannot be seen. The area whose view is blocked by the front pillar 21 is the blind-spot area 101. The display control device 10 generates a display image of the blind-spot area 101 that driver A cannot see, and performs display control to show it on the display 34a.
FIG. 6 is a flowchart showing the operation of the display control device 10 according to Embodiment 1.
First, the image input unit 1, distance information acquisition unit 2, and viewpoint information acquisition unit 3 acquire their respective inputs (step ST1). Specifically, the image input unit 1 receives the captured image taken by the exterior camera 31a; the distance information acquisition unit 2 acquires the distance information from the ranging sensor 32a to the objects present in the blind-spot area 101; and the viewpoint information acquisition unit 3 acquires the viewpoint information indicating the viewpoint position 105 of driver A.
The distance image generation unit 5 calculates the distance from the driver's viewpoint position to each object from the distance information acquired by the distance information acquisition unit 2 in step ST1, the position information of the ranging means preset in the image processing unit 4, and the viewpoint information acquired by the viewpoint information acquisition unit 3 in step ST1 (step ST2). In the example of FIG. 3, the distance image generation unit 5 calculates, for example, the distance value 201d from the viewpoint position 105 to the object B from the distance 103 from the ranging sensor 32a to the object B, the position information of the ranging sensor 32a preset in the image processing unit 4, and the viewpoint position 105 of the driver A. Similarly, the distance image generation unit 5 calculates the distance value 202d from the viewpoint position 105 to the object C from the distance 104 from the ranging sensor 32a to the object C, the position information of the ranging sensor 32a preset in the image processing unit 4, and the viewpoint position 105 of the driver A. Here, a single representative distance is shown for each of the object B and the object C, but distance information is also generated at least for the distances of the parts constituting each object existing in the blind spot area.
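As an illustration only, and not part of the patent disclosure, the calculation of step ST2 can be sketched as follows, assuming the sensor reading gives a distance along a known measurement direction; the function name and parameters are hypothetical:

```python
import math

def viewpoint_distance(sensor_pos, sensor_dir, measured_dist, viewpoint_pos):
    # Locate the object in vehicle coordinates from the sensor reading
    # (e.g. the distance 103 to the object B along the sensor's ray).
    obj = tuple(s + measured_dist * d for s, d in zip(sensor_pos, sensor_dir))
    # The viewpoint-relative distance (e.g. 201d) is the Euclidean
    # distance from the viewpoint position 105 to that point.
    return math.dist(obj, viewpoint_pos)

# Sensor at the origin measuring 4 m along the x axis; a viewpoint 3 m
# away on the y axis gives a viewpoint distance of 5 m (3-4-5 triangle).
d = viewpoint_distance((0.0, 0.0, 0.0), (1.0, 0.0, 0.0), 4.0, (0.0, 3.0, 0.0))
```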
The distance image generation unit 5 refers to the distances from the driver's viewpoint position to the objects calculated in step ST2, divides the area whose distance is measured by the ranging sensor 32a, and sets a subject area and a background area (step ST3). In the example of FIG. 3, the distance image generation unit 5 first groups adjacent areas whose distances from the viewpoint position 105 of the driver A are the same into one same-target area. Here, "the same distance" does not require the distance values to match exactly; when two distance values are close (for example, 5 m ± 30 cm) and can be judged to be measurements of the same object, the distances are regarded as the same.
Further, when the distance value of a generated same-target area is less than a threshold, the distance image generation unit 5 determines that the area is an object located near the vehicle 20 and sets it as a subject area. The threshold is set to, for example, "30 m". On the other hand, when the distance value of a generated same-target area is equal to or greater than the threshold, the distance image generation unit 5 determines that the area is an object located at a point away from the vehicle 20 and sets it as a background area.
The number of subject areas is not limited to one; a plurality of subject areas are set as long as the condition is satisfied. In addition, even when the distance value of a generated same-target area is less than the threshold, the distance image generation unit 5 may additionally determine that the area is a background area if, for example, the size of that same-target area is less than a certain value and the area is judged small enough to be ignored.
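The grouping and classification of step ST3 can be sketched as a flood fill over a grid of distance values; this is a minimal interpretation under assumed parameter values (tolerance, the 30 m threshold, a minimum region size), not the patented implementation:

```python
from collections import deque

def segment_regions(dist, tol=0.3, subject_max=30.0, min_size=4):
    # Group adjacent cells whose distances agree within `tol` metres into
    # one same-target area (the tolerance is applied between neighbours,
    # so values may drift slightly across a large region).
    h, w = len(dist), len(dist[0])
    label = [[None] * w for _ in range(h)]
    regions = []
    for y in range(h):
        for x in range(w):
            if label[y][x] is not None:
                continue
            cells, q = [], deque([(y, x)])
            label[y][x] = len(regions)
            while q:
                cy, cx = q.popleft()
                cells.append((cy, cx))
                for ny, nx in ((cy - 1, cx), (cy + 1, cx),
                               (cy, cx - 1), (cy, cx + 1)):
                    if (0 <= ny < h and 0 <= nx < w
                            and label[ny][nx] is None
                            and abs(dist[ny][nx] - dist[cy][cx]) <= tol):
                        label[ny][nx] = len(regions)
                        q.append((ny, nx))
            mean = sum(dist[cy][cx] for cy, cx in cells) / len(cells)
            # Near and large enough -> subject area; otherwise background
            # (small near areas are ignored, as the optional rule allows).
            kind = ('subject' if mean < subject_max and len(cells) >= min_size
                    else 'background')
            regions.append({'cells': cells, 'distance': mean, 'kind': kind})
    return regions
```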
The distance image generation unit 5 associates a distance value with each of the set subject areas and the background area. As the distance value of a subject area, the distance image generation unit 5 associates, for example, the average or median of the distance values within that subject area.
As the distance value of the background area, the distance image generation unit 5 associates, for example, a distance value farther than the distance values associated with the subject areas, or, when two subject areas exist, the median of the distance values associated with those two subject areas, or the same distance value as one of the subject areas.
Based on the position information of the camera 31a outside the vehicle and the position information of the ranging sensor 32a preset in the image processing unit 4, the distance image generation unit 5 generates a distance image by associating, with each pixel of the captured image received by the image input unit 1 in step ST1, the distance value of the subject area or background area set in step ST3 as a pixel value, while taking into account the difference between the arrangement positions of the camera 31a and the ranging sensor 32a (step ST4).
FIG. 7 is a diagram illustrating the captured image received by the image input unit 1; the objects B and C, which are persons, exist in the captured image. FIG. 8 is a diagram illustrating the distance image generated by the distance image generation unit 5, which is composed of a subject area Ba, a subject area Ca, and a background area D. In the example of FIG. 8, the subject area Ba is an area associated with the distance value 201d, the subject area Ca is an area associated with the distance value 202d, and the background area D is an area associated with the distance value 203d.
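A toy sketch of the per-pixel association of step ST4 follows; the region masks are assumed to be already aligned from the sensor to the camera, and all names are illustrative:

```python
def build_distance_image(shape, regions):
    """Each pixel of the captured image receives, as its pixel value, the
    distance value of the area it belongs to (subject area Ba -> 201d,
    subject area Ca -> 202d, background area D -> 203d).  `regions` maps
    a distance value to the (row, col) pixels it covers, assumed already
    corrected for the camera/sensor arrangement difference."""
    h, w = shape
    dist_img = [[None] * w for _ in range(h)]
    for value, pixels in regions.items():
        for y, x in pixels:
            dist_img[y][x] = value
    return dist_img

# Two pixels of a near subject area and two background pixels.
img = build_distance_image((2, 2), {5.0: [(0, 0), (0, 1)],
                                    30.0: [(1, 0), (1, 1)]})
```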
The coordinate conversion unit 6 refers to the distance image generated in step ST4 and sets, in the area around the vehicle 20, a virtual projection spherical surface centered on the driver's viewpoint position with the distance value set in each subject area as its radius (step ST5). Similarly, the coordinate conversion unit 6 refers to the distance image generated in step ST4 and sets, in the area around the vehicle 20, a virtual projection spherical surface centered on the driver's viewpoint position with the distance value set in the background area as its radius (step ST6). In the example of FIGS. 3 and 8, in steps ST5 and ST6, the coordinate conversion unit 6 sets a virtual projection spherical surface 201 whose radius is the distance value 201d set in the subject area Ba, a virtual projection spherical surface 202 whose radius is the distance value 202d set in the subject area Ca, and a virtual projection spherical surface 203 whose radius is the distance value 203d set in the background area D. The setting of these virtual projection spherical surfaces is performed virtually in the calculation processing of the display control device 10; the virtual projection spherical surfaces 201, 202, and 203 in FIG. 3 merely depict these virtual spherical surfaces.
The coordinate conversion unit 6 converts coordinate values on the display viewed from the viewpoint position into coordinate values on the imaging surface of the imaging means, using the virtual projection spherical surfaces set in steps ST5 and ST6 (step ST7). In the example of FIG. 3, in step ST7 the coordinate conversion unit 6 uses the virtual projection spherical surfaces 201, 202, and 203 to convert coordinate values on the surface of the display 34a viewed from the viewpoint position 105 into coordinate values on the imaging surface (not shown) of the camera 31a outside the vehicle. Details of the coordinate conversion processing of the coordinate conversion unit 6 will be described later.
The display image generation unit 7 generates a display image for each virtual projection spherical surface, using the coordinate values on the imaging surface obtained in step ST7 by converting the coordinate values on the display surface of the display (step ST8). In the example of FIG. 3, the display image generation unit 7 generates a display image from the coordinate values on the imaging surface obtained by converting the coordinates on the surface of the display 34a using each of the virtual projection spherical surfaces 201, 202, and 203.
The display image generation unit 7 generates the display image to be displayed on the display, using the display images generated in step ST8 (step ST9). In the example of FIGS. 3 and 8, the display image generation unit 7 generates the display image of the subject area Ba from the display image generated by coordinate conversion using the virtual projection spherical surface 201 and the subject area information indicating the object B; similarly, it generates the display image of the subject area Ca from the display image generated using the virtual projection spherical surface 202 and the subject area information indicating the object C, and generates the display image of the background area D from the display image generated using the virtual projection spherical surface 203 and the subject area information indicating the background. The display image generation unit 7 generates the overall display image to be displayed on the display by integrating these display images. The display control unit 8 performs display control for displaying the display image generated in step ST9 on the display (step ST10), and the process returns to step ST1.
FIGS. 9 to 11 are diagrams showing the display images generated by the display image generation unit 7.
FIG. 9 shows the display image constructed using the coordinate values on the imaging surface converted using the virtual projection spherical surface 201; the size of the subject area Bb matches the size of the object B as seen from the viewpoint position 105.
FIG. 10 shows the display image constructed using the coordinate values on the imaging surface converted using the virtual projection spherical surface 202; the size of the subject area Cb matches the size of the object C as seen from the viewpoint position 105.
FIG. 11 shows the display image constructed using the coordinate values on the imaging surface converted using the virtual projection spherical surface 203. Neither the subject area Bb nor the subject area Cb matches the size of the object B or the object C as seen from the viewpoint position 105.
In contrast to these display images, the display image generated in step ST9 is an image in which the object B, the object C, and the background match their sizes as seen from the viewpoint position 105, as shown in FIG. 16 described later.
Next, the process of the coordinate conversion unit 6 shown in step ST7 of the flowchart of FIG. 6 will be described in more detail with reference to the flowchart of FIG. 12 and the explanatory diagram of FIG.
FIG. 12 is a flowchart showing the operation of the coordinate conversion unit 6 of the display control apparatus 10 according to the first embodiment.
FIG. 13 is a diagram schematically illustrating a coordinate conversion process of the coordinate conversion unit 6 of the display control apparatus 10 according to the first embodiment.
The camera 31a outside the vehicle, the display 34a, the blind spot area 101, the imaging range 102, the viewpoint position 105, the distance values 201d, 202d, and 203d, and the virtual projection spherical surfaces 201, 202, and 203 shown in FIG. 13 correspond to those shown in FIG. 3, respectively. The subject areas Ba and Ca and the background area D shown in FIG. 13 correspond to the respective areas on the distance image shown in FIG. 8. FIG. 13 also shows the display surface 301 of the display 34a and the imaging surface 302 of the camera 31a.
The coordinate conversion unit 6 calculates the coordinate in of the intersection of the virtual projection spherical surface N and the half line from the viewpoint position 105 passing through a coordinate pn on the display surface 301 of the display 34a (step ST21). The coordinate pn is a coordinate in a coordinate system set so that the position of each point on the display surface 301 can be specified. The coordinates on the virtual projection spherical surface N may be any coordinates as long as they can specify a position on the virtual projection spherical surface N. The coordinate conversion unit 6 then calculates, as the converted coordinate, the coordinate cn of the intersection of the imaging surface 302 and the straight line connecting the coordinate in calculated in step ST21 and the camera 31a outside the vehicle (step ST22). The coordinates on the imaging surface 302 are coordinates in a coordinate system set so that the position of each point on the imaging surface 302 can be specified. The coordinate conversion unit 6 determines whether all necessary coordinates on the display surface 301 have been converted (step ST23). If not all necessary coordinates on the display surface 301 have been converted (step ST23; NO), the coordinate conversion unit 6 sets the next coordinate on the display surface 301 (step ST24) and returns to step ST21. The necessary coordinates are, for example, the coordinates respectively corresponding to all the pixels of the display 34a.
On the other hand, when the coordinates of all the points on the display surface 301 have been converted (step ST23; YES), the coordinate conversion unit 6 determines whether coordinate conversion has been performed for all the virtual projection spherical surfaces (step ST25). If coordinates have not been converted for all the virtual projection spherical surfaces (step ST25; NO), the coordinate conversion unit 6 sets the next virtual projection spherical surface (step ST26) and returns to step ST21. When coordinates have been converted for all the virtual projection spherical surfaces (step ST25; YES), the process proceeds to step ST8 of the flowchart of FIG. 6.
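Because every virtual projection spherical surface is centered on the viewpoint position, the intersection of step ST21 reduces to moving the sphere radius along the viewing ray; step ST22 is then a projection toward the camera. The following geometric sketch uses a simplified pinhole camera model (camera looking along +z, focal distance f); the camera model and all names are assumptions, not the patent's camera geometry:

```python
import math

def pinhole_project(camera, point, f=1.0):
    # Intersection of the line camera -> point with an imaging plane at
    # focal distance f in front of the camera (camera looking along +z).
    rx, ry, rz = (p - c for p, c in zip(point, camera))
    return (f * rx / rz, f * ry / rz)

def display_to_imaging_coord(viewpoint, display_pt, camera, radius, f=1.0):
    # Step ST21: the half line from the viewpoint through the display
    # coordinate pn meets the sphere at distance `radius` along the ray,
    # since the sphere is centred on the viewpoint.
    d = [p - v for p, v in zip(display_pt, viewpoint)]
    n = math.sqrt(sum(c * c for c in d))
    i_n = tuple(v + radius * c / n for v, c in zip(viewpoint, d))
    # Step ST22: project the intersection point in toward the camera to
    # obtain the converted coordinate cn on the imaging surface.
    return pinhole_project(camera, i_n, f)

# A display point straight ahead of the viewpoint, a 5 m projection
# sphere, and a camera offset 1 m below the viewpoint.
cn = display_to_imaging_coord((0.0, 0.0, 0.0), (0.0, 0.0, 1.0),
                              (0.0, -1.0, 0.0), 5.0)
```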
Next, the specific example shown in FIG. 13 will be described along the flowchart of FIG.
In the example of FIG. 13, the coordinate conversion unit 6 performs the processing from step ST21 to step ST26 on the virtual projection spherical surface 201, the virtual projection spherical surface 202, and the virtual projection spherical surface 203 in this order. The coordinate conversion unit 6 also repeats the processing from step ST21 to step ST24 for each of the virtual projection spherical surfaces 201, 202, and 203.
For the virtual projection spherical surface 201 shown in FIG. 13, in step ST21 the coordinate conversion unit 6 calculates the coordinate i1 of the intersection on the virtual projection spherical surface 201 with the half line passing through the viewpoint position 105 and the coordinate p1 on the display surface 301. Next, in step ST22, the coordinate conversion unit 6 calculates the coordinate c1 of the intersection of the imaging surface 302 and the straight line connecting the coordinate i1 and the camera 31a outside the vehicle. After the processing of steps ST23 and ST24, in step ST21 the coordinate conversion unit 6 calculates the coordinate i2 of the intersection on the virtual projection spherical surface 201 with the half line passing through the viewpoint position 105 and the coordinate p2 on the display surface 301. Next, in step ST22, the coordinate conversion unit 6 calculates the coordinate c2 of the intersection of the imaging surface 302 and the straight line connecting the coordinate i2 and the camera 31a. In the example of FIG. 13, only the conversion of the coordinates p1 and p2 on the display surface 301 is shown for the coordinate conversion using the virtual projection spherical surface 201, but the coordinate conversion unit 6 performs coordinate conversion for all the necessary coordinates on the display surface 301.
Similarly, for the virtual projection spherical surface 202, in step ST21 the coordinate conversion unit 6 calculates the coordinate i5 of the intersection on the virtual projection spherical surface 202 with the half line passing through the viewpoint position 105 and the coordinate p5 on the display surface 301. Next, in step ST22, the coordinate conversion unit 6 calculates the coordinate c5 of the intersection of the imaging surface 302 and the straight line connecting the coordinate i5 and the camera 31a outside the vehicle. After the processing of steps ST23 and ST24, in step ST21 the coordinate conversion unit 6 calculates the coordinate i6 of the intersection on the virtual projection spherical surface 202 with the half line passing through the viewpoint position 105 and the coordinate p6 on the display surface 301. Next, in step ST22, the coordinate conversion unit 6 calculates the coordinate c6 of the intersection of the imaging surface 302 and the straight line connecting the coordinate i6 and the camera 31a. In the example of FIG. 13, only the conversion of the coordinates p5 and p6 on the display surface 301 is shown for the coordinate conversion using the virtual projection spherical surface 202, but the coordinate conversion unit 6 performs coordinate conversion for all the necessary coordinates on the display surface 301.
Similarly, for the virtual projection spherical surface 203, in step ST21 the coordinate conversion unit 6 calculates the coordinate i3 of the intersection on the virtual projection spherical surface 203 with the half line passing through the viewpoint position 105 and the coordinate p3 on the display surface 301. Next, in step ST22, the coordinate conversion unit 6 calculates the coordinate c3 of the intersection of the imaging surface 302 and the straight line connecting the coordinate i3 and the camera 31a outside the vehicle. After the processing of steps ST23 and ST24, in step ST21 the coordinate conversion unit 6 calculates the coordinate i4 of the intersection on the virtual projection spherical surface 203 with the half line passing through the viewpoint position 105 and the coordinate p4 on the display surface 301. Next, in step ST22, the coordinate conversion unit 6 calculates the coordinate c4 of the intersection of the imaging surface 302 and the straight line connecting the coordinate i4 and the camera 31a. In the example of FIG. 13, only the conversion of the coordinates p3 and p4 on the display surface 301 is shown for the coordinate conversion using the virtual projection spherical surface 203, but the coordinate conversion unit 6 performs coordinate conversion for all the necessary coordinates on the display surface 301.
When the processing has been performed for all the coordinates on the display surface 301 for all the virtual projection spherical surfaces 201, 202, and 203, the process proceeds to step ST8 of the flowchart shown in FIG. 6.
Next, the process of the display image generation unit 7 shown in step ST9 of the flowchart of FIG. 6 will be described in more detail with reference to the flowchart of FIG. 14 and the explanatory diagram of FIG.
FIG. 14 is a flowchart showing the operation of the display image generation unit 7 of the display control apparatus 10 according to the first embodiment.
The display image generation unit 7 sets the image area of the display image to be displayed on the display from position information such as the installation position of the display (step ST31). For the image area set in step ST31, the display image generation unit 7 acquires the distance value at each pixel position of the distance image generated in step ST4 (step ST32). From the display images generated in step ST8, the display image generation unit 7 selects the display image generated using the virtual projection spherical surface corresponding to the distance value acquired in step ST32 (step ST33). The display image generation unit 7 determines whether a display image has been selected for all the pixel positions of the distance image (step ST34). If a display image has not been selected for all pixel positions (step ST34; NO), the process returns to step ST33. When a display image has been selected for all pixel positions (step ST34; YES), the display image generation unit 7 integrates all the display images selected in step ST33 to generate the display image (step ST35). The display image generation unit 7 outputs the display image generated in step ST35 to the display control unit 8 (step ST36). Thereafter, the process proceeds to step ST10 shown in the flowchart of FIG. 6.
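The per-pixel selection and integration of steps ST32 to ST35 can be sketched as follows; the pixel values and the keying of display images by distance value are illustrative assumptions:

```python
def compose_display_image(distance_image, images_by_sphere):
    h, w = len(distance_image), len(distance_image[0])
    out = [[None] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            d = distance_image[y][x]     # distance value at this pixel (ST32)
            src = images_by_sphere[d]    # display image of the matching sphere (ST33)
            out[y][x] = src[y][x]        # adopt that image's pixel
    return out                           # integrated display image (ST35)

# A 2x2 display area: the top row lies on the 5 m sphere (subject), the
# bottom row on the 30 m sphere (background).
merged = compose_display_image(
    [[5, 5], [30, 30]],
    {5: [['B', 'B'], ['b', 'b']], 30: [['D', 'D'], ['d', 'd']]})
```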
Next, the specific example shown in FIG. 15 will be described along the flowchart of FIG.
FIG. 15 is a diagram illustrating a display image generated by the display image generation unit 7 of the display control apparatus 10 according to the first embodiment.
First, in step ST31, the display image generation unit 7 sets the size of the image area 401 of the display image that can be displayed on the display 34a shown in FIG. 3. Next, in step ST32, for the image area 401 whose size has been set, the display image generation unit 7 sets, for example, the distance value 201d at the pixel position 402, the distance value 202d at the pixel position 403, and the distance value 203d at the pixel position 404. In the example of FIG. 15, only the pixel positions 402, 403, and 404 are shown, but distance values are set at all the pixel positions in the image area 401.
In step ST33, the display image generation unit 7 selects, for example, the display image generated using the virtual projection spherical surface 201 corresponding to the distance value 201d acquired at the pixel position 402. Similarly, in step ST33, the display image generation unit 7 selects the display image generated using the virtual projection spherical surface 202 corresponding to the distance value 202d acquired at the pixel position 403, and the display image generated using the virtual projection spherical surface 203 corresponding to the distance value 203d acquired at the pixel position 404.
When the display image generation unit 7 has selected display images for all the pixel positions in the image area 401 in step ST34 (step ST34; YES), it integrates them in step ST35 to generate a display image such as that shown in the image area 401 of FIG. 15. In step ST36, the display image generation unit 7 outputs data indicating the generated display image to the display control unit 8.
Thereafter, in step ST10, the display control unit 8 performs display control of the input display image.
FIG. 16 is a diagram illustrating a display result of the display control apparatus 10 according to the first embodiment.
As shown in FIG. 16, the display control unit 8 performs display control for displaying the display image indicated by the image area 401 in FIG. 15 on the display 34 a arranged in front of the front pillar 21.
When the display 34a displays the display image indicated by the image area 401 in accordance with the display control of the display control unit 8, the scenery outside the vehicle that the driver A actually sees from the viewpoint position 105 through the front window 24 and the side window 25, and the display image of the blind spot area displayed on the display 34a on the front pillar 21, are displayed continuously, as shown in FIG. 16. As a result, the driver A does not feel a sense of incongruity in the continuity between the scenery actually seen from the viewpoint position 105 and the display image on the display 34a.
As described above, according to the first embodiment, the display control device includes: the distance image generation unit 5, which generates, using a captured image of a blind spot area in which the driver's field of view is blocked by a structure of the vehicle among the areas around the vehicle, a distance image whose pixel values are the distances from the driver's viewpoint position to the objects located in the blind spot area; the coordinate conversion unit 6, which uses the distance image to set, in the area around the vehicle, a plurality of virtual projection spherical surfaces each defined as the outer peripheral surface of a sphere centered on the driver's viewpoint position, and uses the set virtual projection spherical surfaces to convert coordinates on the display screen for displaying the captured image into coordinates on the imaging surface; and the display image generation unit 7, which generates the image to be displayed on the display screen using the converted coordinates on the imaging surface. Therefore, an image of the blind spot area can be displayed continuously with the actual scenery outside the vehicle seen from the driver's viewpoint position.
In the first embodiment described above, a configuration example was shown in which the distance image generation unit 5 sets a distance image having three same-distance target areas based on three distance values; however, the number of distance values used is not limited to three. For example, the distance image generation unit 5 may generate the same-distance target areas in units of pixels. In that case, the coordinate conversion unit 6 sets a virtual projection sphere for each pixel and performs the coordinate conversion using the virtual projection sphere of each pixel. The distance image generation unit 5 may also be configured to set conditions on the generation of the distance image so as to reduce the number of virtual projection spheres set by the coordinate conversion unit 6.
Various conditions can be applied to the generation of the distance image.
For example, the distance image generation unit 5 generates the distance image based on the condition that the number of virtual projection spheres is limited to three. Alternatively, the distance image generation unit 5 generates the distance image based on the condition that only the three objects with the smallest distance values are used as subjects for generating same-distance target areas. The distance image generation unit 5 may also generate the distance image based on the condition that distance values within ±50 cm of each other are regarded as the same distance value, or based on the condition that all objects located at a distance of 30 m or more are regarded as background. Further, the distance image generation unit 5 may refer to the frequency of appearance of the distance values and generate the distance image based on the condition that a fixed number of distance values are kept in descending order of appearance frequency.
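The generation conditions listed above (merging distance values within ±50 cm, treating distances of 30 m or more as background, and keeping a limited number of distance values in descending order of appearance frequency) can be sketched as follows. The function name and the representation of the distance image as a nested list of distances in meters are illustrative assumptions, not part of the original disclosure.

```python
from collections import Counter

BACKGROUND = 30.0   # distances of 30 m or more are regarded as background
MERGE_TOL = 0.5     # distance values within +/-50 cm are treated as the same value
MAX_SPHERES = 3     # at most three virtual projection spheres are set

def representative_distances(distance_image):
    """Pick the distance values to be used for the virtual projection spheres."""
    clamped = [BACKGROUND if d >= BACKGROUND else round(d / MERGE_TOL) * MERGE_TOL
               for row in distance_image for d in row]
    freq = Counter(clamped)
    # Keep the most frequent distance values, at most MAX_SPHERES of them.
    return [d for d, _ in freq.most_common(MAX_SPHERES)]
```

Each returned value would then serve as the radius of one virtual projection sphere centered on the driver's viewpoint position.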
Embodiment 2.
The second embodiment shows a configuration that generates the display image in consideration of vehicle information.
FIG. 17 is a block diagram showing the configuration of a display control device 10a according to the second embodiment.
The display control device 10a according to the second embodiment adds a vehicle information acquisition unit 9 to the display control device 10 shown in the first embodiment, and includes an image processing unit 4a in place of the image processing unit 4. The image processing unit 4a comprises the distance image generation unit 5, a coordinate conversion unit 6a, and a display image generation unit 7a.
In the following, parts identical or corresponding to the components of the display control device 10 according to the first embodiment are given the same reference numerals as in the first embodiment, and their description is omitted or simplified.
The vehicle information acquisition unit 9 acquires vehicle information of the vehicle in which the display control device 10a is mounted via an in-vehicle network or the like (not shown). The vehicle information is, for example, information indicating the host vehicle position, traveling direction, vehicle speed, acceleration, steering angle, and the like. The information acquired by the vehicle information acquisition unit 9 is input to the image processing unit 4a.
The coordinate conversion unit 6a of the image processing unit 4a determines the number of virtual projection spheres to set according to the distance image generated by the distance image generation unit 5 and the vehicle information acquired by the vehicle information acquisition unit 9. For example, when the coordinate conversion unit 6a refers to the vehicle speed included in the vehicle information and determines from the acquired speed that the vehicle is traveling at high speed, it reduces the number of virtual projection spheres. By varying the number of virtual projection spheres according to the vehicle information, the coordinate conversion unit 6a can suppress the processing load of the image processing.
The display image generation unit 7a changes the image data of the display image it generates according to the vehicle information acquired by the vehicle information acquisition unit 9. Here again, the case where the display image generation unit 7a refers to the vehicle speed included in the vehicle information will be described.
The amount of processing that the display image generation unit 7a must perform is calculated from the data amount (size) of one frame of the display image to be generated and the number of frames displayed per second.
When the processing capability of the display image generation unit 7a is constant and the vehicle is traveling at high speed, a high update rate of the display image takes priority over the definition of each frame. The display image generation unit 7a therefore lowers the resolution of the display image it generates and instead raises the update rate of the display image when the vehicle is traveling at high speed. The display image generation unit 7a can thereby suppress the processing load of the image processing.
On the other hand, when the processing capability of the display image generation unit 7a is constant and the vehicle is traveling at low speed, improving the definition of each frame of the display image takes priority over the update rate. The display image generation unit 7a therefore lowers the update rate of the display image it generates and raises the definition of the display image when the vehicle is traveling at low speed. The display image generation unit 7a can thereby generate a display image with improved visibility.
The display image generation unit 7a may also be configured to change the number of colors of the display image according to the vehicle information acquired by the vehicle information acquisition unit 9, thereby adjusting both the amount of processing and the image quality of the display image.
The processing of the coordinate conversion unit 6a and the processing of the display image generation unit 7a described above may be performed simultaneously, or only one of them may be performed.
Next, an example of the hardware configuration of the display control device 10a will be described. Description of the parts identical to those of the first embodiment is omitted.
The vehicle information acquisition unit 9 of the display control device 10a is realized by the input device 11 in FIG. 2, which inputs information from the outside. The coordinate conversion unit 6a and the display image generation unit 7a of the display control device 10a are realized by a processing circuit. The functions of the processing circuit may be executed by dedicated hardware or by software. When the functions are executed by software, the processing circuit is the CPU 12 that executes a program stored in the memory 13 shown in FIG. 2.
Next, the operation of the display control device 10a according to the second embodiment will be described.
First, the processing of the coordinate conversion unit 6a will be described with reference to FIG. 18 and FIG. 19.
FIG. 18 is a flowchart showing the operation of the coordinate conversion unit 6a of the display control device 10a according to the second embodiment. In FIG. 18, steps identical to those of the flowchart of the first embodiment shown in FIG. 6 are given the same reference numerals, and their description is omitted.
FIG. 19 is a diagram showing an example of the data referred to by the coordinate conversion unit 6a of the display control device 10a according to the second embodiment.
When the distance image is generated in step ST4, the coordinate conversion unit 6a refers to the vehicle information acquired by the vehicle information acquisition unit 9 and sets the number of virtual projection spheres according to the vehicle information (step ST41). The coordinate conversion unit 6a refers, for example, to a database (not shown) that stores conditions such as those shown in FIG. 19, and sets the number of virtual projection spheres according to, for example, the vehicle speed included in the vehicle information. When the vehicle speed is medium and the number of virtual projection spheres is set to three, the coordinate conversion unit 6a performs the processing of step ST5 and step ST6 so that three virtual projection spheres are set. Which distance values the coordinate conversion unit 6a uses to set the virtual projection spheres is determined, for example, by using the distance values with the highest appearance frequencies in order, as described in the first embodiment.
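The selection of the number of virtual projection spheres from the vehicle speed in step ST41 can be sketched as below. The text states only that a medium speed corresponds to three spheres and that the count is reduced at high speed; the concrete speed thresholds and the low-speed count are assumptions, since the contents of FIG. 19 are not reproduced here.

```python
def sphere_count_for_speed(speed_kmh):
    """Return the number of virtual projection spheres for a given speed.

    Thresholds (km/h) are hypothetical placeholders for the FIG. 19
    database; medium speed -> 3 spheres follows the text, and fewer
    spheres at high speed follows the stated design intent.
    """
    if speed_kmh < 30:      # low-speed traveling: finest depth layering
        return 5
    elif speed_kmh < 80:    # medium-speed traveling: the example in the text
        return 3
    else:                   # high-speed traveling: reduce processing load
        return 1
```

The returned count would then bound how many of the representative distance values, ordered by appearance frequency, are turned into virtual projection spheres in steps ST5 and ST6.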
Next, the processing of the display image generation unit 7a will be described with reference to FIG. 20A and FIG. 20B.
FIG. 20A and FIG. 20B are diagrams showing examples of the data referred to by the display image generation unit 7a of the display control device 10a according to the second embodiment.
The display image generation unit 7a refers to a database (not shown) that stores setting conditions such as those shown in FIG. 20A or FIG. 20B, and determines the setting conditions of the display image according to the vehicle information.
The setting conditions in FIG. 20A and FIG. 20B divide the vehicle speed into three levels, namely low-speed, medium-speed, and high-speed traveling, and show the resolution (definition) of the display image, the frame rate (update rate) of the display image, and the number of colors of the display image that the display image generation unit 7a sets for each traveling speed.
FIG. 20A shows the case where the number of colors of the display image is a fixed value.
Referring to the setting conditions of FIG. 20A, the display image generation unit 7a generates the display image based on the conditions of a resolution of 1920×960, a frame rate of 30 fps, and 24-bit RGB color when the vehicle is traveling at low speed.
When the vehicle is traveling at medium speed, the display image generation unit 7a generates the display image based on the conditions of a resolution of 1280×720, a frame rate of 60 fps, and 24-bit RGB color.
When the vehicle is traveling at high speed, the display image generation unit 7a generates the display image based on the conditions of a resolution of 960×480, a frame rate of 120 fps, and 24-bit RGB color.
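Treating 24-bit RGB as 3 bytes per pixel, the processing amount defined earlier (data size of one frame multiplied by the number of frames per second) can be checked against these FIG. 20A settings. Notably, all three speed levels work out to the same byte throughput, which is consistent with keeping the processing load constant while trading definition against update rate:

```python
def throughput_bytes_per_sec(width, height, bytes_per_pixel, fps):
    """Processing amount = data size of one frame x frames per second."""
    return width * height * bytes_per_pixel * fps

low    = throughput_bytes_per_sec(1920, 960, 3, 30)   # low-speed setting
medium = throughput_bytes_per_sec(1280, 720, 3, 60)   # medium-speed setting
high   = throughput_bytes_per_sec(960, 480, 3, 120)   # high-speed setting
assert low == medium == high == 165_888_000
```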
FIG. 20B shows the case where the resolution of the display image is a fixed value.
Referring to the database of FIG. 20B, the display image generation unit 7a generates the display image based on the conditions of a resolution of 1280×720, a frame rate of 30 fps, and 48-bit RGB color when the vehicle is traveling at low speed.
When the vehicle is traveling at medium speed, the display image generation unit 7a generates the display image based on the conditions of a resolution of 1280×720, a frame rate of 60 fps, and 24-bit RGB color.
When the vehicle is traveling at high speed, the display image generation unit 7a generates the display image based on the conditions of a resolution of 1280×720, a frame rate of 120 fps, and 16-bit YUV color.
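The FIG. 20B settings can likewise be expressed as a lookup table keyed by speed band. The resolution, frame rate, and color values below are taken from the text; the speed-band boundaries are assumptions, since the text does not give them.

```python
# Display settings per speed band, following FIG. 20B (fixed resolution).
FIG_20B_SETTINGS = {
    "low":    {"resolution": (1280, 720), "fps": 30,  "color": "RGB 48-bit"},
    "medium": {"resolution": (1280, 720), "fps": 60,  "color": "RGB 24-bit"},
    "high":   {"resolution": (1280, 720), "fps": 120, "color": "YUV 16-bit"},
}

def display_settings(speed_kmh):
    """Look up display image settings for the current speed (thresholds assumed)."""
    band = "low" if speed_kmh < 30 else "medium" if speed_kmh < 80 else "high"
    return FIG_20B_SETTINGS[band]
```

Here the resolution stays fixed and the color depth shrinks as the frame rate rises, so the trade-off is made between update rate and color fidelity rather than definition.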
In the above description, the coordinate conversion unit 6a and the display image generation unit 7a refer to the vehicle speed included in the vehicle information; however, the configuration is not limited to the vehicle speed. The number of virtual projection spheres, the definition of the display image, the update rate of the display image, and the like may instead be determined with reference to the acceleration, the steering angle, or other vehicle information.
As described above, according to the second embodiment, the vehicle information acquisition unit 9 that acquires vehicle information indicating the traveling state of the vehicle is provided, and the coordinate conversion unit 6a refers to the vehicle information and sets the number of virtual projection spheres to generate according to the vehicle information, so that the load of the coordinate conversion processing can be suppressed.
Further, according to the second embodiment, the vehicle information acquisition unit 9 that acquires vehicle information indicating the traveling state of the vehicle is provided, and the display image generation unit 7a refers to the vehicle information and determines at least one of the definition of the display image to generate, the update rate of the display image, and the number of colors of the display image according to the traveling state of the vehicle, so that the load of generating the display image can be suppressed.
In the first and second embodiments described above, the description was made on the assumption that the distance from the distance measuring sensor 32a to an object in the area, the position information of the vehicle exterior camera 31a, the position information of the distance measuring sensor 32a, the driver's viewpoint information, and the coordinates on the display 34a are all information expressed using three-dimensional spatial coordinates.
In addition to the above, within the scope of the present invention, the embodiments may be freely combined, any component of each embodiment may be modified, and any component of each embodiment may be omitted.
The display control device according to the present invention can display an image of an area that is a blind spot for the driver so as to be continuous with the actual scenery seen from the driver's viewpoint position. It is therefore applicable to the display of images on a display provided on a structure of a vehicle, and can be used to improve the driver's visibility.
1 image input unit, 2 distance information acquisition unit, 3 viewpoint information acquisition unit, 4, 4a image processing unit, 5 distance image generation unit, 6, 6a coordinate conversion unit, 7, 7a display image generation unit, 8 display control unit, 9 vehicle information acquisition unit, 10, 10a display control device.

Claims (6)

  1.  A display control device comprising:
     a distance image generation unit that uses a captured image of a blind spot area, among areas around a vehicle, in which a driver's field of view is blocked by a structure of the vehicle, to generate a distance image whose pixel values are values of distances from the driver's viewpoint position to objects located in the blind spot area;
     a coordinate conversion unit that uses the distance image generated by the distance image generation unit to set, in the area around the vehicle, a plurality of virtual projection spheres each defined as an outer surface of a sphere centered on the driver's viewpoint position, and uses the plurality of virtual projection spheres to convert coordinates on a display screen for displaying the captured image into coordinates on an imaging surface of the captured image; and
     a display image generation unit that generates an image to be displayed on the display screen using the coordinates on the imaging surface converted by the coordinate conversion unit.
  2.  The display control device according to claim 1, wherein the distance image generation unit sets a plurality of same-distance target areas in the distance image based on the distance values, and
     the coordinate conversion unit sets, for each of the plurality of same-distance target areas set by the distance image generation unit, the virtual projection sphere centered on the driver's viewpoint position with a radius equal to the value of the distance from the driver's viewpoint position to the same-distance target area.
  3.  The display control device according to claim 1, wherein the distance image generation unit generates the distance image with the pixel value set to a fixed value when the distance from the driver's viewpoint position to an object located in the blind spot area is equal to or greater than a threshold.
  4.  The display control device according to claim 1, wherein the coordinate conversion unit changes the number of virtual projection spheres to set according to vehicle information of the vehicle.
  5.  The display control device according to claim 1, wherein the display image generation unit changes any one of a resolution of the image displayed on the display screen, a frame rate of the image, and a number of colors of the image according to vehicle information of the vehicle.
  6.  A display control method comprising the steps of:
     generating, by a distance image generation unit, using a captured image of a blind spot area, among areas around a vehicle, in which a driver's field of view is blocked by a structure of the vehicle, a distance image whose pixel values are values of distances from the driver's viewpoint position to objects located in the blind spot area;
     setting, by a coordinate conversion unit, using the distance image, a plurality of virtual projection spheres each defined as an outer surface of a sphere centered on the driver's viewpoint position in an area around the vehicle, and converting coordinates on a display screen for displaying the captured image into coordinates on an imaging surface of the captured image using the plurality of virtual projection spheres; and
     generating, by a display image generation unit, an image to be displayed on the display screen using the coordinates on the imaging surface.
PCT/JP2016/058749 2016-03-18 2016-03-18 Display control device and display control method WO2017158829A1 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
PCT/JP2016/058749 WO2017158829A1 (en) 2016-03-18 2016-03-18 Display control device and display control method

Publications (1)

Publication Number Publication Date
WO2017158829A1 true WO2017158829A1 (en) 2017-09-21

Family

ID=59851079


Country Status (1)

Country Link
WO (1) WO2017158829A1 (en)

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2005269010A (en) * 2004-03-17 2005-09-29 Olympus Corp Image creating device, program and method
JP2006135797A (en) * 2004-11-08 2006-05-25 Matsushita Electric Ind Co Ltd Ambient status displaying device for vehicle
JP2006270175A (en) * 2005-03-22 2006-10-05 Megachips System Solutions Inc System for recording vehicle mounted image
JP2007015667A (en) * 2005-07-11 2007-01-25 Denso Corp Road imaging device

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109552177A (en) * 2017-09-26 2019-04-02 电装国际美国公司 The system and method for environment animation are projected for environment animation and on interface
CN109552177B (en) * 2017-09-26 2022-02-18 电装国际美国公司 System and method for ambient animation and projecting ambient animation on an interface
CN113342914A (en) * 2021-06-17 2021-09-03 重庆大学 Method for acquiring and automatically labeling data set for globe region detection
CN113744353A (en) * 2021-09-15 2021-12-03 合众新能源汽车有限公司 Blind area image generation method, device and computer readable medium


Legal Events

Date Code Title Description
NENP Non-entry into the national phase

Ref country code: DE

121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 16894447

Country of ref document: EP

Kind code of ref document: A1

122 Ep: pct application non-entry in european phase

Ref document number: 16894447

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: JP