JP2013025528A - Image generation device for vehicles and image generation method for vehicles - Google Patents

Image generation device for vehicles and image generation method for vehicles

Info

Publication number
JP2013025528A
Authority
JP
Japan
Prior art keywords
image
vehicle
information
blind spot
part
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
JP2011158970A
Other languages
Japanese (ja)
Other versions
JP5799631B2 (en)
Inventor
Hiroyoshi Yanagi
Original Assignee
Nissan Motor Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Nissan Motor Co Ltd
Priority to JP2011158970A
Publication of JP2013025528A
Application granted
Publication of JP5799631B2
Legal status: Active
Anticipated expiration

Abstract

To provide an apparatus that more accurately generates an image, viewed from an arbitrary viewpoint, of another vehicle included in a vehicle surrounding image.
A blind spot information estimation unit 25 estimates information such as the shape and color of the blind spot portion of another vehicle based on the image information of the visible image portion of the other vehicle included in a vehicle surrounding image having a three-dimensional structure. An image complementing unit 26 generates, from the visible image portion data, complementary image data for complementing the blind spot portion based on the blind spot information estimated by the blind spot information estimation unit 25, and complements the image of the blind spot portion of the other vehicle with the generated complementary image data. An image reconstruction unit 28 reconstructs the complemented vehicle surrounding image into an image viewed from the viewpoint indicated by virtual viewpoint information input from a virtual viewpoint setting unit 27.
[Selection] Figure 5

Description

  The present invention relates to a technique for generating a vehicle surrounding image having a three-dimensional structure viewed from an arbitrary viewpoint based on a plurality of images obtained by photographing a vehicle periphery with a plurality of in-vehicle cameras.

  As a technique for synthesizing images from a plurality of cameras mounted on a vehicle and generating an image around the vehicle viewed from an arbitrary viewpoint, there is, for example, the technique described in Patent Document 1. In the prior art of Patent Document 1, an input image from a camera is mapped onto a predetermined spatial model in a three-dimensional space, and an image viewed from an arbitrary viewpoint in the three-dimensional space is created with reference to the mapped spatial data.

JP 10-317393 A

However, in the above prior art, when there is another vehicle around the vehicle, an image corresponding to an arbitrary viewpoint cannot be created for the blind spot portion of the other vehicle that is not included in the input image.
The present invention focuses on the above-described points, and aims to more accurately generate an image, viewed from an arbitrary viewpoint, of another vehicle included in the vehicle surrounding image.

  In order to solve the above-described problems, the present invention generates a vehicle surrounding image having a three-dimensional structure based on a plurality of images obtained by photographing the vehicle periphery with a plurality of cameras mounted on the vehicle. From this vehicle surrounding image, a visible image portion of another vehicle, constructed from the images taken by the plurality of cameras, is detected. If it is determined based on this detection result that a visible image portion of another vehicle is included in the vehicle surrounding image, information on the blind spot portion of the other vehicle that is outside the imaging range of the plurality of cameras is estimated based on the image information of the visible image portion. Based on the estimated information, the image of the blind spot portion of the other vehicle included in the vehicle surrounding image is complemented using the image information of the visible image portion. Then, the complemented vehicle surrounding image is reconstructed into an image viewed from an arbitrary virtual viewpoint.

  According to the present invention, the information on the blind spot portion of the other vehicle is estimated based on the image information of the visible image portion of the other vehicle included in the vehicle surrounding image having a three-dimensional structure, and the image of the blind spot portion of the other vehicle is complemented based on the estimated information. As a result, it is possible to more accurately display the image of the other vehicle included in the vehicle surrounding image when viewed from an arbitrary virtual viewpoint.

FIG. 1 is a schematic configuration diagram of a vehicle image generation device according to an embodiment of the present invention.
FIG. 2 is a diagram showing an arrangement example of the imaging devices 11A to 11D.
FIGS. 3(a) to 3(c) are diagrams showing configuration examples of the imaging device 11A.
FIGS. 4(a) to 4(c) are diagrams showing examples of the photographing range and ranging range corresponding to each configuration of FIGS. 3(a) to 3(c).
FIG. 5 is a block diagram illustrating an example of the functional configuration of the vehicle image generation device 100.
FIG. 6 is a flowchart showing an example of the processing procedure of the vehicle surrounding image generation process.
FIG. 7 is a flowchart showing an example of the processing procedure of the blind spot information estimation process.
FIG. 8 is a flowchart showing an example of the processing procedure of the image complementing process.
FIG. 9 is a diagram showing an example of the positional relationship between the host vehicle and another vehicle at the time of photographing the area around the vehicle.
FIG. 10 is a diagram showing an example of the photographing range of the other vehicle when viewed from the viewpoint opposite to the virtual viewpoint of FIG. 9.
FIG. 11 is a diagram showing an example of the non-photographed range, including the blind spot portion of the other vehicle, when viewed from the virtual viewpoint of FIG. 9.
FIGS. 12(a) to 12(d) are diagrams showing an example of the flow of the image complementing process when the other vehicle is in a traveling state.
FIG. 13(a) is a diagram showing the image viewed from the virtual viewpoint of FIG. 9 after image complementation when the other vehicle is in a traveling state, and FIG. 13(b) is a diagram showing the image viewed from the virtual viewpoint of FIG. 9 after image complementation when the other vehicle is in a stopped state.
FIG. 14 is a diagram showing an example of the flow of the image complementing process when the other vehicle is in a stopped state.
Further figures show another arrangement example of the imaging devices and explain the change of the photographing range according to the positional relationship between the host vehicle and another vehicle.

Hereinafter, embodiments of the present invention will be described with reference to the drawings. FIGS. 1 to 14 show an embodiment of the vehicle image generation device and the vehicle image generation method according to the present invention.
(Configuration)
First, the configuration of the vehicle image generation device will be described.
FIG. 1 is a schematic configuration diagram of a vehicle image generation device according to an embodiment of the present invention.

In the present embodiment, the vehicle image generation apparatus 100 is mounted on the vehicle 1. The vehicle image generation device 100 includes a CPU 10, a vehicle surrounding photographing unit 11, a ROM 12, a RAM 13, an operation unit 14, a display 15, an HDD 16, and a vehicle speed sensor 17.
The vehicle surrounding imaging unit 11 includes an imaging device 11A, an imaging device 11B, an imaging device 11C, and an imaging device 11D.

FIG. 2 is a diagram illustrating an arrangement example of the imaging devices 11A to 11D.
The imaging device 11A is provided at the center of the front end of the host vehicle 1, and the imaging device 11B is provided at the center of the left end of the host vehicle 1. The imaging device 11C is provided at the roof side portion at the center of the right end portion of the host vehicle 1, and the imaging device 11D is provided at the center of the rear end portion of the host vehicle 1. The imaging devices 11A to 11D installed in this way each photograph the area included in their respective photographing ranges and transmit the photographed image data to the CPU 10.

Here, the vehicular image generation apparatus 100 according to the present embodiment generates an image around the vehicle having a three-dimensional structure using image data obtained by photographing the area around the vehicle with the imaging devices 11A to 11D.
A captured image of a camera has a two-dimensional structure (X, Y). In order to generate a vehicle surrounding image having a three-dimensional structure, it is necessary to detect the three-dimensional coordinate information (X, Y, Z) of each subject (object) existing in the photographing region in addition to the captured image of the camera. In the present embodiment, as shown in FIG. 2, the width direction of the vehicle is the X axis, the height direction is the Y axis, and the longitudinal direction of the vehicle is the Z axis. To obtain the three-dimensional coordinate information, it is necessary to obtain, for example, distance information from a reference position to the object. To obtain this distance information, for example, a distance measuring method using the motion stereo method, a distance measuring method using the stereo image method, or a distance measuring method using a range finder can be used. These distance measurement methods are all known techniques.

3A to 3C are diagrams illustrating a configuration example of the imaging device 11A.
FIG. 3A is a diagram illustrating a configuration example of the imaging device 11A when the motion stereo method is used. The motion stereo method is a technique for measuring the distance to a subject based on the subject's "movement on the screen" and the "displacement amount of the photographing position" in continuous images taken in time series. When the motion stereo method is used, the imaging device 11A is composed of a single CCD camera (another camera such as a CMOS camera may also be used), as shown in FIG. 3A. The CCD camera is provided with a wide-angle lens (for example, an angle of view of 100°) and can capture a relatively wide area. The CCD camera also includes a color filter and can capture color images.

  FIG. 3B is a diagram illustrating a configuration example of the imaging device 11A when the stereo image method is used. The stereo image method is a technique for measuring the distance to a subject using the principle of triangulation based on a plurality of images of the same subject taken by a stereo camera. When this technique is used, the imaging device 11A includes a single stereo camera, as shown in FIG. 3B. The stereo camera according to the present embodiment includes a plurality of lenses (two in the present embodiment) arranged in parallel at a fixed interval and a plurality of imaging units, each including an imaging element corresponding to one of the lenses. By using the plurality of imaging units, the stereo camera can capture an area including the same subject from a plurality of directions. Each imaging unit includes a wide-angle lens and a color filter, as in the CCD camera of FIG. 3A, and can photograph a relatively wide area in color.

  FIG. 3C is a diagram illustrating a configuration example of the imaging device 11A in the case of using a distance measurement technique based on a range finder. As distance measurement techniques using a range finder, there are various known methods such as the optical radar method, the active stereo method, and the illuminance difference stereo method. In this embodiment, the optical radar method is used. Examples of the optical radar method include the pulsed light projection (time measurement) method and the modulated light projection (phase difference measurement) method. The pulsed light projection method measures the distance to a subject by projecting a light pulse and measuring the time until the light pulse is reflected and returned. The modulated light projection method projects a light beam whose intensity is time-modulated with a sine wave or a rectangular wave and measures the distance from the phase difference between the projected beam and the reflected beam. The former has a low processing load, but it is difficult to increase its distance resolution due to limitations of S/N and clock frequency. The latter has high distance resolution but a large processing load.

  When the distance measurement technique based on the optical radar method is used, the imaging device 11A consists of one CCD camera (or another camera such as a CMOS camera) and one laser range finder, as shown in FIG. 3C. The CCD camera is similar to that shown in FIG. 3A. When the pulsed light projection method is used, the laser range finder projects light pulses onto an imaging range that is the same as or substantially the same as that of the CCD camera, receives the reflected light of the projected light pulses, measures the time difference, and converts the measured time difference into a distance. On the other hand, when the modulated light projection method is used, the laser range finder projects a light beam onto an imaging range that is the same as or substantially the same as that of the CCD camera, receives the reflected beam of the projected light beam, measures the phase difference between the projected beam and the reflected beam, and converts the measured phase difference into a distance.
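The two conversions described above can be illustrated with a short sketch (Python; the modulation frequency and the function names are assumptions made for illustration, not values from the patent).

```python
import math

C = 299_792_458.0  # speed of light [m/s]

def distance_from_time_of_flight(round_trip_time_s):
    """Pulsed light projection: the pulse travels out and back, so the one-way
    distance is half of (speed of light x measured time difference)."""
    return C * round_trip_time_s / 2.0

def distance_from_phase_difference(phase_rad, modulation_freq_hz):
    """Modulated light projection: a 2*pi phase shift corresponds to one modulation
    wavelength of round-trip travel, so distance = c * phase / (4*pi*f_mod)."""
    return C * phase_rad / (4.0 * math.pi * modulation_freq_hz)

# Example: a 100 ns round trip is about 15 m; a pi phase shift at an assumed
# 10 MHz modulation frequency is about 7.5 m.
print(distance_from_time_of_flight(100e-9))
print(distance_from_phase_difference(math.pi, 10e6))
```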

Note that the imaging devices 11B to 11D have the same configuration as the imaging device 11A in any of the distance measurement methods described above.
FIGS. 4A to 4C are diagrams illustrating examples of shooting ranges and ranging ranges corresponding to the configurations of FIGS. 3A to 3C. The arrows attached to the host vehicle 1 in FIGS. 4(a) to 4(c) indicate the direction of the host vehicle 1.

  When the motion stereo method is used, the photographing range of each of the CCD cameras 11A to 11D is the range of field angles determined by the characteristics (lens performance, focal length, etc.) of the wide-angle lens of that CCD camera, as shown in FIG. 4A. In FIG. 4A, only a part of the shooting width in the X-axis direction is shown as the shooting range, but in reality it is a conical shooting range with the center of the lens of the CCD camera as the apex. The same applies to the other shooting ranges in FIG. 4.

  When the stereo image method is used, as shown in FIG. 4B, the shooting range of each of the stereo cameras 11A to 11D consists of two shooting ranges formed by the two wide-angle lenses arranged in parallel at a predetermined interval in the X-axis direction; the two ranges overlap, offset by that interval in the X-axis direction. Note that the two lenses and the two imaging elements constituting each stereo camera are configured using similar parts (parts having the same model number, etc.), so the shooting ranges of the two shooting units are substantially identical. The shooting range of each shooting unit of the stereo cameras 11A to 11D is the range of field angles determined by the characteristics of the wide-angle lens of that shooting unit.

  When the optical radar method is used, the imaging range of each of the CCD cameras 110A to 110D is the solid-line range shown in FIG. 4C, and the measurement range of each of the laser range finders 111A to 111D is the range indicated by the one-dot chain line in FIG. 4C. The photographing ranges of the CCD cameras 110A to 110D are the same as when the motion stereo method is used. On the other hand, the distance measuring ranges of the laser range finders 111A to 111D are determined by the scanning range of each laser range finder, which in turn is determined by the scanning range of its laser irradiation unit. As shown in FIG. 4C, the imaging range of each of the CCD cameras 110A to 110D and the ranging range of the corresponding laser range finder 111A to 111D overlap. That is, the ranges are set so that the subject photographed by each of the CCD cameras 110A to 110D corresponds to the subject measured by the corresponding laser range finder 111A to 111D.

  In addition, it is necessary to perform camera calibration for determining the external parameters and internal parameters of each CCD camera and each photographing unit in advance for the object coordinate system and the camera coordinate system where the subject exists. The camera calibration is a process for determining external parameters for performing conversion from the object coordinate system to the camera coordinate system and internal parameters for performing projection from the camera coordinate system to the image plane. This camera calibration is a known technique, and is performed by estimating each parameter from the projected point coordinate values using some points whose coordinate values are known in advance.
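As an illustration of such a calibration from points whose coordinate values are known, the following sketch uses OpenCV's standard calibration routine on synthetic data (the grid size, spacing, and camera parameters are assumptions; the patent does not prescribe a particular implementation).

```python
import numpy as np
import cv2

# A 9x6 planar grid of reference points with known object coordinates (10 cm spacing,
# assumed). In practice these would be checkerboard corners detected in captured images.
grid = np.zeros((9 * 6, 3), np.float32)
grid[:, :2] = np.mgrid[0:9, 0:6].T.reshape(-1, 2) * 0.1

# Assumed "true" camera, used here only to synthesize observations for the sketch.
K_true = np.array([[800.0, 0.0, 640.0], [0.0, 800.0, 480.0], [0.0, 0.0, 1.0]])
views_r = [np.array([0.1, -0.2, 0.05]), np.array([-0.3, 0.1, 0.0]), np.array([0.2, 0.3, -0.1])]
views_t = [np.array([0.0, 0.0, 2.0]), np.array([0.2, -0.1, 2.5]), np.array([-0.3, 0.1, 1.8])]

obj_pts, img_pts = [], []
for r, t in zip(views_r, views_t):
    pts, _ = cv2.projectPoints(grid, r, t, K_true, None)
    obj_pts.append(grid)
    img_pts.append(pts.astype(np.float32))

# calibrateCamera estimates the internal parameters (camera matrix, distortion) and
# the external parameters (rotation/translation per view) from the point pairs.
rms, K_est, dist, rvecs, tvecs = cv2.calibrateCamera(obj_pts, img_pts, (1280, 960), None, None)
print("reprojection error:", rms)
print("estimated intrinsic matrix:\n", K_est)
```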

The imaging devices 11A to 11D can adopt any of the configurations exemplified above. In the present embodiment, the configuration corresponding to the stereo image method is adopted, and this configuration will mainly be described below.
Returning to FIG. 1, the ROM 12 stores a dedicated program for realizing a vehicle surrounding image generation function, which will be described later, and various data necessary for executing the program. The ROM 12 reads various data stored in response to a request from the CPU 10 and inputs the data to the CPU 10.

The RAM 13 is used as a work memory when executing a dedicated program. The RAM 13 temporarily stores various data (photographed image data, distance measurement data, vehicle surrounding image data, etc.) necessary for executing a dedicated program.
The operation unit 14 is operated by a user and is used when inputting virtual viewpoint information described later.

The display 15 is utilized when the CPU 10 displays a vehicle surrounding image. A meter display or a navigation system display may be used together.
An HDD (Hard disk drive) 16 stores a vehicle image database 300 in which entire images (3D models) having a three-dimensional structure of a plurality of types of vehicles are stored.
The vehicle speed sensor 17 is a sensor that detects the vehicle speed of the host vehicle 1. The vehicle speed sensor 17 transmits the detected own vehicle speed to the CPU 10.

  A CPU (Central Processing Unit) 10 executes the dedicated program stored in the ROM 12 and generates a vehicle surrounding image having a three-dimensional structure based on the image information transmitted from the imaging devices 11A to 11D. When generating the vehicle surrounding image, the CPU 10 detects the visible image portion of any other vehicle included in the vehicle surrounding image, and complements the image of the blind spot portion of the other vehicle based on the image information of the detected visible image portion. Further, the CPU 10 reconstructs the complemented vehicle surrounding image into an image viewed from the virtual viewpoint based on the virtual viewpoint input via the operation unit 14, and displays the vehicle surrounding image viewed from the virtual viewpoint on the display 15.

Next, a functional configuration of the vehicle image generation function realized by executing a dedicated program in the CPU 10 will be described.
FIG. 5 is a block diagram illustrating an example of a functional configuration of the vehicle image generation device 100.
As shown in FIG. 5, the vehicle image generation function configuration unit 120 includes an image information input unit 20, a coordinate information detection unit 21, a projection image generation unit 22, a vehicle surrounding image generation unit 23, a vehicle image detection unit 24, a blind spot information estimation unit 25, and an image complementing unit 26.

The vehicle image generation function configuration unit 120 further includes a virtual viewpoint setting unit 27, an image reconstruction unit 28, an image display unit 29, and a running state determination unit 30.
The image information input unit 20 sequentially acquires four sets (eight images) of captured image data obtained by photographing the area around the vehicle with the two imaging units of each of the imaging devices 11A to 11D at a preset sampling cycle. The image information input unit 20 inputs the acquired four sets of captured image data to the coordinate information detection unit 21 and the projection image generation unit 22, respectively.

When the motion stereo method is used, the image information input unit 20 sequentially acquires four captured image data obtained by photographing the regions around the vehicle with the CCD cameras 11A to 11D at a preset sampling cycle, and inputs the acquired four image data to the coordinate information detection unit 21 and the projection image generation unit 22, respectively.
When the optical radar method is used, the image information input unit 20 sequentially acquires four captured image data obtained by photographing the area around the vehicle with the CCD cameras 110A to 110D at a preset sampling cycle. In addition, the image information input unit 20 sequentially acquires, at a preset sampling period, distance measurement data obtained by measuring the distance to the subjects existing in the area around the vehicle with the laser range finders 111A to 111D. The image information input unit 20 inputs the acquired four captured image data and the distance measurement data corresponding to them to the projection image generation unit 22.

  The image information input unit 20 stores the captured image data acquired from the imaging devices 11A to 11D in the RAM 13 in association with the own vehicle speed and the shooting time information transmitted from the vehicle speed sensor 17. In the present embodiment, the image information input unit 20 acquires the captured image data transmitted from each imaging device in synchronization via a buffer memory. The same applies to the distance measurement data when the optical radar method is used.

For each set of captured image data in the four sets input from the image information input unit 20, the coordinate information detection unit 21 searches the other image of the set for the coordinates of the corresponding point corresponding to the coordinates of each target point in one image. Then, using the coordinates (XL, YL) of the target point, the coordinates (XR, YR) of the searched corresponding point, the focal length f (preset), and the base length (the distance between the left and right lenses) b (preset), the three-dimensional coordinate values (X, Y, Z) are obtained from the following expressions (1) and (2). The following expressions (1) and (2) apply when perspective transformation is used for generating a projection image.
XL = f · (X + b) / Z, YL = f · Y / Z (1)
XR = f · (X−b) / Z, YR = f · Y / Z (2)
The coordinate information detection unit 21 performs the search and the calculation of three-dimensional coordinate values for all the pixels of each set of captured image data. The coordinate information detection unit 21 inputs the three-dimensional coordinate information obtained in this way to the projection image generation unit 22.
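Solving expressions (1) and (2) for the three-dimensional coordinates gives Z = 2·f·b/(XL − XR), X = XL·Z/f − b, and Y = YL·Z/f. A minimal sketch of this calculation (Python; the numeric values in the example are illustrative assumptions):

```python
def triangulate(xl, yl, xr, yr, f, b):
    """Recover (X, Y, Z) from a matched point pair using expressions (1) and (2):
    XL = f*(X+b)/Z, XR = f*(X-b)/Z  ->  XL - XR = 2*f*b/Z."""
    disparity = xl - xr
    if disparity <= 0:
        return None          # no valid depth for zero or negative disparity
    z = 2.0 * f * b / disparity
    x = xl * z / f - b       # equivalently xr*z/f + b
    y = yl * z / f           # yl and yr coincide for a rectified pair
    return x, y, z

# Example: f = 700 px, b = 0.06 m, disparity of 14 px -> Z = 6 m.
print(triangulate(120.0, 35.0, 106.0, 35.0, 700.0, 0.06))
```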

  When the motion stereo method is used, the coordinate information detection unit 21 uses two or more cycles of image data input from the image information input unit 20 and detects, for each CCD camera, the "movement on the screen" and the "displacement amount of the photographing position" of each subject between successive images. Then, based on the detection result, the distance (X, Y, Z) to each subject is detected. The coordinate information detection unit 21 inputs the detected distance information (three-dimensional coordinate information) to the projection image generation unit 22.

When the optical radar method is used, the coordinate information detection process is not necessary. That is, the laser range finders 111A to 111D themselves have the distance measuring function (the function of the coordinate information detection unit 21). Therefore, when the optical radar method is used, the image information input unit 20 directly inputs the distance measurement data from the laser range finders 111A to 111D to the projection image generation unit 22.
The laser range finders 111A to 111D may instead include only a laser irradiation unit and a light receiving unit that receives the reflected light. In this case, the measurement results of the laser range finders 111A to 111D are input to the coordinate information detection unit 21, and the coordinate information detection unit 21 performs the distance measurement process using the measurement results to detect the three-dimensional coordinate information.

The projection image generation unit 22 generates projection image data corresponding to each captured image data based on the captured image data of the imaging devices 11A to 11D stored in the RAM 13 and the three-dimensional coordinate information corresponding to each captured image data input from the coordinate information detection unit 21.
Specifically, the projection image generation unit 22 recognizes the shape of the object included in each captured image data from the three-dimensional coordinate information corresponding to each captured image data. Furthermore, the projection image generation unit 22 sets a projection plane corresponding to the recognized shape. Then, an image of an object corresponding to each captured image data is projected onto the set projection plane. In this way, projection image (image having a three-dimensional structure) data corresponding to each captured image data is generated. In the present embodiment, a known perspective transformation (also called perspective projection transformation) is used for generating projection image data. The projection image generation unit 22 inputs the generated projection image data of each object to the vehicle surrounding image generation unit 23. Further, the projection image generation unit 22 stores the coordinate information in the captured image data of each recognized object in the RAM 13 in association with each captured image data.
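As a rough sketch of the perspective transformation used here, the following illustrates projecting the three-dimensional points of a recognized object onto an image plane (Python; the focal length and example points are assumptions made for illustration).

```python
import numpy as np

def perspective_project(points_xyz: np.ndarray, f: float) -> np.ndarray:
    """Project 3D points (X, Y, Z) onto an image plane with focal length f using the
    same perspective transformation as expressions (1) and (2): u = f*X/Z, v = f*Y/Z."""
    X, Y, Z = points_xyz[:, 0], points_xyz[:, 1], points_xyz[:, 2]
    return np.stack([f * X / Z, f * Y / Z], axis=1)

# Example: points of a recognized object assumed to lie on a plane 5 m ahead.
object_points = np.array([[-1.0, 0.5, 5.0], [1.0, 0.5, 5.0], [0.0, 1.5, 5.0]])
print(perspective_project(object_points, f=700.0))
```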

  The vehicle surrounding image generation unit 23 synthesizes the projection image data input from the projection image generation unit 22, corresponding to the imaging ranges of the imaging devices 11A to 11D, to generate vehicle surrounding image data having a three-dimensional structure. At that time, the vehicle surrounding image generation unit 23 generates the vehicle surrounding image data with the host vehicle 1 as the reference position. Specifically, a three-dimensional CG model corresponding to the host vehicle 1 is prepared in advance, and the projection images corresponding to the respective shooting ranges are synthesized using the host vehicle model as a reference to generate the vehicle surrounding image data. The vehicle surrounding image generation unit 23 stores the generated vehicle surrounding image data in the RAM 13 in association with the shooting time information of the captured image data used for generation, and inputs it to the vehicle image detection unit 24.

  The vehicle image detection unit 24 detects an image of another vehicle included in the vehicle surrounding image data input from the vehicle surrounding image generation unit 23 by template matching processing. The template data is stored in the ROM 12 in advance. This template matching process may be performed on the captured image data or on the vehicle surrounding image data; in the present embodiment, it is performed on the vehicle surrounding image data. When the vehicle image detection unit 24 detects an image of another vehicle in the vehicle surrounding image, it cuts out the detected image portion (visible image portion) of the other vehicle included in the vehicle surrounding image data and inputs the cut-out visible image portion data and its coordinate information in the vehicle surrounding image data to the blind spot information estimation unit 25. Note that, instead of actually cutting out the visible image portion from the original image, duplicate image data is generated. On the other hand, when no image of another vehicle is detected in the vehicle surrounding image, the vehicle image detection unit 24 notifies the blind spot information estimation unit 25 and the traveling state determination unit 30 of that fact. When the vehicle image detection unit 24 detects the visible image portion of another vehicle, it also notifies the traveling state determination unit 30 of that fact. In addition, the vehicle image detection unit 24 stores the information of the detected other vehicle in the RAM 13 in association with the coordinate information of the recognized object stored in the RAM 13.
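The detection by template matching can be sketched as follows (Python with OpenCV; the use of OpenCV, the score measure, and the threshold are illustrative assumptions, not specified by the patent).

```python
import numpy as np
import cv2

def detect_other_vehicle(surround_image: np.ndarray, template: np.ndarray,
                         threshold: float = 0.7):
    """Return the bounding box (x, y, w, h) of the best template match, or None
    if the match score is below the threshold (no other vehicle detected)."""
    result = cv2.matchTemplate(surround_image, template, cv2.TM_CCOEFF_NORMED)
    _, max_val, _, max_loc = cv2.minMaxLoc(result)
    if max_val < threshold:
        return None
    h, w = template.shape[:2]
    x, y = max_loc
    return (x, y, w, h)   # coordinate information of the visible image portion

# Example with dummy data standing in for the surrounding image and a ROM template.
img = np.random.randint(0, 255, (400, 600), dtype=np.uint8)
tmpl = img[100:160, 200:300].copy()
print(detect_other_vehicle(img, tmpl))   # (200, 100, 100, 60)
```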

  When the traveling state determination unit 30 receives a notification from the vehicle image detection unit 24 that the visible image portion of another vehicle has been detected, it reads from the RAM 13 two captured image data that are successive in time series and include the image of that other vehicle. The traveling state determination unit 30 then determines whether or not the other vehicle is in a traveling state based on the positional displacement between the own vehicle 1 and the other vehicle relative to the own vehicle speed, using the read captured image data and the own vehicle speed corresponding to that data. If it determines that the other vehicle is traveling, it inputs a determination result of the traveling state to the blind spot information estimation unit 25; if not, it inputs a determination result of the stopped state to the blind spot information estimation unit 25. For example, the ROM 12 stores information, obtained in advance through experiments or the like, indicating the correspondence relationship between the vehicle speed and the image position displacement, and the traveling state determination unit 30 determines whether or not the other vehicle is traveling based on this information.
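The idea behind this determination can be sketched as follows, simplified to metric longitudinal positions rather than image displacements (Python; the tolerance and the example values are assumptions).

```python
def is_other_vehicle_traveling(pos_prev: float, pos_curr: float,
                               own_speed_mps: float, frame_interval_s: float,
                               tolerance_m: float = 0.3) -> bool:
    """If the other vehicle were stationary, its position relative to the host
    vehicle would shift by own_speed * frame_interval between two frames.
    A significant deviation from that expected shift is taken as 'traveling'."""
    expected_shift = own_speed_mps * frame_interval_s
    observed_shift = pos_prev - pos_curr   # relative longitudinal displacement [m]
    return abs(observed_shift - expected_shift) > tolerance_m

# Example: host at 10 m/s, 0.1 s between frames; a stationary vehicle should appear
# to approach by about 1 m. An observed shift of 0.2 m suggests the vehicle is moving.
print(is_other_vehicle_traveling(20.0, 19.8, 10.0, 0.1))   # True
```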

When the visible image partial data is input from the vehicle image detection unit 24, the blind spot information estimation unit 25 estimates the information on the blind spot part of the other vehicle based on the image information of the visible image partial data.
Here, the visible image portion is the image portion of the other vehicle that is constructed from the images captured by the imaging devices 11A to 11D. The blind spot portion of the other vehicle is the part of the other vehicle that is outside the imaging range of the imaging devices 11A to 11D. The information on the blind spot portion is information such as the shape and color of the blind spot portion estimated from the visible image portion data.

  When estimating the image information, the blind spot information estimation unit 25 first determines whether or not the information (shape, color, etc.) of the blind spot portion can be estimated based on the obtained visible image portion data. When the visible image portion is, for example, only the rear portion or the front portion of the other vehicle, or only part of a side face, the blind spot information estimation unit 25 judges that it is difficult to estimate the information of the blind spot portion from the visible image portion alone, and determines in such a case that estimation is impossible.

On the other hand, suppose that the visible image portion includes, for example, an image of one entire side surface, or substantially the entire side surface (for example, 80% or more), of the other vehicle (hereinafter referred to as a visible side image). In this case, since the shape of the blind spot portion of the other vehicle (for example, the opposite side face) can be estimated from the visible side image, the blind spot information estimation unit 25 determines that estimation is possible.
If the blind spot information estimation unit 25 determines that the other vehicle is in the traveling state based on the determination result of the traveling state determination unit 30, the blind spot information estimation unit 25 determines that the shape of the door portion on the opposite side surface can be estimated. On the other hand, if it is determined that the vehicle is stopped, it is determined that the shape of the door portion on the opposite side surface cannot be estimated.

Further, the blind spot information estimation unit 25 determines whether or not the blind spot portion includes an illumination portion such as a blinker lamp, a headlamp, or a brake lamp.
The blind spot information estimation unit 25 inputs to the image complementing unit 26 the visible image portion data and its coordinate information together with the blind spot information, which includes the estimated information on the blind spot portion, the information determined to be estimable (hereinafter referred to as estimable information), and the information determined to be non-estimable (hereinafter referred to as non-estimable information).

Based on the blind spot information input from the blind spot information estimation unit 25, the image complementing unit 26 complements the image of the blind spot portion of the other vehicle included in the target vehicle surrounding image stored in the RAM 13 using the visible image portion data.
For example, suppose that the visible image portion includes a visible side image and that, based on the shape information of the blind spot portion included in the blind spot information, the blind spot portion is determined to be symmetrical with the shape of the visible side image. In this case, the image complementing unit 26 first mirror-reverses the visible side image. The image complementing unit 26 then inverts the steering angle of the front wheels in the reversed visible side image (flips the image of the front wheels left and right) to generate complementary image data, and complements the image of the blind spot portion with the generated complementary image data.
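This mirroring step can be sketched as follows (Python with NumPy; passing the front wheel region as a bounding box is an assumption made for illustration).

```python
import numpy as np

def make_opposite_side_image(visible_side: np.ndarray,
                             front_wheel_box: tuple) -> np.ndarray:
    """Generate complementary image data for the hidden side: mirror the visible
    side image horizontally, then flip the front wheel region again so the wheel's
    apparent steering direction matches the original, as described above."""
    mirrored = visible_side[:, ::-1].copy()      # left-right inversion of the side view
    x, y, w, h = front_wheel_box                  # wheel region in the mirrored image (assumed known)
    mirrored[y:y + h, x:x + w] = mirrored[y:y + h, x:x + w][:, ::-1]
    return mirrored

# Example with a dummy 100x300 side-view image and an assumed wheel region.
side_view = np.zeros((100, 300, 3), dtype=np.uint8)
opposite = make_opposite_side_image(side_view, front_wheel_box=(220, 60, 50, 40))
print(opposite.shape)
```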

  Furthermore, suppose that the image complementing unit 26 determines, based on the shape information included in the blind spot information, that the other vehicle has a box shape. In this case, the image complementing unit 26 draws the blind spot portion between the visible side image and the complemented image (the opposite side surface) by interpolation calculation; for example, external shapes such as the roof portion, bumper portion, and trunk portion of the other vehicle are drawn by interpolation. Hereinafter, complementation of the blind spot portion by interpolation drawing is referred to as drawing complementation.

Further, based on the non-estimable information included in the blind spot information, the image complementing unit 26 corrects the portions of the complementary image data determined to be non-estimable to preset drawing content (for example, colored so as to be distinguishable from the other portions).
If the blind spot information estimation unit 25 has determined that the blind spot portion cannot be estimated, the image complementing unit 26 searches the vehicle image database 300 stored in the HDD 16, using the visible image portion data as search information, for a three-dimensional CG model of the same model as the other vehicle. When a three-dimensional CG model of the same vehicle type is found by this search, complementary image data is generated using that data, and the image complementing unit 26 complements the image of the blind spot portion of the other vehicle using the generated complementary image data. In the present embodiment, complementation using the visible image portion is performed with priority over complementation of the blind spot using the vehicle image database 300.

The image complementing unit 26 stores the supplemented vehicle surrounding image in the RAM 13 and inputs the supplemented vehicle surrounding image to the image reconstruction unit 28.
The virtual viewpoint setting unit 27 receives the virtual viewpoint information input according to the user's operation of the operation unit 14 and inputs the received virtual viewpoint information to the image reconstruction unit 28. In the present embodiment, a plurality of virtual viewpoints are preset, and the user selects and inputs an arbitrary virtual viewpoint from among them via the operation unit 14. It is also possible to rotate or enlarge the image while keeping the viewpoint fixed, via the operation unit 14.

Based on the virtual viewpoint information input from the virtual viewpoint setting unit 27, the image reconstruction unit 28 reconstructs the vehicle surrounding image input from the image complementing unit 26 into an image viewed from the viewpoint indicated by the virtual viewpoint information. The image reconstruction unit 28 also reconstructs the image by rotating or enlarging it in accordance with an instruction for rotation, enlargement, or the like given via the operation unit 14. The reconstructed vehicle surrounding image is then input to the image display unit 29.
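Reconstruction from a virtual viewpoint amounts to transforming the three-dimensional points of the complemented model into the virtual camera's coordinate frame and projecting them; a minimal sketch (Python; the viewpoint pose and focal length are assumed values):

```python
import numpy as np

def view_from_virtual_viewpoint(points_world: np.ndarray, R: np.ndarray,
                                t: np.ndarray, f: float) -> np.ndarray:
    """Transform 3D points of the (complemented) surrounding model into the virtual
    camera frame (R, t) and apply the same perspective projection used for the
    projection images: u = f*Xc/Zc, v = f*Yc/Zc."""
    cam = (R @ points_world.T).T + t        # world -> virtual camera coordinates
    u = f * cam[:, 0] / cam[:, 2]
    v = f * cam[:, 1] / cam[:, 2]
    return np.stack([u, v], axis=1)

# Example: an assumed bird's-eye virtual viewpoint 10 m above the host vehicle.
R_down = np.array([[1.0, 0.0, 0.0],
                   [0.0, 0.0, -1.0],
                   [0.0, 1.0, 0.0]])        # look straight down the Y axis
t_down = np.array([0.0, 0.0, 10.0])
pts = np.array([[2.0, 0.0, 5.0], [-1.5, 0.0, 8.0]])
print(view_from_virtual_viewpoint(pts, R_down, t_down, f=700.0))
```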
The image display unit 29 displays the vehicle surrounding image input from the image reconstruction unit 28 on the display 15.

(Vehicle surrounding image generation processing)
Next, a processing procedure of vehicle surrounding image generation processing performed by the vehicle image generation function configuration unit 120 will be described.
FIG. 6 is a flowchart illustrating an example of a processing procedure of the vehicle surrounding image generation processing.
When the power is turned on (ignition is turned on) and the dedicated program is executed in the CPU 10, the process first proceeds to step S100 as shown in FIG. 6.
In step S100, the CPU 10 executes an initialization process for initializing timers, counters, and flags used for the subsequent processes, and the process proceeds to step S102.

In step S102, the CPU 10 determines whether or not there is a start instruction from the user via the operation unit 14. When it is determined that there is a start instruction (Yes), the process proceeds to step S104; when it is determined that there is not (No), the determination process is repeated until an instruction is given.
When the process proceeds to step S104, the image information input unit 20 acquires measurement data (captured image data or a combination of captured image data and distance measurement data) transmitted from the imaging devices 11A to 11D. Further, the vehicle speed information of the host vehicle 1 is acquired from the vehicle speed sensor 17. Then, the acquired measurement data and vehicle speed information are stored in the RAM 13 in association with the shooting time information, and the measurement data is input to the coordinate information detection unit 21 and the projection image generation unit 22, respectively, and the process proceeds to step S106. When the optical radar method is used, both the captured image data and the distance measurement data are input to the projection image generation unit 22.

In step S106, the coordinate information detection unit 21 detects the three-dimensional coordinate information of the object included in the captured image data based on the captured image data input from the image information input unit 20, and the process proceeds to step S108. Note that this processing is not necessary when the optical radar method is used.
In step S108, the projection image generation unit 22 generates projection image data corresponding to each captured image data based on the captured image data input from the image information input unit 20 and the three-dimensional coordinate information input from the coordinate information detection unit 21. The generated projection image data are input to the vehicle surrounding image generation unit 23, and the process proceeds to step S110.

  In step S110, the vehicle surrounding image generation unit 23 synthesizes a plurality of projection image data corresponding to each imaging device input from the projection image generation unit 22 and the host vehicle model, and the vehicle surrounding image having a three-dimensional structure. Generate data. Then, the generated vehicle surrounding image data is stored in the RAM 13 in association with the shooting time, and is input to the vehicle image detection unit 24, and the process proceeds to step S112.

  In step S112, the vehicle image detection unit 24 reads a vehicle image detection template from the ROM 12. Then, using the read template, it executes the template matching process on the vehicle surrounding image data (or the captured image data) generated by the vehicle surrounding image generation unit 23. If a visible image of another vehicle is detected in the vehicle surrounding image, the visible image portion is cut out, the cut-out visible image portion data and coordinate information are input to the blind spot information estimation unit 25, and the process proceeds to step S114. If no image of another vehicle is detected, information indicating that none was detected is input to the blind spot information estimation unit 25, and the process proceeds to step S114.

In step S114, the blind spot information estimation unit 25 executes the blind spot information estimation process for estimating the information of the blind spot portion based on the image information of the visible image portion data of the other vehicle input from the vehicle image detection unit 24, and the process proceeds to step S116.
In step S116, the image complementing unit 26 executes an image complementing process for complementing the image of the blind spot part of the other vehicle based on the blind spot information from the blind spot information estimating unit 25. Then, the vehicle surrounding image data complemented by the image complementing process is stored in the RAM 13 in association with the shooting time information, and is input to the image reconstruction unit 28, and the process proceeds to step S118.

  In step S118, the virtual viewpoint setting unit 27 determines whether or not virtual viewpoint information has been input via the operation unit 14 of the user. If it is determined that there has been input (Yes), the input virtual viewpoint information is determined. Is input to the image reconstruction unit 28, and the process proceeds to step S120. On the other hand, when it is determined that the virtual viewpoint information has not been input (No), the virtual viewpoint information input last time is input to the image reconstruction unit 28, and the process proceeds to step S126. That is, the virtual viewpoint setting unit 27 holds the latest virtual viewpoint information, and inputs the held virtual viewpoint information to the image reconstruction unit 28 until a new virtual viewpoint is set.

  When the process proceeds to step S120, the image reconstruction unit 28 reconstructs the vehicle surrounding image data input from the image complementing unit 26 into image data viewed from the virtual viewpoint, based on the virtual viewpoint information input from the virtual viewpoint setting unit 27. The reconstructed vehicle surrounding image data is input to the image display unit 29, and the process proceeds to step S122. If the current virtual viewpoint matches the viewpoint of the input vehicle surrounding image data, the reconstruction process is unnecessary; in this case, the vehicle surrounding image data is input to the image display unit 29 without reconstruction.

In step S122, the image display unit 29 displays the image of the reconstructed vehicle surrounding image data input from the image reconstructing unit 28 on the display 15, and proceeds to step S124.
In step S124, the CPU 10 determines whether or not there is an end instruction from the user via the operation unit 14. If it is determined that there is an end instruction (Yes), the series of processing ends. On the other hand, if it is determined that there is no end instruction (No), the process proceeds to step S104.

In step S118, when no virtual viewpoint information has been input and the process proceeds to step S126, the image reconstruction unit 28 reconstructs the vehicle surrounding image data input from the image complementing unit 26 into image data viewed from the previous virtual viewpoint. The reconstructed vehicle surrounding image data is input to the image display unit 29, and the process proceeds to step S128.
In step S128, the image display unit 29 displays the image of the reconstructed vehicle surrounding image data input from the image reconstructing unit 28 on the display 15, and the process proceeds to step S124.
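The overall flow of steps S100 to S128 can be summarized in a compact sketch (Python; the unit objects and method names are illustrative stand-ins for the functional blocks of FIG. 5, not an API defined by the patent).

```python
def vehicle_surrounding_image_generation_loop(units, terminated, start_requested):
    """Condensed flow of FIG. 6 (steps S100-S128). 'units' bundles the functional
    blocks of FIG. 5; all method names here are illustrative assumptions."""
    units.initialize()                                              # S100
    while not start_requested():                                    # S102
        pass
    last_viewpoint = None
    while not terminated():                                         # S124
        captured = units.image_input.acquire()                      # S104
        coords = units.coord_detect.detect(captured)                # S106
        projections = units.projection.generate(captured, coords)   # S108
        surround = units.surround.synthesize(projections)           # S110
        visible = units.vehicle_detect.detect(surround)             # S112
        blind_info = units.blind_estimate.estimate(visible)         # S114
        complemented = units.complement.complement(surround, blind_info)  # S116
        viewpoint = units.viewpoint.get_input() or last_viewpoint   # S118
        last_viewpoint = viewpoint
        image = units.reconstruct.reconstruct(complemented, viewpoint)    # S120 / S126
        units.display.show(image)                                    # S122 / S128
```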

(Blind spot information estimation process)
Next, a processing procedure of blind spot information estimation processing will be described.
FIG. 7 is a flowchart illustrating an example of the processing procedure of the blind spot information estimation process performed by the blind spot information estimation unit 25 in step S114.
When the blind spot information estimation process is executed in step S114, the process first proceeds to step S200 as shown in FIG. 7.
In step S200, the blind spot information estimation unit 25 determines, based on the information from the vehicle image detection unit 24, whether or not a visible image portion of another vehicle has been detected in the vehicle surrounding image. If it is determined that a visible image portion of another vehicle has been detected (Yes), the process proceeds to step S202; if not (No), the series of processes ends and the process returns to the original process.

When the process proceeds to step S202, the blind spot information estimation unit 25 determines whether or not the visible image portion of the other vehicle includes a visible side image. If it is determined that a visible side image is included (Yes), the process proceeds to step S204; if not (No), the process proceeds to step S218.
When the process proceeds to step S204, the blind spot information estimation unit 25 estimates the shape of the blind spot portion from the visible side image, and the process proceeds to step S206. For example, it is estimated that the shape of the blind spot portion is symmetrical to the visible side image and forms a box shape together with the visible image portion.

In step S206, the blind spot information estimation unit 25 estimates the color of the blind spot part from the color of the visible image part, and the process proceeds to step S208. For example, when the roof part is a blind spot part, it is estimated that the roof part has the same color as the color of the door part of the visible side image.
In step S208, the blind spot information estimation unit 25 determines whether or not the visible image part includes an illumination part such as a blinker lamp part. If it is determined that the illumination part is included (Yes), the process proceeds to step S210. If not (No), the process proceeds to step S212.

When the process proceeds to step S210, the blind spot information estimation unit 25 determines that the illumination part of the blind spot part cannot be estimated, generates information that cannot be estimated for the illumination part, and the process proceeds to step S212.
In step S212, the blind spot information estimation unit 25 determines whether the corresponding other vehicle is in a traveling state based on the determination result from the traveling state determination unit 30. If it is determined that the vehicle is traveling (Yes), the process proceeds to step S214; if not (No), the process proceeds to step S216.

When the process proceeds to step S214, the blind spot information estimation unit 25 determines that the door portion on the side opposite to the visible side, which is a blind spot portion, can be estimated, generates estimable information for the door portion, and proceeds to step S220. That is, if the other vehicle is in a traveling state, it can be determined that the door on the opposite side is closed, so the door can be estimated.
On the other hand, when the process proceeds to step S216, the blind spot information estimation unit 25 determines that the opposite door part cannot be estimated, generates non-estimable information for the door part, and proceeds to step S220. That is, when the other vehicle is stopped, the door on the opposite side may be open, so it is determined that it cannot be estimated.

In step S202, if the visible image portion does not include a visible side image and the process proceeds to step S218, the blind spot information estimation unit 25 determines that the blind spot portion cannot be estimated, generates non-estimable information, and the process proceeds to step S220.
In step S220, the blind spot information estimation unit 25 inputs the visible image portion data, the coordinate information, and the blind spot portion information to the image complementing unit 26, ends the series of processes, and returns to the original process. Here, when the visible image portion includes a visible side image, blind spot information such as the shape and color information of the blind spot portion, the non-estimable information when an illumination portion is included, the estimable information for the door portion when the other vehicle is traveling, and the non-estimable information for the door portion when the other vehicle is stopped is input to the image complementing unit 26. When the visible image portion does not include a visible side image, non-estimable information for the blind spot portion is input to the image complementing unit 26.
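The branching of steps S200 to S220 can be condensed into a small decision function (Python; the dictionary keys and default descriptions are assumptions made for illustration, since the patent describes the information only conceptually).

```python
def estimate_blind_spot_info(visible_detected: bool, has_visible_side: bool,
                             visible_has_lamp: bool, is_traveling: bool,
                             side_shape=None, side_color=None):
    """Condensed decision logic of FIG. 7 (steps S200-S220)."""
    if not visible_detected:                       # S200: nothing to estimate
        return None
    if not has_visible_side:                       # S202 -> S218: cannot estimate
        return {"estimable": False}
    info = {
        "estimable": True,
        "shape": side_shape or "mirror of visible side, box-shaped",   # S204
        "color": side_color or "same as visible side",                 # S206
    }
    if visible_has_lamp:                           # S208 -> S210
        info["lamp_estimable"] = False
    # S212: the door on the hidden side is assumed closed only while traveling
    info["door_estimable"] = bool(is_traveling)    # S214 / S216
    return info                                    # S220: handed to the image complementing unit

print(estimate_blind_spot_info(True, True, visible_has_lamp=True, is_traveling=False))
```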

(Image completion processing)
Next, a processing procedure for image complement processing will be described.
FIG. 8 is a flowchart showing an example of the processing procedure of the image complementing process performed by the image complementing unit 26 in step S116.
When the image complementing process is executed in step S116, the process first proceeds to step S300 as shown in FIG. 8.
In step S300, based on the information from the blind spot information estimation unit 25, the image complementing unit 26 determines whether or not a visible image portion of another vehicle has been detected in the vehicle surrounding image. If it is determined that a visible image portion of another vehicle has been detected (Yes), the target vehicle surrounding image data is read from the RAM 13 and the process proceeds to step S302. If not (No), the series of processes ends and the process returns to the original process.

When the process proceeds to step S302, the image complementing unit 26 determines whether or not the information on the blind spot part can be estimated based on the blind spot information. If it is determined that the information can be estimated (Yes), step S304 is performed. If not (No), the process proceeds to step S322.
When the process proceeds to step S304, the image complementing unit 26 generates an inverted image obtained by inverting the visible side image, and the process proceeds to step S306.
In step S306, the image complementing unit 26 inverts the front wheel portion in the inverted image, generates complementary image data of the side surface on the opposite side of the other vehicle, and proceeds to step S308.
In step S308, the image complementing unit 26 supplements the image of the side surface on the opposite side of the other vehicle with the supplemental image data generated in step S306, and the process proceeds to step S310.

In step S310, the image complementing unit 26 interpolates and draws the shape of the blind spot part between the visible image part and the complemented image part by interpolation, and the process proceeds to step S312.
In step S312, the image complementing unit 26 colors the blind spot part that has been complemented for drawing to the estimated color, and the process proceeds to step S314.
In step S314, based on the information from the blind spot information estimation unit 25, the image complementing unit 26 determines whether or not the illumination part cannot be estimated. If it is determined that the illumination part cannot be estimated (Yes), step S316 is performed. If not (No), the process proceeds to step S318.

When the process proceeds to step S316, the image complementing unit 26 changes (colors) the color of the illumination part in the complementary image on the opposite side surface to gray, and the process proceeds to step S318.
In step S318, the image complementing unit 26 determines whether or not the door unit cannot be estimated based on information from the blind spot information estimating unit 25. If it is determined that the door cannot be estimated (Yes), step S320 is performed. If it is determined that this is not the case (No), the series of processes is terminated and the original process is restored.
When the process proceeds to step S320, the image complementing unit 26 colors (changes) the color of the door in the complementary image on the opposite side surface to gray, ends the series of processes, and returns to the original process.

On the other hand, if it is determined in step S302 that the blind spot information cannot be estimated and the process proceeds to step S322, the image complementing unit 26 searches the vehicle image database 300 stored in the HDD 16 using the visible image portion as search information, and the process proceeds to step S324.
In step S324, the image complementing unit 26 determines whether or not a vehicle of the same type as the other vehicle has been found by the search process in step S322. If it is determined that the same type of vehicle has been found (Yes), the process proceeds to step S326. If it is determined that this is not the case (No), the series of processes is terminated and the original process is restored.
When the process proceeds to step S326, the image complementing unit 26 generates the complementary image data of the blind spot part of the other vehicle using the searched three-dimensional CG data of the vehicle, and the process proceeds to step S328.

In step S328, the image complementing unit 26 supplements the blind spot part of the other vehicle using the supplemental image data generated in step S326, and the process proceeds to step S330.
In step S330, the image complementing unit 26 determines, based on the information from the blind spot information estimating unit 25, whether there is another other vehicle to be processed. If it is determined that there is (Yes), the process returns to step S302. If it is determined that there is not (No), the series of processes is terminated and the process returns to the original process.

(Operation)
Next, the operation of the vehicular image generation device 100 of this embodiment will be described.
FIG. 9 is a diagram illustrating an example of a positional relationship between the host vehicle and another vehicle at the time of photographing a region around the vehicle. FIG. 10 is a diagram illustrating an example of a shooting range of another vehicle when viewed from a viewpoint opposite to the virtual viewpoint in FIG. 9. FIG. 11 is a diagram illustrating an example of a non-photographing range including a blind spot part of another vehicle when viewed from the virtual viewpoint of FIG. 9. FIGS. 12A to 12D are diagrams illustrating an example of the flow of image complementation processing when the other vehicle is in a traveling state. FIG. 13A is a diagram illustrating an image viewed from the virtual viewpoint of FIG. 9 after image complementation when the other vehicle is in a traveling state, and FIG. 13B is a diagram illustrating an image viewed from the virtual viewpoint of FIG. 9 after image complementation when the other vehicle is stopped. FIG. 14 is a diagram illustrating an example of the flow of image complementation processing when the other vehicle is stopped.

When the ignition switch is turned on and the power is turned on, a dedicated program is executed in the CPU 10. When the program is executed, the timer, the counter, and the various variables used in the program are first initialized (S100). In the initialization process, camera calibration for each stereo camera of the imaging devices 11A to 11D is executed in a camera calibration unit (not shown). When the camera calibration process is completed and a start instruction is input from the user via the operation unit 14 (Yes in S102), the image information input unit 20 of the vehicular image generation function configuration unit 120 first acquires the captured image data, transmitted from the stereo cameras 11A to 11D, obtained by capturing the area around the vehicle. Further, the image information input unit 20 acquires vehicle speed information of the host vehicle 1 from the vehicle speed sensor 17. The acquired captured image data is then stored in the RAM 13 in association with the vehicle speed information and the shooting time information. In addition, the image information input unit 20 inputs the acquired captured image data to the coordinate information detection unit 21 and the projection image generation unit 22 (S104).
In the present embodiment, the stereo cameras 11A to 11D are digital video cameras, and each stereo camera performs shooting at a frame rate of 30 [fps]. The image information input unit 20 acquires captured image data captured at a timing corresponding to a preset sampling cycle from the captured image data.

When the captured image data is input from the image information input unit 20, the coordinate information detection unit 21 executes, for the two captured image data IL and IR captured at the same shooting time by each stereo camera among the stereo cameras 11A to 11D, a search process for the coordinates (XR, YR) of the corresponding point in the captured image data IR with respect to the coordinates (XL, YL) of each target point of the captured image data IL. Thereafter, using the coordinates (XL, YL), the coordinates (XR, YR), the focal length f, and the base line length b, the three-dimensional coordinate value of each object included in each captured image data is calculated (detected) from the above formulas (1) and (2). The coordinate information detection unit 21 inputs the information of the three-dimensional coordinate values detected in this way to the projection image generation unit 22 (S106).
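By way of illustration only, the following Python sketch triangulates one matched point pair using the standard pinhole stereo relations that formulas (1) and (2) are understood to express; the function name and the assumption of a rectified camera pair are illustrative, not part of the embodiment.

import numpy as np

def triangulate_point(xl, yl, xr, yr, f, b):
    # Recover a 3-D point from a matched pixel pair of a rectified stereo rig.
    # Disparity d = xl - xr, depth Z = f * b / d, X and Y scaled through the left camera.
    d = xl - xr                      # disparity in pixels
    if d <= 0:
        return None                  # no valid depth for zero or negative disparity
    z = f * b / d                    # depth along the optical axis
    x = xl * z / f                   # lateral position
    y = yl * z / f                   # vertical position
    return np.array([x, y, z])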

The projection image generation unit 22 recognizes the shape of each object included in each captured image data based on the three-dimensional coordinate values of each captured image data corresponding to the stereo cameras 11A to 11D. Further, a projection plane is set for each object whose shape is recognized. The projection image generation unit 22 projects the image corresponding to each object included in the captured image data onto the projection plane set for that object by perspective transformation based on a preset reference viewpoint (XVr, YVr, ZVr). Thereby, projection image data corresponding to each imaging range of the stereo cameras 11A to 11D is generated. The projection image generation unit 22 inputs the generated projection image data to the vehicle surrounding image generation unit 23 (S108).

The vehicle surrounding image generation unit 23 combines the projection image data corresponding to each shooting range and the three-dimensional CG model of the host vehicle 1 to generate vehicle surrounding image data having a three-dimensional structure as viewed from the reference viewpoint. The vehicle surrounding image generation unit 23 stores the generated vehicle surrounding image data in the RAM 13 in association with the shooting time information, and inputs it to the vehicle image detection unit 24 (S110).

When the vehicle surrounding image data is input from the vehicle surrounding image generation unit 23, the vehicle image detection unit 24 reads the template data from the ROM 12. The visible image portion of another vehicle is then detected from the input vehicle surrounding image data by template matching processing using the read templates. When the vehicle image detection unit 24 detects the visible image portion of another vehicle, it inputs the detected visible image portion data and coordinate information of the other vehicle to the blind spot information estimation unit 25. In addition, the traveling state determination unit 30 is notified that the visible image portion of the other vehicle has been detected (S112). The following description assumes that a visible image portion of another vehicle has been detected.
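As a rough illustration of the template matching step (the threshold value and function name below are illustrative assumptions, not part of the embodiment), a normalized cross-correlation search with OpenCV could look like this:

import cv2
import numpy as np

def find_vehicle_regions(surround_image, templates, threshold=0.7):
    # Locate candidate visible image portions of other vehicles by
    # normalized cross-correlation template matching.
    gray = cv2.cvtColor(surround_image, cv2.COLOR_BGR2GRAY)
    hits = []
    for tmpl in templates:
        t = cv2.cvtColor(tmpl, cv2.COLOR_BGR2GRAY)
        res = cv2.matchTemplate(gray, t, cv2.TM_CCOEFF_NORMED)
        ys, xs = np.where(res >= threshold)
        h, w = t.shape
        hits.extend((x, y, w, h) for x, y in zip(xs, ys))
    return hits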

When the traveling state determination unit 30 receives the notification from the vehicle image detection unit 24 that another vehicle has been detected, it reads from the RAM 13 two captured image data that are continuous in time series, together with the vehicle speed and the coordinate information of the other vehicle corresponding to each captured image data. Further, information indicating the correspondence between the vehicle speed and the position displacement of an image (object) is read from the ROM 12. Based on the read data, the traveling state determination unit 30 then determines whether the positional displacement amount of the image of the other vehicle between the two captured image data corresponds, relative to the vehicle speed of the host vehicle 1, to a displacement in the traveling state or to a displacement in the stopped state. Thereby, it is determined whether each other vehicle included in the captured image data is in a traveling state. The traveling state determination unit 30 inputs the determination result for each other vehicle to the blind spot information estimation unit 25 (S114).
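The determination can be pictured with the following minimal sketch (the tolerance value and the use of ground-plane coordinates are illustrative assumptions; the embodiment uses the correspondence table read from the ROM 12):

def is_other_vehicle_moving(pos_prev, pos_curr, host_speed_mps, frame_dt,
                            stationary_tolerance=0.5):
    # pos_prev / pos_curr: longitudinal position of the other vehicle (metres,
    # host-vehicle frame) in two consecutive frames.  A stationary object should
    # appear to shift backwards by roughly host_speed * dt; a larger deviation
    # is treated as the other vehicle itself moving.
    observed_shift = pos_curr - pos_prev
    expected_if_stopped = -host_speed_mps * frame_dt
    return abs(observed_shift - expected_if_stopped) > stationary_tolerance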

The blind spot information estimation unit 25 determines, based on the detection result of the vehicle image detection unit 24, that another vehicle has been detected (Yes in S200), and estimates information on the blind spot part of the other vehicle based on the image information of the visible image partial data.
Hereinafter, a case where the other vehicle 2 is photographed by the stereo cameras 11A to 11D mounted on the host vehicle 1 in the positional relationship illustrated in FIG. 9 will be described as an example.

In the case of the positional relationship illustrated in FIG. 9, the other vehicle 2 is included in the shooting ranges A and B of the stereo cameras 11A and 11B. When the arrangement of FIG. 9 is viewed by setting the virtual viewpoint (XVR, YVR, ZVR) on the right side of the host vehicle 1 and the other vehicle 2, as shown in FIG. 10, the rear side is within the shooting range, and the front, top, and left sides of the other vehicle are outside the shooting range. On the other hand, when the virtual viewpoint (XVL, YVL, ZVL) is set on the left side of the host vehicle 1 and the other vehicle 2 as shown in FIG. 9, then as shown in FIG. 11, the front surface, the upper surface, the left side surface, and the scenery on the far side blocked by the other vehicle 2 are out of the shooting range. That is, in the case of the positional relationship shown in FIG. 9, the front surface, the upper surface, and the left side surface of the other vehicle 2 are blind spots (outside the imaging range).

As shown in the right diagram of FIG. 12A, when the other vehicle 2 is viewed from above, the thick line portion is the visible image portion 200 of the other vehicle 2. As shown in the left diagram of FIG. 12A, the visible image portion 200 includes a visible side surface image 220. The blind spot information estimation unit 25 estimates the shape of the blind spot part of the other vehicle 2 based on the image data of the visible side surface image 220. Here, since the image data of the entire side surface of the other vehicle 2 has been obtained, the blind spot information estimation unit 25 estimates that the left side surface, which is a blind spot part of the other vehicle 2, has the same shape as the visible side surface image 220. In addition, from the shape of the visible image portion 200, it estimates that the other vehicle 2 is a box-shaped vehicle (S204).

For example, information indicating the correspondence relationship between the shape of the vehicle side surface and the shape of the entire vehicle is stored in the ROM 12, and the shape is estimated by comparing with the shape of the visible side image of the other vehicle 2.
As a result, a special shape such as an open car or a truck with a loading platform, or a box shape that is commonly seen like a sedan or a one-box car is estimated from the side image.
Furthermore, the blind spot information estimation unit 25 analyzes the color of the left side surface based on the visible side image data. For example, the number of pixels of each color is counted. In the example of FIG. 12A, since the body color of the other vehicle 2 is white, the number of white pixels is the largest. Therefore, the blind spot information estimation unit 25 estimates that the color of the blind spot part is white (S206).
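A minimal sketch of such a dominant-color estimate is shown below; the coarse quantization into 8 levels per channel and the function name are illustrative choices, not part of the embodiment.

import numpy as np

def estimate_body_color(side_image_bgr, mask=None, bins=8):
    # Return the centre of the most frequent coarse colour bin (B, G, R) of the
    # visible side image; an optional mask limits counting to the body region.
    pixels = side_image_bgr.reshape(-1, 3)
    if mask is not None:
        pixels = pixels[mask.reshape(-1) > 0]
    quantised = (pixels // (256 // bins)).astype(np.int64)
    keys = quantised[:, 0] * bins * bins + quantised[:, 1] * bins + quantised[:, 2]
    dominant = np.bincount(keys).argmax()
    b, g, r = np.unravel_index(dominant, (bins, bins, bins))
    scale = 256 // bins
    return (int(b) * scale + scale // 2,
            int(g) * scale + scale // 2,
            int(r) * scale + scale // 2)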

Next, the blind spot information estimation unit 25 determines whether or not the visible side surface image 220 includes an illumination part based on the visible side surface image data (S208). In the example of FIG. 12A, the visible side surface image 220 includes a part of the headlamp and a part of the blinker lamps (front and rear). The blind spot information estimation unit 25 detects these illumination parts by, for example, pattern matching processing or changes in color shading (luminance changes). Based on this detection result, it determines that an illumination part is included (Yes in S208). Because the visible side surface image 220 includes an illumination part, the blind spot information estimation unit 25 determines that the illumination part of the left side surface, which is the blind spot part of the other vehicle 2, cannot be estimated, and generates non-estimable information about the illumination part (S210).
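One hedged way to picture the lamp detection is sketched below; the HSV thresholds and minimum area are illustrative values only, and the embodiment may equally rely on pattern matching.

import cv2
import numpy as np

def detect_lamp_regions(side_image_bgr, min_area=50):
    # Look for bright highlights (headlamp-like) and amber blobs (blinker-like)
    # on the visible side image and return their bounding boxes.
    hsv = cv2.cvtColor(side_image_bgr, cv2.COLOR_BGR2HSV)
    v = hsv[:, :, 2]
    bright = np.where(v >= 200, 255, 0).astype(np.uint8)
    amber = cv2.inRange(hsv, np.array([10, 100, 100]), np.array([30, 255, 255]))
    mask = cv2.bitwise_or(bright, amber)
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    return [cv2.boundingRect(c) for c in contours if cv2.contourArea(c) >= min_area]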

Note that, unlike a blinker lamp, a headlamp is never in a state where only one side is lit, so the headlamp alone could be determined to be estimable. However, the headlamp and the front blinker lamp are often arranged close to each other, which can make it difficult to determine the boundary between them. Therefore, in this embodiment, the illumination system is collectively determined to be non-estimable.

Next, the blind spot information estimation unit 25 determines whether or not the other vehicle 2 is in the traveling state based on the determination result from the traveling state determination unit 30 (S212). Here, it is assumed that the determination result that the other vehicle 2 is in the traveling state has been acquired. The blind spot information estimation unit 25 determines that the other vehicle 2 is in a traveling state (Yes in S212), determines that the left side door part that is a blind spot part of the other vehicle 2 can be estimated, and generates estimable information about the door part (S214).

The blind spot information estimation unit 25 inputs the visible image partial data and coordinate information, together with blind spot information including the estimated shape information, the color information, the non-estimable information on the illumination part, and the estimable information on the door part, to the image complementing unit 26 (S220). Subsequently, when another other vehicle is detected, the same processing as described above is repeated. Here, for convenience of explanation, it is assumed that there is only one other vehicle 2.
When acquiring the blind spot information from the blind spot information estimating part 25, the image complementing part 26 determines that another vehicle has been detected from the content of the blind spot information, and reads the target vehicle surrounding image data from the RAM 13 (Yes in S300). Further, the image complementing unit 26 determines that the information on the blind spot part can be estimated based on the content of the blind spot information (Yes in S302).

From the shape information included in the blind spot information, the image complementing unit 26 determines that the left side surface, which is the blind spot part of the other vehicle 2, is bilaterally symmetric with the visible right side surface. As shown in FIG. 12(b), the image complementing unit 26 therefore horizontally inverts the visible side surface image 220, based on the visible side surface image data copied from the original image acquired from the vehicle image detection unit 24, to generate the inverted side surface image 220T (S304). At this point, the direction of the front wheels is reversed depending on their steering angle, so the image complementing unit 26 generates the inverted side surface image 220T+ in which the image portion of the front wheels has been flipped left-right once more (S306). That is, 220T is the horizontally flipped visible side surface image 220, and 220T+ is 220T with its front wheel image portion additionally flipped horizontally.
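A minimal sketch of generating 220T and 220T+ is shown below; the bounding box of the front-wheel region is a hypothetical input, whereas the embodiment derives it from the detected wheel position.

import numpy as np

def build_opposite_side_image(visible_side, front_wheel_box):
    # 220T: mirror the whole visible side image left-right.
    flipped = visible_side[:, ::-1].copy()
    # 220T+: mirror the front-wheel region again so its steering direction is restored.
    x, y, w, h = front_wheel_box      # box given in the flipped image
    flipped[y:y + h, x:x + w] = flipped[y:y + h, x:x + w][:, ::-1]
    return flipped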

As shown in FIG. 12C, the image complementing unit 26 complements the image of the side surface (left side surface) on the opposite side of the other vehicle 2 with the generated inverted side surface image 220T+ (S308). This is the copy complementation process of complement (1) shown in FIG.
Subsequently, the image complementing unit 26 determines from the shape information that the other vehicle 2 is box-shaped. As shown by the dotted line portion in FIG., the image complementing unit 26 then draws the blind spot portion between the visible image portion 200 and the inverted side surface image 220T+, which is the complemented image portion, by interpolation calculation (S310). This is the drawing complementation process of complement (2) shown in FIG. The shaded part in FIG. 12C is the part complemented by this drawing complementation (the part on which a surface has been formed), and its color is unknown.
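The drawing complementation of complement (2) can be pictured as a simple surface interpolation between matching outline points of the two side surfaces; the linear blending below is an illustrative simplification, since the embodiment only requires some interpolation calculation.

import numpy as np

def interpolate_blind_surface(right_profile, left_profile, steps=10):
    # right_profile, left_profile: (N, 3) arrays of matching 3-D outline points
    # on the visible side and on the mirrored (complemented) side.
    right = np.asarray(right_profile, dtype=float)
    left = np.asarray(left_profile, dtype=float)
    ts = np.linspace(0.0, 1.0, steps)[:, None, None]
    # Returns a (steps, N, 3) grid of points spanning the blind surface between them.
    return (1.0 - ts) * right[None] + ts * left[None]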

Next, the image complementing unit 26 determines from the color information contained in the blind spot information that the color of the blind spot part of the other vehicle 2 is white. Accordingly, as shown in FIG. 12(d), the image complementing unit 26 colors the shaded part of FIG. 12(c), whose color was unknown, white (S312). In addition, the image complementing unit 26 determines from the non-estimable information on the illumination part included in the blind spot information that the illumination part in the inverted side surface image 220T+ cannot be estimated (Yes in S314). Accordingly, the image complementing unit 26 colors the image part of the illumination part in the inverted side surface image 220T+, which is the complemented part, a color indicating a preset non-estimable part (here, gray) (S316). This is the color complementation process of complement (3) shown in FIG.

Next, the image complementing unit 26 determines from the estimable information on the door part included in the blind spot information that the door part in the inverted side surface image 220T+ can be estimated (No in S318). Since the door part can be estimated, it is left as it is.
Then, the image complementing unit 26 stores the vehicle surrounding image data after complementing the blind spot part in the RAM 13 in association with the photographing time information of the corresponding photographed image data. Further, the image complementing unit 26 inputs the vehicle surrounding image data after complementing to the image reconstruction unit 28.

The image reconstruction unit 28 reconstructs the complemented vehicle surrounding image viewed from the reference viewpoint into an image viewed from the virtual viewpoint (XVL, YVL, ZVL) illustrated in FIG. 9. Then, the image reconstruction unit 28 inputs the reconstructed vehicle surrounding image data to the image display unit 29 (S120).
Since a technique for reconstructing an image formed by perspective transformation into an image viewed from a virtual viewpoint is well known, description thereof is omitted.
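For completeness, a point-wise version of that well-known reprojection could look like the following; R, t, and the focal length describe a hypothetical virtual pinhole camera, and a practical implementation would rasterise textured surfaces with z-buffering rather than project individual points.

import numpy as np

def render_from_virtual_viewpoint(points_3d, colors, R, t, f, image_size):
    # Project coloured 3-D scene points into a virtual camera and keep the
    # nearest point per pixel.
    h, w = image_size
    img = np.zeros((h, w, 3), dtype=np.uint8)
    depth = np.full((h, w), np.inf)
    cam = (R @ points_3d.T).T + t          # world coordinates -> virtual camera frame
    for (x, y, z), c in zip(cam, colors):
        if z <= 0:
            continue                        # behind the virtual camera
        u = int(f * x / z + w / 2)
        v = int(f * y / z + h / 2)
        if 0 <= u < w and 0 <= v < h and z < depth[v, u]:
            depth[v, u] = z
            img[v, u] = c
    return img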

The image display unit 29 generates an image signal for displaying the vehicle surrounding image on the display 15 based on the input vehicle surrounding image data after reconstruction. Then, the image display unit 29 inputs the generated image signal to the display 15. As a result, the reconstructed vehicle surrounding image is displayed on the display 15.
In the image corresponding to the virtual viewpoint of FIG. 9 in the vehicle surrounding image displayed in this way, as shown in FIG. 13A, the blind spot part of the other vehicle 2 is complemented by the inverted side surface image 220T+, and the illumination parts determined to be non-estimable are colored gray.

Next, the image complement process when the blind spot information estimation unit 25 determines that the other vehicle 2 is in a stopped state (No in S212) and determines that the door unit cannot be estimated will be described.
If the blind spot information estimation unit 25 determines that the other vehicle 2 is in a stopped state, the blind spot information estimation unit 25 generates non-estimable information about the door unit (S216).
The other blind spot information is the same as when the other vehicle 2 is in a traveling state.

The blind spot information estimation unit 25 inputs the visible image partial data and coordinate information, together with blind spot information including the estimated shape information, the color information, the non-estimable information on the illumination part, and the non-estimable information on the door part, to the image complementing unit 26 (S220).
When acquiring the blind spot information from the blind spot information estimating part 25, the image complementing part 26 determines that another vehicle has been detected from the content of the blind spot information, and reads the target vehicle surrounding image data from the RAM 13 (Yes in S300). Further, the image complementing unit 26 determines that the information on the blind spot part can be estimated based on the content of the blind spot information (Yes in S302).

Since the processing up to the complementation of complement (2) is the same as that when the other vehicle 2 is in the traveling state, description thereof is omitted.
The image complementing unit 26 determines that the color of the blind spot part of the other vehicle 2 is white from the color information included in the blind spot information. Accordingly, as shown in FIG. 14(d), the image complementing unit 26 colors the shaded part of FIG. 14(c), whose color was unknown, white (S312). In addition, the image complementing unit 26 determines from the non-estimable information on the illumination part included in the blind spot information that the illumination part in the inverted side surface image 220T+ cannot be estimated (Yes in S314), and colors the image part of the illumination part in the inverted side surface image 220T+, which is the complemented part, gray (S316). The image complementing unit 26 also determines from the non-estimable information on the door part included in the blind spot information that the door part in the inverted side surface image 220T+ cannot be estimated (Yes in S318). That is, since the other vehicle 2 is in a stopped state, the door part on the blind spot side may be in an open state, and thus the shape of the door part cannot be estimated. In this case, the image complementing unit 26 colors the door part in the inverted side surface image 220T+ gray (S320). This is the color complementation process of complement (3) shown in FIG.

Then, the image complementing unit 26 stores the vehicle surrounding image data after complementing the blind spot part in the RAM 13 in association with the photographing time information of the corresponding photographed image data. Further, the image complementing unit 26 inputs the vehicle surrounding image data after complementing to the image reconstruction unit 28.
The image reconstruction unit 28 reconstructs the complemented vehicle surrounding image viewed from the reference viewpoint into an image viewed from the virtual viewpoint (XVL, YVL, ZVL) illustrated in FIG. 9. Then, the image reconstruction unit 28 inputs the reconstructed vehicle surrounding image data to the image display unit 29 (S120).

The image display unit 29 generates an image signal based on the input vehicle surrounding image data after reconstruction. Then, the image display unit 29 inputs the generated image signal to the display 15. As a result, the reconstructed vehicle surrounding image is displayed on the display 15.
In the image corresponding to FIG. 9 in the vehicle surrounding image displayed in this way, as shown in FIG. 13B, the blind spot part of the other vehicle 2 is complemented by the inverted side surface image 220T+, and the illumination part and the door part determined to be non-estimable are colored gray.

Next, the operation when the blind spot information estimation unit 25 determines that the information of the blind spot part cannot be estimated will be described. This is the operation when the visible image portion contains no visible side surface image (an image covering 80% or more of the side surface).
If the blind spot information estimation unit 25 determines that there is no visible side surface image in the visible image portion (No in S202), it determines that the information on the blind spot part cannot be estimated, and generates non-estimable information for the blind spot part (S218). The blind spot information estimation unit 25 then inputs the visible image partial data and coordinate information, together with blind spot information indicating that the blind spot part cannot be estimated, to the image complementing unit 26 (S220).

Based on the blind spot information from the blind spot information estimation unit 25, the image complementing unit 26 determines that the blind spot part of the other vehicle cannot be estimated (No in S302). The image complementing unit 26 therefore searches the vehicle image database 300 stored in the HDD 16 for a three-dimensional CG model of the same vehicle model as the other vehicle, using the visible image partial data as search information (S322). When a three-dimensional CG model of the same vehicle type is found by this search (S324), complementary image data of the blind spot part of the other vehicle is generated using that data (S326). The image complementing unit 26 complements the image of the blind spot part of the other vehicle using the generated complementary image data (S328). In this case as well, the door part in the blind spot may be colored gray when the other vehicle is stopped, and the illumination part of the blind spot part may be colored gray.
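How the vehicle image database 300 might be queried with the visible image portion is sketched below; the ORB feature matching and the match-count threshold are illustrative assumptions, since the embodiment does not prescribe a particular search method.

import cv2

def find_matching_cg_model(visible_part_bgr, vehicle_db, min_good_matches=30):
    # vehicle_db: iterable of (model_id, reference_view_bgr, cg_model) tuples.
    orb = cv2.ORB_create()
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    _, query_desc = orb.detectAndCompute(
        cv2.cvtColor(visible_part_bgr, cv2.COLOR_BGR2GRAY), None)
    if query_desc is None:
        return None
    best, best_score = None, 0
    for model_id, ref_view, cg_model in vehicle_db:
        _, ref_desc = orb.detectAndCompute(
            cv2.cvtColor(ref_view, cv2.COLOR_BGR2GRAY), None)
        if ref_desc is None:
            continue
        score = len(matcher.match(query_desc, ref_desc))
        if score > best_score:
            best, best_score = (model_id, cg_model), score
    return best if best_score >= min_good_matches else None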

The image complementing unit 26 stores the complemented vehicle surrounding image data in the RAM 13 and inputs the supplemented vehicle surrounding image data to the image reconstruction unit 28.
The image reconstruction unit 28 reconstructs the complemented vehicle surrounding image viewed from the reference viewpoint into an image viewed from the virtual viewpoint. Then, the image reconstruction unit 28 inputs the vehicle surrounding image data after the reconstruction to the image display unit 29 (S120).

The image display unit 29 generates an image signal based on the input vehicle surrounding image data after reconstruction. Then, the image display unit 29 inputs the generated image signal to the display 15. As a result, the reconstructed vehicle surrounding image is displayed on the display 15.
Here, in the above description, the image information input unit 20 constitutes image input means. The coordinate information detection unit 21 constitutes coordinate information detection means. The projection image generation unit 22 constitutes projection image generation means. The vehicle surrounding image generation unit 23 constitutes vehicle surrounding image generation means. The vehicle image detection unit 24 constitutes vehicle image detection means. The blind spot information estimation unit 25 constitutes blind spot information estimation means. The image complementing unit 26 constitutes image complementing means. The virtual viewpoint setting unit 27 constitutes virtual viewpoint information setting means. The image reconstruction unit 28 constitutes image reconstruction means. The traveling state determination unit 30 constitutes traveling state determination means.

(Effect of this embodiment)
This embodiment has the following effects.
(1) The image information input unit 20 inputs a plurality of images obtained by photographing a region around the vehicle with a plurality of cameras mounted on the vehicle. The coordinate information detection unit 21 detects the three-dimensional coordinate information of each object included in the plurality of images input by the image input means. The projection image generation unit 22 generates a plurality of projection images by projecting an image corresponding to each object included in the plurality of images onto the projection plane of each object configured based on the three-dimensional coordinate information of each object detected by the coordinate information detection unit. The vehicle surrounding image generation unit 23 combines the plurality of projection images generated by the projection image generation unit 22 to generate a vehicle surrounding image having a three-dimensional structure. The vehicle image detection unit 24 detects a visible image portion of another vehicle configured based on images captured by the plurality of cameras included in the vehicle surrounding image. If the blind spot information estimation unit 25 determines, based on the detection result of the vehicle image detection unit, that the visible image portion of another vehicle is included in the vehicle surrounding image, it estimates, based on the image information of the visible image portion, information on the blind spot part of the other vehicle that is outside the shooting range of the plurality of cameras. Based on the information estimated by the blind spot information estimation unit 25, the image complementing unit 26 uses the image information of the visible image portion to complement the image of the blind spot part of the other vehicle included in the vehicle surrounding image. The virtual viewpoint setting unit 27 sets virtual viewpoint information. The image reconstruction unit 28 reconstructs the vehicle surrounding image complemented by the image complementing unit 26 into an image viewed from the virtual viewpoint indicated by the virtual viewpoint information set by the virtual viewpoint setting unit 27.

  The information of the blind spot part is estimated from the visible image part of the other vehicle, and the image of the blind spot part is complemented using the image information of the visible image part based on the estimated information. Thereby, even when the vehicle surrounding image is reconstructed into an image viewed from the virtual viewpoint, at least a part of the image of the blind spot part of the other vehicle can be complemented and displayed. Therefore, it is possible to display an easy-to-view vehicle surrounding image as compared with a vehicle surrounding image in which no image of the blind spot part of another vehicle is displayed. Further, the reliability of the vehicle surrounding image can be improved.

(2) The blind spot information estimation part 25 estimates the shape of the blind spot part of another vehicle based on the image information of a visible image part. Based on the shape of the blind spot part estimated by the blind spot information estimating part 25, the image complementing part 26 generates a complementary image from the visible image part, and complements the image of the blind spot part of the other vehicle using the generated supplemental image.
For example, the shape of the vehicle is often symmetrical with respect to the center line in the vehicle front-rear direction and a plurality of parts having the same shape exist in one vehicle. Therefore, depending on the contents of the visible image part, for example, it is possible to estimate the shape of the blind spot part that is paired with the visible part. Since the image of the blind spot part is complemented using the complement image generated from the visible image portion based on the shape estimated in this manner, the image of the blind spot part can be appropriately complemented.

(3) The blind spot information estimation unit 25 estimates that the image shape of the other vehicle is bilaterally symmetric when the visible image portion includes a visible side surface image formed from an image obtained by photographing the side surface of the other vehicle. Based on the estimation result of the blind spot information estimation unit 25, the image complementing unit 26 generates a complementary image obtained by horizontally inverting the visible side surface image, and uses the generated complementary image on the opposite side that is a part of the blind spot part of the other vehicle. Complement the side image.
When the image of the side surface of the other vehicle is included in the visible image portion, the shape of the other vehicle is assumed to be bilaterally symmetric from the symmetry of vehicle shapes, and the image of the side surface on the opposite side is complemented with the horizontally inverted image. This makes it possible to complement the image of the blind spot part appropriately.

(4) When the image complementing unit 26 generates a complemented image in which the visible side surface image is reversed left and right, the image of the front wheel portion in the complemented image that is horizontally reversed is corrected to an image in which the steering angle of the front wheel portion is reversed.
When the complementary side image is generated by inverting the visible side image, an image in which the front wheels are reversed in direction is generated depending on the steering angle of the front wheels. With the configuration (4) above, the image portion of the front wheel can be corrected to an image with the steering angle reversed, so that a more accurate complementary image can be generated.

(5) When the image complementing unit 26 has complemented part of the image of the blind spot part, the shape of the blind spot image portion between the complemented image part and the visible image part is complemented by drawing based on interpolation calculation, using the shape of the blind spot part estimated by the blind spot information estimating part 25.
Most vehicles have a symmetrical, box-like shape, except for specially shaped vehicles such as open cars. Therefore, for example, if images of both side surfaces of the other vehicle, including the complemented image, can be formed, the region between them can be complemented by interpolation drawing based on interpolation calculation. Since such drawing complementation is performed, a more reliable complementary image can be generated.

(6) The blind spot information estimation unit 25 estimates the color of the complemented image part complemented by the drawing complement based on the color information of the visible image part. The image complementing unit 26 colors at least a part of the complemented image portion in the color estimated by the blind spot information estimating unit 25.
Since the body color of a vehicle is often a single color, for example, if the colors used for the parts on the side surface of the vehicle are known from the visible side surface image, it can be estimated that blind spot parts such as the roof have the same color. By complementing the color of the blind spot part in this way, a more reliable complementary image can be generated.

(7) If the blind spot information estimation unit 25 determines that at least part of the information on the blind spot part cannot be estimated from the visible image portion, it outputs non-estimable information indicating that estimation is impossible for the corresponding part to the image complementing unit 26. When the image complementing unit 26 determines, based on the non-estimable information from the blind spot information estimation unit 25, that there is a part that cannot be estimated among the blind spot parts of the other vehicle, it draws the part that cannot be estimated with preset drawing contents.
For example, when there are few visible image parts of other vehicles and there is insufficient information to estimate the shape, color, etc., it is impossible to estimate the information on the blind spot part. Further, it is impossible to estimate information on a blind spot portion or the like whose shape or state may change depending on the traveling state of another vehicle. In such a case, the image complementing unit 26 is notified that it cannot be estimated. In addition, the image complementing unit 26 renders the blind spot portion determined to be unguessable with preset contents. For example, by coloring a part that cannot be estimated with a color that can clearly indicate that the part cannot be estimated, the part that cannot be estimated can be clearly indicated to the user. As a result, it is possible to prevent display of unclear information, and it is possible to prevent the user from making an erroneous determination by looking at the complementary image.

(8) The traveling state determination unit 30 determines the traveling state of the other vehicle. If the blind spot information estimation unit 25 determines that the other vehicle is stopped based on the determination result of the traveling state determination unit 30, the door part included in the blind spot part of the other vehicle is determined to be a part that cannot be estimated.
If the other vehicle is traveling, it can be estimated that all the doors of the other vehicle are closed. On the other hand, when the other vehicle is stopped, the door serving as a blind spot is not necessarily closed. Considering such a situation, when the other vehicle is in a stopped state, the door portion of the blind spot portion is determined as a portion that cannot be estimated. As a result, it is possible to prevent display of unclear information, and it is possible to prevent the user from making an erroneous determination by looking at the complementary image.

(9) The blind spot information estimating unit 25 determines that the part of the illumination system including at least the blinker lamp part among the blind spot parts of the other vehicle is an unpredictable part.
For example, a blinker lamp can be in a state where one of the left and right lamps is lit and the other is not lit, and there may also be a state where both the left and right lamps are lit. Considering such a situation, at least the blinker lamp portion of the blind spot portion is determined to be a portion that cannot be estimated. As a result, it is possible to prevent display of unclear information, and it is possible to prevent the user from making an erroneous determination by looking at the complemented image.

(10) The image complementing unit 26 searches the vehicle image database 300 for a three-dimensional CG model corresponding to the other vehicle using the visible image portion of the other vehicle included in the vehicle surrounding image as search information. The image complementing unit 26 generates a complementary image of the blind spot part of the other vehicle based on the three-dimensional CG model searched out by the search, and complements the image of the blind spot part of the other vehicle using the generated complementary image.
When the three-dimensional CG models of a plurality of vehicle types stored in the vehicle image database 300 include a model of a vehicle type having an image portion that matches the visible image portion, a complementary image generated using that image data is used to complement the image of the blind spot part of the other vehicle. This makes it possible to appropriately complement the image of the blind spot part when the blind spot part cannot be estimated from the visible image portion, or when most of the blind spot part cannot be estimated.

(11) If the image complementing unit 26 determines, based on the estimation result of the blind spot information estimating unit 25, that the information on the blind spot part could not be estimated, it searches the vehicle image database 300 for a three-dimensional CG model corresponding to the other vehicle using the visible image portion of the other vehicle included in the vehicle surrounding image as search information, generates a complementary image of the blind spot part of the other vehicle based on the three-dimensional CG model retrieved by the search, and complements the image of the blind spot part of the other vehicle using the generated complementary image.
When the information of the blind spot part cannot be estimated from the visible image portion, the image of the blind spot part of the other vehicle is complemented using the image data of the three-dimensional CG model retrieved from the vehicle image database 300 based on the visible image portion. This makes it possible to display an easy-to-view vehicle surrounding image as compared with a vehicle surrounding image in which no image of the blind spot part of the other vehicle is displayed. Further, the reliability of the vehicle surrounding image can be improved.

(12) The image information input unit 20 inputs a plurality of images obtained by photographing a region around the vehicle with a plurality of cameras mounted on the vehicle. The coordinate information detection unit 21 detects the three-dimensional coordinate information of each object included in the plurality of images input by the image information input unit 20. The projection image generation unit 22 generates a plurality of projection images by projecting an image corresponding to each object included in the plurality of images onto the projection plane of each object configured based on the three-dimensional coordinate information of each object detected by the coordinate information detection unit 21. The vehicle surrounding image generation unit 23 combines the plurality of projection images generated by the projection image generation unit 22 to generate a vehicle surrounding image having a three-dimensional structure. The vehicle image detection unit 24 detects a visible image portion of another vehicle configured based on images captured by the plurality of cameras included in the vehicle surrounding image. The image complementing unit 26 searches the vehicle image database 300 for a three-dimensional CG (Computer Graphics) model corresponding to the other vehicle using the visible image portion of the other vehicle detected by the vehicle image detecting unit 24 as search information, generates a complementary image of the blind spot part of the other vehicle based on the three-dimensional CG model retrieved by the search, and complements the image of the blind spot part of the other vehicle using the generated complementary image. The virtual viewpoint setting unit 27 sets virtual viewpoint information. The image reconstruction unit 28 reconstructs the vehicle surrounding image complemented by the image complementing unit 26 into an image viewed from the virtual viewpoint indicated by the virtual viewpoint information set by the virtual viewpoint setting unit 27.

When the three-dimensional CG models of a plurality of vehicle types stored in the vehicle image database 300 include a model of a vehicle type having an image portion that matches the visible image portion, a complementary image generated using that image data is used to complement the image of the blind spot part of the other vehicle. This makes it possible to display an easy-to-view vehicle surrounding image as compared with a vehicle surrounding image in which no image of the blind spot part of the other vehicle is displayed. Further, the reliability of the vehicle surrounding image can be improved.

(Modification)
(1) In the above embodiment, the imaging devices (imaging devices 11A to 11D) that capture the area around the host vehicle 1 are arranged one by one on the front, rear, left, and right sides of the host vehicle 1, respectively, but the configuration is not limited thereto.

For example, as shown in FIG. 15, a configuration may be adopted in which two imaging devices are arranged on each of the front, rear, left, and right sides of the host vehicle 1. In FIG. 15, each black dot corresponds to an imaging device of the above embodiment, and the region between the two lines extending from each black dot is its shooting range. Note that FIG. 15 shows only part of each imaging range; each range is drawn two-dimensionally with two lines but actually has a conical shape.
In general, the wider the angle of the camera lens, the more the image is distorted. As in the example of FIG. 15, increasing the number of cameras allows the angle of view to be narrowed, so a vehicle surrounding image can be generated using captured images with less distortion. In addition, it becomes easier to overlap the imaging range of each camera with those of the other imaging devices. As a result, the region around the vehicle can be captured without gaps, and a more accurate vehicle surrounding image can be generated.

(2) In the above-described embodiment, the configuration in which only the visible image portion included in the vehicle surrounding image generated this time is used to complement the image of the blind spot part of the other vehicle has been described as an example, but the configuration is not limited thereto.
For example, as shown in FIG. 16, assume that another vehicle 2 traveling in a lane adjacent to the host vehicle 1 has overtaken the host vehicle 1. In this case, when the other vehicle 2 is at position (1) in FIG. 16, most of the front side of the other vehicle 2 can be photographed within the photographing range D of the host vehicle 1. When the other vehicle 2 is at position (2) in FIG. 16, the entire right side surface of the other vehicle 2 can be photographed within the photographing range B of the host vehicle 1. And when the other vehicle 2 is at position (3) in FIG. 16, most of the rear side of the other vehicle 2 can be photographed within the photographing range A of the host vehicle 1. The same applies when the host vehicle 1 overtakes the other vehicle 2 from behind and, with only the direction of the photographing target changed, when the other vehicle 2 is traveling in the oncoming lane. Thus, when the positional relationship between the host vehicle 1 and the other vehicle 2 changes, the content of the images obtained for the same other vehicle changes. Based on this, a configuration may also be adopted that complements the image of the blind spot part of the other vehicle included in the vehicle surrounding image by using together the captured image data of the maximum range already obtained and the newly acquired captured image data. This makes it possible to complement the image of the blind spot part of the other vehicle more accurately.

(3) In the above embodiment, when the visible side surface image is not included in the visible image portion of the other vehicle, it is determined that the blind spot information cannot be estimated, but the configuration is not limited thereto. Even if there is no visible side surface image, or only a part of one is included, the information on the blind spot part may be estimated when estimation is possible, for example, when large portions of the front side and the rear side of the other vehicle are visible.
(4) In the above embodiment, the vehicle image database 300 is used to complement the image of the blind spot part when it is determined that the information on the blind spot part cannot be estimated from the visible image portion of the other vehicle, but the configuration is not limited thereto. For example, the image of the blind spot part may be complemented using the vehicle image database from the beginning, without estimating the information of the blind spot part.

The above embodiments are preferable specific examples of the present invention, and various technically preferable limitations are given. However, the scope of the present invention is not limited to these forms unless there is a statement in the above description that particularly limits the present invention. In the drawings used in the above description, for convenience of illustration, the vertical and horizontal scales of members or parts are schematic and differ from the actual ones.
In addition, the present invention is not limited to the above-described embodiments, and modifications, improvements, equivalents, and the like within the scope that can achieve the object of the present invention are included in the present invention.

DESCRIPTION OF SYMBOLS 100 Vehicle image generation apparatus 120 Vehicle image generation function structure part 1 Own vehicle 2 Other vehicle 10 CPU
11 Vehicle surrounding imaging units 11A to 11D Imaging device, CCD camera, stereo camera 12 ROM
13 RAM
14 Operation unit 15 Display 16 HDD
17 vehicle speed sensor 20 image information input unit 21 coordinate information detection unit 22 projection image generation unit 23 vehicle surrounding image generation unit 24 vehicle image detection unit 25 blind spot information estimation unit 26 image complementation unit 27 virtual viewpoint setting unit 28 image reconstruction unit 29 image display unit 30 traveling state determination unit 110A CCD camera 111A Laser range finder

Claims (13)

  1. Image input means for inputting a plurality of images obtained by photographing a region around the vehicle with a plurality of cameras mounted on the vehicle;
    Coordinate information detection means for detecting three-dimensional coordinate information of each object included in the plurality of images input by the image input means;
Projection image generating means for generating a plurality of projection images by projecting an image corresponding to each object included in the plurality of images onto a projection plane of each object configured based on the three-dimensional coordinate information of each object detected by the coordinate information detection means;
    Vehicle surrounding image generation means for combining the plurality of projection images generated by the projection image generation means to generate a vehicle surrounding image having a three-dimensional structure;
    Vehicle image detection means for detecting a visible image portion of another vehicle configured based on images taken by the plurality of cameras included in the vehicle surrounding image;
Blind spot information estimating means for estimating, when it is determined based on the detection result of the vehicle image detection means that the visible image portion of the other vehicle is included in the vehicle surrounding image, information on a blind spot part of the other vehicle that is outside the imaging range of the plurality of cameras, based on the image information of the visible image portion;
    Based on the information estimated by the blind spot information estimating means, using the image information of the visible image portion, an image complementing means for complementing the image of the blind spot part of the other vehicle included in the vehicle surrounding image;
    Virtual viewpoint information setting means for setting virtual viewpoint information;
    Image reconstructing means for reconstructing the vehicle surrounding image complemented by the image complementing means into an image viewed from a virtual viewpoint indicated by the virtual viewpoint information set by the virtual viewpoint information setting means, A vehicle image generation device.
  2. The blind spot information estimation means estimates the shape of the blind spot part of the other vehicle based on the image information of the visible image part,
    The image complementing unit generates a complementary image from the visible image portion based on the shape of the blind spot part estimated by the blind spot information estimating unit, and complements the image of the blind spot part of the other vehicle using the generated complementary image. The vehicular image generation apparatus according to claim 1.
  3. The blind spot information estimation means estimates that the shape of the other vehicle is bilaterally symmetrical when the visible image portion includes a visible side image composed of an image of the side surface of the other vehicle.
The image complementing means generates a complemented image obtained by horizontally inverting the visible side surface image based on the estimation result of the blind spot information estimating means, and complements the image of the side surface on the opposite side, which is a part of the blind spot part of the other vehicle, using the generated complemented image. The vehicular image generation apparatus according to claim 2.
  4.   The image complementing means corrects the image of the front wheel portion in the complemented image obtained by horizontally flipping the visible side image to an image obtained by reversing the steering angle of the front wheel portion when generating the complemented image obtained by horizontally flipping the visible side image. The vehicle image generation device according to claim 3.
  5. The vehicular image generation apparatus according to claim 2, wherein, when the image complementing means complements a part of the image of the blind spot part, the shape of the blind spot image portion between the complemented image part and the visible image part is complemented by drawing based on interpolation calculation, based on the shape of the blind spot part estimated by the blind spot information estimation means.
  6. The blind spot information estimating means estimates the color of the complemented image part complemented by the drawing complement based on the color information of the visible image part,
    6. The vehicular image generation apparatus according to claim 5, wherein the image complementing unit colors at least a part of the complemented image portion into a color estimated by the blind spot information estimating unit.
  7. If the blind spot information estimation means determines that at least part of the information on the blind spot part cannot be estimated from the visible image portion, the blind spot information estimation means outputs, to the image complementing means, non-estimable information indicating that estimation is impossible for the corresponding part,
    and when the image complementing means determines, based on the non-estimable information from the blind spot information estimation means, that there is a part that cannot be estimated among the blind spot parts of the other vehicle, the image complementing means draws the part that cannot be estimated with preset drawing contents, the vehicular image generation apparatus according to any one of claims 1 to 6.
  8. A traveling state determining means for determining a traveling state of the other vehicle;
    If the blind spot information estimation unit determines that the other vehicle is stopped based on the determination result of the traveling state determination unit, the blind spot information estimation unit determines that the door part included in the blind spot part of the other vehicle is an unpredictable part. The vehicular image generation apparatus according to claim 7.
  9. The vehicular image generation apparatus according to claim 7, wherein the blind spot information estimation means determines that a part of the illumination system including at least the blinker lamp part among the blind spot parts of the other vehicle is a part that cannot be estimated.
  10. A vehicle image database storing a three-dimensional CG model of a plurality of types of vehicles;
The image complementing means searches the vehicle image database for a three-dimensional CG model corresponding to the other vehicle using the visible image portion of the other vehicle included in the vehicle surrounding image as search information, generates a complementary image of the blind spot part of the other vehicle based on the three-dimensional CG model retrieved by the search, and complements the image of the blind spot part of the other vehicle using the generated complementary image, the vehicular image generation apparatus according to any one of claims 1 to 9.
  11. The vehicular image generation apparatus according to claim 10, wherein, when the image complementing means determines that the information on the blind spot part could not be estimated based on the estimation result of the blind spot information estimating means, the image complementing means searches the vehicle image database for a three-dimensional CG model corresponding to the other vehicle using the visible image portion of the other vehicle included in the vehicle surrounding image as search information, generates a complementary image of the blind spot part of the other vehicle based on the three-dimensional CG model retrieved by the search, and complements the image of the blind spot part of the other vehicle using the generated complementary image.
  12. Image input means for inputting a plurality of images obtained by photographing a region around the vehicle with a plurality of cameras mounted on the vehicle;
    Coordinate information detection means for detecting three-dimensional coordinate information of each object included in the plurality of images input by the image input means;
Projection image generating means for generating a plurality of projection images by projecting an image corresponding to each object included in the plurality of images onto a projection plane of each object configured based on the three-dimensional coordinate information of each object detected by the coordinate information detection means;
    Vehicle surrounding image generation means for combining the plurality of projection images generated by the projection image generation means to generate a vehicle surrounding image having a three-dimensional structure;
    Vehicle image detection means for detecting a visible image portion of another vehicle included in the vehicle surrounding image, the visible image portion being configured based on the images taken by the plurality of cameras;
    A vehicle image database storing a three-dimensional CG model of a plurality of types of vehicles;
    Image complementing means for searching the vehicle image database for a three-dimensional CG (Computer Graphics) model corresponding to the other vehicle, using the visible image portion of the other vehicle detected by the vehicle image detection means as search information, generating a complementary image of the blind spot part of the other vehicle based on the three-dimensional CG model found by the search, and complementing the image of the blind spot part of the other vehicle using the generated complementary image;
    Virtual viewpoint information setting means for setting virtual viewpoint information;
    and image reconstructing means for reconstructing the vehicle surrounding image complemented by the image complementing means into an image viewed from the virtual viewpoint indicated by the virtual viewpoint information set by the virtual viewpoint information setting means.
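
Claim 12 chains the listed means into one pipeline: image capture, 3D coordinate detection, per-object projection, combination into a surrounding image, detection of the other vehicle's visible portion, CG-model-based complementation, and reconstruction from a virtual viewpoint. The skeleton below shows one way those stages could be wired together; every stage body is a placeholder, and the class, method, and attribute names are assumptions for illustration, not the device's actual implementation.

# Structural sketch of the processing chain in claim 12 (all names are assumed).
class VehicleImageGenerator:
    def __init__(self, cameras, vehicle_image_database):
        self.cameras = cameras
        self.db = vehicle_image_database

    def generate(self, virtual_viewpoint):
        images = [cam.capture() for cam in self.cameras]                # image input means
        coords = self.detect_3d_coordinates(images)                     # coordinate information detection means
        projections = self.project_onto_object_planes(images, coords)   # projection image generating means
        surrounding = self.combine(projections)                         # vehicle surrounding image generation means
        visible = self.detect_other_vehicle(surrounding)                # vehicle image detection means
        if visible is not None:
            cg_model = self.db.search(visible)                          # vehicle image database lookup
            surrounding = self.complement_blind_spot(surrounding, visible, cg_model)  # image complementing means
        return self.render_from(surrounding, virtual_viewpoint)         # image reconstructing means

    # The actual 3D coordinate measurement, plane projection, vehicle detection,
    # complementing and rendering code would go in these stages.
    def detect_3d_coordinates(self, images): ...
    def project_onto_object_planes(self, images, coords): ...
    def combine(self, projections): ...
    def detect_other_vehicle(self, surrounding): ...
    def complement_blind_spot(self, surrounding, visible, cg_model): ...
    def render_from(self, surrounding, viewpoint): ...
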
  13. A vehicle image generation method comprising: a vehicle surrounding image generation step of generating a vehicle surrounding image having a three-dimensional structure based on a plurality of images obtained by photographing the surroundings of a vehicle with a plurality of cameras mounted on the vehicle;
    A vehicle image detection step of detecting a visible image portion of another vehicle included in the vehicle surrounding image, the visible image portion being configured based on the images taken by the plurality of cameras;
    An image information estimation step of estimating, when it is determined based on the detection result of the vehicle image detection step that the visible image portion of the other vehicle is included in the vehicle surrounding image, information on a blind spot part of the other vehicle that lies outside the shooting range of the plurality of cameras, based on the image information of the visible image portion;
    An image complementing step of complementing the image of the blind spot part of the other vehicle included in the vehicle surrounding image, using the image information of the visible image portion, based on the image information estimated in the image information estimation step;
    A virtual viewpoint information setting step for setting virtual viewpoint information;
    and an image reconstructing step of reconstructing the vehicle surrounding image complemented in the image complementing step into an image viewed from the virtual viewpoint indicated by the virtual viewpoint information set in the virtual viewpoint information setting step.
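
The final step of claim 13 renders the complemented, three-dimensionally structured surrounding image from the chosen virtual viewpoint. A minimal sketch of that reconstruction is given below, treating the complemented scene as a set of colored 3D points and projecting them through a virtual pinhole camera with a z-buffer; the point-cloud representation and all parameter names are simplifying assumptions, since the claim itself does not fix a scene representation.

import numpy as np

def render_virtual_view(points, colors, K, R, t, width, height):
    """points: (N, 3) world coordinates, colors: (N, 3) RGB values,
    K: (3, 3) camera intrinsics, R/t: world-to-camera rotation and translation."""
    cam = (R @ points.T + t.reshape(3, 1)).T            # transform points into the camera frame
    in_front = cam[:, 2] > 1e-6                         # keep only points in front of the camera
    cam, colors = cam[in_front], colors[in_front]
    uvw = (K @ cam.T).T                                 # pinhole projection (homogeneous)
    u = (uvw[:, 0] / uvw[:, 2]).astype(int)
    v = (uvw[:, 1] / uvw[:, 2]).astype(int)
    image = np.zeros((height, width, 3), dtype=np.uint8)
    zbuf = np.full((height, width), np.inf)
    inside = (u >= 0) & (u < width) & (v >= 0) & (v < height)
    for ui, vi, zi, ci in zip(u[inside], v[inside], cam[inside, 2], colors[inside]):
        if zi < zbuf[vi, ui]:                           # nearest point wins (z-buffer)
            zbuf[vi, ui] = zi
            image[vi, ui] = ci
    return image

For example, a bird's-eye view would be obtained by placing the virtual camera above the own vehicle and pointing it downward, i.e. by choosing R and t accordingly.
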
JP2011158970A 2011-07-20 2011-07-20 Vehicle image generation device and vehicle image generation method Active JP5799631B2 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
JP2011158970A JP5799631B2 (en) 2011-07-20 2011-07-20 Vehicle image generation device and vehicle image generation method

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
JP2011158970A JP5799631B2 (en) 2011-07-20 2011-07-20 Vehicle image generation device and vehicle image generation method

Publications (2)

Publication Number Publication Date
JP2013025528A true JP2013025528A (en) 2013-02-04
JP5799631B2 JP5799631B2 (en) 2015-10-28

Family

ID=47783815

Family Applications (1)

Application Number Title Priority Date Filing Date
JP2011158970A Active JP5799631B2 (en) 2011-07-20 2011-07-20 Vehicle image generation device and vehicle image generation method

Country Status (1)

Country Link
JP (1) JP5799631B2 (en)

Patent Citations (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JPH0744788A (en) * 1993-06-29 1995-02-14 Hitachi Ltd Method and device for monitoring video
JP2000210474A (en) * 1999-01-22 2000-08-02 Square Co Ltd Game device, data editing method and recording medium
JP2002314990A (en) * 2001-04-12 2002-10-25 Auto Network Gijutsu Kenkyusho:Kk System for visually confirming periphery of vehicle
JP2003189293A (en) * 2001-09-07 2003-07-04 Matsushita Electric Ind Co Ltd Device for displaying state of surroundings of vehicle and image-providing system
JP2005268847A (en) * 2004-03-16 2005-09-29 Olympus Corp Image generating apparatus, image generating method, and image generating program
JP2007318460A (en) * 2006-05-26 2007-12-06 Alpine Electronics Inc Vehicle upper viewpoint image displaying apparatus
JP2008092459A (en) * 2006-10-04 2008-04-17 Toyota Motor Corp Periphery monitoring apparatus
JP2008151507A (en) * 2006-11-21 2008-07-03 Aisin Aw Co Ltd Apparatus and method for merge guidance
US20100049393A1 (en) * 2008-08-21 2010-02-25 International Business Machines Corporation Automated dynamic vehicle blind spot determination
JP2011004201A (en) * 2009-06-19 2011-01-06 Konica Minolta Opto Inc Circumference display

Cited By (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR20150019857A (en) * 2013-08-16 2015-02-25 삼성전기주식회사 System for providing around image and method for providing around image
KR101994721B1 (en) * 2013-08-16 2019-07-01 삼성전기주식회사 System for providing around image and method for providing around image
US10208459B2 (en) 2014-12-12 2019-02-19 Hitachi, Ltd. Volume estimation device and work machine using same
JP2016175586A (en) * 2015-03-20 2016-10-06 株式会社デンソーアイティーラボラトリ Vehicle periphery monitoring device, vehicle periphery monitoring method, and program
WO2018016316A1 (en) * 2016-07-19 2018-01-25 ソニー株式会社 Image processing device, image processing method, program, and telepresence system

Also Published As

Publication number Publication date
JP5799631B2 (en) 2015-10-28

Similar Documents

Publication Publication Date Title
JP5891280B2 (en) Method and device for optically scanning and measuring the environment
CN103778649B (en) Imaging surface modeling for camera modeling and virtual view synthesis
US9013286B2 (en) Driver assistance system for displaying surroundings of a vehicle
EP2763407B1 (en) Vehicle surroundings monitoring device
EP2739050B1 (en) Vehicle surroundings monitoring system
US10445928B2 (en) Method and system for generating multidimensional maps of a scene using a plurality of sensors of various types
US9672432B2 (en) Image generation device
US9479740B2 (en) Image generating apparatus
DE60207655T2 (en) Device for displaying the environment of a vehicle and system for providing images
EP2531980B1 (en) Depth camera compatibility
JP5057936B2 (en) Bird's-eye image generation apparatus and method
EP1367408B1 (en) Vehicle surroundings monitoring device, and image production method
JP3843119B2 (en) Moving body motion calculation method and apparatus, and navigation system
CN106462996B (en) Method and device for displaying vehicle surrounding environment without distortion
DE102013220669A1 (en) Dynamic rearview indicator features
JP3861781B2 (en) Forward vehicle tracking system and forward vehicle tracking method
JP4899424B2 (en) Object detection device
JP4969269B2 (en) Image processing device
JP3624353B2 (en) Three-dimensional shape measuring method and apparatus
JP2018534699A (en) System and method for correcting erroneous depth information
JP5208203B2 (en) Blind spot display device
US8305431B2 (en) Device intended to support the driving of a motor vehicle comprising a system capable of capturing stereoscopic images
US8817079B2 (en) Image processing apparatus and computer-readable recording medium
JP5322789B2 (en) Model generation apparatus, model generation method, model generation program, point cloud image generation method, and point cloud image generation program
US7957559B2 (en) Apparatus and system for recognizing environment surrounding vehicle

Legal Events

Date Code Title Description
A621 Written request for application examination

Free format text: JAPANESE INTERMEDIATE CODE: A621

Effective date: 20140714

A977 Report on retrieval

Free format text: JAPANESE INTERMEDIATE CODE: A971007

Effective date: 20150716

TRDD Decision of grant or rejection written
A01 Written decision to grant a patent or to grant a registration (utility model)

Free format text: JAPANESE INTERMEDIATE CODE: A01

Effective date: 20150728

A61 First payment of annual fees (during grant procedure)

Free format text: JAPANESE INTERMEDIATE CODE: A61

Effective date: 20150810

R151 Written notification of patent or utility model registration

Ref document number: 5799631

Country of ref document: JP

Free format text: JAPANESE INTERMEDIATE CODE: R151