WO2011078201A1 - Image processing device, image processing system, and image processing method - Google Patents
- Publication number
- WO2011078201A1 (PCT/JP2010/073078, filed as JP2010073078W)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- vehicle
- image
- images
- image processing
- mode
Classifications
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N7/00—Television systems
- H04N7/18—Closed-circuit television [CCTV] systems, i.e. systems in which the video signal is not broadcast
- H04N7/181—Closed-circuit television [CCTV] systems, i.e. systems in which the video signal is not broadcast for receiving images from a plurality of remote sources
-
- G—PHYSICS
- G01—MEASURING; TESTING
- G01C—MEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
- G01C21/00—Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00
- G01C21/26—Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00 specially adapted for navigation in a road network
- G01C21/34—Route searching; Route guidance
- G01C21/36—Input/output arrangements for on-board computers
- G01C21/3626—Details of the output of route guidance instructions
- G01C21/3647—Guidance involving output of stored or live camera images or video streams
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T3/00—Geometric image transformations in the plane of the image
- G06T3/04—Context-preserving transformations, e.g. by using an importance map
-
- G—PHYSICS
- G08—SIGNALLING
- G08G—TRAFFIC CONTROL SYSTEMS
- G08G1/00—Traffic control systems for road vehicles
- G08G1/16—Anti-collision systems
- G08G1/165—Anti-collision systems for passive traffic, e.g. including static obstacles, trees
-
- G—PHYSICS
- G08—SIGNALLING
- G08G—TRAFFIC CONTROL SYSTEMS
- G08G1/00—Traffic control systems for road vehicles
- G08G1/16—Anti-collision systems
- G08G1/166—Anti-collision systems for active traffic, e.g. moving vehicles, pedestrians, bikes
-
- B—PERFORMING OPERATIONS; TRANSPORTING
- B60—VEHICLES IN GENERAL
- B60R—VEHICLES, VEHICLE FITTINGS, OR VEHICLE PARTS, NOT OTHERWISE PROVIDED FOR
- B60R2300/00—Details of viewing arrangements using cameras and displays, specially adapted for use in a vehicle
- B60R2300/10—Details of viewing arrangements using cameras and displays, specially adapted for use in a vehicle characterised by the type of camera system used
- B60R2300/105—Details of viewing arrangements using cameras and displays, specially adapted for use in a vehicle characterised by the type of camera system used using multiple cameras
Definitions
- the present invention relates to a technique for displaying an image on a display device mounted on a vehicle.
- a device that enables monitoring of the periphery of a vehicle by acquiring images around the vehicle with a plurality of cameras mounted on the vehicle and displaying them on a display device automatically or by user operation.
- an apparatus that displays on the screen an image looking down directly above the vehicle as a virtual viewpoint, and provides a user with means for confirming safety around the entire vehicle.
- a first screen that shows, together with an overall view of the vehicle, the position of a viewpoint set at a predetermined height from which a plurality of cameras mounted on the vehicle shoot, and a second screen that displays each camera image corresponding to the movement of the viewpoint position.
- Japanese Patent Application Publication No. 2008-219559 discloses a technology for confirming safety around a vehicle using these two screens (Patent Document 1).
- because the screen showing the viewpoint position of the image captured by the camera together with the overall view of the vehicle and the screen displaying each camera image corresponding to the movement of the viewpoint position are separate, the user must correlate the information shown on the two screens by himself, which makes it difficult for the user to grasp the positional relationship between the vehicle and an obstacle.
- the present invention has been made in view of the above problems, and an object thereof is to provide a technique that allows a user to intuitively grasp the positional relationship between a vehicle and an obstacle around the vehicle.
- An image processing apparatus mounted on a vehicle, comprising: image acquisition means for acquiring a plurality of camera images photographed by a plurality of cameras provided in the vehicle; composite image generation means for generating, based on the plurality of camera images, a plurality of composite images including a plurality of first overhead images in which the vehicle and its periphery are viewed from different viewpoints; and display image providing means for outputting, to a display device mounted on the vehicle, information corresponding to a display image in which the plurality of composite images are continuously reproduced.
- the plurality of composite images include a second bird's-eye view image in which the vehicle and its surroundings are viewed from directly above the vehicle, and the display image is one in which the second overhead image is reproduced at least once while the plurality of first overhead images are being reproduced.
- Each of the plurality of first bird's-eye images shows the vehicle at the center thereof.
- the composite image generation means generates the composite images when the image processing apparatus is activated.
- the composite image generating means removes a portion having a lightness lower than a reference value in the plurality of camera images when generating the plurality of composite images.
- An image processing system comprising: a plurality of cameras provided in a vehicle; and an image processing apparatus mounted on the vehicle, the image processing apparatus including: image acquisition means for acquiring a plurality of camera images photographed by the plurality of cameras; composite image generation means for generating, based on the plurality of camera images, a plurality of composite images including a plurality of overhead images in which the vehicle and its surroundings are viewed from different viewpoints; and display image providing means for outputting, to a display device mounted on the vehicle, information corresponding to a display image in which the plurality of composite images are continuously reproduced.
- An image processing method comprising: acquiring a plurality of camera images taken by a plurality of cameras provided in a vehicle; generating, based on the plurality of camera images, a plurality of composite images including a plurality of overhead images in which the vehicle and its periphery are viewed from different viewpoints; and outputting, to a display device mounted on the vehicle, information corresponding to a display image in which the plurality of composite images are continuously reproduced.
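The claimed method can be summarized as a three-step pipeline. The following is a minimal, hedged sketch of that flow; all function names, the placeholder composition step, and the data shapes are illustrative assumptions, not the patent's actual implementation.

```python
# Hypothetical sketch of the claimed method: acquire camera images,
# generate composite images for a series of viewpoints, and output them
# to the display for continuous playback. Names are illustrative only.

def acquire_camera_images(cameras):
    """Step 1: acquire one frame from each in-vehicle camera."""
    return [cam() for cam in cameras]

def generate_composites(images, viewpoints):
    """Step 2: placeholder composition - pair each viewpoint with the frames.
    (The real system projects the frames onto a bowl-shaped surface.)"""
    return [(vp, images) for vp in viewpoints]

def output_display_image(composites, display):
    """Step 3: hand the composites to the display in playback order."""
    for frame in composites:
        display.append(frame)

# Tiny worked example: four stub "cameras" and three viewpoints.
cameras = [lambda: "front", lambda: "back", lambda: "left", lambda: "right"]
viewpoints = ["above", "left_rear", "right_rear"]
display = []
output_display_image(generate_composites(acquire_camera_images(cameras), viewpoints), display)
# display now holds one composite per viewpoint, in playback order
```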
- the user can check obstacles around the vehicle on one screen.
- the user can smoothly check obstacles around the entire vehicle when the image processing apparatus is started up, which usually occurs before the start of driving.
- FIG. 1 is a diagram illustrating a configuration of an image processing system.
- FIG. 2 is a diagram illustrating a position where the in-vehicle camera is arranged in the vehicle.
- FIG. 3 is a diagram illustrating an external configuration of a side camera unit in which the left side camera of the vehicle is accommodated in the housing.
- FIG. 4 is a diagram for explaining a method of generating a composite image.
- FIG. 5 is a diagram illustrating transition of operation modes of the image processing system.
- FIG. 6 is a diagram illustrating that the virtual viewpoint is continuously moved so as to go around the periphery of the vehicle.
- FIG. 7 is a diagram showing overhead views of the vehicle displayed as the viewpoint goes around the vehicle while looking down at it.
- FIG. 8 is a diagram illustrating display mode transition in the front mode.
- FIG. 9 is a diagram illustrating display mode transition in the back mode.
- FIG. 10 is a diagram showing the direction of the optical axis when the door mirror is stored.
- FIG. 11 is a diagram illustrating a processing flow of the control unit of the image processing system in the surrounding confirmation mode.
- FIG. 12 is a diagram illustrating a processing flow of the control unit of the image processing system in the back mode.
- FIG. 1 is a block diagram showing a configuration of the image processing system 120.
- the image processing system 120 is mounted on a vehicle (in this embodiment, an automobile), generates an image by photographing the periphery of the vehicle, and has a function of outputting the generated image to a display device such as the navigation device 20 in the vehicle interior.
- a user (typically the driver) of the image processing system 120 can grasp the state around the vehicle in almost real time.
- the image processing system 120 includes an image processing device 100 that generates a peripheral image showing the periphery of the vehicle and outputs image information to a display device such as the navigation device 20, and a photographing unit 5 having cameras that capture the periphery of the vehicle.
- the navigation device 20 provides navigation guidance to the user, and includes a display 21 such as a liquid crystal display provided with a touch panel function, an operation unit 22 operated by the user, and a control unit 23 that controls the entire device.
- the navigation device 20 is installed on an instrument panel or the like of the vehicle so that the screen of the display 21 is visible from the user.
- Various instructions from the user are received by the operation unit 22 and by the display 21 functioning as a touch panel.
- the control unit 23 is configured as a computer including a CPU, RAM, ROM, and the like, and various functions including the navigation function are implemented by the CPU performing arithmetic processing according to a predetermined program.
- the navigation device 20 is communicably connected to the image processing apparatus 100, and can transmit and receive various control signals to and from the image processing apparatus 100 and receive peripheral images generated by the image processing apparatus 100.
- Under the control of the control unit 23, an image based on the functions of the navigation device 20 alone is normally displayed on the display 21, but under predetermined conditions a peripheral image showing the state of the periphery of the vehicle, generated by the image processing device 100, is displayed.
- the navigation device 20 also functions as a display device that receives and displays a peripheral image generated by the image processing device 100.
- the image processing apparatus 100 is configured as an ECU (Electronic Control Unit) whose main body 10 has a function of generating a peripheral image, and is arranged at a predetermined position of the vehicle.
- the image processing system 120 includes a photographing unit 5 that captures the periphery of the vehicle, and functions as a device that generates an image seen from a virtual viewpoint based on the captured images obtained by the photographing unit 5.
- the plurality of in-vehicle cameras 51, 52, and 53 provided in the photographing unit 5 are arranged at appropriate positions of the vehicle, separately from the main body unit 10; details are described later.
- the main body unit 10 of the image processing apparatus 100 includes a control unit 1 that controls the entire apparatus, an image generation unit 3 that processes the captured images acquired by the photographing unit 5 (the image acquisition means of the present invention) to generate a peripheral image for display, and a navigation communication unit 42 that communicates with the navigation device 20.
- the image processing apparatus 100 includes a changeover switch 43 that receives an instruction to switch display contents from a user. A signal indicating a user instruction is also input to the control unit 1 from the changeover switch 43.
- the image processing apparatus 100 can operate in response to both a user operation on the navigation apparatus 20 and a user operation on the changeover switch 43.
- the changeover switch 43 is arranged at an appropriate position of the vehicle separately from the main body unit 10 so that the user can easily operate.
- the image generation unit 3 is configured as a hardware circuit capable of various image processing, and includes a composite image generation unit 31, an image range selection unit 32, and an image information output unit 33.
- the composite image generation unit 31 functions as the composite image generation means of the present invention and, based on a plurality of captured images (camera images) acquired by the plurality of in-vehicle cameras 51, 52, and 53 of the photographing unit 5, generates a composite image viewed from an arbitrary virtual viewpoint.
- a method in which the composite image generation unit 31 generates a composite image viewed from a virtual viewpoint will be described later.
- the image range selection unit 32 selects and cuts out a predetermined range of the image based on the photographed image acquired by the side camera 53 of the photographing unit 5.
- the predetermined range of the image is an image range that includes an image of the subject that is substantially the same as the range that appears on the door mirror when the door mirror is unfolded.
- the predetermined range of the image is an image range indicating the rear of the side area of the vehicle.
- the predetermined range of the image is an image range including the outside of the front fender of the vehicle 9.
- the user can easily confirm the state of the region that needs to be checked when performing width alignment, i.e., bringing the vehicle body close to the edge of the road.
- the image information output unit 33 outputs the image information selected by the image range selection unit 32 to the navigation device 20 via the navigation communication unit 42. Note that the image information is output under the control of the control unit 1.
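The image-range selection described above amounts to cutting a predetermined rectangle out of the side-camera image, with the rectangle depending on the door-mirror state. A minimal sketch follows; the specific coordinates, the `RANGES` table, and the two-state keying are assumptions for illustration only.

```python
# Hedged sketch of image-range selection: crop a predetermined rectangle
# from the side-camera image, chosen by the door-mirror state.
# All coordinate values below are illustrative assumptions.

RANGES = {
    "deployed": (10, 5, 50, 35),   # roughly the range the door mirror reflects
    "stored":   (0, 0, 60, 40),    # wider range along the side of the vehicle
}

def select_range(image, mirror_state):
    """Return the sub-image (x0, y0)-(x1, y1) for the given mirror state."""
    x0, y0, x1, y1 = RANGES[mirror_state]
    return [row[x0:x1] for row in image[y0:y1]]

# 80x60 dummy image whose pixels record their own (x, y) coordinates.
side_image = [[(x, y) for x in range(80)] for y in range(60)]
cropped = select_range(side_image, "deployed")
# cropped is 30 rows by 40 columns, starting at pixel (10, 5)
```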
- the parameters for each vehicle type stored in the nonvolatile memory 40 (described later) include, for each vehicle type, the position of the side cameras 53 attached to the left and right door mirrors, which varies according to the opening and closing of the door mirrors, and data on the angle of the optical axis, which also changes according to the opening and closing of the door mirrors.
- the image information output unit 33 functions as a display image providing unit of the present invention, and outputs the composite image information (display image) generated by the composite image generation unit 31 to the navigation device 20. As a result, a peripheral image showing the periphery of the vehicle is displayed on the display 21 of the navigation device 20.
- the control unit 1 is configured as a computer including a CPU, a RAM, a ROM, and the like, and various control functions are realized by the CPU performing arithmetic processing according to a predetermined program.
- An image control unit 11 shown in the figure corresponds to one of the functions of the control unit 1 realized in this way.
- the image control unit 11 controls the image processing executed by the image generation unit 3. For example, it instructs the composite image generation unit 31 on the various parameters necessary for generating a composite image. Further, it instructs the image range selection unit 32 to select a predetermined range of the image taken by the side camera 53, based on the open/closed state of the door mirror and the parameter information for each vehicle type.
- the main body 10 of the image processing apparatus 100 further includes a nonvolatile memory 40, a card reading unit 44, and a signal input unit 41, which are connected to the control unit 1.
- the non-volatile memory 40 is composed of a flash memory that can maintain the stored contents even when the power is turned off.
- the nonvolatile memory 40 stores vehicle type data 4a.
- the vehicle type data 4a is data corresponding to the type of vehicle that is required when the composite image generation unit 31 generates a composite image or when the image range selection unit 32 selects a predetermined range of the image; it includes the position of the side cameras 53 attached to the left and right door mirrors, which changes according to the opening and closing of the door mirrors, and optical axis angle data that changes according to the opening and closing of the door mirrors.
- the card reading unit 44 reads the memory card MC that is a portable recording medium.
- the card reading unit 44 includes a card slot in which the memory card MC can be attached and detached, and reads data recorded on the memory card MC installed in the card slot. Data read by the card reading unit 44 is input to the control unit 1.
- the memory card MC is configured by a flash memory or the like capable of storing various data, and the image processing apparatus 100 can use various data stored in the memory card MC.
- a program (firmware) that realizes the functions of the control unit 1 can be updated by storing the program in the memory card MC and reading it.
- by storing, in the memory card MC, vehicle type data corresponding to a type of vehicle different from the vehicle type data 4a stored in the non-volatile memory 40, and reading it out and storing it in the non-volatile memory 40, the image processing system 120 can also support different types of vehicles.
- the signal input unit 41 inputs signals from various devices provided in the vehicle.
- signals from outside the image processing system 120 are input to the control unit 1 via the signal input unit 41.
- signals indicating various information are input to the control unit 1 from the shift sensor 81, the vehicle speed sensor 82, the direction indicator 83, the mirror driving device 84, and the like.
- from the shift sensor 81, the operation position of the shift lever of the transmission of the vehicle 9, i.e., the shift position such as "P (parking)", "D (forward)", "N (neutral)", or "R (reverse)", is input.
- from the vehicle speed sensor 82, the traveling speed (km/h) of the vehicle 9 at that time is input.
- from the direction indicator 83, a turn signal indicating a direction instruction based on the operation of the turn signal switch, i.e., the direction intended by the driver of the vehicle, is input.
- when the turn signal switch is operated, a turn signal indicating the operated direction (left or right) is generated.
- when the turn signal switch is in the neutral position, the turn signal is turned off.
- the mirror driving device 84 stores/deploys (opens/closes) the door mirrors of the vehicle in response to the driver's operation. From the mirror driving device 84, the state (stored/deployed) of the door mirrors is input.
- the photographing unit 5 of the image processing system 120 is electrically connected to the control unit 1 and operates based on a signal from the control unit 1.
- the photographing unit 5 includes a front camera 51, a back camera 52, and a side camera 53, which are in-vehicle cameras.
- Each of these on-vehicle cameras 51, 52, and 53 includes an image sensor such as a CCD or a CMOS and electronically acquires an image.
- FIG. 2 is a diagram showing positions where the in-vehicle cameras 51, 52, 53 are arranged on the vehicle 9.
- the three-dimensional XYZ orthogonal coordinates shown in the figure are used as appropriate when indicating directions and orientations.
- the XYZ axes are fixed relative to the vehicle 9.
- the X-axis direction is along the left-right direction of the vehicle 9
- the Y-axis direction is along the front-rear direction of the vehicle 9
- the Z-axis direction is along the vertical direction.
- the + X side is the right side of the vehicle 9
- the + Y side is the rear side of the vehicle 9
- the + Z side is the upper side.
- the front camera 51 is provided in the vicinity of the license plate mounting position at the front end of the vehicle 9, and its optical axis 51a is directed in the straight direction of the vehicle 9 (-Y side in the Y-axis direction in plan view).
- the back camera 52 is provided in the vicinity of the license plate mounting position at the rear end of the vehicle 9, and its optical axis 52a is directed in the direction opposite to the straight traveling direction of the vehicle 9 (the +Y side of the Y-axis direction in plan view).
- the side cameras 53 are provided on the left and right door mirrors 93, respectively, and the optical axis 53a is directed to the outside along the left-right direction of the vehicle 9 (X-axis direction in plan view).
- the mounting positions of the front camera 51 and the back camera 52 are preferably approximately at the center in the left-right direction, but may be slightly shifted to the left or right from the center.
- a fish-eye lens or the like is employed as the lens of these in-vehicle cameras 51, 52, and 53, each of which has an angle of view θ of 180 degrees or more. For this reason, the entire periphery of the vehicle 9 can be photographed by using the four in-vehicle cameras 51, 52, and 53.
- FIG. 3 is a diagram showing an external configuration of the side camera unit 70 in which the left side camera 53 of the vehicle 9 is accommodated in the housing. Since the configuration and arrangement of the side camera unit 70 are symmetrical on the left and right of the vehicle 9, the following description will be specifically made taking the left side of the vehicle 9 as an example, but the same applies to the right side. As shown in the figure, the side camera unit 70 is disposed below the door mirror 93 via a bracket 79.
- the side camera 53 includes a lens and an image sensor.
- the side camera 53 is disposed in the housing, and the optical axis is directed to the outside of the vehicle 9.
- the side camera 53 is fixed to the housing so that the direction of the optical axis is a predetermined angle (for example, about 45 degrees) with respect to the vertical direction.
- FIG. 4 is a diagram for explaining a method of generating a composite image.
- when the front camera 51, the back camera 52, and the side cameras 53 of the photographing unit 5 perform photographing simultaneously, four photographed images P1 to P4 showing the front, rear, left side, and right side of the vehicle 9, respectively, are acquired.
- the four photographed images P1 to P4 acquired by the photographing unit 5 include information indicating the entire periphery of the vehicle 9 at the time of photographing.
- each pixel of the four captured images P1 to P4 is projected onto a three-dimensional curved surface SP2 in a virtual three-dimensional space.
- the three-dimensional curved surface SP2 has, for example, a substantially hemispherical shape (a bowl shape), and a center portion (a bottom portion of the bowl) is determined as a position where the vehicle 9 exists.
- a correspondence relationship is determined in advance between the position of each pixel included in the captured images P1 to P4 and the position of each pixel of the solid curved surface SP2. Therefore, the value of each pixel of the three-dimensional curved surface SP2 can be determined based on this correspondence and the value of each pixel included in the captured images P1 to P4.
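Because the correspondence between captured-image pixels and surface pixels is determined in advance, filling the bowl-shaped surface reduces to table lookups. The following is a minimal sketch of that idea; the table layout, function name, and the tiny example data are assumptions for illustration, not the patent's actual data structures.

```python
# Hedged sketch of the precomputed pixel-correspondence table: each position
# on the bowl-shaped surface SP2 maps to one pixel of one camera image, so
# the surface is filled by simple lookups. Names and layout are assumptions.

def build_surface(camera_images, correspondence):
    """camera_images: dict cam_id -> 2D list of pixel values.
    correspondence: dict (u, v) on SP2 -> (cam_id, x, y) in a camera image."""
    surface = {}
    for (u, v), (cam_id, x, y) in correspondence.items():
        surface[(u, v)] = camera_images[cam_id][y][x]
    return surface

# Tiny worked example with two 2x2 "camera images".
images = {
    "front": [[10, 11], [12, 13]],
    "left":  [[20, 21], [22, 23]],
}
table = {
    (0, 0): ("front", 0, 0),
    (0, 1): ("front", 1, 0),
    (1, 0): ("left", 0, 1),
}
sp2 = build_surface(images, table)
# sp2[(0, 0)] takes its value from the front image, sp2[(1, 0)] from the left
```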
- the in-vehicle cameras 51, 52, and 53 that capture the images P1 to P4 are wide-angle cameras having an angle of view θ of 180 degrees or more.
- when shooting with such a wide-angle camera, part of the image may be obstructed by the camera hood, the filter frame, and the like, so that the amount of light decreases in the peripheral area and a shadow (a portion with low brightness in the camera image) may occur. This phenomenon is generally called "vignetting".
- the three-dimensional curved surface SP1 shown in FIG. 4 illustrates a state in which parts of the captured images P1 to P4 are vignetted, and shadows due to the decrease in the amount of light appear in a predetermined peripheral area of the surface onto which these images are projected.
- if a composite image viewed from a predetermined virtual viewpoint were generated from the solid curved surface SP1 as it is, the shadowed peripheral portions would be included in the image.
- therefore, a composite image corresponding to an arbitrary virtual viewpoint is generated using the solid curved surface SP2, the substantially hemispherical (bowl-shaped) central region of the solid curved surface SP1 excluding the peripheral region in which the light amount is reduced due to vignetting.
- the solid curved surface SP2 is formed by removing the peripheral region where the light amount is reduced due to vignetting, with the broken-line portion of the solid curved surface SP1 as the boundary.
- in this way, an image of the subject having a substantially hemispherical (bowl) shape can be formed, and the user can be provided with a three-dimensional display of the positional relationship between the vehicle and an obstacle, as if looking down from above the bowl.
- processing may be performed to detect regions whose brightness is lower than a reference value in the captured images (camera images) obtained by the in-vehicle cameras 51, 52, and 53, and to remove those regions.
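The brightness-threshold removal just described can be sketched as a per-pixel mask. The reference value and the use of `None` to mark excluded pixels are assumptions made for illustration; the patent does not specify a concrete threshold or representation.

```python
# Minimal sketch of removing low-lightness (vignetted) regions before
# composition: pixels whose brightness falls below a reference value are
# excluded. The threshold value and None-marking are assumptions.

def remove_dark_regions(image, reference=30):
    """Replace pixels darker than `reference` with None (excluded pixels)."""
    return [[px if px >= reference else None for px in row] for row in image]

frame = [
    [200, 180, 10],   # 10: vignetted corner, below the reference value
    [190, 150, 25],
]
masked = remove_dark_regions(frame, reference=30)
# dark corner pixels are excluded; bright pixels are kept unchanged
```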
- the correspondence between the positions of the pixels of the captured images P1 to P4 and the positions of the pixels of the three-dimensional curved surface SP depends on the arrangement of the four in-vehicle cameras 51, 52, and 53 on the vehicle 9 (the distance between them, the ground height, the optical axis angle, and the like). For this reason, table data indicating this correspondence is included in the vehicle type data 4a stored in the nonvolatile memory 40.
- polygon data indicating the shape and size of the vehicle body included in the vehicle type data 4a is used, and a vehicle image which is a polygon model indicating the three-dimensional shape of the vehicle 9 is virtually configured.
- the configured vehicle image is arranged in a substantially hemispherical central portion determined as the position of the vehicle 9 in the three-dimensional space where the three-dimensional curved surface SP is set.
- the virtual viewpoint VP is set by the control unit 1 for the three-dimensional space where the solid curved surface SP exists.
- the virtual viewpoint VP is defined by a viewpoint position and a visual field direction, and is set at an arbitrary viewpoint position corresponding to the periphery of the vehicle 9 in this three-dimensional space, facing an arbitrary visual field direction.
- in accordance with the set virtual viewpoint VP, a necessary area on the three-dimensional curved surface SP2 is cut out as an image, as described above.
- the relationship between the virtual viewpoint VP and a necessary area in the three-dimensional curved surface SP is determined in advance, and is stored in advance in the nonvolatile memory 40 or the like as table data.
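Since the relation between a virtual viewpoint and the required area of the surface is stored as table data, the cut-out step reduces to a lookup followed by a crop. The sketch below assumes a rectangular-region table; the keys, region values, and tuple format are illustrative only.

```python
# Hedged sketch of the viewpoint-to-region table: for each predefined
# virtual viewpoint, the table stores which rectangular region of the
# surface must be cut out. All region values are assumptions.

REGION_TABLE = {
    ("above", "down"):      (0, 0, 4, 4),   # (u0, v0, u1, v1)
    ("left_rear", "front"): (1, 1, 4, 3),
}

def cut_out(surface, viewpoint):
    """Look up the required region for `viewpoint` and crop it out."""
    u0, v0, u1, v1 = REGION_TABLE[viewpoint]
    return [row[u0:u1] for row in surface[v0:v1]]

# 4x4 dummy surface whose values encode their own position as 10*v + u.
surface = [[10 * v + u for u in range(4)] for v in range(4)]
patch = cut_out(surface, ("left_rear", "front"))
# patch covers rows 1..2 and columns 1..3 of the surface
```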
- rendering is performed on the vehicle image composed of polygons according to the set virtual viewpoint VP, and the resulting two-dimensional vehicle image is superimposed on the cut out image.
- in this way, a composite image showing the vehicle 9 and the periphery of the vehicle 9 as seen from an arbitrary virtual viewpoint is generated.
- for example, when a virtual viewpoint looking down at the vehicle 9 from directly above is set, a composite image CP1 showing the vehicle 9 (actually, the vehicle image) and the surroundings of the vehicle 9 is generated.
- when a virtual viewpoint VP2 is set in which the viewpoint position is to the left rear of the vehicle 9 and the visual field direction is substantially toward the front of the vehicle 9, a composite image CP2 showing the vehicle 9 (actually, the vehicle image) and the surroundings of the vehicle 9 as seen from the left rear of the vehicle 9 is generated.
- FIG. 5 is a diagram illustrating transition of operation modes of the image processing system 120.
- the image processing system 120 has four operation modes: a navigation mode M0, a surrounding confirmation mode M1, a front mode M2, and a back mode M3. These operation modes can be switched by the control of the control unit 1 in accordance with the operation of the driver and the traveling state of the vehicle 9.
- the navigation mode M0 is an operation mode in which a map image for navigation guidance is displayed on the display 21 by the function of the navigation device 20.
- in the navigation mode M0, the functions of the image processing apparatus 100 are not used, and various displays are performed by the functions of the navigation apparatus 20 alone. For this reason, when the navigation device 20 has a function of receiving and displaying television broadcasts, a television broadcast screen may be displayed instead of the map image for navigation guidance.
- the surrounding confirmation mode M1 is an operation mode in which an animation is displayed such that the viewpoint goes around the vehicle 9 while looking down at the vehicle 9.
- the front mode M2 is an operation mode for displaying a display image that mainly indicates the front or side of the vehicle 9 that is required when moving forward.
- the back mode M3 is an operation mode for displaying a display image that mainly indicates the rear of the vehicle 9 that is required when the vehicle is moving backward.
- When the image processing system 120 starts up, the surrounding confirmation mode M1 is set first.
- In the surrounding confirmation mode M1, when a predetermined time (for example, 6 seconds) elapses after the animation circling the vehicle 9 is shown, the mode is automatically switched to the front mode M2. Further, in the front mode M2, if, for example, the changeover switch 43 is pressed continuously for a predetermined time or more while the traveling speed is 0 km/h (stopped state), the mode is switched to the surrounding confirmation mode M1. Note that the surrounding confirmation mode M1 may also be switched to the front mode M2 in accordance with a predetermined instruction from the driver.
- When the traveling speed of the vehicle 9 becomes 10 km/h or more in the front mode M2, the mode is switched to the navigation mode M0.
- When the traveling speed input from the vehicle speed sensor 82 is less than 10 km/h in the navigation mode M0, the mode is switched to the front mode M2.
- When the traveling speed of the vehicle 9 is relatively high, the front mode M2 is canceled so that the driver can concentrate on driving. On the other hand, when the traveling speed of the vehicle 9 is relatively low, there are many scenes in which the driver must drive with attention to the situation around the vehicle 9, more specifically, approaching an intersection with poor visibility, changing direction, or pulling over to the side of the road. For this reason, when the traveling speed is relatively low, the navigation mode M0 is switched to the front mode M2. When switching from the navigation mode M0 to the front mode M2, a condition that there is an explicit operation instruction from the driver may be added to the condition that the traveling speed is less than 10 km/h.
- When the mode is switched to the surrounding confirmation mode M1, the animation circling the vehicle 9 is shown, and after a predetermined time (for example, 6 seconds) elapses, the mode is automatically switched to the front mode M2.
- When the shift lever of the vehicle 9 is operated to the "R (reverse)" position, the mode is switched to the back mode M3. That is, since the vehicle 9 is then in a reversing state, the system switches to the back mode M3, which mainly shows the rear of the vehicle 9.
- In the back mode M3, when the position of the shift lever is other than "R (reverse)", the mode is switched to the navigation mode M0 or the front mode M2 based on the traveling speed at that time: if the traveling speed is 10 km/h or more, the mode is switched to the navigation mode M0, and if the traveling speed is less than 10 km/h, the mode is switched to the front mode M2.
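For illustration only, the mode-switching rules above can be viewed as a small state machine. The following Python sketch is a hypothetical simplification (the function and its argument names are ours, not the embodiment's); it covers only the shift-lever and 10 km/h transitions described in this section, omitting the changeover switch and the surrounding confirmation mode.

```python
# Hypothetical sketch of the operation-mode transitions described above.
# Mode labels follow the text; signal names (shift_position, speed_kmh)
# are our own, not part of the embodiment.

NAV_MODE = "M0"       # navigation mode
SURROUND_MODE = "M1"  # surrounding confirmation mode
FRONT_MODE = "M2"     # front mode
BACK_MODE = "M3"      # back mode

def next_mode(current, shift_position, speed_kmh):
    """Return the next operation mode given the shift lever and speed."""
    # Moving the shift lever to "R" always selects the back mode.
    if shift_position == "R":
        return BACK_MODE
    if current == BACK_MODE:
        # Leaving reverse: choose navigation or front mode by speed.
        return NAV_MODE if speed_kmh >= 10 else FRONT_MODE
    if current == NAV_MODE and speed_kmh < 10:
        return FRONT_MODE  # low speed: show the surroundings
    if current == FRONT_MODE and speed_kmh >= 10:
        return NAV_MODE    # high speed: let the driver concentrate
    return current
```

The function is purely a reading aid for the transition rules; the actual system also reacts to switch presses and timer expiry as described above.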
- the display mode around the vehicle 9 in the surrounding confirmation mode M1 will be described.
- the virtual viewpoint VP is set so as to look down at the vehicle 9, and the virtual viewpoint VP is continuously moved so as to go around the periphery of the vehicle 9.
- the virtual viewpoint VP is initially set behind the vehicle 9 and then circulates around the vehicle 9 clockwise. In this way, when the virtual viewpoint VP moves to the rear again via the left side, the front side, and the right side of the vehicle 9, the virtual viewpoint VP moves to just above the vehicle 9.
- a plurality of composite images are generated continuously in time with the virtual viewpoint VP being moved.
- the plurality of generated composite images are sequentially output to the navigation device 20 and displayed on the display 21 continuously in time.
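As a rough sketch of this viewpoint movement, the positions of a viewpoint circling the vehicle clockwise, starting behind it and ending directly above, can be generated as follows (the coordinate frame, radius, and step count are our own illustrative assumptions, not values from the embodiment):

```python
import math

def orbit_viewpoints(radius, height, steps):
    """Generate virtual-viewpoint positions circling the vehicle clockwise,
    starting behind it, then finishing directly above.

    Coordinates are a hypothetical vehicle frame: x = right, y = forward,
    z = up, origin at the vehicle center; each viewpoint looks at the origin.
    """
    viewpoints = []
    for i in range(steps):
        # Start at the rear (angle -90 deg); decreasing the angle sweeps
        # rear -> left -> front -> right, i.e. clockwise seen from above.
        angle = -math.pi / 2 - 2 * math.pi * i / steps
        x = radius * math.cos(angle)
        y = radius * math.sin(angle)
        viewpoints.append((x, y, height))
    viewpoints.append((0.0, 0.0, height))  # final view from directly above
    return viewpoints
```

Rendering one composite image per generated position and showing them in order yields the circling animation described above.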
- Each composite image RP is generated based on the substantially hemispherical three-dimensional curved surface SP2, obtained by excluding from the three-dimensional curved surface SP1 the peripheral area where the light amount is reduced by vignetting, and is displayed three-dimensionally as if the user were looking into a bowl from above. The image therefore makes it easy to grasp the positional relationship between the vehicle and obstacles.
- the vehicle 9 is arranged near the center of the image so that the vehicle 9 and the surroundings of the vehicle 9 can be confirmed.
- A composite image ST1 (ST5) looking down at the vehicle 9 from the rear of the vehicle 9, a composite image ST2 looking down at the vehicle 9 from the left side of the vehicle 9, a composite image ST3 looking down at the vehicle 9 from the front of the vehicle 9, a composite image ST4 looking down at the vehicle 9 from the right side of the vehicle 9, and a composite image ST6 looking down at the vehicle 9 from directly above the approximate center of the vehicle 9 are generated by the composite image generation unit 31 based on the images input from the vehicle-mounted cameras 51, 52, and 53 of the photographing unit 5.
- the composite images ST1 to ST5 correspond to the first overhead image of the present invention, and the composite image ST6 corresponds to the second overhead image of the present invention.
- As shown in FIG. 7, for example, a plurality of images showing the surroundings of the vehicle, in a state where the user looks down at the vehicle, are displayed continuously on the navigation device 20 in the order ST1 → ST2 → ST3 → ST4 → ST5 → ST6.
- In this way, both a plurality of composite images in which the position of the virtual viewpoint moves continuously around the vehicle and a composite image viewed from directly above the approximate center of the vehicle are displayed.
- the user can intuitively grasp the positional relationship between the vehicle and the obstacle on one screen.
- Safety around the vehicle can thus be confirmed from a plurality of viewpoints: from the periphery of the vehicle and from directly above the approximate center of the vehicle.
- In addition, safety can be confirmed once more by means of the image from above the vehicle, which is displayed continuously after the images around the vehicle.
- It is not necessary for the user to check the entire wide periphery of the vehicle at once: after each image of a limited range around the vehicle, an image of the entire vehicle over a wide range, as viewed from directly above the approximate center of the vehicle, is displayed continuously, so the user can confirm the safety around the vehicle more reliably on one screen.
- the user can intuitively check obstacles around the entire vehicle on one screen.
- The composite images shown here are only examples; the user can arbitrarily change settings such as the height and direction from which the vehicle is overlooked, pausing the continuous display, adjusting the rotation speed of the continuous display, and reversing the rotation direction of the continuous display.
- The user can also pause the continuous display and select any part of the display screen, whereupon the selected part is enlarged and displayed.
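A minimal sketch of such playback adjustments, with hypothetical names of our own (the embodiment does not specify an implementation), might track pause, speed, and rotation direction like this:

```python
class OrbitPlayback:
    """Hypothetical playback controller for the continuous display,
    illustrating the user adjustments mentioned above: pausing, changing
    the rotation speed, and reversing the rotation direction."""

    def __init__(self, num_frames):
        self.num_frames = num_frames
        self.index = 0        # which composite image is currently shown
        self.speed = 1        # frames advanced per tick
        self.direction = 1    # +1 = normal rotation, -1 = reversed
        self.paused = False

    def tick(self):
        """Advance playback by one step and return the new frame index."""
        if not self.paused:
            self.index = (self.index + self.direction * self.speed) % self.num_frames
        return self.index
```

Each frame index would correspond to one composite image in the circling sequence; pausing simply stops the index from advancing, at which point a selected region could be enlarged.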
- In the above, a case has been described in which composite images showing the periphery of the vehicle are displayed continuously and then the composite image viewed from directly above the approximate center of the vehicle is displayed. Conversely, after displaying the composite image viewed from directly above the approximate center of the vehicle, the composite images around the vehicle may be displayed continuously.
- This embodiment described the case where the viewpoint circles the periphery of the vehicle once, but the number of circuits is not limited to one; it may be any number, such as two or more circuits, or a half circuit.
- the start condition of the vehicle surrounding confirmation process is that the image processing apparatus 100 or the navigation apparatus 20 receives the ACC-ON signal from the vehicle power supply control apparatus.
- the user can confirm the vicinity of the vehicle when starting up.
- the image processing apparatus is activated in response to ACC-ON before the vehicle travels, so that the user can check the surroundings of the vehicle before the vehicle travels.
- The start condition of the vehicle surrounding confirmation process may be not only the reception of the ACC-ON signal but also the pressing of the changeover switch 43 continuously for a predetermined time or more (for example, 3 seconds or more). Thereby, whenever the user wants to confirm the surroundings of the vehicle, the user can do so at any time.
- It is also possible to provide a completion button (not shown) on the navigation device 20 so that pressing it ends the continuous display of the composite images even while the display is in progress, or to provide a setting button (not shown) in advance so that the vehicle surroundings confirmation process is not started by ACC-ON or by pressing the changeover switch 43 for a predetermined time or more.
- In this way, the user can confirm the situation around the entire vehicle 9 from viewpoints that show the vehicle 9 in front of the user.
- the positional relationship between the surrounding obstacles and the vehicle 9 can be grasped.
- FIG. 8 is a diagram showing display mode transition in the front mode M2.
- In the front mode M2, there are four display modes, a traveling bird's-eye view mode M21, a host vehicle confirmation mode M22, a side camera mode M23, and a navigation mode M24, which differ from one another in how images are presented.
- The M21, M22, and M23 screens display a field-of-view guide 90 that indicates the field-of-view range of each display mode, showing the user which area around the vehicle 9 is displayed.
- the navigation mode M24 a map image around the vehicle 9 is displayed, and the current position of the vehicle 9 is displayed.
- These display modes are switched by the control of the control unit 1 in the order of the traveling bird's-eye view mode M21, the own vehicle confirmation mode M22, the side camera mode M23, and the navigation mode M24 each time the user presses the changeover switch 43.
- the changeover switch 43 is pressed in the navigation mode M24, the operation returns to the traveling bird's-eye view mode M21 again.
- the traveling bird's-eye view mode M21 displays on the display 21 a screen including a composite image FP1 showing the state of the vehicle 9 viewed from the virtual viewpoint VP immediately above the vehicle 9 and a front image FP2 obtained by photographing with the front camera 51 side by side. Display mode. That is, in the traveling bird's-eye view mode M21, two images of a composite image FP1 showing the entire periphery of the vehicle 9 and a front image FP2 showing the front of the vehicle 9 are shown on the same screen.
- In the traveling bird's-eye view mode M21, these two images FP1 and FP2 can be viewed, so the user can confirm at a glance the situation ahead in the traveling direction of the vehicle 9 together with the entire periphery of the vehicle 9. It can be said that the traveling bird's-eye view mode M21 is a highly versatile display mode usable in various scenes during forward travel.
- The own vehicle confirmation mode M22 is a display mode that displays side by side a front image FP3 obtained by photographing with the front camera 51 and a composite image FP4 showing the vehicle 9 viewed from a virtual viewpoint VP behind the vehicle 9. That is, in the own vehicle confirmation mode M22, two images, a front image FP3 showing the front of the vehicle 9 and a composite image FP4 showing the sides of the vehicle 9, are shown on the same screen.
- the front image FP3 in the vehicle confirmation mode M22 has a wider field of view in the left-right direction than the front image FP2 in the traveling bird's-eye view mode M21. For this reason, it is possible to confirm an object that is present in front and in the left-right direction from the front end of the vehicle 9 that is likely to become a blind spot when entering an intersection with poor visibility.
- In the composite image FP4 of the own vehicle confirmation mode M22, the virtual viewpoint is moved to the rear of the vehicle 9 compared with the composite image FP1 of the traveling bird's-eye view mode M21; although the field of view is narrowed, the sides of the vehicle 9 can be easily confirmed. For this reason, when passing an oncoming vehicle, the clearance with the oncoming vehicle can be easily confirmed.
- the side camera mode M23 is a display mode for displaying on the display 21 a screen including side images FP5 and FP6 that are respectively obtained by photographing with the left and right side cameras 53.
- the side images FP5 and FP6 show only the outside of the front fender 94 that tends to be a blind spot from the driver's seat.
- Thereby, the user can easily check the situation of the region that needs to be confirmed when pulling the vehicle body over to the edge of the road.
- the navigation mode M24 is an operation mode in which a map image for navigation guidance is displayed on the display 21 by the function of the navigation device 20.
- the functions of the image processing apparatus 100 are not used, and various displays are performed using the functions of the navigation apparatus 20 alone. For this reason, when the navigation device 20 has a function of receiving and displaying television broadcast radio waves, a television broadcast screen may be displayed instead of the map image for navigation guidance.
- FIG. 9 is a diagram illustrating display mode transition in the back mode M3.
- In the back mode M3, there are three display modes, a parking bird's-eye view mode M31, a front door mirror mode M32, and a rear door mirror mode M33, which differ from one another in how images are presented.
- a visual field guide 90 indicating the visual field range in each display mode is displayed, and it is indicated to the user which region around the vehicle 9 is displayed.
- The display modes are switched from the parking overhead mode M31 to the front door mirror mode M32 or the rear door mirror mode M33 by the control of the control unit 1 according to the state of the door mirror 93 input from the mirror driving device 86. Specifically, when the shift lever is operated to the "R (reverse)" position, the parking overhead mode M31 is set. In the parking bird's-eye view mode M31, when the door mirror 93 is deployed in the normal state, the vehicle speed of the vehicle 9 is less than 10 km/h, and the changeover switch 43 is pressed by the user, the front door mirror mode M32 is set.
- In the front door mirror mode M32, an image range including the outside of the front fender of the vehicle 9 is selected by the image range selection unit 32 of the image generation unit 3 from the images taken by the side camera 53 provided in the door mirror 93.
- the image information output unit 33 outputs the image information via the navigation communication unit 42 and displays it on the navigation device 20. Thereby, the user can easily confirm the status of the region to be confirmed in the case of performing the width alignment for bringing the vehicle body to the end of the road.
- On the other hand, when the door mirror 93 is retracted, the rear door mirror mode M33 is set.
- an image range that is substantially the same as the range displayed on the door mirror when the door mirror is unfolded is selected from the images captured by the side camera 53 provided on the door mirror 93. Specifically, an image range indicating the rear of the side area of the vehicle is selected. Thereby, even when the door mirror is stored when the vehicle passes through a narrow place, the user can check an image (the state of the subject) in substantially the same range as when the door mirror is deployed.
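The selection of an image range according to the door-mirror state can be sketched as below. The crop rectangles are placeholder values chosen only to illustrate the switch on the mirror state; the actual ranges depend on the camera mounting and are not specified numerically in the text.

```python
def select_image_range(frame_width, frame_height, mirror_deployed):
    """Return a crop rectangle (x, y, w, h) from a side-camera frame.

    Hypothetical sketch: the halves used here are placeholder regions,
    chosen only to illustrate switching on the door-mirror state.
    """
    if mirror_deployed:
        # Front door mirror mode: region including the outside of the
        # front fender (assumed to lie in the forward half of the frame).
        return (0, 0, frame_width // 2, frame_height)
    # Rear door mirror mode: region approximating what the deployed
    # mirror would show, i.e. the rear of the vehicle's side area.
    return (frame_width // 2, 0, frame_width - frame_width // 2, frame_height)
```

In the embodiment this role is played by the image range selection unit 32, which receives the mirror state from the mirror driving device 86.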
- The parking bird's-eye view mode M31 is a display mode that displays on the display 21 a screen including, side by side, a composite image BP1 showing the vehicle 9 viewed from a virtual viewpoint VP directly above the vehicle 9 and a back image BP2 obtained by photographing with the back camera 52. That is, in the parking bird's-eye view mode M31, two images, a composite image BP1 showing the entire periphery of the vehicle 9 and a back image BP2 showing the rear of the vehicle 9, are shown on the same screen.
- In the parking bird's-eye view mode M31, these two images BP1 and BP2 can be viewed, so the user can confirm at a glance the situation behind the vehicle 9 together with the entire periphery of the vehicle 9. It can be said that the parking bird's-eye view mode M31 is a highly versatile display mode usable in various situations while reversing.
- Other modes, such as a back guide mode in which parking guide lines are displayed on the image BP2, may also be provided, and switching from one of these modes to the front door mirror mode M32 or the rear door mirror mode M33 may be performed according to the open/closed state of the door mirror.
- the front door mirror mode M32 is a display mode in which a screen including side images FP5 and FP6 respectively obtained by photographing with the left and right side cameras 53 is displayed on the display 21.
- Since the two images FP5 and FP6 can be viewed on one screen, when the vehicle moves backward, the user can check images including the outside of the left and right front fenders, where there is a risk of collision.
- the rear door mirror mode M33 is a display mode in which a screen including side images BP3 and BP4 obtained by photographing with the left and right side cameras 53 is displayed on the display 21.
- the vehicle can be moved backward while confirming the rear left and right of the vehicle 9 on the same screen.
- Since the side camera 53 is provided on the door mirror 93, when the door mirror 93 is retracted, the optical axis 53a is directed toward the rear of the vehicle 9. In this state, the side camera 53 cannot acquire an image showing the entire side of the vehicle 9, so it is difficult to generate a composite image viewed from an arbitrary virtual viewpoint. However, since the optical axis 53a is directed toward the rear of the vehicle 9, a captured image with relatively little distortion can be acquired of the area behind the side region of the vehicle 9. In the rear door mirror mode M33, the two images BP3 and BP4 showing the rear of the side region of the vehicle 9 are generated and displayed using the captured images acquired by the side cameras 53 in this state.
- Next, the processing flow of the control unit of the image processing system in the surrounding confirmation mode will be described with reference to FIG. 11.
- When the control unit 1 of the image processing apparatus 10 receives an ACC-ON signal from a vehicle power supply control device (not shown) via the signal input unit 41 (Yes in step S101), the control unit 1 starts initial communication with the navigation device 20 via the navigation communication unit 42 (step S102).
- the initial communication refers to a process of confirming whether communication is possible between the control unit 1 of the image processing apparatus 10 and the navigation apparatus 20. If the control unit 1 has not received the ACC-ON signal from the vehicle power supply control device (No in step S101), the process ends.
- Next, the control unit 1 reads data for performing the vehicle surrounding confirmation process from the nonvolatile memory 40 (step S104). Examples of the vehicle periphery confirmation processing data include vehicle bitmap data, viewpoint movement data (viewpoint position and viewing direction for each time), and the like.
- If the initial communication does not end normally, the control unit 1 attempts communication with the navigation device 20 again, and ends the process if communication cannot be established even after a plurality of attempts.
- A case where the initial communication does not end normally is, for example, one in which the control unit 1 is not operating normally due to a failure or the like. In such a case, a warning that the image display system around the vehicle is out of order can be displayed on the navigation device 20.
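The retry behavior described above (attempt the initial communication again, and give up after several failures) can be sketched as follows; the callable and the attempt limit are illustrative assumptions, not part of the embodiment:

```python
def initial_communication(try_connect, max_attempts=3):
    """Attempt the handshake with the navigation device up to
    max_attempts times; return True on success, False to end processing.

    try_connect is any zero-argument callable returning True when the
    navigation device answers; max_attempts=3 is a placeholder value.
    """
    for _ in range(max_attempts):
        if try_connect():
            return True
    return False  # communication failed repeatedly: abort the process
```

On a False result the system would end the process and could display the failure warning mentioned above.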
- After reading the vehicle periphery confirmation processing data from the nonvolatile memory 40, the image generation unit 3 of the control unit 1 generates a composite image around the vehicle based on the read data (step S105).
- Specifically, processing is performed to generate a composite image corresponding to an arbitrary virtual viewpoint based on the three-dimensional curved surface SP2, a substantially hemispherical (bowl-shaped) region obtained by excluding from the three-dimensional curved surface SP1 shown in FIG. 4 the peripheral region in which the amount of light is reduced by vignetting. The generated composite image data is then transmitted to the navigation device 20 (step S106).
- The image generation unit 3 generates a plurality of composite images (step S105) while changing the position of the virtual viewpoint in stages (step S107a). The control unit 1 then transmits the data to the navigation device 20 in order so that the plurality of composite images have continuity. Thereby, the position of the virtual viewpoint changes continuously, and an image circling the periphery of the vehicle 9 while looking down at the vehicle 9 can be displayed on the navigation device 20 by the continuous display.
- When the transmission of the plurality of composite images to the navigation device 20 is completed (Yes in step S107), a composite image viewed from directly above the approximate center of the vehicle 9 is generated by the same process as the composite images around the vehicle (step S108), and the generated composite image is transmitted to the navigation device 20 (step S109).
- In this way, a plurality of composite images are displayed on the display device so as to circle the periphery of the vehicle while looking down at it, so that the user can check the entire periphery of the vehicle from viewpoints that place the vehicle in front of the user.
- the positional relationship between the vehicle and the obstacle can be grasped intuitively on one screen.
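Steps S105 to S109 can be summarized as a loop that renders and transmits one composite image per viewpoint and then the top-down image. The sketch below uses placeholder callables for the composite image generation unit and the output to the navigation device; these names are ours, not the embodiment's.

```python
def surrounding_confirmation(viewpoints, render, send):
    """Sketch of steps S105 to S109: render a composite image for each
    orbiting viewpoint, transmit it, then render and transmit the view
    from directly above the vehicle center.

    render(viewpoint) and send(image) stand in for the composite image
    generation unit and the output to the navigation device.
    """
    for vp in viewpoints:      # S105 / S107a: step the virtual viewpoint
        send(render(vp))       # S106: transmit each frame in order
    send(render("overhead"))   # S108 / S109: final top-down image
```

Sending the frames in order is what gives the continuous display its continuity, as described above.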
- FIG. 12 is a diagram illustrating a processing flow of the control unit of the image processing system in the back mode.
- First, in order to determine whether the mode of the image processing system is the back mode, it is determined whether the operation position of the shift lever is the "R (reverse)" shift position (step S201).
- When the operation position of the shift lever is "R (reverse)" (Yes in step S201), the control unit 1 in the back mode M3 causes the image generation unit 3 to generate an image of the parking overhead mode M31, and transmits to the image generation unit 3 an instruction signal for outputting the image information to the navigation device 20 (step S202). If the shift lever operation position is not the "R (reverse)" shift position (No in step S201), the process ends.
- When the image of the parking bird's-eye view mode M31 is displayed on the navigation device 20 and the user presses the changeover switch 43 (Yes in step S203), it is determined, using the vehicle speed sensor 82, whether the vehicle speed of the vehicle 9 is less than 10 km/h (step S204). When the user does not press the changeover switch 43 (No in step S203), the control unit 1 continues the process for displaying the parking bird's-eye view mode M31 on the navigation device 20 (step S209).
- If the vehicle speed is less than 10 km/h (Yes in step S204), it is determined whether or not the door mirror 93 of the vehicle 9 is deployed (step S205).
- When the door mirror 93 is deployed (Yes in step S205), the control unit 1 transmits to the image generation unit 3 an instruction signal for performing the process of the front door mirror mode M32 (step S206), and proceeds to the next process. Specifically, an image range including the outside of the front fender is selected from the images captured by the side camera 53, and an instruction signal for outputting image information of the selected range is transmitted to the image generation unit 3. Thereby, the user can easily confirm the status of the region to be checked when pulling the vehicle body over to the edge of the road.
- If the vehicle speed is not less than 10 km/h (No in step S204), the control unit 1 continues the process for displaying the parking overhead mode M31 on the navigation device 20 (step S209).
- On the other hand, when the door mirror 93 is retracted (No in step S205), the control unit 1 transmits to the image generation unit 3 an instruction signal for performing the processing of the rear door mirror mode M33 (step S207), and proceeds to the next process.
- Specifically, an instruction signal for selecting an image range substantially the same as the range reflected on the door mirror is transmitted, and image information of the selected range is displayed.
- If the changeover switch 43 is not pressed by the user (No in step S208), the process returns to step S205, and a signal instructing the image generation unit 3 to perform the image selection and image information output processing corresponding to either the front door mirror mode M32 or the rear door mirror mode M33, according to the open/closed state of the door mirror 93, is transmitted.
- When the changeover switch 43 is pressed by the user (Yes in step S208), the control unit 1 transmits to the image generation unit 3 an instruction signal for generating an image of the parking bird's-eye view mode M31 and outputting the image information to the navigation device 20, in the same manner as the process described in step S202 (step S209).
- In the above, after the determination of whether the changeover switch is pressed in step S203, it is determined in step S204 whether the vehicle speed of the vehicle 9 is less than 10 km/h; alternatively, it may first be determined whether the vehicle speed is less than 10 km/h, and the changeover-switch determination may be performed afterward.
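The decision flow of steps S201 to S209 can be condensed into a single function. This is a hypothetical sketch (mode labels follow the text; argument names are ours) that returns which display mode would be shown for a given vehicle state:

```python
def back_mode_display(shift_position, switch_pressed, speed_kmh, mirror_deployed):
    """Sketch of the back-mode decision flow (steps S201 to S209).

    Returns the display mode to show, or None when the flow ends
    because the shift lever is not in "R".
    """
    if shift_position != "R":
        return None    # S201 No: end processing
    if not switch_pressed:
        return "M31"   # S203 No: keep showing the parking overhead mode
    if speed_kmh >= 10:
        return "M31"   # S204 No: stay in the parking overhead mode
    # S205: choose the door-mirror mode by the mirror's open/closed state.
    return "M32" if mirror_deployed else "M33"
```

The real control unit loops on steps S205 to S208, re-evaluating the mirror state until the changeover switch is pressed again; the function above only captures one pass through the decisions.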
- In the above, when the mode of the image processing system 120 is the back mode, that is, when the vehicle 9 moves backward, either the front door mirror mode M32 or the rear door mirror mode M33 is displayed depending on the open/closed state of the door mirror 93.
- However, when the image processing system 120 is in the front mode, that is, when the vehicle 9 moves forward, either the front door mirror mode M32 or the rear door mirror mode M33 may likewise be displayed according to the open/closed state of the door mirror 93.

<3. Modifications>

Although the embodiments of the present invention have been described above, the present invention is not limited to the above embodiments, and various modifications are possible. Such modifications are described below. All forms, including those described in the above embodiment and those described below, can be combined as appropriate.
- In the above embodiment, the image processing apparatus 100 and the navigation apparatus 20 are described as separate apparatuses; however, the image processing apparatus 100 and the navigation apparatus 20 may be arranged in the same casing and configured as an integrated apparatus.
- In the above embodiment, the display device that displays the images generated by the image processing device 100 is described as the navigation device 20, but it may be a general display device that does not have a special function such as a navigation function.
- Some or all of the functions of the control unit 1 of the image processing apparatus 100 may be realized by the control unit 23 of the navigation apparatus 20.
- Part or all of the signals described as being input to the control unit 1 of the image processing apparatus 100 via the signal input unit 41 may instead be input to the navigation apparatus 20. In this case, the signals may be input to the control unit 1 of the image processing apparatus 100 via the navigation communication unit 42.
- the direction instruction intended by the driver of the vehicle 9 is input from the direction indicator 83, but may be input by other means.
- the movement of the viewpoint of the driver may be detected from an image obtained by photographing the eyes of the driver, and a direction instruction intended by the driver may be input from the detection result.
Abstract
Description
(1): An image processing apparatus mounted on a vehicle, comprising:
image acquisition means for acquiring a plurality of camera images captured by a plurality of cameras provided on the vehicle;
composite image generation means for generating, based on the plurality of camera images, a plurality of composite images including a plurality of first overhead images in which the vehicle and its surroundings are viewed from mutually different viewpoints; and
display image providing means for outputting, to a display device mounted on the vehicle, information corresponding to a display image in which the plurality of composite images are reproduced continuously.
The plurality of composite images include a second overhead image in which the vehicle and its surroundings are viewed from directly above the vehicle,
and the display image is one in which the second overhead image is reproduced at least once while the plurality of first overhead images are being reproduced.
Each of the plurality of first overhead images shows the vehicle at its center.
The composite image generation means generates the plurality of composite images when the image processing apparatus starts up.
The composite image generation means, when generating the plurality of composite images, removes portions of the plurality of camera images whose brightness is lower than a reference value.
An image processing system comprising:
a plurality of cameras provided on a vehicle; and
an image processing apparatus mounted on the vehicle, the image processing apparatus including:
image acquisition means for acquiring a plurality of camera images captured by the plurality of cameras provided on the vehicle;
composite image generation means for generating, based on the plurality of camera images, a plurality of composite images including a plurality of overhead images in which the vehicle and its surroundings are viewed from mutually different viewpoints; and
display image providing means for outputting, to a display device mounted on the vehicle, information corresponding to a display image in which the plurality of composite images are reproduced continuously.
An image processing method comprising:
acquiring a plurality of camera images captured by a plurality of cameras provided on a vehicle;
generating, based on the plurality of camera images, a plurality of composite images including a plurality of overhead images in which the vehicle and its surroundings are viewed from mutually different viewpoints; and
outputting, to a display device mounted on the vehicle, information corresponding to a display image in which the plurality of composite images are reproduced continuously.
FIG. 1 is a block diagram showing the configuration of the image processing system 120. This image processing system 120 is mounted on a vehicle (in this embodiment, an automobile), and has a function of photographing the periphery of the vehicle to generate images and outputting the generated images to a display device such as the navigation device 20 in the vehicle cabin. By using this image processing system 120, a user of the image processing system 120 (typically the driver) can grasp the state of the periphery of the vehicle almost in real time.
Next, the photographing unit 5 of the image processing system 120 will be described in detail. The photographing unit 5 is electrically connected to the control unit 1 and operates based on signals from the control unit 1.
Next, the technique by which the composite image generation unit 31 of the image generation unit 3 generates a composite image showing the periphery of the vehicle 9 viewed from an arbitrary virtual viewpoint, based on the plurality of captured images obtained by the photographing unit 5, will be described. When generating a composite image, the vehicle type data 4a stored in advance in the nonvolatile memory 40 is used. FIG. 4 is a diagram for explaining the technique of generating the composite image.
Next, the operation modes of the image processing system 120 will be described. FIG. 5 is a diagram showing the transitions of the operation modes of the image processing system 120. The image processing system 120 has four operation modes: the navigation mode M0, the surrounding confirmation mode M1, the front mode M2, and the back mode M3. These operation modes are switched under the control of the control unit 1 according to the driver's operation and the traveling state of the vehicle 9.
First, the display mode of the periphery of the vehicle 9 in the surrounding confirmation mode M1 will be described. In the surrounding confirmation mode M1, as shown in FIG. 6, the virtual viewpoint VP is set so as to look down at the vehicle 9, and the virtual viewpoint VP is continuously moved so as to circle the periphery of the vehicle 9. After first being set behind the vehicle 9, the virtual viewpoint VP circles the periphery of the vehicle 9 clockwise. When the virtual viewpoint VP has thus moved back to the rear via the left side, the front, and the right side of the vehicle 9, it moves to directly above the vehicle 9.
Next, the display modes of the periphery of the vehicle 9 in the front mode M2 will be described in detail. FIG. 8 is a diagram showing the transitions of the display modes in the front mode M2. The front mode M2 has four display modes, the traveling bird's-eye view mode M21, the own vehicle confirmation mode M22, the side camera mode M23, and the navigation mode M24, which differ from one another in how images are presented. Of these screens, the screens of M21, M22, and M23 display a field-of-view guide 90 indicating the field-of-view range of each display mode, showing the user which region around the vehicle 9 is displayed. In the navigation mode M24, a map image of the area around the vehicle 9 is displayed, and the current position of the vehicle 9 is also displayed.
Next, the display modes of the periphery of the vehicle 9 in the back mode M3 will be described in detail. FIG. 9 is a diagram showing the transitions of the display modes in the back mode M3. The back mode M3 has three display modes, the parking bird's-eye view mode M31, the front door mirror mode M32, and the rear door mirror mode M33, which differ from one another in how images are presented. On these screens as well, a field-of-view guide 90 indicating the field-of-view range of each display mode is displayed, showing the user which region around the vehicle 9 is displayed.
<2-1. Operation in the surrounding confirmation mode>
Next, the processing flow of the control unit of the image processing system in the surrounding confirmation mode will be described with reference to FIG. 11. When the control unit 1 of the image processing apparatus 10 receives an ACC-ON signal from a vehicle power supply control device (not shown) via the signal input unit 41 (Yes in step S101), the control unit 1 starts initial communication with the navigation device 20 via the navigation communication unit 42 (step S102). Here, the initial communication refers to processing for confirming whether communication is possible between the control unit 1 of the image processing apparatus 10 and the navigation device 20. If the control unit 1 has not received the ACC-ON signal from the vehicle power supply control device (No in step S101), the process ends.
Next, the flow of processing for selecting the image range of the captured image according to the open/closed state of the door mirror 93 as described above and outputting the image information will be described. FIG. 12 is a diagram showing the processing flow of the control unit of the image processing system in the back mode. First, in order to determine whether the mode of the image processing system is the back mode, it is determined whether the operation position of the shift lever is the "R (reverse)" shift position (step S201).
<3. Modifications>
Although the embodiments of the present invention have been described above, the present invention is not limited to the above embodiments, and various modifications are possible. Such modifications are described below. All forms, including those described in the above embodiment and those described below, can be combined as appropriate.
Claims (8)
- An image processing apparatus mounted on a vehicle, comprising:
image acquisition means for acquiring a plurality of camera images captured by a plurality of cameras provided on the vehicle;
composite image generation means for generating, based on the plurality of camera images, a plurality of composite images including a plurality of first overhead images in which the vehicle and its surroundings are viewed from mutually different viewpoints; and
display image providing means for outputting, to a display device mounted on the vehicle, information corresponding to a display image in which the plurality of composite images are reproduced continuously. - The image processing apparatus according to claim 1, wherein
the plurality of composite images include a second overhead image in which the vehicle and its surroundings are viewed from directly above the vehicle, and
the display image is one in which the second overhead image is reproduced at least once while the plurality of first overhead images are being reproduced. - The image processing apparatus according to claim 1, wherein
each of the plurality of first overhead images shows the vehicle at its center. - The image processing apparatus according to claim 2, wherein
each of the plurality of first overhead images shows the vehicle at its center. - The image processing apparatus according to any one of claims 1 to 4, wherein
the composite image generation means generates the plurality of composite images when the image processing apparatus starts up. - The image processing apparatus according to any one of claims 1 to 4, wherein
the composite image generation means, when generating the plurality of composite images, removes portions of the plurality of camera images whose brightness is lower than a reference value. - An image processing system comprising:
a plurality of cameras provided on a vehicle; and
an image processing apparatus mounted on the vehicle, the image processing apparatus including:
image acquisition means for acquiring a plurality of camera images captured by the plurality of cameras provided on the vehicle;
composite image generation means for generating, based on the plurality of camera images, a plurality of composite images including a plurality of overhead images in which the vehicle and its surroundings are viewed from mutually different viewpoints; and
display image providing means for outputting, to a display device mounted on the vehicle, information corresponding to a display image in which the plurality of composite images are reproduced continuously. - An image processing method comprising:
acquiring a plurality of camera images captured by a plurality of cameras provided on a vehicle;
generating, based on the plurality of camera images, a plurality of composite images including a plurality of overhead images in which the vehicle and its surroundings are viewed from mutually different viewpoints; and
outputting, to a display device mounted on the vehicle, information corresponding to a display image in which the plurality of composite images are reproduced continuously.
Priority Applications (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US13/517,118 US9706175B2 (en) | 2009-12-24 | 2010-12-22 | Image processing device, image processing system, and image processing method |
CN2010800590899A CN102934427A (zh) | 2009-12-24 | 2010-12-22 | Image processing device, image processing system and image processing method |
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
JP2009-291878 | 2009-12-24 | ||
JP2009291878A JP2011135253A (ja) | 2009-12-24 | 2009-12-24 | Image processing device, image processing system, and image processing method |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2011078201A1 true WO2011078201A1 (ja) | 2011-06-30 |
Family
ID=44195728
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/JP2010/073078 WO2011078201A1 (ja) | Image processing device, image processing system, and image processing method | 2009-12-24 | 2010-12-22 |
Country Status (4)
Country | Link |
---|---|
US (1) | US9706175B2 (ja) |
JP (1) | JP2011135253A (ja) |
CN (1) | CN102934427A (ja) |
WO (1) | WO2011078201A1 (ja) |
Cited By (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP2013212723A (ja) * | 2012-03-30 | 2013-10-17 | Fujitsu Ten Ltd | Parking assist device and parking assist method |
CN107346237A (zh) * | 2016-05-04 | 2017-11-14 | 北京京东尚科信息技术有限公司 | Multi-grid image processing method and device based on the iOS system |
US10240323B2 (en) | 2014-04-25 | 2019-03-26 | Komatsu Ltd. | Surroundings monitoring system, work vehicle, and surroundings monitoring method |
Families Citing this family (34)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP5858650B2 (ja) * | 2011-06-08 | 2016-02-10 | 富士通テン株式会社 | Image generating device, image display system, and image generating method |
CN104041018A (zh) * | 2012-01-12 | 2014-09-10 | 日立建机株式会社 | Surroundings monitoring device for self-propelled industrial machine |
US9591331B2 (en) * | 2012-03-28 | 2017-03-07 | Qualcomm Incorporated | Merge signaling and loop filter on/off signaling |
JP5529943B2 (ja) * | 2012-09-21 | 2014-06-25 | 株式会社小松製作所 | Periphery monitoring system for work vehicle, and work vehicle |
JP5629740B2 (ja) * | 2012-09-21 | 2014-11-26 | 株式会社小松製作所 | Periphery monitoring system for work vehicle, and work vehicle |
JP6105244B2 (ja) * | 2012-09-28 | 2017-03-29 | 富士通テン株式会社 | Image processing device and image processing system |
TWI535587B (zh) | 2012-11-14 | 2016-06-01 | 義晶科技股份有限公司 | Method for controlling display of vehicle images by means of a touch panel, and vehicle image system thereof |
JP6084097B2 (ja) * | 2013-03-29 | 2017-02-22 | 富士通テン株式会社 | Image generating device, image display system, and image generating method |
US9674490B2 (en) * | 2013-04-18 | 2017-06-06 | Magna Electronics Inc. | Vision system for vehicle with adjustable cameras |
JP2014232465A (ja) * | 2013-05-30 | 2014-12-11 | 富士通テン株式会社 | Image processing device and image processing method |
DE112014004305B4 (de) | 2013-09-19 | 2020-09-10 | Fujitsu Ten Limited | Image generating device, image display system, image generating method, and image display method |
US9598012B2 (en) * | 2014-03-11 | 2017-03-21 | Toyota Motor Engineering & Manufacturing North America, Inc. | Surroundings monitoring system for a vehicle |
DE102014107235A1 (de) * | 2014-05-22 | 2015-11-26 | Dr. Ing. H.C. F. Porsche Aktiengesellschaft | Method for displaying a vehicle environment on a display device; a display device; a system comprising a plurality of image capturing units and a display device; a computer program |
KR102233391B1 (ko) * | 2014-06-16 | 2021-03-29 | 팅크웨어(주) | Electronic device, control method of electronic device, and computer-readable recording medium |
KR101530826B1 (ko) * | 2014-08-18 | 2015-06-24 | 정관호 | Method and system for playing back 360-degree spatial video |
JP6548900B2 (ja) | 2015-01-20 | 2019-07-24 | 株式会社デンソーテン | Image generating device, image generating method, and program |
JP6261542B2 (ja) * | 2015-06-10 | 2018-01-17 | 株式会社デンソーテン | Image processing device and image processing method |
JP2017069846A (ja) * | 2015-09-30 | 2017-04-06 | アイシン精機株式会社 | Display control device |
JP6477562B2 (ja) * | 2016-03-18 | 2019-03-06 | 株式会社デンソー | Information processing device |
JP6370833B2 (ja) * | 2016-05-16 | 2018-08-08 | 株式会社デンソーテン | Image generating device, image display system, and image generating method |
JP6699370B2 (ja) * | 2016-06-06 | 2020-05-27 | アイシン精機株式会社 | Vehicle image processing device |
WO2018030285A1 (ja) * | 2016-08-08 | 2018-02-15 | 株式会社小糸製作所 | Vehicle monitoring system using a plurality of cameras |
WO2018037789A1 (ja) * | 2016-08-22 | 2018-03-01 | ソニー株式会社 | Image processing device, image processing method, and program |
JP6730612B2 (ja) * | 2017-02-27 | 2020-07-29 | 株式会社Jvcケンウッド | Vehicle display control device, vehicle display control system, vehicle display control method, and program |
JP6658642B2 (ja) | 2017-03-24 | 2020-03-04 | トヨタ自動車株式会社 | Vehicle visual recognition device |
JP6658643B2 (ja) | 2017-03-24 | 2020-03-04 | トヨタ自動車株式会社 | Vehicle visual recognition device |
EP3392801A1 (en) * | 2017-04-21 | 2018-10-24 | Harman International Industries, Incorporated | Systems and methods for driver assistance |
KR102310379B1 (ko) * | 2017-06-09 | 2021-10-12 | 현대자동차주식회사 | Driving information guidance device and method, and vehicle system |
US10596970B2 (en) * | 2017-08-25 | 2020-03-24 | Panasonic Automotive Systems Company Of America, Division Of Panasonic Corporation Of North America | Auto-switch display intelligent rearview mirror system |
KR102441079B1 (ko) * | 2017-11-30 | 2022-09-06 | 현대자동차주식회사 | Vehicle display control device and method |
DE102018100211A1 (de) * | 2018-01-08 | 2019-07-11 | Connaught Electronics Ltd. | Method for generating a representation of an environment by shifting a virtual camera towards an interior mirror of a vehicle; and camera device |
JP7000980B2 (ja) * | 2018-05-08 | 2022-01-19 | トヨタ自動車株式会社 | Parking assist device |
DE102018119481A1 (de) * | 2018-08-10 | 2020-02-13 | Connaught Electronics Ltd. | Method for providing an image representation of at least part of an environment of a motor vehicle, computer program product, and driver assistance system |
US20230104858A1 (en) * | 2020-03-19 | 2023-04-06 | Nec Corporation | Image generation apparatus, image generation method, and non-transitory computer-readable medium |
Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP2003267171A (ja) * | 2002-03-13 | 2003-09-25 | Nissan Motor Co Ltd | Vehicle rear monitoring device |
JP2005236493A (ja) * | 2004-02-18 | 2005-09-02 | Nissan Motor Co Ltd | Driving support device |
JP2006041741A (ja) * | 2004-07-23 | 2006-02-09 | Nikon Corp | Camera system |
JP2009253460A (ja) * | 2008-04-02 | 2009-10-29 | Denso Corp | Parking assist device |
WO2009144994A1 (ja) * | 2008-05-29 | 2009-12-03 | 富士通株式会社 | Vehicle image processing device and vehicle image processing method |
Family Cites Families (9)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP3300334B2 (ja) | 1999-04-16 | 2002-07-08 | 松下電器産業株式会社 | Image processing device and monitoring system |
WO2000064175A1 (fr) * | 1999-04-16 | 2000-10-26 | Matsushita Electric Industrial Co., Ltd. | Image processing device and monitoring system |
US7266219B2 (en) * | 2000-07-19 | 2007-09-04 | Matsushita Electric Industrial Co., Ltd. | Monitoring system |
JP4432801B2 (ja) * | 2005-03-02 | 2010-03-17 | 株式会社デンソー | Driving support device |
JP2006341641A (ja) * | 2005-06-07 | 2006-12-21 | Nissan Motor Co Ltd | Video display device and video display method |
US20080136911A1 (en) * | 2006-12-08 | 2008-06-12 | Ford Global Technologies, Llc | Display system for a vehicle |
JP5047650B2 (ja) * | 2007-03-06 | 2012-10-10 | クラリオン株式会社 | In-vehicle camera system |
US20090096937A1 (en) * | 2007-08-16 | 2009-04-16 | Bauer Frederick T | Vehicle Rearview Assembly Including a Display for Displaying Video Captured by a Camera and User Instructions |
JP4458138B2 (ja) | 2007-09-18 | 2010-04-28 | 株式会社デンソー | Vehicle periphery monitoring device |
2009
- 2009-12-24 JP JP2009291878A patent/JP2011135253A/ja not_active Withdrawn
2010
- 2010-12-22 CN CN2010800590899A patent/CN102934427A/zh active Pending
- 2010-12-22 US US13/517,118 patent/US9706175B2/en active Active
- 2010-12-22 WO PCT/JP2010/073078 patent/WO2011078201A1/ja active Application Filing
Also Published As
Publication number | Publication date |
---|---|
US9706175B2 (en) | 2017-07-11 |
JP2011135253A (ja) | 2011-07-07 |
CN102934427A (zh) | 2013-02-13 |
US20120257058A1 (en) | 2012-10-11 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
WO2011078201A1 (ja) | Image processing device, image processing system, and image processing method | |
JP5302227B2 (ja) | Image processing device, image processing system, and image processing method | |
WO2011078183A1 (ja) | Image processing device, image processing system, and image processing method | |
JP5087051B2 (ja) | Image generating device and image display system | |
JP5627253B2 (ja) | Image processing device, electronic device, and image processing method | |
JP5503259B2 (ja) | In-vehicle lighting device, image processing device, and image display system | |
JP5271154B2 (ja) | Image generating device and image display system | |
JP5914114B2 (ja) | Parking assist device and parking assist method | |
WO2010137684A1 (ja) | Image generating device and image display system | |
JP5697512B2 (ja) | Image generating device, image display system, and image display device | |
JP5658507B2 (ja) | Image display system, image generating device, and image display method | |
WO2002089485A1 (fr) | Method and device for presenting an image from a camera mounted on a vehicle | |
JP2005110202A (ja) | Camera device and vehicle periphery monitoring device | |
JP2005110207A (ja) | Camera device and vehicle periphery monitoring device | |
JP2011193485A (ja) | Camera device mounted on a vehicle and vehicle periphery monitoring device | |
JP5479639B2 (ja) | Image processing device, image processing system, and image processing method | |
JP2012046124A (ja) | Image display system, image processing device, and image display method | |
JP6118936B2 (ja) | Image processing device | |
JP2012054664A (ja) | Image processing device, image display system, and image display method | |
JP5677168B2 (ja) | Image display system, image generating device, and image generating method | |
JP5466743B2 (ja) | Image generating device and image display system |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
WWE | Wipo information: entry into national phase |
Ref document number: 201080059089.9 Country of ref document: CN |
|
121 | Ep: the epo has been informed by wipo that ep was designated in this application |
Ref document number: 10839431 Country of ref document: EP Kind code of ref document: A1 |
|
WWE | Wipo information: entry into national phase |
Ref document number: 13517118 Country of ref document: US |
|
NENP | Non-entry into the national phase |
Ref country code: DE |
|
122 | Ep: pct application non-entry in european phase |
Ref document number: 10839431 Country of ref document: EP Kind code of ref document: A1 |