WO2000007373A1 - Method and apparatus for displaying images (Procédé et appareil d'affichage d'images)
Method and apparatus for displaying images
- Publication number: WO2000007373A1 (PCT/JP1999/004061)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- camera
- image
- vehicle
- viewpoint
- correction
- Prior art date
Classifications
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T15/00—3D [Three Dimensional] image rendering
- G06T15/10—Geometric effects
- G06T15/20—Perspective computation
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T15/00—3D [Three Dimensional] image rendering
- G06T15/10—Geometric effects
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V20/00—Scenes; Scene-specific elements
- G06V20/50—Context or environment of the image
- G06V20/56—Context or environment of the image exterior to a vehicle by using sensors mounted on the vehicle
Definitions
- Image display device and image display method
- The present invention does not display a plurality of images captured by several cameras independently of one another, but rather displays the entire area photographed by those cameras in an intuitively understandable form.
- The present invention relates to a device and a method for displaying such a combined image, for example a monitoring device in a store, or a monitoring device for the surroundings of a vehicle that serves as an aid for confirming safety when driving.
Background Art
- A conventional general surveillance camera device has a configuration in which the area to be monitored is photographed with one or several cameras in a store or the like, and the images are displayed on monitor screens.
- Monitor screens are usually provided in proportion to the number of cameras; when as many monitor screens as cameras cannot be provided, a method of integrating the several camera images into one image with a dividing device or the like and displaying it, or of sequentially switching among the camera images, is used.
- These conventional devices have problems such as requiring the administrator to mentally reconstruct the continuity between independently displayed images in order to monitor the images from the respective cameras.
- As a monitoring device that solves this continuity problem by displaying an integrated image by the above means, there is, for example, the device of Japanese Patent Application Laid-Open No. H10-164656.
- Another application of the monitoring device is installation in a vehicle.
- The following is a conventional example: a monitoring device in which cameras for monitoring the surroundings of the vehicle are installed, and the images acquired by the cameras are displayed on a monitor television installed near the driver's seat.
- Fig. 69 shows a conventional example in which a surveillance camera is installed in a vehicle.
- Images from four monitoring cameras (C1 to C4) attached to the vehicle body are combined into a single image via a split adapter, and the image is displayed on a monitor TV in four divided regions (D1 to D4).
- various measures have been taken, such as displaying an image that has been flipped horizontally.
- each camera can be rotated manually by the driver to obtain an image of the desired place.
- As an example of the monitoring device described above, there is the device disclosed in Japanese Patent Application Laid-Open No. 5-31078.
- A wide-angle lens is generally used as a means of solving the problem of a narrow field of view. Images from a wide-angle lens do not show the details of a specific part, but the wider field of view makes it easier to grasp the overall situation around the vehicle.
- FIG. 70 is a block diagram showing an embodiment of the conventional vehicle periphery monitoring device.
- The images from cameras 1 to N (2201) input to the image conversion unit 2202 are converted into other coordinates, combined into one image by the image display unit 2203, and then displayed on the TV monitor 2204 installed at the driver's seat.
- It is also possible to shift the position of the host vehicle away from the center of the screen according to signals corresponding to the gear position, vehicle speed, and turn-signal operation, so that the area around the vehicle to be viewed can be widened.
- In Japanese Patent Application Laid-Open No. 7-168633, when presenting the surrounding situation to the driver, the road surface and the other parts are distinguished in advance; the road surface part is presented by coordinate transformation as observed from a viewpoint above the vehicle center pointing downward, while in the area outside the road surface the image from the camera is superimposed directly on the converted image at an appropriate location and in an appropriate size. This accurately conveys the situation of obstacles around the vehicle, especially other vehicles approaching from behind.
- The present invention solves such problems. Its purpose is, for a device installed in a vehicle, to combine the camera images into a single image that makes it easy to understand, as realistically as possible, what kind of objects exist around the entire periphery of the vehicle, and to display that image to the driver.
- The present invention also provides a method for easily obtaining the camera parameters, such as the camera mounting position and mounting angle, required for the synthesis, and further provides devices and methods for detecting and correcting deviations of the camera parameters caused by vibration or temperature during driving.
Disclosure of the Invention
- The present invention comprises: one or more cameras; a space reconstruction unit that maps the input images from the cameras onto a predetermined space model of a predetermined three-dimensional space; a viewpoint conversion unit that creates an image viewed from an arbitrary virtual viewpoint in the predetermined three-dimensional space with reference to the spatial data mapped by the space reconstruction unit; and a display unit that displays the image converted by the viewpoint conversion unit.
- The image generation method of the present invention comprises a spatial reconstruction step of mapping an input image from a camera existing in a predetermined three-dimensional space onto a predetermined space model in that three-dimensional space to generate spatial data, and a viewpoint conversion step of creating an image viewed from an arbitrary virtual viewpoint in the three-dimensional space with reference to the spatial data.
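To make the two steps concrete, the following is a minimal end-to-end sketch in Python/NumPy. It assumes the simplest possible space model, the flat road plane Y = 0, and an orthographic stand-in for a virtual camera looking straight down; all function and parameter names (and the example numbers) are hypothetical illustrations, not terms from the patent.

```python
import numpy as np

def rot(alpha, beta):
    """World-to-camera rotation, assumed here to be Rx(beta) @ Ry(alpha)."""
    ca, sa, cb, sb = np.cos(alpha), np.sin(alpha), np.cos(beta), np.sin(beta)
    ry = np.array([[ca, 0.0, sa], [0.0, 1.0, 0.0], [-sa, 0.0, ca]])
    rx = np.array([[1.0, 0.0, 0.0], [0.0, cb, -sb], [0.0, sb, cb]])
    return rx @ ry

def reconstruct(image, t, alpha, beta, f):
    """Spatial reconstruction step: map every pixel onto the road plane Y = 0,
    yielding 'spatial data' as (world point, color) pairs."""
    h, w, _ = image.shape
    back = rot(alpha, beta).T                 # camera-to-world rotation
    data = []
    for v in range(h):
        for u in range(w):
            d = back @ np.array([u - w / 2.0, v - h / 2.0, f])  # viewing ray
            if d[1] < -1e-9:                  # ray heads down toward the road
                s = -t[1] / d[1]
                data.append((t + s * d, image[v, u]))
    return data

def view_from_above(spatial_data, size=256, metres_per_px=0.05):
    """Viewpoint conversion step: render the spatial data as seen from above."""
    out = np.zeros((size, size, 3), dtype=np.uint8)
    for p, color in spatial_data:
        ix = int(size / 2 + p[0] / metres_per_px)
        iz = int(size / 2 + p[2] / metres_per_px)
        if 0 <= ix < size and 0 <= iz < size:
            out[iz, ix] = color
    return out

# Example: one camera 1.2 m up, pitched toward the road (all values invented).
cam_image = np.full((120, 160, 3), 128, dtype=np.uint8)
data = reconstruct(cam_image, t=np.array([0.0, 1.2, 0.0]),
                   alpha=0.0, beta=np.radians(-40), f=160.0)
top = view_from_above(data)
```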
- the device of the present invention has the following configuration.
- The basic configuration of the present invention comprises: image input means for inputting images from one or a plurality of cameras installed in a vehicle; a camera parameter table storing camera parameters indicating the characteristics of the cameras; space model creation means for creating a space model in a coordinate system based on the vehicle; mapping means for mapping the images input from the cameras onto the space model; viewpoint conversion means for setting a viewpoint and composing a single image viewed from that viewpoint from the data created by the mapping means; and display means for displaying the image converted by the viewpoint conversion means.
- A first application configuration of the monitoring device according to the present invention comprises a distance sensor for measuring distance, and obstacle detection means for measuring, as part of the situation around the vehicle, at least the distance to obstacles existing around the vehicle using the distance sensor.
- A space model that is appropriately set in advance, or a space model set according to the distance to obstacles around the vehicle detected by the obstacle detection means, is created by the space model creation means.
- The images of the surroundings of the vehicle, input by the image input means from the cameras installed on the vehicle, are mapped onto the space model by the mapping means.
- A single image viewed from the viewpoint determined by the viewpoint conversion means is synthesized from the mapped images and displayed on the display means. At this time, the occupant of the vehicle can have an image displayed from a desired viewpoint.
- A further basic configuration of the present invention comprises image input means for inputting images from one or a plurality of cameras installed in a vehicle, a camera parameter table storing camera parameters indicating the characteristics of the cameras,
- and display means for displaying the image converted by the viewpoint conversion means.
- A first application configuration of the present invention includes: moving direction detection means for detecting the moving direction of the vehicle; and moving distance detection means for detecting the moving distance of the vehicle per unit time.
- The current position of a feature on the road surface is calculated using the processing results of the moving direction detection means and the moving distance detection means, and the space model is sequentially corrected based on the calculated current position of the vehicle.
- The second application configuration is characterized by feature correction means for correcting the processing result of the road surface feature detection means while displaying that result on the display means.
- Features on the road surface, such as white lines, are detected by the road surface feature detection means, and a space model is created by the space model creation means in accordance with the detected features.
- The images around the vehicle input from the cameras installed in the vehicle by the image input means are mapped onto the space model by the mapping means. Subsequently, a single image viewed from the viewpoint determined by the viewpoint conversion means is synthesized from the mapped images and displayed on the display means.
- When the detected features change as the vehicle moves, the space model is corrected according to the change, and an image is synthesized and displayed using the corrected space model.
- Further, the present invention comprises: image input means for inputting images from one or a plurality of cameras installed in a vehicle; a camera parameter table; mapping means for mapping the images input from the cameras onto a space model that models the situation around the vehicle; viewpoint conversion means for synthesizing a single image viewed from a desired virtual viewpoint from the data created by the mapping means; camera parameter correction means for correcting the parameters of each camera independently; and display means for displaying the image converted by the viewpoint conversion means.
- The viewpoint conversion means switches the virtual viewpoint between processing by the camera parameter correction means and normal operation.
- During processing by the camera parameter correction means, the viewpoint conversion means matches the virtual viewpoint to the camera parameters of one of the on-vehicle cameras.
- During processing by the camera parameter correction means, the viewpoint conversion means matches the virtual viewpoint to the pre-correction camera parameters of the camera being corrected.
- The relationship between the direction of the operation for changing the direction of the viewpoint and the direction in which the viewpoint actually changes is set to be opposite to each other.
- When displaying the images from the cameras, the display means superimposes a mark indicating the boundary on the composite image at the boundary portions where adjacent camera images touch.
- Further, the present invention comprises: image input means for inputting images from one or a plurality of cameras installed in a vehicle; a camera parameter table; mapping means for mapping the images input from the cameras onto a space model that models the situation around the vehicle; a viewpoint parameter table storing viewpoint parameters including at least a position and an orientation; viewpoint conversion means for synthesizing an image viewed from a desired virtual viewpoint using the result of the mapping processing by the mapping means; viewpoint parameter correction means for correcting the parameters of the virtual viewpoint; and display means for joining and displaying the images converted by the viewpoint conversion means.
- Each set of viewpoint parameters is stored in the viewpoint parameter table in such a way that it is paired with one of the cameras installed in the vehicle.
- In the viewpoint parameter correction means, an operation for changing the virtual viewpoint parameters is performed.
- In the viewpoint parameter correction means, a fixed temporary virtual viewpoint is provided when the viewpoint parameters are corrected, and the progress of the correction of the virtual viewpoint being corrected is sequentially synthesized and displayed as an image from that temporary virtual viewpoint.
- Likewise, when displaying the images from the cameras, the display means superimposes a mark indicating the boundary on the composite image at the boundary portions where adjacent camera images touch.
- A mapping table that holds the correspondence between pixels of the camera input images and pixels of the composite image is provided, and the correspondence obtained by the processing of the mapping means and the viewpoint conversion means is stored in that table.
- Mapping table correction means recalculates the mapping table using the viewpoint parameters changed by the processing of the viewpoint parameter correction means; a sketch of how such a table is replayed per frame is given below.
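Such a mapping table makes per-frame synthesis a pure table lookup; geometry is only recomputed when the viewpoint parameters change. A minimal sketch, in which the table layout (camera index plus source pixel per output pixel) is an assumed representation, not one prescribed by the patent:

```python
import numpy as np

def compose_from_table(mapping_table, camera_images):
    """Build one composite frame by lookup in a precomputed mapping table.

    mapping_table: int array (H, W, 3); each entry is (camera index, source
                   row, source column), with camera index -1 for unmapped pixels.
    camera_images: list of (h, w, 3) uint8 arrays, one per camera.
    """
    h, w, _ = mapping_table.shape
    out = np.zeros((h, w, 3), dtype=np.uint8)
    for y in range(h):
        for x in range(w):
            cam, sv, su = mapping_table[y, x]
            if cam >= 0:
                out[y, x] = camera_images[cam][sv, su]
    return out

# When the viewpoint parameter correction means changes the virtual viewpoint,
# only mapping_table is recalculated; compose_from_table itself is unchanged.
```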
- FIG. 2 is a block diagram showing an example of a basic configuration of an image generation device according to the present invention (claim 1).
- FIG. 4 is a block diagram illustrating a configuration example of an image generation device according to the present invention (claim 14).
- FIG. 5 is a block diagram illustrating a configuration example of an image generation device according to the present invention (claim 17).
- FIG. 6 is a block diagram showing a configuration example of an image generation device according to the present invention (claim 18).
- Block diagram showing an image generation device integrating FIG. 1 to FIG. 5
- FIG. 10 is a diagram showing data stored in the camera parameter table 103 in a table format.
- FIG. 16 is a conceptual diagram showing how an image taken from the virtual camera is synthesized by the viewpoint conversion means 106 according to the present invention, using the images taken by the on-vehicle camera 1 and the on-vehicle camera 2 in FIG.
- (b) As an example of the virtual camera installation location, a conceptual diagram showing a case where the camera is installed approximately diagonally in front of the vehicle, facing the car
- FIG. 5 is a flowchart showing a procedure for updating the camera parameter table 103 according to the temperature in the calibration means 102.
- FIG. 24 is a flowchart showing a processing procedure in the spatial data conversion means 114.
- FIG. 27 is a flowchart showing the overall processing flow of the image generation apparatus according to the present invention.
- FIG. 3 is a flowchart showing the overall processing flow of the image generating apparatus according to the present invention.
- FIG. 36 Flowchart showing the flow of feature point extraction processing
- FIG. 40 is a conceptual diagram showing a state in which the feature correction processing is displayed on the display unit 107B.
- FIG. 4 is a flowchart showing the overall processing flow of the vehicle periphery monitoring device according to the present invention.
- FIG. 4 is a block diagram showing a configuration example of an image generation device according to the present invention (claim 45).
- FIG. 50 is a block diagram illustrating a configuration example of a technology related to the image generation device according to the present invention.
- Block diagram showing a basic configuration example of an image generation device according to the present invention (claim 51).
- Block diagram showing a configuration example of an image generation device according to the present invention (claim 55).
- Block diagram showing a configuration example of a technique related to the image generation device of the present invention (FIG. 65)
- Block diagram showing a configuration example of a conventional vehicle surrounding monitoring device
- Reference numerals: camera parameter table; space model creation means; mapping means
- The basic configuration of the present invention comprises: one or a plurality of cameras; a camera parameter table storing camera parameters indicating the characteristics of the cameras; spatial reconstruction means for creating spatial data by mapping the input images from the cameras onto a space model of a three-dimensional space based on the camera parameters; a spatial data buffer for temporarily storing the spatial data created by the spatial reconstruction means; viewpoint conversion means for creating an image viewed from an arbitrary viewpoint with reference to the spatial data; and display means for displaying the image converted by the viewpoint conversion means.
- A first application configuration of the image generating apparatus according to the present invention is characterized by calibration means for obtaining the camera parameters indicating the camera characteristics by input or calculation.
- A second application configuration of the image generating apparatus includes feature point generation means for generating a plurality of points whose three-dimensional coordinates in the field of view of a camera can be identified, and feature point extraction means for extracting those feature points.
- a third application configuration of the image generation device according to the present invention is characterized by including a temperature sensor and a temperature correction table.
- A fourth application configuration of the image generating apparatus includes moving direction detection means for detecting the moving direction of the vehicle, moving distance detection means for detecting the moving distance of the vehicle per unit time, and spatial data conversion means for converting the spatial data stored in the spatial data buffer using the moving direction and moving distance of the vehicle.
- A fifth application configuration of the image generating apparatus includes camera correction instruction means for instructing the driver to perform camera calibration when a situation requiring calibration is detected, and correction history recording means for recording the date and time of each camera calibration and the traveling distance at that time.
- The image generation method of the present invention includes a spatial reconstruction step of creating spatial data in which each pixel constituting an input image from a camera is associated with a point in a three-dimensional space based on camera parameters indicating the camera characteristics, and a viewpoint conversion step of creating an image viewed from an arbitrary viewpoint with reference to the spatial data.
- A first application configuration of the image generation method includes a calibration step of obtaining the camera parameters indicating the camera characteristics by input or calculation and, if necessary, correcting the camera parameters according to temperature.
- A second application configuration of the image generation method includes a feature point extraction step of extracting the plurality of feature points required for calculating the camera parameters in the calibration step.
- A third application configuration of the image generation method includes a feature point generation step of generating a plurality of points whose three-dimensional coordinates in the field of view of a camera can be identified.
- A fourth application configuration of the image generation method includes a moving direction detection step of detecting the moving direction of the vehicle, a moving distance detection step of detecting the moving distance of the vehicle per unit time, and a spatial data conversion step of converting the spatial data using the moving direction detected by the moving direction detection step and the moving distance detected by the moving distance detection step.
- A fifth application configuration of the image generation method includes a camera correction instruction step of detecting a situation in which camera calibration is required and instructing the driver to calibrate when it is, and a correction history recording step of recording the date and time of the camera calibration and the traveling distance.
- The fields of view of the plurality of cameras are integrated by the above steps and combined into a single image.
- The spatial reconstruction means calculates the correspondence between each pixel constituting the image obtained from a camera and a point in the three-dimensional coordinate system, and creates spatial data. The calculation is performed for all pixels of the images obtained from each camera.
- A plurality of points whose three-dimensional coordinates around the vehicle body or the like can be identified are generated by the feature point generation means, and by extracting those feature points with the feature point extraction means, the camera parameters indicating the characteristics of each camera are obtained automatically.
- Lens distortion, which changes subtly as the temperature rises or falls, is corrected so that the lens parameters are always kept optimal.
- As an application example to a vehicle, the image generating apparatus provides a method of viewing an image of a place that is in the blind spot of the cameras. That is, the moving direction and moving distance of the vehicle are detected, and a previously acquired image is converted into an image viewed from the current position using a formula derived from the detection results. Specifically, if the image of a place that was previously visible but is not currently visible is stored in the spatial data buffer as spatial data, it is supplemented by conversion with the spatial data conversion means.
- A situation in which correction of the camera parameters indicating the camera characteristics, that is, camera calibration, must be performed is detected, and the driver is instructed accordingly.
- FIG. 1 is a block diagram showing an example of a basic configuration of an image generating apparatus according to the present invention (claim 1).
- As a basic configuration example, the image generation apparatus includes: a plurality of cameras 101 attached so as to grasp the status of the monitored area; a camera parameter table 103 storing camera parameters indicating the characteristics of the cameras; spatial reconstruction means 104 for creating spatial data by mapping the input images from the cameras onto a space model of a three-dimensional space based on the camera parameters; a spatial data buffer 105 for temporarily storing the spatial data created by the spatial reconstruction means 104; viewpoint conversion means 106 for creating an image viewed from an arbitrary viewpoint with reference to the spatial data; and display means 107 for displaying the image converted by the viewpoint conversion means 106.
- FIG. 2 is a block diagram showing an example of the configuration of an image generating apparatus in which the present invention described in claims 8, 9, and 12 is combined.
- In addition to the image generation apparatus shown in FIG. 1, this configuration provides: calibration means 102 for obtaining, by input or calculation, the camera mounting position, camera mounting angle, camera lens distortion correction value, camera lens focal length, and the like; feature point generation means 109 for generating a plurality of points whose three-dimensional coordinates in the field of view of the camera can be identified; and feature point extraction means 108 for extracting those feature points. This makes it possible to easily obtain the camera parameters that indicate the characteristics of each camera.
- FIG. 3 is a block diagram showing a configuration example of an image generation device according to the present invention (an example of claim 14).
- A temperature sensor 110 and a temperature correction table 111 are added to the image generation device shown in FIG. 1, so that lens distortion that changes subtly as the temperature rises and falls can be corrected and the lens parameters kept optimal at all times. The method of correcting lens distortion due to temperature in the calibration means 102 will be described later in detail.
- FIG. 4 is a block diagram showing a configuration example of an image generating apparatus according to the present invention (an example of claim 17).
- FIG. 4 shows an example of the configuration of an image generation device as an example of application to a vehicle.
- In the present invention, this can be compensated for by conversion of the spatial data by the spatial data conversion means 114 constituting the invention. Details of the compensation method will be described later.
- FIG. 5 is a block diagram showing a configuration example of an image generation device as an application example of the present invention (an example of claim 18) to a vehicle.
- When a situation requiring camera calibration is detected, the driver is instructed to perform camera calibration.
- Camera correction instruction means 116, and correction history recording means 115 for recording the date and time when camera calibration was performed and the traveling distance, are added.
- FIG. 6 is a block diagram showing an image generation device that integrates FIGS. 1 to 5 into a single configuration, so that the effects obtainable with each configuration can be used together.
- an operation example of the image generating apparatus according to the present invention will be described using the configuration example of FIG. Next, each component constituting the present invention will be described in detail.
- The camera is a TV camera that captures images of the space to be monitored, such as the situation around the vehicle. Such a camera usually has a wide angle of view so that a wide field of view can be obtained.
- FIG. 7 is a conceptual diagram showing an example of attaching a camera to a vehicle.
- FIG. 7 shows an example in which six cameras are installed on the roof of the vehicle so that the surroundings of the vehicle can be looked over. As shown in this example, if the cameras are mounted on the sides of the roof or on the boundary between the roof and the rear, a wide field of view can be covered with a small number of cameras.
- the calibration means 102 performs camera calibration.
- Camera calibration means determining and correcting, for a camera placed in the three-dimensional real world, the camera parameters that represent its characteristics, such as the mounting position and mounting angle in the three-dimensional real world, the lens distortion correction value, and the lens focal length.
- The camera parameter table 103 is a table for storing the camera parameters obtained by the calibration means 102 (details of the processing will be described later).
- As a preparation for the detailed description of the camera parameter table 103, a three-dimensional spatial coordinate system is defined.
- FIG. 7 described above is a conceptual diagram showing a state in which a camera is installed on a vehicle.
- FIG. 7 shows a three-dimensional spatial coordinate system centered on the vehicle.
- Fig. 7 as an example of a three-dimensional spatial coordinate system,
- X-axis: a straight line on the road surface directly below the rear surface of the vehicle and parallel to that surface.
- Y-axis: the axis extending vertically from the road surface at the center of the rear surface of the vehicle.
- Z-axis: a straight line on the road surface passing through the center of the rear surface of the vehicle and perpendicular to that surface.
- Hereinafter, "world coordinate system" or simply "three-dimensional space" refers to the three-dimensional spatial coordinate system according to this definition.
- FIG. 9 shows the data stored in the camera parameter table 103 in a table format.
- The contents shown in FIG. 9 are, from the left column of the table, as follows. In this table, the items in the second to ninth columns are examples of camera parameters.
- The parameters of camera 1 in FIG. 7 are described in the second row of the camera parameter table 103 in FIG. 9: camera 1 is located at the coordinates (x1, y1, 0), its orientation makes an angle of 45 degrees with the Y-Z plane and an angle of 130 degrees with the X-Z plane, its focal length is f, and its lens distortion coefficients κ1 and κ2 are both 0.
- The parameters of the virtual camera are described in the eighth row of the camera parameter table 103 in FIG. 9: the virtual camera is located at the coordinates (0, y1, 0), its orientation makes an angle of 0 degrees with the Y-Z plane and an angle of 120 degrees with the X-Z plane, its focal length is f, and its lens distortion coefficients κ1 and κ2 are both 0.
- This virtual camera is a concept introduced by the present invention. A conventional image generating apparatus can display only images obtained from actually installed cameras, but the image generating apparatus according to the present invention, with the space reconstruction means 104 and the viewpoint conversion means 106 described in detail later, can freely install a virtual camera and obtain the image from that virtual camera by calculation. The calculation method will be described later in detail.
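The rows of the camera parameter table just described can be held as simple records. The following sketch mirrors the columns of FIG. 9; the field names and the numeric stand-ins for x1, y1, and f are hypothetical:

```python
from dataclasses import dataclass

@dataclass
class CameraParams:
    """One row of the camera parameter table 103 (illustrative layout)."""
    name: str            # e.g. "camera 1" or "virtual camera"
    x: float             # mounting position in world coordinates
    y: float
    z: float
    angle_yz: float      # angle with respect to the Y-Z plane, degrees
    angle_xz: float      # angle with respect to the X-Z plane, degrees
    f: float             # lens focal length
    kappa1: float = 0.0  # lens distortion coefficient kappa-1
    kappa2: float = 0.0  # lens distortion coefficient kappa-2

# The two rows read off FIG. 9, with x1 = 1.0, y1 = 2.0, f = 6.0 as stand-ins:
camera1 = CameraParams("camera 1", 1.0, 2.0, 0.0, 45.0, 130.0, f=6.0)
virtual = CameraParams("virtual camera", 0.0, 2.0, 0.0, 0.0, 120.0, f=6.0)
```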
- The calibration means 102 performs camera calibration, that is, it determines the camera parameters.
- As determination methods there are, for example, a method of manually inputting all the data with an input device such as a keyboard or mouse, and a method of obtaining some of the calibration data by calculation.
- The camera parameters are parameters representing the camera characteristics, such as the camera mounting position and mounting angle in a certain reference coordinate system, the camera lens distortion correction value, and the camera lens focal length. If a large number of pairs consisting of a feature point in the image captured by the camera and the position of that feature point in the reference coordinate system are known, the parameters can be obtained approximately by calculation. In other words, when calculating camera parameters by calculation, multiple pairs of corresponding points between the image captured by the camera and the three-dimensional coordinate system are required. The minimum number of such pairs depends on the calculation method used; for the method used in the example below, at least two pairs are needed, as described later.
- Lens distortion correction is performed on a camera input image using the lens distortion coefficients.
- This usually requires a large number of calculations and is not suitable for real-time processing. Therefore, assuming that the lens distortion does not change unless there is a drastic temperature change, the correspondence between the coordinate values of each pixel in the image before distortion correction and in the image after correction is calculated in advance and held.
- Holding this calculation result in memory in a data format such as a table or matrix and performing distortion correction with that data is effective for high-speed correction processing, as sketched below.
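A sketch of such a precomputed table: a per-pixel source-coordinate map is built once from the distortion coefficients and then applied to every frame by indexing alone. The radial model r_d = r(1 + κ1·r² + κ2·r⁴) is an assumption; the patent does not spell out the distortion formula.

```python
import numpy as np

def build_undistortion_table(w, h, f, kappa1, kappa2):
    """Precompute, for every corrected pixel, the source pixel in the
    distorted image, under an assumed radial distortion model."""
    ys, xs = np.mgrid[0:h, 0:w]
    x = (xs - w / 2.0) / f                       # normalized image coordinates
    y = (ys - h / 2.0) / f
    r2 = x * x + y * y
    gain = 1.0 + kappa1 * r2 + kappa2 * r2 * r2
    src_x = np.clip(x * gain * f + w / 2.0, 0, w - 1).astype(np.intp)
    src_y = np.clip(y * gain * f + h / 2.0, 0, h - 1).astype(np.intp)
    return src_y, src_x

def undistort(frame, table):
    """Per-frame correction is a pure table lookup, suitable for real time."""
    src_y, src_x = table
    return frame[src_y, src_x]

# Build once, reuse for every frame (all parameter values invented):
table = build_undistortion_table(640, 480, f=500.0, kappa1=-0.2, kappa2=0.05)
frame = np.zeros((480, 640, 3), dtype=np.uint8)
corrected = undistort(frame, table)
```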
- FIG. 10 is a diagram showing an example of the temperature correction table 111 according to the present invention in a table format. As shown in FIG. 10, the temperature correction table 111 stores, as data, the amount of change in the camera parameter that changes in accordance with the temperature.
- the temperature value of the temperature sensor 110 is sequentially observed for each camera, and the contents of the camera parameter table 103 are updated as necessary.
- FIG. 22 is a flowchart illustrating a procedure of updating the camera parameter table 103 according to the temperature in the calibration means 102. The details will be described with reference to FIG.
- Assume that each camera is paired with one temperature sensor 110, and that the temperature detected by the temperature sensor 110 is almost equal to the temperature of the camera lens.
- It is determined whether correction is necessary; correction is needed when the temperature is 0 degrees or lower, or 40 degrees or higher. (1303) If correction is necessary, the camera parameter correction values are obtained from the temperature correction table 111, and the camera parameters of the camera paired with that temperature sensor 110 are updated and written into the camera parameter table 103. (1304) If no correction is required, the focal length and the distortion coefficients κ1 and κ2 of the lens are returned to their initial values and written into the camera parameter table 103.
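A sketch of this update loop with the 0/40-degree thresholds from the text; the dictionary-based table layouts are hypothetical:

```python
def update_camera_parameters(camera_table, temperature_table, sensors, initial):
    """Rewrite the camera parameter table according to the lens temperatures.

    camera_table:      dict camera_id -> {'f': ..., 'kappa1': ..., 'kappa2': ...}
    temperature_table: dict camera_id -> parameter deltas, e.g. {'f': df1, ...}
    sensors:           dict camera_id -> current lens temperature (deg C)
    initial:           dict camera_id -> the initially calibrated values
    """
    for cam, temp in sensors.items():
        if temp <= 0.0 or temp >= 40.0:
            # (1303) correction needed: add the deltas from the temperature
            # correction table 111 to the initial parameter values.
            for key, delta in temperature_table[cam].items():
                camera_table[cam][key] = initial[cam][key] + delta
        else:
            # (1304) no correction: restore focal length and distortion
            # coefficients kappa-1, kappa-2 to their initial settings.
            camera_table[cam].update(initial[cam])
```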
- FIG. 11 is an example of the camera parameter table 103 rewritten using the temperature correction table 111 of the example in FIG. 10.
- It shows a case in which camera 1 and camera 2 reach a temperature of 40 degrees or more at some point, for example because of direct sunlight, and maintain a temperature between 0 and 40 degrees at other times.
- The camera parameters of camera 1 and camera 2 are changed by the temperature correction processing for 40 degrees or more; for example, the focal length of the lens increases by df1.
- The virtual camera described above can be given an ideal lens whose focal length and distortion do not change with temperature, and is therefore not subject to this correction processing.
- As the lens material, plastic can be used in addition to glass; plastic usually deforms considerably with temperature changes, but this can be dealt with by the correction described above.
- The spatial reconstruction means 104 creates spatial data in which each pixel constituting the input images from the cameras is associated with a point in the three-dimensional space, based on the camera parameters computed by the calibration means 102. That is, the spatial reconstruction means 104 calculates where in the three-dimensional space each object included in the images captured by the cameras exists, and stores the spatial data resulting from the calculation in the spatial data buffer 105.
- the processing may be speeded up by skipping every several pixels and mapping to spatial data.
- Each pixel constituting an image captured by a camera is generally represented as coordinates on a U-V plane containing the CCD image plane. Therefore, to associate each pixel of the input image with a point in the world coordinate system, it suffices to find the formula that maps a point on the U-V plane containing the captured image to a point in the world coordinate system.
- Figure 8 shows an example of the relationship between points in the U-V coordinate system set on the plane containing the image captured by the camera (hereafter referred to as the viewing plane) and points in the three-dimensional space coordinate system.
- the association is performed in the following procedure.
- The viewing plane is placed perpendicular to the viewing direction at the distance of the focal length $f$ from the camera position, and a point on it is written $P_v(u, v)$. In camera coordinates $(X_e, Y_e, Z_e)$ the perspective relations are

  $u = f\,\dfrac{X_e}{Z_e}$  (1)   $v = f\,\dfrac{Y_e}{Z_e}$  (2)

- Between the camera coordinates and a world point $P_w(X_w, Y_w, Z_w)$, with the camera position $(t_x, t_y, t_z)$ and the mounting angles $\alpha$ (about the Y axis) and $\beta$ (about the X axis), relation (3) holds:

  $\begin{pmatrix} X_e \\ Y_e \\ Z_e \end{pmatrix} = R_x(\beta)\,R_y(\alpha)\begin{pmatrix} X_w - t_x \\ Y_w - t_y \\ Z_w - t_z \end{pmatrix}$  (3)
- The unknown variables in the above three equations are the six quantities tx, ty, tz, α, β, and f. Therefore, if there are at least two pairs of points whose correspondence between a pixel Pv(u, v) on the viewing plane and coordinates Pw(Xw, Yw, Zw) in the world coordinate system is known, the unknown variables can be obtained.
- Using the camera parameters obtained in this way, each point Pw(Xw, Yw, Zw) is associated with the corresponding point Pv(u, v) on the viewing plane.
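In code, relations (1) to (3) give the projection of a world point into a camera with parameters (tx, ty, tz, α, β, f). The rotation convention below matches the reconstruction above and is an assumption; given two or more known point pairs, the six unknowns could then be recovered by nonlinear least squares over this function.

```python
import numpy as np

def project_to_viewing_plane(Pw, t, alpha, beta, f):
    """Map world point Pw = (Xw, Yw, Zw) to viewing-plane coordinates (u, v)."""
    ca, sa, cb, sb = np.cos(alpha), np.sin(alpha), np.cos(beta), np.sin(beta)
    ry = np.array([[ca, 0.0, sa], [0.0, 1.0, 0.0], [-sa, 0.0, ca]])  # about Y
    rx = np.array([[1.0, 0.0, 0.0], [0.0, cb, -sb], [0.0, sb, cb]])  # about X
    Xe, Ye, Ze = rx @ ry @ (np.asarray(Pw, float) - np.asarray(t, float))  # (3)
    return f * Xe / Ze, f * Ye / Ze                                  # (1), (2)

# Example with invented values: a road point 2 m ahead of a camera 1.2 m up.
u, v = project_to_viewing_plane((0.0, 0.0, 2.0), t=(0.0, 1.2, 0.0),
                                alpha=0.0, beta=np.radians(-40), f=6.0)
```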
- FIG. 12 shows a description example of a spatial data buffer 105 according to the present invention for storing the spatial data in a table format.
- the spatial data buffer 105 stores association data between points in the camera image and points in space.
- One spatial datum is described in each row except the first, and the columns constituting each spatial datum contain the following information.
- R component of the color of the point in the viewing plane coordinate system containing the image (for example, quantized to 0-255 levels)
- G component of the color of the point in the viewing plane coordinate system containing the image (for example, quantized to 0-255 levels)
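One row of this buffer can be held as a small record whose fields mirror the columns just listed; the field names and the numeric stand-ins are hypothetical:

```python
from dataclasses import dataclass

@dataclass
class SpatialDatum:
    """One row of the spatial data buffer 105 (illustrative layout)."""
    camera_id: int   # camera that observed the point
    xw: float        # world coordinates of the point
    yw: float
    zw: float
    u: float         # viewing-plane coordinates of the point in that camera
    v: float
    r: int           # color components, quantized to 0-255
    g: int
    b: int
    created: float   # time at which this datum was created

# Feature point A of FIGS. 13-15 (third row of FIG. 12), with stand-in numbers
# for X3, Z2, U1, V1, and t1:
point_a = SpatialDatum(camera_id=1, xw=3.0, yw=0.0, zw=2.0,
                       u=10.0, v=24.0, r=80, g=80, b=80, created=1.0)
```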
- FIGS. 13 to 15 are used as a supplementary explanation.
- FIGS. 13 to 15 are diagrams showing the correspondence between feature points on the road surface as a plane in the world coordinate system and feature points on an image taken by a camera installed on the vehicle.
- FIG. 13 is a conceptual diagram of the positional relationship between the feature points A, B, C, D, and E on the road surface and the vehicle, viewed from above.
- FIG. 14 is a conceptual diagram showing an image of the road surface including the feature points A, B, and C captured by the on-vehicle camera 1.
- FIG. 15 is a conceptual diagram showing an image of the road surface including the feature points C, D, and E captured by the on-vehicle camera 2. In the spatial data buffer 105 of the example in FIG. 12, the five feature points A, B, C, D, and E of FIGS. 13 to 15 are described as examples of the spatial data.
- Focus on feature point A in FIGS. 13 to 15. Assume that point A in the world coordinate system in FIG. 13 and point A in the viewing plane coordinate system in FIG. 14 have been associated by the association processing of the spatial reconstruction means 104 described above.
- The third row of the table in FIG. 12 is then an example of the spatial data corresponding to feature point A in FIGS. 13 to 15.
- It states that point A in the world coordinate system has the coordinates (X3, 0, Z2); that, captured from camera 1, the coordinates of point A in the captured image are (U1, V1) and its color is (80, 80, 80) in RGB order; and that the time when this data was created is t1.
- When one point in the world coordinate system is observed from a plurality of cameras, each observation is stored in the spatial data buffer 105 as an independent spatial datum.
- point C in FIGS. 13 to 15 corresponds to that example.
- Point C is observed from two cameras in FIG. 13, namely camera 1 and camera 2, as is clear from FIGS. 14 and 15.
- The spatial data created from the observation by camera 1, that is, the spatial data in the seventh row of FIG. 12, state that point C in the world coordinate system has the coordinates (0, 0, Z2), that the coordinates of point C in the captured image are (U3, V3) with color (140, 140, 140) in RGB order, and that the data were created at time t1.
- The spatial data created from the observation by camera 2, that is, the spatial data in the eighth row of FIG. 12, state that point C in the world coordinate system has the coordinates (0, 0, Z2), that the coordinates of point C in the captured image are (U4, V4) with color (150, 150, 150) in RGB order, and that the data were likewise created at time t1.
- In this way, information associating each pixel of the images captured by each camera with a point in the world coordinate system is stored in the spatial data buffer 105 in the form of spatial data.
- The viewpoint conversion means 106 refers to the spatial data created by the spatial reconstruction means 104 and creates an image as if photographed by a camera installed at a desired viewpoint.
- The outline of the method is to perform the reverse of the processing performed by the spatial reconstruction means 104. That is, it amounts to finding the transformation that projects a point Pw(Xw, Yw, Zw) of the world coordinate system, formed by the spatial reconstruction means 104, onto the image plane point Pv(u, v) of a camera installed at an arbitrary viewpoint.
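A sketch of this inverse processing: every stored world point is projected through relations (1) to (3) for the virtual camera, and its color is written at the resulting pixel. Collision handling (e.g. a z-buffer) is omitted for brevity, and the rotation convention is the same assumed one as above.

```python
import numpy as np

def render_virtual_camera(spatial_data, t, alpha, beta, f, w=320, h=240):
    """Synthesize the image of a camera placed at an arbitrary viewpoint.

    spatial_data: iterable of ((Xw, Yw, Zw), (r, g, b)) pairs from buffer 105.
    """
    ca, sa, cb, sb = np.cos(alpha), np.sin(alpha), np.cos(beta), np.sin(beta)
    ry = np.array([[ca, 0.0, sa], [0.0, 1.0, 0.0], [-sa, 0.0, ca]])
    rx = np.array([[1.0, 0.0, 0.0], [0.0, cb, -sb], [0.0, sb, cb]])
    world_to_cam = rx @ ry
    out = np.zeros((h, w, 3), dtype=np.uint8)
    for pw, color in spatial_data:
        xe, ye, ze = world_to_cam @ (np.asarray(pw, float) - np.asarray(t, float))
        if ze <= 1e-9:                          # point behind the virtual camera
            continue
        u = int(round(f * xe / ze + w / 2.0))   # relations (1) and (2), centred
        v = int(round(f * ye / ze + h / 2.0))
        if 0 <= u < w and 0 <= v < h:
            out[v, u] = color
    return out
```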
- the most significant feature of the present invention is that the viewpoint conversion means 106 can freely reproduce an image from a virtual camera not installed in a vehicle.
- FIGS. 16 to 19 show how an image from a virtual camera placed at an appropriate viewpoint is obtained, using images of feature points on the road surface in the world coordinate system captured by cameras installed on the vehicle.
- Fig. 16 is a conceptual diagram showing an example of the positional relationship between feature points A, B, and C on the road surface and the vehicle
- FIG. 17 shows the image captured by the on-vehicle camera 1 in FIG. 16.
- FIG. 18 is a conceptual diagram illustrating an image of the road surface including the feature points A and B.
- FIG. 20(a) is a conceptual diagram showing, as an example of the installation location of the virtual camera, a case where the camera is installed facing downward approximately above the center of the vehicle.
- the image captured by the virtual camera represents the surroundings of the vehicle.
- Since the images constituting the composite image are taken by the on-vehicle cameras, when the on-vehicle cameras are arranged around the vehicle as shown in FIG. 7, the composite image does not include the roof of the vehicle body.
- FIG. 20(b) is a conceptual diagram showing an example in which the virtual camera is placed diagonally above and in front of the car, looking down at it. In this way, the virtual camera is not limited to a position directly above; it is also possible to look at the car from a diagonal direction.
- FIG. 20(c) is an image composed using the virtual camera of FIG. 20(b); the vehicle and its surroundings appear as seen from diagonally above.
- the spatial data conversion means 114 according to the present invention is a means required when the image generating device according to claim 1 of the present invention is applied to a vehicle.
- in-vehicle cameras are usually installed on the upper part of the vehicle body to obtain a better view.
- The car body, for example, forms a convex curve toward the outside of the car.
- In an image taken with such a camera located high on the car body, the road surface immediately around the car body therefore falls into a blind spot.
- A simple way to solve this problem is to install cameras at the bottom of the vehicle, but adding extra cameras means extra cost.
- the spatial data conversion means 114 according to the present invention solves the above-mentioned problem without adding a camera to the lower part of the vehicle body or the like.
- The solution is based on the premise that the car moves.
- FIG. 23 is a flowchart showing the procedure of the process in the spatial data conversion means 114
- FIG. 24 is a conceptual diagram used to assist the explanation of the spatial data conversion means 114.
- FIG. 24 shows the relationship between the position and orientation of the vehicle at the start time (hereinafter t1) and the end time (hereinafter t2) of the above-mentioned fixed period, when the vehicle moves during that period.
- t1: the start time
- t2: the end time
- The travel distance is defined as the straight-line distance between the vehicle positions at time t1 and time t2, that is, the distance between O1 and O2 in FIG. 24.
- The displacement from O1 to O2 is represented by (t'x, 0, t'z).
- a method of detecting the moving distance for example, a method of measuring the number of rotations of a tire or the like is used.
- The moving direction is defined as the amount of change in the orientation of the vehicle at time t2 with respect to its orientation at time t1.
- the amount of change in direction is represented by an angle ⁇ between the Z1 axis and the Z2 axis.
- a method of detecting the moving direction for example, a method of measuring by the rotation angle of the steering wheel is used.
- (1403) Using the moving distance and moving direction of the vehicle from time t1 to time t2, formula (5) for converting the spatial data acquired at t1 into spatial data at t2 is created. In equation (5) it is assumed that the movement of the vehicle from time t1 to time t2 has no vertical component, that is, that the road surface is flat:

  $\begin{pmatrix} x_2 \\ y_2 \\ z_2 \end{pmatrix} = \begin{pmatrix} \cos\theta & 0 & -\sin\theta \\ 0 & 1 & 0 \\ \sin\theta & 0 & \cos\theta \end{pmatrix} \begin{pmatrix} x_1 - t'_x \\ y_1 \\ z_1 - t'_z \end{pmatrix}$  (5)
- Here, (x1, y1, z1) are the coordinates of a point in the X1-Y1-Z1 world coordinate system (origin O1) centered on the vehicle body at time t1, and (x2, y2, z2) are the coordinates of the same point at time t2 in the X2-Y2-Z2 world coordinate system (origin O2) centered on the vehicle body.
- Substituting x1, y1, and z1 into the right-hand side of equation (5) yields x2, y2, and z2.
- the spatial data synthesized at time t1 is converted to spatial data at time t2.
- The data in columns 5 to 7 of the table in FIG. 12 may be left blank.
- The data in the eighth to eleventh columns are used as they are. A sketch of this conversion applied to buffered points follows.
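A sketch of equation (5) applied to the whole buffer at once; the sign conventions of θ and of (t'x, t'z) are assumptions, and, per the note above, the per-camera columns of the converted rows would be invalidated:

```python
import numpy as np

def convert_spatial_data(points_t1, theta, tpx, tpz):
    """Re-express spatial data from the t1 vehicle frame in the t2 frame.

    points_t1:  (N, 3) array of (x1, y1, z1) coordinates, origin O1.
    theta:      change of vehicle heading between t1 and t2 (radians).
    (tpx, tpz): displacement (t'x, 0, t'z) of the origin from O1 to O2.
    """
    c, s = np.cos(theta), np.sin(theta)
    r_y = np.array([[c, 0.0, -s],
                    [0.0, 1.0, 0.0],
                    [s, 0.0, c]])                    # rotation about the Y axis
    shifted = np.asarray(points_t1, float) - np.array([tpx, 0.0, tpz])
    return shifted @ r_y.T                           # equation (5)

# A point 5 m ahead, after the car advances 1 m and turns 10 degrees
# (all numbers invented):
p2 = convert_spatial_data(np.array([[0.0, 0.0, 5.0]]), np.radians(10), 0.0, 1.0)
```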
- The problem here is that adding past spatial data to the spatial data at the current time, as described above, will eventually overflow the limited spatial data buffer 105.
- However, since each spatial datum in the spatial data buffer 105 carries the time at which it was created, data older than a certain time relative to the current time may simply be erased.
- the feature point generating means 109 according to the present invention generates a plurality of points capable of identifying three-dimensional coordinates in the field of view of the camera. Then, the feature point extracting means 108 according to the present invention extracts the generated feature points.
- FIG. 21 is a conceptual diagram showing an embodiment of the feature point generating means 109, the feature points and the feature point extracting means 108.
- FIG. 21(a) shows an embodiment in which a pattern light irradiation device as the feature point generation means 109 is mounted on the upper part of the side of the vehicle body, and a rectangular pattern is radiated onto the road surface in a lattice shape.
- FIG. 21(b) is an example in which the pattern light irradiation device is mounted on the upper part of the vehicle body and the pattern light is projected from above the vehicle.
- FIG. 21(c) is an example showing the rectangular pattern light projected onto the road surface in this way, as photographed from a camera.
- As the feature points, points representing features such as the corners and centers of the rectangles created by the pattern light irradiation may be used.
- PI1 to PI-8 are examples of feature points.
- A feature point can be set when its coordinates in the world coordinate system are known. These feature points also have known coordinate positions in the viewing plane coordinate system, giving a correspondence between the world coordinate system and the viewing plane coordinate system. Therefore, by using the above equations (1), (2) and (3), the camera parameters tx, ty, tz, α, β, and f can be calculated by the calibration means 102 according to the present invention.
- The camera correction instruction means according to the present invention is a means required when applying the image generating apparatus according to claim 1 to a vehicle; it detects a situation in which camera calibration is required and instructs the driver to perform camera calibration when calibration is needed. Furthermore, the correction history recording means 115 according to the present invention records the date and time when camera calibration was performed and the traveling distance, as data needed to detect a situation requiring calibration.
- FIG. 25 shows, in flowchart form, the procedure of checking the correction history record and issuing a correction instruction if necessary.
- If the elapsed time exceeds a predetermined time, the camera correction instruction means 116 instructs the driver to carry out the calibration, and the process ends. When the driver performs the calibration according to the instruction, the correction history record is updated. If the elapsed time is smaller than the predetermined time, the process proceeds to the next step.
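A sketch of this check; the thresholds and the layout of the history record are hypothetical, and the traveling-distance criterion recorded by the correction history recording means 115 is included alongside the elapsed-time one:

```python
import time

def calibration_instruction_needed(history, now=None,
                                   max_elapsed=90 * 24 * 3600.0,
                                   max_distance_km=5000.0):
    """Return True if the driver should be instructed to recalibrate.

    history: dict with 'last_time' (epoch seconds of the last calibration),
             'last_odometer_km', and 'odometer_km' (current reading);
             updated by the correction history recording means whenever
             the driver carries out a calibration.
    """
    now = time.time() if now is None else now
    if now - history["last_time"] > max_elapsed:
        return True      # too much time has passed since the last calibration
    if history["odometer_km"] - history["last_odometer_km"] > max_distance_km:
        return True      # too much distance has been travelled since then
    return False
```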
- FIG. 26 is a flowchart showing the overall processing flow when the image generating apparatus according to the present invention is applied to a vehicle.
- the configuration of FIG. 6 is assumed as an example of the configuration of the image generation device.
- Camera parameters are either input manually, or feature points are generated around the vehicle body by the feature point generation means 109 according to the present invention,
- and the camera parameters are calculated by the calibration means 102 using the results extracted from those feature points by the feature point extraction means 108.
- the temperature value of the temperature sensor 110 is sequentially observed for each camera, and the content of the camera parameter table 103 is updated as necessary.
- the spatial data converting means 114 converts the spatial data according to the moving distance and moving direction of the vehicle. If the spatial data buffer 105 is empty, this processing is omitted.
- The spatial reconstruction means 104 creates spatial data in which each pixel constituting the images photographed in step 5 is associated with a point in the world coordinate system. If spatial data converted by the spatial data conversion means 114 in step 3 already exist for the same coordinates in the world coordinate system, the converted spatial data are discarded. In other words, for a point in the world coordinate system only the most recently captured camera data are held, and past data, or data older than a certain time, are deleted.
- An image is created as if photographed by a camera installed at the desired viewpoint.
- the viewpoint position is desirably fixed at a place where the composite image is suitable for driving assistance, and for example, a camera position over the vehicle body and overlooking the surroundings of the vehicle is good as in the example of FIG. 8.
- Ideally, a space model would be generated according to each object, but in reality this is usually impossible.
- Therefore, a method is disclosed of associating each pixel of the input images with a point in the three-dimensional space in such a way that the image can be synthesized at high speed while the quality of the synthesized image is maintained to some extent.
- An image obtained from a camera whose mounting position and mounting angle are already known by the calibration means 102 is projected onto the road surface, as an example of a plane forming part of the three-dimensional space.
- That is, it is assumed that every object included in the image lies on the X-Z plane of the three-dimensional coordinate system (hereinafter sometimes called the world coordinate system) and that no object has a component in the Y-axis direction.
- Under this assumption, the image on the viewing plane is projected onto the road surface in the world coordinate system.
- a device in which a camera is attached to a vehicle and the surroundings of the vehicle are monitored has been described.
- However, this technology of synthesizing an image from an arbitrary viewpoint using images from a limited number of cameras is not limited to on-vehicle cameras.
- a large number of surveillance cameras can be installed in a store or the like, and those camera images can be used to synthesize an image viewed from directly above.
- It is, of course, possible to place some of the functions of the method and apparatus of the present invention in such a remote location.
- a buffer for storing the mapped spatial data is not particularly necessary when processing the spatial data as it is.
- The virtual viewpoint need not be specified manually by an administrator or driver; instead, one of the viewpoint positions from which an image useful for monitoring or driving is obtained can be selected and the image from that position displayed. As a result, it is not necessary to move the virtual viewpoint position, so the workload of the user can be further reduced.
- an image from an arbitrary viewpoint can be synthesized using images from a limited number of cameras.
- Lens distortion, which changes subtly with rising and falling temperatures, is corrected so that the lens parameters can always be kept optimal.
- A correction value that optimally compensates the lens distortion coefficient, which changes owing to thermal expansion, is obtained from the temperature correction table, and the lens distortion parameters are changed based on that correction value.
- the image generation device of the present invention provides a method of viewing an image of a blind spot from a camera.
- When a camera is mounted on the upper part of the vehicle body and the shape of the body below the camera's mounting position is convex toward the outside, it is physically impossible to obtain an image of the area directly below the camera.
- it is possible to convert a previously acquired image into an image viewed from the current position, depending on the moving direction and the moving distance of the vehicle.
- The image generation device of the present invention can detect a situation in which correction of the camera parameters indicating the camera characteristics, that is, camera calibration, must be performed, and present that fact to the driver. This has the effect of preventing camera parameter correction from being neglected for a long time.
- FIG. 27 and FIG. 28 show an example of the present invention (claims 35 and 37).
- As an example of a basic configuration, the image generating apparatus includes: a plurality of cameras 101A mounted so as to grasp the situation around the vehicle; a camera parameter table 102A storing camera parameters indicating the characteristics of the cameras; space model creation means 103A for creating a space model in a coordinate system based on the vehicle; mapping means 104A for mapping the images input from the cameras onto the space model; and display means 106A for displaying the synthesized image.
- FIG. 27(b) is a block diagram showing a configuration example of the image generation device of the present invention.
- Compared with the image generation device shown in FIG. 27(a), this configuration adds obstacle detection means 108A that measures at least the distance to the obstacles around the vehicle.
- The camera is a television camera that captures images of the space to be monitored, such as the situation around the vehicle, as described in detail above.
- the camera parameter table 102A is a table for storing camera parameters (similar to the camera parameter table 103 described above).
- the data stored in the camera parameter table 102A is the same as in FIG. 9 described above.
- an image from the virtual camera can be obtained by calculation.
- the calculation method will be described later in detail.
- the space model creating means 103A creates a space model in a coordinate system based on a vehicle, for example.
- The space model is the model onto which the camera images are mapped by the mapping means described later.
- FIGS. 28(a) to 28(c) are conceptual views showing space models according to the present invention in a bird's-eye view. FIG. 28(a) shows an example of a space model composed only of planes, FIG. 28(b) an example composed only of curved surfaces, and FIG. 28(c) an example composed of planes and curved surfaces.
- the spatial model in Fig. 28 (a) shows a spatial model consisting of five planes, as described below.
- Plane 1: the plane serving as the road surface (i.e., in contact with the vehicle's tires)
- Plane 2: a plane perpendicular to the road surface (plane 1), standing in front of the vehicle
- Plane 3: a plane perpendicular to the road surface (plane 1), standing on the left side in the vehicle's direction of travel
- Plane 4: a plane perpendicular to the road surface (plane 1), standing behind the vehicle
- Plane 5: a plane perpendicular to the road surface (plane 1), standing on the right side in the vehicle's direction of travel
- planes 2 to 5 are set up without gaps between them, and the image captured by each on-board camera is mapped onto one of planes 1 to 5.
- how far from the vehicle the upright planes should stand and how tall they should be can be determined according to the angle of view and mounting positions of the on-board cameras.
- in the spatial model of FIG. 28 (b), a bowl-shaped curved surface is used.
- the vehicle sits at the bottom of the bowl, and the image taken by the on-board cameras is mapped onto the inner surface of the bowl.
- as bowl-shaped models, a sphere, a paraboloid of revolution, or a catenoid of revolution can be considered.
- since such a space model can be represented by a small number of mathematical expressions, the mapping calculation can be performed at high speed; one example is sketched below.
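- As an illustration of this point, the following sketch intersects a camera ray with one such bowl shape, a paraboloid of revolution a·y = x² + z² (Y axis vertical, vehicle at the bottom of the bowl); the choice of surface and the parameter a are assumptions for the example, not values from the source.

```python
import math

def intersect_paraboloid_bowl(cam, ray, a=2.0, eps=1e-9):
    """Intersect the ray cam + t*ray (t > 0) with the bowl-shaped surface
    a*y = x^2 + z^2. Substituting the ray into the surface equation gives
    a quadratic in t; the nearest intersection in front of the camera is
    returned as a 3-D point, or None if the ray misses the bowl."""
    cx, cy, cz = cam
    dx, dy, dz = ray
    A = dx * dx + dz * dz
    B = 2.0 * (cx * dx + cz * dz) - a * dy
    C = cx * cx + cz * cz - a * cy
    if abs(A) < eps:                     # ray parallel to the Y axis
        if abs(B) < eps:
            return None
        roots = [-C / B]
    else:
        disc = B * B - 4.0 * A * C
        if disc < 0.0:
            return None
        s = math.sqrt(disc)
        roots = [(-B - s) / (2.0 * A), (-B + s) / (2.0 * A)]
    ts = [t for t in roots if t > eps]
    if not ts:
        return None
    t = min(ts)                          # nearest hit in front of the camera
    return (cx + t * dx, cy + t * dy, cz + t * dz)
```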
- the model in FIG. 28 (c) is a space model composed of the combination of a plane and a curved surface described below.
- Plane: the plane serving as the road surface (i.e., in contact with the vehicle's tires)
- Curved surface: a cylindrical or elliptic-cylindrical wall placed on the plane so as to surround the vehicle
- the shape of the curved surface and its distance from the vehicle can be determined according to the angle of view and mounting positions of the vehicle-mounted cameras.
- a space model with a wall surrounding the vehicle has the following effects.
- mapping the camera images only onto the road surface causes the problem that objects with a height component above the road surface are greatly distorted.
- with this model, the vehicle is surrounded by planes or curved surfaces that are perpendicular or nearly perpendicular to the road surface. If these surfaces are set so as not to be too far from the vehicle, objects with height components are mapped onto them, so the distortion can be reduced.
- since the distortion during mapping is small, the displacement at the junction between two camera images can also be expected to be reduced.
- the mapping means 104A maps each pixel constituting the input image from the vehicle-mounted cameras, based on the camera parameters, onto the space model created by the space model creation means 103A. That is, each image captured by the on-board cameras is perspectively projected onto the space model.
- FIG. 29 is a diagram used to assist the explanation of mapping an in-vehicle camera image onto the surfaces constituting the spatial model, by converting the coordinates of points in the U-V coordinate system set on the plane containing the image (hereinafter, the viewing plane) into the coordinates of points in the world coordinate system.
- before explaining the mapping, the method of converting viewing-plane coordinates into world coordinates is described first.
- the conversion is performed according to the following procedure.
- the coordinates in the viewing plane coordinate system can be determined for each pixel of the image projected on the viewing plane.
- the rotation matrix can easily be obtained by forming the rotation matrices around the X, Y, and Z axes from the orientation parameters in the camera parameter table 102A and composing them.
- now, if a point is represented by Pw(Xw, Yw, Zw) in the world coordinate system and by Pe(Xe, Ye, Ze) in the viewing-plane coordinate system, the relationship of equation (8) holds among Pe, Pw, the translation (tx, ty, tz), and the rotation matrix.
- using this relationship, a pixel Pv(u, v) on the viewing plane can be converted into coordinates Pw(Xw, Yw, Zw) in the world coordinate system, as sketched below.
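- A minimal numpy sketch of this conversion, assuming the relation Pe = R(Pw − t) for equation (8); the axis order of the rotation composition is an assumption, since the text only says the per-axis matrices are combined.

```python
import numpy as np

def rotation_xyz(alpha, beta, gamma):
    """Compose rotation matrices about the X, Y and Z axes from the camera
    orientation parameters. The composition order Rz @ Ry @ Rx is an
    illustrative assumption."""
    ca, sa = np.cos(alpha), np.sin(alpha)
    cb, sb = np.cos(beta), np.sin(beta)
    cg, sg = np.cos(gamma), np.sin(gamma)
    Rx = np.array([[1, 0, 0], [0, ca, -sa], [0, sa, ca]])
    Ry = np.array([[cb, 0, sb], [0, 1, 0], [-sb, 0, cb]])
    Rz = np.array([[cg, -sg, 0], [sg, cg, 0], [0, 0, 1]])
    return Rz @ Ry @ Rx

def viewing_plane_to_world(pe, R, t):
    """Pe (viewing-plane/camera frame) -> Pw (world frame), assuming the
    relation Pe = R @ (Pw - t), so that Pw = R.T @ Pe + t."""
    return R.T @ np.asarray(pe) + np.asarray(t)

def world_to_viewing_plane(pw, R, t):
    """Inverse direction: Pw -> Pe."""
    return R @ (np.asarray(pw) - np.asarray(t))
```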
- FIG. 30 shows the mapping procedure in the form of a flow chart.
- the processing contents of the mapping will be described with reference to FIG.
- the pixel represented as the coordinates in the UV coordinate system set on the viewing plane in which the camera image is reflected is converted into coordinates in the world coordinate system.
- the equations (1), (2), and (8) shown immediately before may be used.
- the viewpoint conversion means 105A synthesizes, from the result of the mapping means 104A mapping the on-vehicle camera images onto the space model, an image as captured from a camera installed at an arbitrary viewpoint.
- in outline, the method performs the reverse of the processing in the mapping means 104A.
- the transformation can be expressed by equations (1), (2), and (9) (corresponding to the inverse transformation of equation (8)) described in detail above.
- a point Ps(Xs, Ys, Zs) on the space model is input, and the pixel Pv(u, v) is calculated by these three equations.
- any desired values can be specified for this camera's parameters; in other words, the camera can be placed at a desired viewpoint with a desired orientation.
- when the viewpoint conversion means 105A synthesizes the image seen from a camera placed at the arbitrary viewpoint, there may be pixels of the synthesized image to which no color is mapped;
- such pixels may be given a color identifiable as a portion where no object exists, for example, black.
- FIG. 28 (d) is a conceptual bird's-eye view of an example of a space model with upright planes.
- the space model in FIG. 28 (d) consists of the road-surface plane and planes stood upright on the road surface,
- one at the left rear and one at the right rear of the vehicle. The upright planes could of course be set up at predetermined places relative to the vehicle, but, as described above, they may instead be erected only when an obstacle having a height component is found around the vehicle. In that case it is necessary to decide at which position and in which direction each plane is to be set up. As an example, the following shows how to set up a plane in accordance with the detection result of an obstacle sensor.
- the obstacle detection means 108A measures, as the situation around the vehicle, at least the distance to obstacles existing around the vehicle, using the distance sensor 107A.
- various types of distance sensor 107A are available: sensors using laser light or ultrasonic waves, stereo optical systems, and methods that estimate the distance between the camera and an object from the focal length at which the camera focuses on that object.
- when laser light or ultrasonic waves are used, it is desirable to mount many sensors around the vehicle.
- when a stereo optical system or camera focus is used, the sensor may be provided alongside the vehicle-mounted camera; if part of the vehicle-mounted camera itself is used as the sensor, the cost can be reduced.
- FIG. 31 is a conceptual diagram showing a method of erecting a surface in a three-dimensional space based on a distance between a vehicle and an obstacle existing around the vehicle using an obstacle sensor.
- each of the on-vehicle cameras is provided with an obstacle detection sensor oriented in the same direction as the camera's line of sight, which measures the distance to obstacles existing in that direction.
- the parameters used in Fig. 31 are as follows.
- a plane having the normal vector (dx, dy, dz) and passing through the point (px1, py1, pz1) is erected as the vertical plane (Equation (15)):
- dx(x − px1) + dy(y − py1) + dz(z − pz1) = 0   …Equation (15)
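- A small sketch of erecting such a plane from one range reading follows; the sensor interface (a horizontal bearing plus a measured range) and the choice of foot point on the road are assumptions for illustration, since the text only specifies the plane equation.

```python
import math

def vertical_plane_from_sensor(sensor_pos, bearing, distance):
    """Erect a vertical plane (Equation (15)) at the measured obstacle
    position. sensor_pos = (x, y, z) of the obstacle sensor in world
    coordinates, bearing = horizontal direction of the sensor's line of
    sight in radians, distance = measured range. Returns (dx, dy, dz, d0)
    of dx*x + dy*y + dz*z + d0 = 0. The plane is vertical, so its normal
    has no Y component (Y being the vertical axis), and the normal points
    back along the sensor's line of sight."""
    sx, sy, sz = sensor_pos
    # Point on the obstacle along the sensor's line of sight.
    px = sx + distance * math.cos(bearing)
    pz = sz + distance * math.sin(bearing)
    py = 0.0                        # foot of the plane on the road surface
    dx, dy, dz = math.cos(bearing), 0.0, math.sin(bearing)  # normal vector
    d0 = -(dx * px + dy * py + dz * pz)
    return dx, dy, dz, d0
```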
- FIG. 31 shows a state in which images input from cameras 1 and 2 are mapped to planes 1 and 2 respectively, which constitute a spatial model.
- in the mapping, the width of the erected plane is one of the important factors that determine the quality of the synthesized image. Since obstacles are typically other vehicles, one policy is, for example, that when the distance between the vehicle and another vehicle equals the distance threshold for erecting a plane,
- the width is set so that at least two-thirds of the other vehicle can be mapped onto the plane.
- the distance between the vehicle and the obstacle that determines whether to erect a plane should be set empirically, to a value between 50 cm and 1 m.
- the methods described here may be used alone or in combination.
- FIG. 32 is a flowchart showing the overall processing flow of the image generating apparatus according to the present invention.
- the configuration of FIG. 27 (b) is assumed as a configuration example of the image generation device.
- the obstacle detection means 108A measures the distance to an obstacle existing around the vehicle with the distance sensor 107A.
- a space model is created by the space model creation means 103A.
- mapping means 104A maps the image from the onboard camera to the spatial model.
- an image from an arbitrary viewpoint is synthesized using images from a limited number of cameras.
- a space model other than the conventionally used road-surface-only model is introduced, and by using it, objects with height are mapped into the space with less distortion. Therefore, when an object having height appears in two camera images, the displacement between the overlapping copies of the object when each image is mapped onto the spatial model is greatly improved compared with the plane model, and the quality of the converted and synthesized image improves; the driver can be expected to recognize the surrounding situation more easily from the synthesized image and to perform proper driving operations.
- FIG. 33 shows an example of the basic configuration of a monitoring device according to the present invention (an example of claim 39).
- the vehicle surroundings monitoring device includes, as a basic configuration, a plurality of cameras 101B attached so as to grasp the situation around the vehicle, a camera parameter table 102B storing camera parameters indicating the characteristics of the cameras,
- a road surface feature detection means 103B that detects features on the road surface such as white lines, arrows drawn on the road surface, characters, and pedestrian crossings, a space model creation means 104B that creates a space model in, for example, a vehicle-based coordinate system, a mapping means 105B that maps the camera images onto the space model, a viewpoint conversion means 106B, and a display means 107B.
- FIG. 33 (b) adds, to the monitoring device shown in FIG. 33 (a), a moving direction detecting means 109B for detecting the moving direction of the vehicle and a moving distance detecting means 108B for detecting the moving distance of the vehicle per unit time.
- this configuration is characterized in that the current position of a feature on the road surface is calculated using the processing results of the moving direction detecting means 109B and the moving distance detecting means 108B, and the spatial model is successively corrected based on that calculated current position.
- FIG. 33 (c) adds, to the monitoring device shown in FIG. 33 (b), a feature correcting means 110B that corrects the processing result of the road surface feature detecting means 103B while displaying it on the display means 107B.
- with this configuration, if the detected position of a feature on the road surface shifts during execution of the processing, the shift can be corrected.
- the camera is a TV camera that captures images of the space to be monitored, such as the situation around the vehicle. It usually has a large angle of view so as to obtain a wide field of view.
- An example of mounting the camera on the vehicle is as described in Figure 7.
- the camera parameter table 102B is a table for storing camera parameters. The contents are as described above.
- the data stored in the camera parameter table 102B is shown in table format in FIG. 9.
- the parameters of the virtual camera are described in the eighth row of the camera parameter table 102 in FIG. 9; they state that the virtual camera is located at the coordinates (0, y1, 0), that its orientation makes 0 degrees with the Y-Z plane and −20 degrees with the X-Z plane, that the focal length is f, and that the lens distortion coefficients κ1 and κ2 are both 0.
- a virtual camera is assumed to be installed at a desired viewpoint set by the viewpoint conversion means 106B, and an image from that virtual camera is obtained by calculation.
- the calculation method will be described later in detail.
- the space model creation means 104B sets a coordinate system based on the vehicle and, in this coordinate system, creates a space model in accordance with the processing results of the road surface feature detection means 103B. FIGS. 34 (a) and
- (b) are conceptual diagrams showing a space model according to the present invention. In this example, the end points of the white lines marking a parking space, or the corners formed by intersections of the white lines, are detected as the features on the road surface, and a spatial model is formed from planes based on those points, as shown in the figure.
- FIG. 34 (a) is a conceptual diagram showing the space model in a bird's eye view
- FIG. 34 (b) is a diagram perspectively projected downward from above the vehicle in FIG. 34 (a).
- feature points 1 to 4 are shown as examples of features on the road surface.
- hereinafter, the features or feature points on the road surface refer to these four points,
- each of which is assigned a number such as "feature point 1".
- the planes are determined as follows (here, left and right are defined facing the rearward direction of the vehicle).
- Plane 1: the plane serving as the road surface (i.e., in contact with the vehicle's tires)
- Plane 2: a plane whose left end touches plane 3 and which is perpendicular to plane 1
- Plane 3: a plane along the line connecting feature points 1 and 2, perpendicular to plane 1
- Plane 4: a plane along the line connecting feature points 2 and 3, perpendicular to plane 1
- Plane 5: a plane along the line connecting feature points 3 and 4, perpendicular to plane 1
- Plane 6: a plane whose right end touches plane 5 and which is perpendicular to plane 1
- the road surface feature detecting means 103B is for extracting such a road surface feature.
- FIGS. 35 (a) to (d) are diagrams explaining an example of the feature point extraction process by the road surface feature detection means 103B, and FIG. 36 is a flowchart showing the flow of that process; the processing procedure in the road surface feature detecting means 103B is described below with reference to these figures.
- Processing 701 An image including a white line indicating the parking space is captured from one of the on-board cameras.
- FIG. 35 (a) shows the photographed image.
- Processing 702 The image captured by the on-board camera is binarized with an appropriate threshold and scanned in the horizontal and vertical directions, and a histogram is formed for each direction using the number of white-line pixels on each scan line as the frequency. The positions where feature points exist are estimated from the histogram results.
- Figure 35 (b) shows an example of the process.
- Processing 703 An edge extraction process is performed on the image captured by the on-board camera, a straight-line extraction process is then performed on the edge result, and the intersections or end points of the obtained straight lines are estimated as feature points.
- FIG. 35 (c) shows an example of the processing.
- a Sobel operator may be used for edge extraction
- a Hough transform may be used for straight line extraction.
- Processing 704 A feature point is determined using the estimates obtained in processing 702 and processing 703; for example, the midpoint between the feature points obtained by the two methods may be used. Note that processing 702 and processing 703 may be executed in either order with the same result.
- Processing 705 The coordinates of the feature points obtained from the camera image are determined in the three-dimensional coordinate system. From these coordinates, the planes constituting the space model can be obtained; a sketch of the edge- and line-based estimation follows.
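- The following OpenCV sketch illustrates the edge extraction (Sobel operator), straight-line extraction (Hough transform), and intersection estimation named above. All thresholds are illustrative assumptions, and a real implementation would also merge nearby candidates, as in processing 704.

```python
import itertools
import cv2
import numpy as np

def extract_feature_points(image_bgr):
    """Estimate white-line feature points: Sobel edges, probabilistic Hough
    line segments, then pairwise line intersections as candidates."""
    gray = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2GRAY)
    gx = cv2.Sobel(gray, cv2.CV_32F, 1, 0)
    gy = cv2.Sobel(gray, cv2.CV_32F, 0, 1)
    edges = ((cv2.magnitude(gx, gy) > 100) * 255).astype(np.uint8)
    lines = cv2.HoughLinesP(edges, 1, np.pi / 180, threshold=60,
                            minLineLength=40, maxLineGap=8)
    points = []
    if lines is None:
        return points
    segs = [l[0] for l in lines]
    for s1, s2 in itertools.combinations(segs, 2):
        x1, y1, x2, y2 = map(float, s1)
        x3, y3, x4, y4 = map(float, s2)
        # Intersection of the two infinite lines through the segments.
        d = (x1 - x2) * (y3 - y4) - (y1 - y2) * (x3 - x4)
        if abs(d) < 1e-6:
            continue                     # parallel lines, no intersection
        a, b = x1 * y2 - y1 * x2, x3 * y4 - y3 * x4
        px = (a * (x3 - x4) - (x1 - x2) * b) / d
        py = (a * (y3 - y4) - (y1 - y2) * b) / d
        points.append((px, py))          # feature point candidate
    return points
```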
- the mapping means 105B maps each pixel constituting the input image from the on-board cameras onto the space model created by the space model creation means 104B, based on the camera parameters. That is, each image captured by the on-vehicle cameras is perspectively projected onto the space model.
- FIG. 29 is the diagram assisting the explanation of mapping the in-vehicle camera image onto the surfaces constituting the spatial model by converting the coordinates of points in the U-V coordinate system set on the plane containing the image into the coordinates of points in the world coordinate system; this has already been described in detail.
- each pixel constituting an image captured by the on-vehicle cameras is generally represented as coordinates on a plane including the image plane; the details have also already been described.
- the viewpoint conversion means 106B synthesizes, from the result of the mapping means 105B mapping the on-vehicle camera images onto the space model, an image as captured from a camera installed at an arbitrary viewpoint.
- the outline of the method has been described in relation to the viewpoint conversion means 105A above.
- there may be pixels of the synthesized image for which no color is mapped to the corresponding point of the spatial model; in such a case, a color identifiable as a portion where no object exists, such as black, may be used.
- the position of a feature point detected by the road surface feature detecting means 103B changes as the vehicle moves. Since the spatial model is created from the three-dimensional coordinates of the feature points, it must be recreated every time a feature point's position changes with the vehicle's movement. In other words, while the vehicle is moving, the positions of the feature points must constantly be found; however, finding feature points from images generally requires a large amount of computation, which is costly. One way to avoid this is to continuously measure the speed and direction of the vehicle and calculate the feature point coordinates from those measurements.
- to execute this processing, the vehicle surroundings monitoring device is provided with a moving direction detecting means 109B for detecting the moving direction of the vehicle and a moving distance detecting means 108B for detecting the distance the vehicle moves per unit time, and it calculates the current position of a feature on the road surface using the processing results of the moving direction detecting means 109B and the moving distance detecting means 108B.
- FIG. 37 is a flowchart showing the procedure for calculating the position of a feature point as the vehicle moves in the vehicle surroundings monitoring device of the present invention (an example of claim 41), and FIG. 38 is a conceptual diagram used to assist the description of the processing.
- FIG. 38 shows the position and direction of the vehicle at the start time (hereinafter t1) and the end time (hereinafter t2) of a certain period during which the vehicle moves.
- Processing 901 The moving distance of the vehicle during the period is detected.
- the moving distance is defined as the straight-line distance between the vehicle positions at time t1 and time t2, that is, the distance between O1 and O2 in FIG. 38.
- the movement is represented by the vector (t'x, 0, t'z) from O1 to O2.
- as a method of detecting the moving distance, for example, measuring the rotation of the tires may be used.
- Processing 902 The moving direction of the vehicle during the predetermined time is detected.
- the moving direction is defined as the amount of change in the direction of the vehicle at time t2 with respect to the direction of the vehicle at time t1.
- the amount of change in the direction is represented by an angle ⁇ between the Z1 axis and the Z2 axis.
- as a method of detecting the moving direction, for example, measuring the rotation angle of the steering wheel may be used.
- Processing 903 Using the moving distance and moving direction of the vehicle from time t1 to time t2, equation (5), which converts the coordinates of a feature point acquired at t1 into its coordinates at t2, is created.
- x1, y1, z1 are the coordinates of a point in the X1–Y1–Z1 world coordinate system (origin O1) fixed to the vehicle body at time t1.
- x2, y2, z2 are the coordinates of the same point in the X2–Y2–Z2 world coordinate system (origin O2) fixed to the vehicle body at time t2.
- substituting x1, y1, z1 into the right-hand side of equation (5) yields x2, y2, z2; a sketch of this update follows.
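- Equation (5) itself is not reproduced in this text, so the sketch below assumes the standard rigid-motion form: a rotation θ about the vertical Y axis combined with the translation (t'x, 0, t'z); the sign conventions are illustrative.

```python
import math

def update_feature_point(p1, move_vec, theta):
    """Convert a feature point from the vehicle coordinate system at time
    t1 (origin O1) to the one at time t2 (origin O2). Assumes equation (5)
    is the standard rigid motion: translation (t'x, 0, t'z) of the vehicle
    in the t1 frame plus a heading change theta about the vertical Y axis."""
    x1, y1, z1 = p1
    tx, _, tz = move_vec
    c, s = math.cos(theta), math.sin(theta)
    rx, rz = x1 - tx, z1 - tz        # point relative to the new origin O2
    x2 = c * rx - s * rz             # rotate into the t2 axes
    z2 = s * rx + c * rz
    return (x2, y1, z2)              # height above the road is unchanged
```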
- by introducing the moving direction detecting means 109B, the moving distance detecting means 108B, and the feature position calculation, the exact coordinates of the feature points can always be obtained by the calculation just described. However, since it is practically impossible to measure the moving direction and moving distance of the vehicle without error, the positions of the feature points must be corrected as necessary.
- the feature correcting means 110B displays the processing result of the road surface feature detecting means 103B on the display means 107B; an occupant of the vehicle gives correction instructions, and the feature positions on the road surface are corrected accordingly.
- FIG. 39 is a conceptual diagram showing how the feature correction process appears on the display means 107B,
- and FIG. 40 is a flowchart showing the flow of the feature correction process.
- the processing procedure in the feature correcting means 110B is described below with reference to these drawings.
- Processing 1201 The display means 107B displays the current positions of the feature points.
- each image taken by the vehicle-mounted cameras is superimposed on the spatial model, and an image obtained by perspectively projecting the result downward from above the vehicle is displayed.
- Processing 1202 A feature point whose position is shifted is specified.
- in FIG. 39, feature points 2 and 4 are shifted and are specified. For example, if a touch panel is attached to the display device, the location can easily be specified by touching the display screen with a finger.
- Processing 1203 The correct location of the shifted feature point is specified.
- the correct feature point location must be known to the operator in advance.
- Processing 1204 If feature points to be corrected remain, processes 1201 to 1203 are repeated; otherwise, the feature point correction process ends.
- FIG. 41 is a flowchart showing the overall processing flow of the vehicle periphery monitoring device according to the present invention.
- the configuration example of the vehicle periphery monitoring device is based on the configuration in Fig. 33 (c).
- Processing 1301 For this device to operate normally, the correct camera parameters for each of the on-board cameras are entered in the camera parameter table 102B.
- Processing 1302 Feature points on the road surface are extracted from the images captured by the on-board cameras.
- Processing 1303 The three-dimensional coordinates of each extracted feature point are calculated from its coordinates in the image and the camera parameters. However, if the vehicle periphery monitoring process is already running, the current feature point coordinates are calculated using the detected moving direction and moving distance of the vehicle.
- Processing 1304 Images around the vehicle are captured by the on-board cameras.
- Processing 1305 A space model is created by the space model creation means 104B, based on the feature point coordinates obtained in processing 1303.
- Processing 1306 The mapping means 105B maps the images from the on-board cameras onto the spatial model.
- Processing 1307 Referring to the images mapped onto the space model, the image viewed from the viewpoint set by the driver is synthesized.
- Processing 1308 The image synthesized in processing 1307 is displayed.
- Processing 1309 It is checked whether any feature point position is displaced in the displayed image.
- Processing 1310 If there is a deviation, an interrupt for feature point correction is inserted and the feature point correction process is performed. If no correction is needed, the flow returns to processing 1303 and the processing repeats. For example, while the driver is putting the vehicle into the parking space, processings 1302 to 1308 are repeated, and the processing may be ended when parking is complete.
- the situation around the vehicle in the present invention includes not only the characteristics of the road surface described above but also, for example, the state of a parked vehicle. In that case, a space model corresponding to the state of the parked vehicle is generated.
- an image from an arbitrary viewpoint is synthesized using images from a limited number of cameras.
- instead of a simple plane model, a general-purpose space model created using features on the road surface obtained from the camera images is introduced.
- with a plane model, the height component is converted into a depth component along the camera's line of sight, so objects with a height component above the road surface are distorted when projected onto the road surface.
- with the introduced space model, objects having height are also mapped into the space with less distortion.
- FIG. 42 is a block diagram showing a configuration example of an image generation device according to the present invention (an example of claim 45).
- the image generation apparatus of the present embodiment includes, as a basic configuration, a plurality of cameras 101C attached so as to grasp the situation around the vehicle, a camera parameter table 102C that stores camera parameters indicating the characteristics of the cameras 101C,
- a mapping means 104C for mapping the images input from the cameras 101C onto a spatial model 103C that models the situation around the vehicle, a viewpoint conversion means 105C by which a single image viewed from a desired virtual viewpoint is
- synthesized from the data created by the mapping means 104C,
- a camera parameter correction means 106C which independently corrects the parameters of each camera 101C, and a display means 107C which displays the image converted by the viewpoint conversion means 105C.
- the camera 101C is a television camera that captures images of the space to be monitored, such as the situation around the vehicle.
- FIG. 43 (a) is a conceptual diagram showing an example in which three cameras are mounted on a vehicle; as shown, the mounting positions are the border between the body roof and the sides, or between the roof and the rear. Using cameras with a large angle of view widens the field of view and keeps the number of cameras small.
- if rear-facing cameras are installed at the left and right door mirror positions and their images are displayed on a monitor installed in the vehicle, they function as door mirrors; the door mirrors can then be removed from the vehicle, making it possible to design a car that excels in both styling and aerodynamics.
- the camera parameter table 102C is a table for storing camera parameters. The details are as described above.
- FIG. 44 is a conceptual diagram showing an example of a three-dimensional space coordinate system centered on the vehicle.
- the axis that extends vertically from the road surface in the center of the rear surface of the vehicle is the Y axis.
- the rotation angle around the optical axis of the camera, after the rotations by the orientation angles above, is denoted by γ.
- hereinafter, "three-dimensional space coordinate system", "world coordinate system", or simply "three-dimensional space" refers to the three-dimensional space coordinate system of this definition.
- FIG. 45 shows the data stored in the camera parameter table 102C in table format. The contents are, in order from the left column of the table, as described below; in this table, the items from the second to the ninth column give an example of the camera parameters.
- Second column X coordinate of camera position in 3D space coordinate system
- the parameters of camera 1 in FIG. 44 are described in the second row of the camera parameter table 102C in FIG. 45. They state that the camera is located at the coordinates given in the table, that its orientation makes 0 degrees with the Y-Z plane and −30 degrees with the X-Z plane, that there is no rotation around the optical axis, that the focal length is f1, and that the lens distortion coefficients κ1 and κ2 are both 0.
- the viewpoint conversion means 105C (described in detail later) converts the images taken by the on-vehicle cameras into an image viewed from a desired virtual viewpoint.
- an image viewed from a virtual viewpoint is the image that would be visible if a camera were placed at a desired location facing a desired direction; a virtual viewpoint can therefore be represented using the same camera parameters as described above.
- since an ideal camera can be assumed for the virtual viewpoint, the lens distortion coefficients κ1 and κ2 can both be set to 0.
- FIG. 46 shows the data stored in the viewpoint parameter table 102C in a table format.
- the items from the second column to the ninth column show examples of viewpoint parameters in order from the left column of the table.
- the contents are as follows: the virtual viewpoint is located at the coordinates (0, 0, z2), its orientation makes 0 degrees with the Y-Z plane and −90 degrees with the X-Z plane, there is no rotation around the optical axis, the focal length is f2, and the lens distortion coefficients κ1 and κ2 are both 0.
- the mapping means 104C maps each pixel constituting the input image from the on-board cameras onto the spatial model 103C based on the camera parameters. That is, each image captured by the on-vehicle cameras is perspectively projected onto the spatial model 103C.
- the spatial model 103 C refers to a three-dimensional model that maps an image from a camera to a three-dimensional spatial coordinate system in the mapping means 104 C.
- a plane, a curved surface, or a model composed of planes and curved surfaces is used.
- here, as the simplest example of a space model, the method of mapping an image from an in-vehicle camera onto a plane model representing the road surface is described.
- the simple model has problems such as large distortion of an object having a height component.
- a space model combining a plane and a curved surface may be used.
- an accurate three-dimensional model of an obstacle around the vehicle can be measured in real time, a more accurate synthesized image can be obtained by using the three-dimensional model.
- FIG. 47 is a diagram used to assist the explanation of mapping the in-vehicle camera image onto the surfaces constituting the spatial model 103C, by converting the coordinates of points in the U-V coordinate system set on the plane containing the image (hereinafter, the viewing plane) into the coordinates of points in the world coordinate system.
- the conversion is performed according to the following procedure.
- the coordinates in the viewing plane coordinate system can be determined for each pixel of the image projected on the viewing plane.
- Step 3 A calculation formula for associating the viewing plane coordinate system with the world coordinate system is obtained.
- the viewing plane coordinate system spatially has the following relationship with the world coordinate system.
- the rotation matrix is easily obtained by taking, from the camera parameter table 102C, the parameters (α, β, γ) representing the camera orientation and the rotation around its optical axis, forming the rotation matrices around the X, Y, and Z axes, and composing them. Now, if a point is represented by Pw(Xw, Yw, Zw) in the world coordinate system and by Pe(Xe, Ye, Ze) in the viewing-plane coordinate system, the relationship of equation (8) is established among Pe(Xe, Ye, Ze), Pw(Xw, Yw, Zw), (tx, ty, tz), and the rotation matrix.
- each pixel constituting an image captured by the on-vehicle cameras is generally represented as coordinates on a plane including the image plane; the details have already been described.
- FIG. 47 shows an example in which an XZ plane is used as a surface forming the space model 103C.
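- For this road-surface case, the mapping reduces to intersecting each pixel's viewing ray with the X-Z plane (y = 0). A minimal sketch follows, assuming the viewing plane sits at distance f along the optical axis and that R maps camera-frame to world-frame coordinates; both conventions are illustrative.

```python
import numpy as np

def pixel_to_road_plane(u, v, f, R, cam_pos):
    """Map a viewing-plane pixel (u, v) onto the road surface (the X-Z
    plane, y = 0). f is the focal length, R the camera rotation matrix
    (camera frame -> world frame), cam_pos the camera position in world
    coordinates. Returns the 3-D point on the road, or None if the ray
    never reaches it."""
    ray_cam = np.array([u, v, f], dtype=float)   # ray through the pixel
    d = R @ ray_cam                              # ray direction in world frame
    c = np.asarray(cam_pos, dtype=float)
    if abs(d[1]) < 1e-9:
        return None                              # ray parallel to the road
    t = -c[1] / d[1]
    if t <= 0:
        return None                              # road is behind the camera
    return c + t * d
```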
- the viewpoint conversion means 105C synthesizes, from the result of the mapping means 104C mapping the on-board camera images onto the spatial model 103C, an image as taken from a camera installed at an arbitrary virtual viewpoint.
- the outline of the method has been described in relation to the viewpoint conversion means 105A.
- in the viewpoint conversion means 105C, when synthesizing the image obtained by placing a camera at the arbitrary viewpoint, there may be pixels of the composite image for which no color is mapped to the corresponding point of the spatial model 103C.
- in that case, the color of such a point can be obtained by interpolation or extrapolation from the surrounding points to which colors have been mapped.
- the camera parameter correcting means 106C corrects the camera parameters independently for each camera.
- FIG. 48 is a conceptual diagram showing a configuration example of an operation unit for correcting a camera parameter by the camera parameter correcting unit 106C.
- a camera selection button 901C for selecting the camera to be corrected
- a zoom button 904C (forward/backward) for moving the camera back and forth along its optical axis
- a translation button 902C (up/down/left/right) for moving the camera perpendicular to its optical axis
- a joystick 903C for changing and correcting the camera orientation and the rotation around the camera's optical axis
- the corrected result is immediately reflected in equation (8): the rotation matrix is recalculated using the new camera orientation, and (tx, ty, tz) is replaced with the coordinates of the new camera position. By synthesizing the image with the modified equation (8), the operator can check at a glance whether the correction operation is right.
- at least these two corrections (of the camera orientation and the camera position) are essential.
- FIG. 49 is a block diagram showing a configuration example of a technique related to the image generation device according to the present invention.
- in this technique, a mapping table 108C holding the pixel correspondence between the camera input images and the composite image is further provided.
- FIG. 50 is a conceptual diagram showing the mapping table 108 C in a table format.
- the table is composed of as many cells as there are pixels in the screen displayed by the display means 107C; that is,
- the table has the same width and height as the display screen, and each cell holds a camera number and the coordinates of a pixel of that camera's image.
- for example, the upper-left cell of FIG. 50 corresponds to the upper-left pixel of the display screen, i.e., (0, 0); from the data (1, (10, 10)) stored in that cell, the mapping means 104C performs the process of displaying the data of pixel (10, 10) of the image captured by camera 1 at position (0, 0) of the display screen.
- to create the mapping table 108C, the pixel correspondence between the camera input images and the composite image must be calculated; this is easily computed with the mapping means 104C and the viewpoint conversion means 105C. Specifically, the following processing may be performed for all cameras.
- 2.2 The viewpoint conversion means 105C determines the coordinates (called coordinate 2) on the viewing plane when the pixel mapped in 2.1 is viewed from the virtual viewpoint.
- 2.3 The data (Cn, coordinate 1) is written as a set into the cell corresponding to coordinate 2 of the mapping table 108C. However, if a display area is assigned to each camera image, the data is written only when coordinate 2 lies within the display area of Cn.
- once the mapping table 108C has been created, the display means 107C no longer displays the composite image via the processing of the mapping means 104C and the viewpoint conversion means 105C; instead, the table is used to transfer camera pixels directly into the display image. This speeds up the synthesis of multiple camera images, as sketched below.
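- A minimal sketch of this direct, table-driven composition; the three-array layout (camera index plus source coordinates) is an illustrative choice, since the text only specifies one (camera number, coordinate) cell per display pixel.

```python
import numpy as np

def build_composite(camera_images, cam_idx, src_v, src_u):
    """Compose the display image directly from a precomputed mapping table.
    cam_idx[y, x] holds the camera number for display pixel (x, y), and
    (src_v, src_u)[y, x] hold "coordinate 1", the source pixel in that
    camera's image. Cells whose camera number matches no real camera stay
    black (the "no object" color)."""
    h, w = cam_idx.shape
    out = np.zeros((h, w, 3), dtype=np.uint8)   # black background
    for n, img in enumerate(camera_images):
        mask = cam_idx == n                     # display pixels fed by camera n
        out[mask] = img[src_v[mask], src_u[mask]]
    return out
```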
- Fig. 51 is a flowchart showing the flow of camera parameter correction processing.
- Figs. 52 and 53 are conceptual diagrams used to assist in explaining the camera parameter correction processing.
- FIGS. 52 (a) and (b) and FIGS. 53 (a) and (b) show the display screens when the camera parameters are corrected.
- Processing 1201 The camera to be corrected is selected with the camera selection button 901C or the like. At this time, a state in which the synthesized image is shifted is displayed on the screen, as shown in FIG. 53 (a).
- Processing 1202 The virtual viewpoint is switched to the position and orientation of the selected camera, and the image is synthesized.
- an image from the viewpoint position of the selected camera is then displayed on the screen, as shown in FIG. 52 (a), and the state of the camera shift can be grasped from that viewpoint position.
- Processing 1203 The camera's three-dimensional position and orientation are corrected with the zoom button 904C, the translation button 902C, and the joystick 903C. During the correction, the virtual viewpoint is kept fixed. If the correction succeeds, the screen appears with no deviation, as shown in FIG. 52 (b).
- Processing 1204 When the correction of the camera is completed, the corrected camera parameters are written into the camera parameter table 102C.
- Processing 1205 If another camera needs to be corrected, that camera is selected and processings 1201 to 1204 are repeated; if not, the flow proceeds to processing 1206.
- Processing 1206 The virtual viewpoint changed in processing 1202 is returned to the virtual viewpoint used before the camera correction, and the image is synthesized. When the resulting image is confirmed to be displayed without any shift, as shown in FIG. 52 (b), the camera parameter correction processing ends.
- a marker (for example, a line) may be drawn at the boundaries between the camera images;
- such marking makes it easy to see which camera's image is displayed where.
- an image from an arbitrary virtual viewpoint is synthesized using images from a limited number of cameras.
- the camera parameter correcting means is used: as if adjusting a rearview mirror, the driver checks the displayed image each time and corrects only the image of the displaced camera by moving the viewpoint, changing the direction, and so on, until an optimal image is obtained.
- the driver can easily correct the deviation without requiring much cost and labor for the adjustment.
- since the viewpoint conversion means synthesizes the image after switching the virtual viewpoint to the position of the vehicle-mounted camera to be corrected, the direction and position of the vehicle-mounted camera can be corrected as naturally as adjusting a rearview mirror.
- the camera parameters such as the direction and the position of the virtual viewpoint that are virtually set are corrected instead of adjusting the actual camera direction, a mechanism for changing the camera direction is not required. Costs can be kept low.
- FIG. 54 is a block diagram showing another embodiment of the present invention; means identical to those in the configuration described above are given the same reference numerals. The new point is a guide data storage means 110C for storing guide data that serves as a guide when correcting the camera parameters indicating the camera characteristics.
- This guide data is superimposed on the input image by the image synthesizing means 109 C and displayed by the display means 107 C.
- This guide data can be generated by any method. For example, it is generated and used by the method described below.
- FIG. 55 shows, in addition to the configuration of FIG. 54, a feature generating means 111C that generates a point light source at a predetermined location on the vehicle, and a feature extraction means 112C that extracts the resulting features.
- the feature extraction result can be stored in the guide data storage means 110C as the above guide data.
- FIG. 56 (a) shows point light sources realized by several lamps 113C mounted on the front of the vehicle.
- these lamps 113C correspond to the feature generating means 111C.
- (b) and (c) show images of the front of the vehicle taken by cameras 1 and 2 mounted on the front side of the vehicle (f).
- the feature extraction means 112C extracts the shining features from these images
- by image processing (d, e).
- FIGS. 57 (a) to (e) show an example using a feature of a part of the vehicle body itself: the line segment at the right rear corner of the vehicle is extracted by image processing.
- a black circle indicates the position where the image of a point light source should originally appear.
- the white circles indicate the point light sources as actually captured when the camera is out of alignment.
- a shifted image is thus obtained.
- the parameters are then corrected with the joystick 903C, as described above, until the two coincide (b, c). This allows easy calibration.
- FIG. 59 shows an example in which calibration was performed using line segments as described above.
- FIG. 60 is a block diagram showing a configuration example of an image generating apparatus according to the present invention (an example of claim 55).
- the image generating apparatus includes, as a basic configuration, a plurality of cameras 101D attached so as to grasp the situation around the vehicle, a camera parameter table 102D storing camera parameters indicating the characteristics of the cameras 101D,
- a mapping means 104D for mapping the images input from the cameras 101D onto a space model 103D that models the situation around the vehicle, a viewpoint parameter table 108D for storing viewpoint parameters including at least a position and an orientation, a viewpoint conversion means 105D that synthesizes a single image viewed from a desired virtual viewpoint from the data created by the mapping means 104D, a viewpoint parameter correction means 106D for correcting the virtual viewpoint parameters, and
- a display means 107D for joining and displaying the images converted by the viewpoint conversion means 105D.
- the camera 101D is a television camera that captures an image of a space to be monitored, such as a situation around a vehicle. This camera is mounted on the vehicle as shown in Figure 43.
- the camera parameter table 102D is a table for storing camera parameters.
- the camera parameters are as described above.
- a three-dimensional space coordinate system based on the vehicle is defined, as described with reference to FIG. 44.
- the data stored in the camera parameter table 102D is the same as in FIG. 45.
- FIG. 61 shows the data stored in the viewpoint parameter table 108D in table format. In FIG. 61, three sets of viewpoint parameters are stored, each associated one-to-one with one of the three on-board cameras.
- an image captured by camera 1 is viewpoint-converted to an image viewed from virtual viewpoint 1
- an image captured by camera 2 is viewpoint-converted to an image viewed from virtual viewpoint 2.
- since a virtual viewpoint is set for each camera in this way, if the junction between the image from one camera and that from another shifts on the display screen due to, for example, a camera position shift, the shift can be corrected intuitively by changing only the parameters of the virtual viewpoint viewing that camera's image. Details of the correction method are described later with the viewpoint parameter correction means.
- the items from the second column to the ninth column show specific examples of the viewpoint parameters, and are as follows in order from the left column of the table.
- it is stated that virtual viewpoint 1 is at the position of the coordinates (0, 0, z2), that its direction makes 0 degrees with the Y-Z plane and 90 degrees with the X-Z plane, that there is no rotation around the central axis of the line of sight, and that the focal length is f2 and the lens distortion coefficients κ1 and κ2 are 0.
- the mapping means 104D maps each pixel constituting the input image from the onboard camera to the spatial model 103D based on the camera parameters. That is, each image captured by the on-board camera is perspectively projected onto the space model 103D.
- the spatial model 103D refers to a three-dimensional model that maps an image from a camera to a three-dimensional spatial coordinate system in the mapping means 104D.
- a plane, a curved surface, or a model composed of planes and curved surfaces is used.
- here, as the simplest example of a spatial model 103D, the method of mapping an image from an on-board camera onto a plane model representing the road surface is described.
- the simple model causes problems such as large distortion of an object having a height component.
- a space model combining several planes, or a space model combining planes and curved surfaces, may be used.
- an accurate three-dimensional model of an obstacle around the vehicle can be measured in real time, a more accurate synthesized image can be obtained by using the three-dimensional model.
- for the mapping processing in the mapping means 104D, the method of converting viewing-plane coordinates into world coordinates must be explained first; it is the same as that described with reference to FIG. 47.
- each pixel constituting an image captured by a vehicle-mounted camera is generally expressed as coordinates on a plane including the image plane; the details have already been described.
- the viewpoint conversion means 105D synthesizes the result of the mapping means 104D mapping the on-vehicle camera images onto the space model 103D into an image taken from a camera installed at an arbitrary virtual viewpoint. The outline of the method has been described regarding the viewpoint conversion means 105A.
- FIG. 62 is a conceptual diagram showing how each pixel constituting the input image from the vehicle-mounted camera is mapped by the mapping means 104D onto a plane serving as the spatial model 103D,
- and how the mapped result is synthesized into an image taken from a camera installed at the virtual viewpoint.
- the viewpoint parameter correcting means 106D can independently correct the parameters of the virtual viewpoint provided for each camera.
- FIG. 63 shows a configuration example of an operation unit for correcting a virtual viewpoint parameter by the viewpoint parameter correcting unit 106D.
- a translation button 1002 (up/down/left/right with respect to the image plane) for translating the viewpoint position perpendicular to the line-of-sight direction of the virtual viewpoint
- the corrected result is immediately reflected in equation (8): the rotation matrix is recalculated using the new line-of-sight direction, and (tx, ty, tz) is replaced with the coordinates of the new virtual viewpoint position.
- the image resulting from the correction operation is then synthesized using the modified equation (8), and the operator can see at a glance from the synthesized image whether the correction operation is right.
- for the joystick 1003D, rather than setting the central axis of the line of sight to the direction in which the stick is tilted, the line of sight should be moved in the direction opposite to the stick's motion, and the changed direction written into the viewpoint parameter table 108D. For example, if the joystick 1003D is tilted 10 degrees to the right, the viewpoint parameter becomes a line of sight tilted 10 degrees to the left relative to the current one.
- likewise, if the stick is twisted clockwise by 5 degrees, the rotation angle around the central axis of the line of sight is rotated 5 degrees counterclockwise. The same applies to the operation of the translation button 1004D. (Moving the viewpoint opposite to the stick makes the displayed image move in the commanded direction, matching the mirror-adjustment intuition.)
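- A tiny sketch of this inversion; the function and parameter names, and the use of degrees, are illustrative assumptions.

```python
def joystick_to_viewpoint_delta(tilt_right_deg, tilt_up_deg, twist_cw_deg):
    """Invert the operator's joystick motion into the change written to the
    viewpoint parameter table: a 10-degree tilt to the right yields a line
    of sight moved 10 degrees to the left, and a 5-degree clockwise twist
    yields a 5-degree counterclockwise rotation about the line-of-sight
    axis."""
    return (-tilt_right_deg, -tilt_up_deg, -twist_cw_deg)
```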
- FIG. 64 is a block diagram showing a configuration example of a technique related to the image generation device of the present invention.
- in this technique, a mapping table 109D holding the pixel correspondence between the camera input images and the composite image is provided.
- the conceptual diagram showing the mapping table 109D in table format has the same content as FIG. 50 described above.
- the table is composed of cells for the number of pixels of the screen to be displayed by the display means 107D.
- only the portion of the pixel correspondence between the camera input images and the composite image that is changed by the correction needs to be recalculated and rewritten according to the procedure described above.
- once the mapping table 109D has been created, the display means 107D does not display the composite image via the processing results of the mapping means 104D and the viewpoint conversion means 105D; instead, the table is used to transfer camera pixels directly into the display image. This speeds up the synthesis of multiple camera images.
- FIG. 65 (a) is a block diagram showing a configuration example of the technology related to the image generation device of the present invention, FIG. 65 (b) is a conceptual diagram showing an example of an in-vehicle camera composite image before correction of the viewpoint parameters,
- and FIG. 65 (c) is a conceptual diagram showing an example of the composite image after correction.
- this configuration adds, to the above-mentioned apparatus, a mapping table correction means 110D that recalculates the mapping table using the viewpoint parameters changed by the viewpoint parameter correction means,
- and a buffer 111D for temporarily storing data during the processing of the mapping table correction means 110D.
- by using the mapping table correction means 110D, the correction result of the viewpoint parameters can be reflected in the contents of the mapping table without the complicated recalculation of equation (8). The method of changing the mapping table is described step by step below.
- FIGS. 66 (a) to (c) are conceptual diagrams used to help explain the method of changing the mapping table.
- the parallelogram shown in gray represents the image plane before the virtual viewpoint correction,
- and the parallelogram shown in white represents the image plane after the correction.
- the configuration of the operation unit for correcting the virtual viewpoint parameters is as described with reference to FIG. 63;
- a buffer 111D for temporarily storing the result of changing the mapping table is provided in advance.
- the viewpoint parameter correction means obtains viewpoint parameter correction information for one of translation, direction, rotation, or zoom. According to the content of this information, one of the following four processes computes data (hereinafter, table conversion data) describing the correspondence between the display coordinates (u, v) before correction and the display coordinates (u′, v′) after correction, and writes it into the buffer 111D.
- Equation (10): the table conversion data takes the form of pairs (u, v) → (u′, v′).
- when modifying the zoom (magnification), the zoom button is used to obtain the magnification k after the change relative to before the change; the relationship between the coordinate P4 before the change and the coordinate P4′ after the change is given by equation (14) and written into the buffer 111D.
- Equation (14): P4′ = (k·u4, k·v4)
- Processing 3 The mapping table is changed by referring to the buffer 111D.
- the contents of each cell of the mapping table are changed according to the table conversion data written in the buffer 111D, for example for cells satisfying both of the following conditions:
- the camera number of the cell matches the in-vehicle camera number corresponding to the virtual viewpoint being corrected;
- the pre-correction coordinate of the cell matches the coordinates on the left-hand side of the table conversion data.
- a cell whose camera number matches the in-vehicle camera number corresponding to the virtual viewpoint being corrected, but whose coordinate has not been rewritten, lies outside the field of view from the corrected virtual viewpoint; coordinates outside the display area of the display means are therefore written into it.
- the display means may display, for example, black as a color indicating the outside of the field of view when a coordinate value that cannot be displayed is input.
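- One plausible reading of this table rewrite, for the zoom case, is sketched below; the array layout matches the earlier mapping-table sketch, and the interpretation that cell contents move from (u, v) to (k·u, k·v) is an assumption based on equation (14).

```python
import numpy as np

INVALID = 255   # camera number meaning "outside the field of view"

def apply_zoom_conversion(cam_idx, coord_v, coord_u, target_cam, k):
    """Move each mapping-table cell belonging to the corrected camera from
    display coordinate (u, v) to (k*u, k*v) per equation (14). Cells of
    that camera left unfilled are outside the corrected field of view and
    are marked INVALID, which the display means shows as black."""
    h, w = cam_idx.shape
    new_cam = cam_idx.copy()
    new_v, new_u = coord_v.copy(), coord_u.copy()
    target = cam_idx == target_cam
    new_cam[target] = INVALID            # clear the corrected camera's cells
    ys, xs = np.nonzero(target)
    for y, x in zip(ys, xs):
        y2, x2 = int(round(k * y)), int(round(k * x))
        if 0 <= y2 < h and 0 <= x2 < w:
            new_cam[y2, x2] = target_cam
            new_v[y2, x2], new_u[y2, x2] = coord_v[y, x], coord_u[y, x]
    return new_cam, new_v, new_u
```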
- FIG. 67 is a flowchart showing the flow of the viewpoint parameter correction process in the image generation apparatus of the present invention.
- FIGS. 68 (a) to (c) are conceptual diagrams used to assist in explaining the viewpoint parameter correction process.
- an image from an arbitrary virtual viewpoint is synthesized using images from a limited number of cameras. When the position and orientation of a camera shift due to vibration while the vehicle is running, the conventional approach of removing the shift by finely readjusting the camera itself made it difficult to fit the shifted image back to the other images.
- here, the concept of a virtual viewpoint is introduced, and, while viewing the image seen from the virtual viewpoint, only the displaced image is adjusted by the viewpoint parameter correction means, as if adjusting a camera located at the virtual viewpoint.
- the state after each viewpoint movement or direction change is fed back as an image, so the operator performing the repair can obtain an optimal image while checking the displayed image each time.
- the work of adjusting the image shift is significantly easier, and if the shift is small, it is not necessary to readjust the camera parameters using a large-scale device.
- since the parameters such as the direction and position of the virtually set viewpoint are corrected instead of adjusting the actual camera, a mechanism for changing the camera direction is not required, and apparatus costs can be kept low.
- the image generation device of the present invention can be applied to fields such as vehicle periphery monitoring and store surveillance, where a synthesized image of the entire monitored area can be easily created. A further advantage is that even if the direction of a camera shifts, the shift can easily be corrected on the image.
Priority Applications (3)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US09/744,787 US7307655B1 (en) | 1998-07-31 | 1999-07-29 | Method and apparatus for displaying a synthesized image viewed from a virtual point of view |
EP99933145A EP1115250B1 (en) | 1998-07-31 | 1999-07-29 | Method and apparatus for displaying image |
JP2000563072A JP3286306B2 (ja) | 1998-07-31 | 1999-07-29 | 画像生成装置、画像生成方法 |
Applications Claiming Priority (10)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
JP21726198 | 1998-07-31 | ||
JP10/217261 | 1998-07-31 | ||
JP10/286233 | 1998-10-08 | ||
JP28623398 | 1998-10-08 | ||
JP10/317393 | 1998-11-09 | ||
JP10/317407 | 1998-11-09 | ||
JP31739398 | 1998-11-09 | ||
JP31740798 | 1998-11-09 | ||
JP32470198 | 1998-11-16 | ||
JP10/324701 | 1998-11-16 |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2000007373A1 (fr) | 2000-02-10 |
Family
ID=27529636
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/JP1999/004061 WO2000007373A1 (fr) | 1998-07-31 | 1999-07-29 | Procede et appareil d'affichage d'images |
Country Status (4)
Country | Link |
---|---|
US (1) | US7307655B1 (ja) |
EP (4) | EP2309453A3 (ja) |
JP (1) | JP3286306B2 (ja) |
WO (1) | WO2000007373A1 (ja) |
Families Citing this family (218)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US6822563B2 (en) | 1997-09-22 | 2004-11-23 | Donnelly Corporation | Vehicle imaging system with accessory control |
US6891563B2 (en) | 1996-05-22 | 2005-05-10 | Donnelly Corporation | Vehicular vision system |
DE10037130B4 (de) * | 1999-09-13 | 2015-10-29 | Volkswagen Ag | Einpark- und/oder Rangierhilfeeinrichtung für Pkw oder Lkw |
JP3645196B2 (ja) * | 2001-02-09 | 2005-05-11 | 松下電器産業株式会社 | 画像合成装置 |
JP4512293B2 (ja) | 2001-06-18 | 2010-07-28 | パナソニック株式会社 | 監視システムおよび監視方法 |
US7697027B2 (en) | 2001-07-31 | 2010-04-13 | Donnelly Corporation | Vehicular video system |
US9428186B2 (en) * | 2002-04-09 | 2016-08-30 | Intelligent Technologies International, Inc. | Exterior monitoring for vehicles |
EP1504276B1 (en) | 2002-05-03 | 2012-08-08 | Donnelly Corporation | Object detection system for vehicle |
WO2004008744A1 (ja) * | 2002-07-12 | 2004-01-22 | Iwane Laboratories, Ltd. | 道路面等の平面対象物映像の平面展開画像処理方法、同逆展開画像変換処理方法及びその平面展開画像処理装置、逆展開画像変換処理装置 |
US7450165B2 (en) * | 2003-05-02 | 2008-11-11 | Grandeye, Ltd. | Multiple-view processing in wide-angle video camera |
US20050031169A1 (en) * | 2003-08-09 | 2005-02-10 | Alan Shulman | Birds eye view virtual imaging for real time composited wide field of view |
FI117217B (fi) * | 2003-10-01 | 2006-07-31 | Nokia Corp | Menetelmä ja järjestelmä käyttöliittymän (User Interface) hallitsemiseksi, vastaava laite ja ohjelmalliset (Software) välineet menetelmän toteuttamiseksi |
JP2005124010A (ja) * | 2003-10-20 | 2005-05-12 | Nissan Motor Co Ltd | 撮像装置 |
JP2005242606A (ja) * | 2004-02-26 | 2005-09-08 | Olympus Corp | 画像生成装置、画像生成プログラム、及び画像生成方法 |
WO2005088970A1 (ja) * | 2004-03-11 | 2005-09-22 | Olympus Corporation | 画像生成装置、画像生成方法、および画像生成プログラム |
US7526103B2 (en) | 2004-04-15 | 2009-04-28 | Donnelly Corporation | Imaging system for vehicle |
EP1748654A4 (en) * | 2004-04-27 | 2013-01-02 | Panasonic Corp | VISUALIZATION OF CIRCUMFERENCE OF A VEHICLE |
JP2006026790A (ja) * | 2004-07-15 | 2006-02-02 | Fanuc Ltd | 教示モデル生成装置 |
JP2006050263A (ja) * | 2004-08-04 | 2006-02-16 | Olympus Corp | 画像生成方法および装置 |
JP2006119843A (ja) * | 2004-10-20 | 2006-05-11 | Olympus Corp | 画像生成方法およびその装置 |
US7720580B2 (en) | 2004-12-23 | 2010-05-18 | Donnelly Corporation | Object detection system for vehicle |
JP4244040B2 (ja) * | 2005-03-10 | 2009-03-25 | 任天堂株式会社 | 入力処理プログラムおよび入力処理装置 |
US7920959B1 (en) * | 2005-05-01 | 2011-04-05 | Christopher Reed Williams | Method and apparatus for estimating the velocity vector of multiple vehicles on non-level and curved roads using a single camera |
JP2007099261A (ja) * | 2005-09-12 | 2007-04-19 | Aisin Aw Co Ltd | 駐車支援方法及び駐車支援装置 |
FR2891934B1 (fr) * | 2005-10-12 | 2008-01-18 | Valeo Electronique Sys Liaison | Dispositif de traitement de donnees video pour un vehicule automobile |
JP4762698B2 (ja) * | 2005-11-30 | 2011-08-31 | アルパイン株式会社 | 車両周辺画像表示装置 |
JP4297111B2 (ja) * | 2005-12-14 | 2009-07-15 | ソニー株式会社 | 撮像装置、画像処理方法及びそのプログラム |
US8238695B1 (en) * | 2005-12-15 | 2012-08-07 | Grandeye, Ltd. | Data reduction techniques for processing wide-angle video |
JP2007180803A (ja) * | 2005-12-27 | 2007-07-12 | Aisin Aw Co Ltd | 運転支援方法及び運転支援装置 |
JP4104631B2 (ja) * | 2006-03-27 | 2008-06-18 | 三洋電機株式会社 | 運転支援装置 |
CN101438590B (zh) * | 2006-05-09 | 2011-07-13 | 日产自动车株式会社 | 车辆周围图像提供装置和车辆周围图像提供方法 |
JP4812510B2 (ja) | 2006-05-17 | 2011-11-09 | アルパイン株式会社 | 車両周辺画像生成装置および撮像装置の測光調整方法 |
US8339402B2 (en) * | 2006-07-16 | 2012-12-25 | The Jim Henson Company | System and method of producing an animated performance utilizing multiple cameras |
US7972045B2 (en) | 2006-08-11 | 2011-07-05 | Donnelly Corporation | Automatic headlamp control system |
US7728879B2 (en) * | 2006-08-21 | 2010-06-01 | Sanyo Electric Co., Ltd. | Image processor and visual field support device |
JP4642723B2 (ja) | 2006-09-26 | 2011-03-02 | クラリオン株式会社 | 画像生成装置および画像生成方法 |
JP4257356B2 (ja) | 2006-09-26 | 2009-04-22 | 株式会社日立製作所 | 画像生成装置および画像生成方法 |
US7865285B2 (en) * | 2006-12-27 | 2011-01-04 | Caterpillar Inc | Machine control system and method |
US8017898B2 (en) | 2007-08-17 | 2011-09-13 | Magna Electronics Inc. | Vehicular imaging system in an automatic headlamp control system |
DE102007049821A1 (de) | 2007-10-16 | 2009-04-23 | Daimler Ag | Verfahren zum Kalibrieren einer Anordnung mit mindestens einer omnidirektionalen Kamera und einer optischen Anzeigeeinheit |
JP2009101718A (ja) * | 2007-10-19 | 2009-05-14 | Toyota Industries Corp | 映像表示装置及び映像表示方法 |
US7912283B1 (en) * | 2007-10-31 | 2011-03-22 | The United States Of America As Represented By The Secretary Of The Air Force | Image enhancement using object profiling |
JP5057948B2 (ja) * | 2007-12-04 | 2012-10-24 | アルパイン株式会社 | 歪曲補正画像生成ユニットおよび歪曲補正画像生成方法 |
JP2009172122A (ja) * | 2008-01-24 | 2009-08-06 | Brother Ind Ltd | ミシン |
CA2714492C (en) | 2008-02-08 | 2014-07-15 | Google, Inc. | Panoramic camera with multiple image sensors using timed shutters |
JP5222597B2 (ja) * | 2008-03-19 | 2013-06-26 | 三洋電機株式会社 | 画像処理装置及び方法、運転支援システム、車両 |
EP2107503A1 (en) | 2008-03-31 | 2009-10-07 | Harman Becker Automotive Systems GmbH | Method and device for generating a real time environment model for vehicles |
US8675068B2 (en) | 2008-04-11 | 2014-03-18 | Nearmap Australia Pty Ltd | Systems and methods of capturing large area images in detail including cascaded cameras and/or calibration features |
US8497905B2 (en) * | 2008-04-11 | 2013-07-30 | nearmap australia pty ltd. | Systems and methods of capturing large area images in detail including cascaded cameras and/or calibration features |
EP2285109B1 (en) * | 2008-05-29 | 2018-11-28 | Fujitsu Limited | Vehicle image processor, and vehicle image processing system |
JP4377439B1 (ja) * | 2008-06-12 | 2009-12-02 | 本田技研工業株式会社 | 車両周辺監視装置 |
FR2936479B1 (fr) * | 2008-09-30 | 2010-10-15 | Dav | Procede de commande et systeme d'aide a la conduite associe |
EP2179892A1 (de) * | 2008-10-24 | 2010-04-28 | Magna Electronics Europe GmbH & Co. KG | Verfahren zum automatischen Kalibrieren einer virtuellen Kamera |
WO2010052772A1 (ja) * | 2008-11-05 | 2010-05-14 | 富士通株式会社 | カメラ角度算出装置、カメラ角度算出方法およびカメラ角度算出プログラム |
JP5190712B2 (ja) | 2009-03-24 | 2013-04-24 | アイシン精機株式会社 | 障害物検出装置 |
JP5195592B2 (ja) * | 2009-03-31 | 2013-05-08 | 富士通株式会社 | 映像処理装置 |
US20100259371A1 (en) * | 2009-04-10 | 2010-10-14 | Jui-Hung Wu | Bird-View Parking Aid Apparatus with Ultrasonic Obstacle Marking and Method of Maneuvering the same |
WO2010119496A1 (ja) | 2009-04-13 | 2010-10-21 | 富士通株式会社 | 画像処理装置、画像処理プログラム、画像処理方法 |
JP2010258691A (ja) * | 2009-04-23 | 2010-11-11 | Sanyo Electric Co Ltd | 操縦支援装置 |
EP2437494B1 (en) | 2009-05-25 | 2017-10-11 | Panasonic Intellectual Property Management Co., Ltd. | Device for monitoring area around vehicle |
JP2010274813A (ja) * | 2009-05-29 | 2010-12-09 | Fujitsu Ten Ltd | 画像生成装置及び画像表示システム |
EP2451156A4 (en) * | 2009-06-29 | 2014-08-13 | Panasonic Corp | VEHICLE-ASSEMBLED VIDEO DISPLAY DEVICE |
JP5500369B2 (ja) * | 2009-08-03 | 2014-05-21 | アイシン精機株式会社 | 車両周辺画像生成装置 |
CN102714690A (zh) * | 2009-09-04 | 2012-10-03 | 布瑞特布里克有限公司 | 移动广角视频记录系统 |
US20110063425A1 (en) | 2009-09-15 | 2011-03-17 | Delphi Technologies, Inc. | Vehicle Operator Control Input Assistance |
KR101302944B1 (ko) * | 2009-11-04 | 2013-09-06 | 주식회사 만도 | 차량 인식 방법 및 장치 |
JP5503259B2 (ja) | 2009-11-16 | 2014-05-28 | 富士通テン株式会社 | 車載照明装置、画像処理装置及び画像表示システム |
JP5299231B2 (ja) * | 2009-11-17 | 2013-09-25 | 富士通株式会社 | キャリブレーション装置 |
US8872889B2 (en) | 2010-01-14 | 2014-10-28 | Innovmetric Logiciels Inc. | Synchronization of the orientation of a 3D measurement device and the orientation of an intelligent guidance device |
JP5604146B2 (ja) * | 2010-03-25 | 2014-10-08 | 富士通テン株式会社 | 車載照明装置、画像処理装置、画像表示システム及び照明方法 |
JP5550970B2 (ja) * | 2010-04-12 | 2014-07-16 | 住友重機械工業株式会社 | 画像生成装置及び操作支援システム |
JP5090496B2 (ja) * | 2010-04-12 | 2012-12-05 | 住友重機械工業株式会社 | 画像生成装置及び操作支援システム |
JP5135380B2 (ja) * | 2010-04-12 | 2013-02-06 | 住友重機械工業株式会社 | 処理対象画像生成装置、処理対象画像生成方法、及び操作支援システム |
JP5362639B2 (ja) | 2010-04-12 | 2013-12-11 | 住友重機械工業株式会社 | 画像生成装置及び操作支援システム |
US10703299B2 (en) | 2010-04-19 | 2020-07-07 | SMR Patents S.à.r.l. | Rear view mirror simulation |
JP5552892B2 (ja) * | 2010-05-13 | 2014-07-16 | 富士通株式会社 | 画像処理装置および画像処理プログラム |
JP2011257940A (ja) * | 2010-06-08 | 2011-12-22 | Panasonic Corp | 逆変換テーブル生成方法、逆変換テーブル生成プログラム、画像変換装置、画像変換方法、及び画像変換プログラム |
US9064293B2 (en) | 2010-06-15 | 2015-06-23 | Mitsubishi Electric Corporation | Vehicle surroundings monitoring device |
JP5523954B2 (ja) * | 2010-06-30 | 2014-06-18 | 富士通テン株式会社 | 画像表示システム |
CN102469291A (zh) * | 2010-11-05 | 2012-05-23 | 屈世虎 | 视频通话系统和方法 |
WO2012073722A1 (ja) * | 2010-12-01 | 2012-06-07 | コニカミノルタホールディングス株式会社 | 画像合成装置 |
WO2012075250A1 (en) * | 2010-12-01 | 2012-06-07 | Magna Electronics Inc. | System and method of establishing a multi-camera image using pixel remapping |
EP2511137B1 (en) * | 2011-04-14 | 2019-03-27 | Harman Becker Automotive Systems GmbH | Vehicle Surround View System |
WO2012145818A1 (en) | 2011-04-25 | 2012-11-01 | Magna International Inc. | Method and system for dynamically calibrating vehicular cameras |
WO2012145822A1 (en) | 2011-04-25 | 2012-11-01 | Magna International Inc. | Method and system for dynamically calibrating vehicular cameras |
US8934017B2 (en) | 2011-06-01 | 2015-01-13 | Honeywell International Inc. | System and method for automatic camera placement |
JP6058256B2 (ja) | 2011-06-13 | 2017-01-11 | アルパイン株式会社 | 車載カメラ姿勢検出装置および方法 |
US20140192159A1 (en) * | 2011-06-14 | 2014-07-10 | Metrologic Instruments, Inc. | Camera registration and video integration in 3d geometry model |
US8976280B2 (en) | 2011-07-06 | 2015-03-10 | Morpho, Inc. | Distortion estimating image processing device, method, and non-transitory storage medium |
WO2013008623A1 (ja) * | 2011-07-12 | 2013-01-17 | 日産自動車株式会社 | 車両用監視装置、車両用監視システム、端末装置及び車両の監視方法 |
WO2013016409A1 (en) | 2011-07-26 | 2013-01-31 | Magna Electronics Inc. | Vision system for vehicle |
US9098751B2 (en) | 2011-07-27 | 2015-08-04 | Gentex Corporation | System and method for periodic lane marker identification and tracking |
WO2013019707A1 (en) | 2011-08-01 | 2013-02-07 | Magna Electronics Inc. | Vehicle camera alignment system |
EP2554434B1 (en) * | 2011-08-05 | 2014-05-21 | Harman Becker Automotive Systems GmbH | Vehicle surround view system |
JP5750344B2 (ja) * | 2011-09-16 | 2015-07-22 | 日立建機株式会社 | 作業機の周囲監視装置 |
US20140218535A1 (en) | 2011-09-21 | 2014-08-07 | Magna Electronics Inc. | Vehicle vision system using image data transmission and power supply via a coaxial cable |
DE102011084554A1 (de) * | 2011-10-14 | 2013-04-18 | Robert Bosch Gmbh | Verfahren zur Darstellung eines Fahrzeugumfeldes |
WO2013074604A2 (en) | 2011-11-15 | 2013-05-23 | Magna Electronics, Inc. | Calibration system and method for vehicular surround vision system |
WO2013081985A1 (en) | 2011-11-28 | 2013-06-06 | Magna Electronics, Inc. | Vision system for vehicle |
KR101265711B1 (ko) * | 2011-11-30 | 2013-05-20 | 주식회사 이미지넥스트 | 3d 차량 주변 영상 생성 방법 및 장치 |
WO2013086249A2 (en) | 2011-12-09 | 2013-06-13 | Magna Electronics, Inc. | Vehicle vision system with customized display |
US9600863B2 (en) * | 2012-02-13 | 2017-03-21 | Omnivision Technologies, Inc. | Method for combining images |
US10457209B2 (en) | 2012-02-22 | 2019-10-29 | Magna Electronics Inc. | Vehicle vision system with multi-paned view |
DE102012203171A1 (de) * | 2012-02-29 | 2013-08-29 | Bayerische Motoren Werke Aktiengesellschaft | Verfahren und Vorrichtung zur Bildverarbeitung von Bilddaten |
JP2013186245A (ja) * | 2012-03-07 | 2013-09-19 | Denso Corp | 車両周辺監視装置 |
JP5891880B2 (ja) * | 2012-03-19 | 2016-03-23 | 株式会社日本自動車部品総合研究所 | 車載画像表示装置 |
CN104321665B (zh) * | 2012-03-26 | 2017-02-22 | 罗伯特·博世有限公司 | 基于多表面模型的跟踪 |
JP6029306B2 (ja) * | 2012-03-29 | 2016-11-24 | 住友建機株式会社 | 作業機械用周辺監視装置 |
US8768583B2 (en) | 2012-03-29 | 2014-07-01 | Harnischfeger Technologies, Inc. | Collision detection and mitigation systems and methods for a shovel |
WO2013155203A2 (en) * | 2012-04-13 | 2013-10-17 | Lightcraft Technology Llc | Hybrid precision tracking |
US20130293683A1 (en) * | 2012-05-03 | 2013-11-07 | Harman International (Shanghai) Management Co., Ltd. | System and method of interactively controlling a virtual camera |
JP5634643B2 (ja) | 2012-05-22 | 2014-12-03 | 三菱電機株式会社 | 画像処理装置 |
TW201403553A (zh) * | 2012-07-03 | 2014-01-16 | Automotive Res & Testing Ct | 自動校正鳥瞰影像方法 |
US9143670B1 (en) * | 2012-07-20 | 2015-09-22 | COPsync, Inc. | Video capture system including two independent image sensors |
JP5911775B2 (ja) * | 2012-08-21 | 2016-04-27 | 富士通テン株式会社 | 画像生成装置、画像表示システム及び画像生成方法 |
EP2892230A4 (en) | 2012-08-30 | 2015-09-02 | Fujitsu Ltd | IMAGE PROCESSING DEVICE, IMAGE PROCESSING AND PROGRAM |
WO2014041864A1 (ja) * | 2012-09-14 | 2014-03-20 | 本田技研工業株式会社 | 対象物識別装置 |
EP2904349B1 (en) * | 2012-10-01 | 2020-03-18 | Bodybarista ApS | A method of calibrating a camera |
US9723272B2 (en) | 2012-10-05 | 2017-08-01 | Magna Electronics Inc. | Multi-camera image stitching calibration system |
WO2014073282A1 (ja) * | 2012-11-08 | 2014-05-15 | 住友重機械工業株式会社 | 舗装機械用画像生成装置及び舗装機械用操作支援システム |
US9743002B2 (en) | 2012-11-19 | 2017-08-22 | Magna Electronics Inc. | Vehicle vision system with enhanced display functions |
KR102003562B1 (ko) * | 2012-12-24 | 2019-07-24 | 두산인프라코어 주식회사 | 건설기계의 감지 장치 및 방법 |
US9232200B2 (en) * | 2013-01-21 | 2016-01-05 | Devin L. Norman | External vehicle projection system |
US9349056B2 (en) * | 2013-02-15 | 2016-05-24 | Gordon Peckover | Method of measuring road markings |
US10179543B2 (en) | 2013-02-27 | 2019-01-15 | Magna Electronics Inc. | Multi-camera dynamic top view vision system |
US9688200B2 (en) | 2013-03-04 | 2017-06-27 | Magna Electronics Inc. | Calibration system and method for multi-camera vision system |
US9205776B2 (en) | 2013-05-21 | 2015-12-08 | Magna Electronics Inc. | Vehicle vision system using kinematic model of vehicle motion |
US9563951B2 (en) | 2013-05-21 | 2017-02-07 | Magna Electronics Inc. | Vehicle vision system with targetless camera calibration |
US9210417B2 (en) * | 2013-07-17 | 2015-12-08 | Microsoft Technology Licensing, Llc | Real-time registration of a stereo depth camera array |
US20150070498A1 (en) * | 2013-09-06 | 2015-03-12 | Caterpillar Inc. | Image Display System |
US9315192B1 (en) * | 2013-09-30 | 2016-04-19 | Google Inc. | Methods and systems for pedestrian avoidance using LIDAR |
JP6347934B2 (ja) | 2013-10-11 | 2018-06-27 | 株式会社デンソーテン | 画像表示装置、画像表示システム、画像表示方法、及び、プログラム |
KR102175961B1 (ko) * | 2013-11-29 | 2020-11-09 | 현대모비스 주식회사 | 차량 후방 주차 가이드 장치 |
DE102013226632A1 (de) * | 2013-12-19 | 2015-06-25 | Conti Temic Microelectronic Gmbh | Kamerasystem und Verfahren zum Betrieb eines Kamerasystems |
EP3085074B1 (en) * | 2013-12-19 | 2020-02-26 | Intel Corporation | Bowl-shaped imaging system |
KR101572065B1 (ko) * | 2014-01-03 | 2015-11-25 | 현대모비스(주) | 영상 왜곡 보정 방법 및 이를 위한 장치 |
US9902341B2 (en) | 2014-02-26 | 2018-02-27 | Kyocera Corporation | Image processing apparatus and image processing method including area setting and perspective conversion |
KR101670847B1 (ko) * | 2014-04-04 | 2016-11-09 | 주식회사 와이즈오토모티브 | 차량 주변 이미지 생성 장치 및 방법 |
KR101543159B1 (ko) * | 2014-05-02 | 2015-08-10 | 현대자동차주식회사 | 카메라를 이용한 영상 조정 시스템 및 방법 |
US9386302B2 (en) * | 2014-05-21 | 2016-07-05 | GM Global Technology Operations LLC | Automatic calibration of extrinsic and intrinsic camera parameters for surround-view camera system |
WO2016037014A1 (en) * | 2014-09-03 | 2016-03-10 | Nextvr Inc. | Methods and apparatus for capturing, streaming and/or playing back content |
US10442355B2 (en) | 2014-09-17 | 2019-10-15 | Intel Corporation | Object visualization in bowl-shaped imaging systems |
US10127463B2 (en) | 2014-11-21 | 2018-11-13 | Magna Electronics Inc. | Vehicle vision system with multiple cameras |
US9916660B2 (en) | 2015-01-16 | 2018-03-13 | Magna Electronics Inc. | Vehicle vision system with calibration algorithm |
DE102015205507B3 (de) * | 2015-03-26 | 2016-09-29 | Zf Friedrichshafen Ag | Rundsichtsystem für ein Fahrzeug |
US10946799B2 (en) | 2015-04-21 | 2021-03-16 | Magna Electronics Inc. | Vehicle vision system with overlay calibration |
JP2016225865A (ja) * | 2015-06-01 | 2016-12-28 | 東芝アルパイン・オートモティブテクノロジー株式会社 | 俯瞰画像生成装置 |
KR101559739B1 (ko) * | 2015-06-11 | 2015-10-15 | 주식회사 미래엔에스 | 가상 모델과 카메라 영상의 융합시스템 |
US10373378B2 (en) | 2015-06-26 | 2019-08-06 | Paccar Inc | Augmented reality system for vehicle blind spot prevention |
KR101860610B1 (ko) * | 2015-08-20 | 2018-07-02 | 엘지전자 주식회사 | 디스플레이 장치 및 이를 포함하는 차량 |
US10262466B2 (en) * | 2015-10-14 | 2019-04-16 | Qualcomm Incorporated | Systems and methods for adjusting a combined image visualization based on depth information |
US10187590B2 (en) | 2015-10-27 | 2019-01-22 | Magna Electronics Inc. | Multi-camera vehicle vision system with image gap fill |
DE102015221356B4 (de) | 2015-10-30 | 2020-12-24 | Conti Temic Microelectronic Gmbh | Vorrichtung und Verfahren zur Bereitstellung einer Fahrzeugrundumansicht |
JP6524922B2 (ja) | 2016-01-12 | 2019-06-05 | 株式会社デンソー | 運転支援装置、運転支援方法 |
US20190026924A1 (en) * | 2016-01-15 | 2019-01-24 | Nokia Technologies Oy | Method and Apparatus for Calibration of a Multi-Camera System |
JP6597415B2 (ja) | 2016-03-07 | 2019-10-30 | 株式会社デンソー | 情報処理装置及びプログラム |
US10326979B2 (en) | 2016-05-23 | 2019-06-18 | Microsoft Technology Licensing, Llc | Imaging system comprising real-time image registration |
US10339662B2 (en) | 2016-05-23 | 2019-07-02 | Microsoft Technology Licensing, Llc | Registering cameras with virtual fiducials |
JP6551336B2 (ja) | 2016-08-12 | 2019-07-31 | 株式会社デンソー | 周辺監査装置 |
US10678240B2 (en) * | 2016-09-08 | 2020-06-09 | Mentor Graphics Corporation | Sensor modification based on an annotated environmental model |
WO2018125870A1 (en) * | 2016-12-27 | 2018-07-05 | Gentex Corporation | Rear vision system with eye-tracking |
US10313584B2 (en) | 2017-01-04 | 2019-06-04 | Texas Instruments Incorporated | Rear-stitched view panorama for rear-view visualization |
US10452076B2 (en) | 2017-01-04 | 2019-10-22 | Magna Electronics Inc. | Vehicle vision system with adjustable computation and data compression |
JP6658642B2 (ja) | 2017-03-24 | 2020-03-04 | トヨタ自動車株式会社 | 車両用視認装置 |
JP7038345B2 (ja) | 2017-04-20 | 2022-03-18 | パナソニックIpマネジメント株式会社 | カメラパラメタセット算出方法、カメラパラメタセット算出プログラム及びカメラパラメタセット算出装置 |
US10683034B2 (en) * | 2017-06-06 | 2020-06-16 | Ford Global Technologies, Llc | Vehicle remote parking systems and methods |
US10585430B2 (en) | 2017-06-16 | 2020-03-10 | Ford Global Technologies, Llc | Remote park-assist authentication for vehicles |
US10775781B2 (en) | 2017-06-16 | 2020-09-15 | Ford Global Technologies, Llc | Interface verification for vehicle remote park-assist |
CN110785789A (zh) * | 2017-06-27 | 2020-02-11 | 索尼公司 | 图像处理装置、图像处理方法和程序 |
JP6962372B2 (ja) * | 2017-08-25 | 2021-11-05 | 株式会社ソシオネクスト | 補正装置、補正プログラム及び記録媒体 |
JP7013751B2 (ja) * | 2017-09-15 | 2022-02-01 | 株式会社アイシン | 画像処理装置 |
US10580304B2 (en) | 2017-10-02 | 2020-03-03 | Ford Global Technologies, Llc | Accelerometer-based external sound monitoring for voice controlled autonomous parking |
US10744941B2 (en) | 2017-10-12 | 2020-08-18 | Magna Electronics Inc. | Vehicle vision system with bird's eye view display |
US10580299B2 (en) * | 2017-10-13 | 2020-03-03 | Waymo Llc | Lane change notification |
US10627811B2 (en) | 2017-11-07 | 2020-04-21 | Ford Global Technologies, Llc | Audio alerts for remote park-assist tethering |
US10578676B2 (en) | 2017-11-28 | 2020-03-03 | Ford Global Technologies, Llc | Vehicle monitoring of mobile device state-of-charge |
US10688918B2 (en) | 2018-01-02 | 2020-06-23 | Ford Global Technologies, Llc | Mobile device tethering for a remote parking assist system of a vehicle |
US10814864B2 (en) | 2018-01-02 | 2020-10-27 | Ford Global Technologies, Llc | Mobile device tethering for a remote parking assist system of a vehicle |
US11148661B2 (en) | 2018-01-02 | 2021-10-19 | Ford Global Technologies, Llc | Mobile device tethering for a remote parking assist system of a vehicle |
US10583830B2 (en) | 2018-01-02 | 2020-03-10 | Ford Global Technologies, Llc | Mobile device tethering for a remote parking assist system of a vehicle |
US10974717B2 (en) | 2018-01-02 | 2021-04-13 | Ford Global Technologies, Llc | Mobile device tethering for a remote parking assist system of a vehicle |
US10585431B2 (en) | 2018-01-02 | 2020-03-10 | Ford Global Technologies, Llc | Mobile device tethering for a remote parking assist system of a vehicle |
US10737690B2 (en) | 2018-01-02 | 2020-08-11 | Ford Global Technologies, Llc | Mobile device tethering for a remote parking assist system of a vehicle |
US10684773B2 (en) | 2018-01-03 | 2020-06-16 | Ford Global Technologies, Llc | Mobile device interface for trailer backup-assist |
US10747218B2 (en) | 2018-01-12 | 2020-08-18 | Ford Global Technologies, Llc | Mobile device tethering for remote parking assist |
US10917748B2 (en) | 2018-01-25 | 2021-02-09 | Ford Global Technologies, Llc | Mobile device tethering for vehicle systems based on variable time-of-flight and dead reckoning |
DE102018102051B4 (de) * | 2018-01-30 | 2021-09-02 | Bayerische Motoren Werke Aktiengesellschaft | Verfahren zum Darstellen eines Umgebungsbereichs eines Kraftfahrzeugs mit einem Bildfenster in einem Bild, Computerprogrammprodukt sowie Anzeigesystem |
US10684627B2 (en) | 2018-02-06 | 2020-06-16 | Ford Global Technologies, Llc | Accelerometer-based external sound monitoring for position aware autonomous parking |
FR3078224B1 (fr) * | 2018-02-16 | 2020-02-07 | Renault S.A.S | Methode de surveillance d’un environnement de vehicule automobile en stationnement comprenant une camera asynchrone |
US11188070B2 (en) | 2018-02-19 | 2021-11-30 | Ford Global Technologies, Llc | Mitigating key fob unavailability for remote parking assist systems |
US10507868B2 (en) | 2018-02-22 | 2019-12-17 | Ford Global Technologies, Llc | Tire pressure monitoring for vehicle park-assist |
US10732622B2 (en) | 2018-04-05 | 2020-08-04 | Ford Global Technologies, Llc | Advanced user interaction features for remote park assist |
US10683004B2 (en) | 2018-04-09 | 2020-06-16 | Ford Global Technologies, Llc | Input signal management for vehicle park-assist |
US10793144B2 (en) | 2018-04-09 | 2020-10-06 | Ford Global Technologies, Llc | Vehicle remote park-assist communication counters |
US10493981B2 (en) | 2018-04-09 | 2019-12-03 | Ford Global Technologies, Llc | Input signal management for vehicle park-assist |
US10759417B2 (en) | 2018-04-09 | 2020-09-01 | Ford Global Technologies, Llc | Input signal management for vehicle park-assist |
JP7187182B2 (ja) * | 2018-06-11 | 2022-12-12 | キヤノン株式会社 | データ生成装置、方法およびプログラム |
CN110619807B (zh) * | 2018-06-20 | 2022-12-02 | 北京京东尚科信息技术有限公司 | 生成全局热力图的方法和装置 |
US10384605B1 (en) | 2018-09-04 | 2019-08-20 | Ford Global Technologies, Llc | Methods and apparatus to facilitate pedestrian detection during remote-controlled maneuvers |
US10821972B2 (en) | 2018-09-13 | 2020-11-03 | Ford Global Technologies, Llc | Vehicle remote parking assist systems and methods |
US10717432B2 (en) | 2018-09-13 | 2020-07-21 | Ford Global Technologies, Llc | Park-assist based on vehicle door open positions |
US10529233B1 (en) | 2018-09-24 | 2020-01-07 | Ford Global Technologies Llc | Vehicle and method for detecting a parking space via a drone |
US10967851B2 (en) | 2018-09-24 | 2021-04-06 | Ford Global Technologies, Llc | Vehicle system and method for setting variable virtual boundary |
US10908603B2 (en) | 2018-10-08 | 2021-02-02 | Ford Global Technologies, Llc | Methods and apparatus to facilitate remote-controlled maneuvers |
US10628687B1 (en) | 2018-10-12 | 2020-04-21 | Ford Global Technologies, Llc | Parking spot identification for vehicle park-assist |
US11097723B2 (en) | 2018-10-17 | 2021-08-24 | Ford Global Technologies, Llc | User interfaces for vehicle remote park assist |
US11137754B2 (en) | 2018-10-24 | 2021-10-05 | Ford Global Technologies, Llc | Intermittent delay mitigation for remote vehicle operation |
JP2020098412A (ja) * | 2018-12-17 | 2020-06-25 | キヤノン株式会社 | 情報処理装置、情報処理方法、及びプログラム |
US11789442B2 (en) | 2019-02-07 | 2023-10-17 | Ford Global Technologies, Llc | Anomalous input detection |
US11195344B2 (en) | 2019-03-15 | 2021-12-07 | Ford Global Technologies, Llc | High phone BLE or CPU burden detection and notification |
US11169517B2 (en) | 2019-04-01 | 2021-11-09 | Ford Global Technologies, Llc | Initiation of vehicle remote park-assist with key fob |
US11275368B2 (en) | 2019-04-01 | 2022-03-15 | Ford Global Technologies, Llc | Key fobs for vehicle remote park-assist |
CN112208438B (zh) * | 2019-07-10 | 2022-07-29 | 台湾中华汽车工业股份有限公司 | 行车辅助影像产生方法及系统 |
JP2021103481A (ja) | 2019-12-25 | 2021-07-15 | パナソニックIpマネジメント株式会社 | 運転支援装置、運転支援方法及びプログラム |
US11661722B2 (en) | 2020-11-19 | 2023-05-30 | Deere & Company | System and method for customized visualization of the surroundings of self-propelled work vehicles |
US20220194428A1 (en) * | 2020-12-17 | 2022-06-23 | 6 River Systems, Llc | Systems and methods for calibrating sensors of autonomous vehicles |
KR102473407B1 (ko) * | 2020-12-28 | 2022-12-05 | 삼성전기주식회사 | 틸트 카메라를 이용한 차량의 svm 시스템 |
CN117098693A (zh) * | 2021-04-07 | 2023-11-21 | 石通瑞吉股份有限公司 | 包括偏移相机视图的车辆显示器 |
Family Cites Families (26)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JPS62155140A (ja) * | 1985-12-27 | 1987-07-10 | Aisin Warner Ltd | 車両制御用道路画像入力方式 |
JPH0399952A (ja) | 1989-09-12 | 1991-04-25 | Nissan Motor Co Ltd | 車両用周囲状況モニタ |
JPH05265547A (ja) * | 1992-03-23 | 1993-10-15 | Fuji Heavy Ind Ltd | 車輌用車外監視装置 |
JPH05310078A (ja) | 1992-05-07 | 1993-11-22 | Clarion Co Ltd | 車両安全確認装置及びその装置に使用するカメラ |
JP3391405B2 (ja) * | 1992-05-29 | 2003-03-31 | 株式会社エフ・エフ・シー | カメラ映像内の物体同定方法 |
EP0841648B1 (en) | 1992-09-30 | 2004-06-02 | Hitachi, Ltd. | Vehicle driving support system and vehicle therewith |
DE69324224T2 (de) * | 1992-12-29 | 1999-10-28 | Koninkl Philips Electronics Nv | Bildverarbeitungsverfahren und -vorrichtung zum Erzeugen eines Bildes aus mehreren angrenzenden Bildern |
US5670935A (en) * | 1993-02-26 | 1997-09-23 | Donnelly Corporation | Rearview vision system for vehicle including panoramic view |
JP3468428B2 (ja) * | 1993-03-24 | 2003-11-17 | 富士重工業株式会社 | 車輌用距離検出装置 |
JP2887039B2 (ja) * | 1993-03-26 | 1999-04-26 | 三菱電機株式会社 | 車両周辺監視装置 |
US5638116A (en) * | 1993-09-08 | 1997-06-10 | Sumitomo Electric Industries, Ltd. | Object recognition apparatus and method |
JP3431962B2 (ja) * | 1993-09-17 | 2003-07-28 | 本田技研工業株式会社 | 走行区分線認識装置を備えた自動走行車両 |
DE4333112A1 (de) | 1993-09-29 | 1995-03-30 | Bosch Gmbh Robert | Verfahren und Vorrichtung zum Ausparken eines Fahrzeugs |
JP3381351B2 (ja) | 1993-12-24 | 2003-02-24 | 日産自動車株式会社 | 車両用周囲状況表示装置 |
JP3522317B2 (ja) * | 1993-12-27 | 2004-04-26 | 富士重工業株式会社 | 車輌用走行案内装置 |
JP3205477B2 (ja) * | 1994-02-17 | 2001-09-04 | 富士フイルムマイクロデバイス株式会社 | 車間距離検出装置 |
JP2919284B2 (ja) * | 1994-02-23 | 1999-07-12 | 松下電工株式会社 | 物体認識方法 |
US5473364A (en) * | 1994-06-03 | 1995-12-05 | David Sarnoff Research Center, Inc. | Video technique for indicating moving objects from a movable platform |
EP0838068B1 (en) * | 1995-07-10 | 2005-10-26 | Sarnoff Corporation | Method and system for rendering and combining images |
JP3866328B2 (ja) * | 1996-06-06 | 2007-01-10 | 富士重工業株式会社 | 車両周辺立体物認識装置 |
JP3147002B2 (ja) * | 1996-09-26 | 2001-03-19 | 富士電機株式会社 | 距離検出値の補正方法 |
US5994701A (en) * | 1996-10-15 | 1999-11-30 | Nippon Avonics Co., Ltd. | Infrared sensor device with temperature correction function |
JPH10164566A (ja) | 1996-11-28 | 1998-06-19 | Aiphone Co Ltd | マルチ天井カメラ装置 |
JPH10244891A (ja) | 1997-03-07 | 1998-09-14 | Nissan Motor Co Ltd | 駐車補助装置 |
JPH10264841A (ja) | 1997-03-25 | 1998-10-06 | Nissan Motor Co Ltd | 駐車誘導装置 |
US6396535B1 (en) * | 1999-02-16 | 2002-05-28 | Mitsubishi Electric Research Laboratories, Inc. | Situation awareness system |
- 1999
- 1999-07-29 EP EP10012829A patent/EP2309453A3/en not_active Withdrawn
- 1999-07-29 JP JP2000563072A patent/JP3286306B2/ja not_active Expired - Lifetime
- 1999-07-29 WO PCT/JP1999/004061 patent/WO2000007373A1/ja active Application Filing
- 1999-07-29 EP EP10008892A patent/EP2259220A3/en not_active Withdrawn
- 1999-07-29 EP EP99933145A patent/EP1115250B1/en not_active Expired - Lifetime
- 1999-07-29 EP EP10011659A patent/EP2267656A3/en not_active Withdrawn
- 1999-07-29 US US09/744,787 patent/US7307655B1/en not_active Expired - Lifetime
Patent Citations (7)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JPS58110334A (ja) | 1981-12-23 | 1983-06-30 | Hino Motors Ltd | 路面視界表示装置 |
JPH03166534A (ja) * | 1989-11-25 | 1991-07-18 | Seikosha Co Ltd | カメラ用測距装置 |
US5745126A (en) | 1995-03-31 | 1998-04-28 | The Regents Of The University Of California | Machine synthesis of a virtual video camera/image of a scene from multiple video cameras/images of the scene in accordance with a particular perspective on the scene, an object in the scene, or an event in the scene |
JPH09114979A (ja) * | 1995-10-18 | 1997-05-02 | Nippon Telegr & Teleph Corp <Ntt> | カメラシステム |
JPH09305796A (ja) * | 1996-05-16 | 1997-11-28 | Canon Inc | 画像情報処理装置 |
JPH1040499A (ja) * | 1996-07-24 | 1998-02-13 | Honda Motor Co Ltd | 車両の外界認識装置 |
JPH10124704A (ja) * | 1996-08-30 | 1998-05-15 | Sanyo Electric Co Ltd | 立体モデル作成装置、立体モデル作成方法および立体モデル作成プログラムを記録した媒体 |
Cited By (117)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP2000259818A (ja) * | 1999-03-09 | 2000-09-22 | Toshiba Corp | 状況情報提供装置及びその方法 |
JP2001163132A (ja) * | 1999-09-30 | 2001-06-19 | Toyota Autom Loom Works Ltd | 車両後方監視装置用画像変換装置 |
US6542840B2 (en) | 2000-01-27 | 2003-04-01 | Matsushita Electric Industrial Co., Ltd. | Calibration system, target apparatus and calibration method |
JP2001285715A (ja) * | 2000-03-31 | 2001-10-12 | Matsushita Electric Ind Co Ltd | 画像合成装置 |
JP2001347909A (ja) * | 2000-04-05 | 2001-12-18 | Matsushita Electric Ind Co Ltd | 運転操作補助方法および装置 |
US7012548B2 (en) | 2000-04-05 | 2006-03-14 | Matsushita Electric Industrial Co., Ltd. | Driving operation assisting method and system |
US20060203092A1 (en) * | 2000-04-28 | 2006-09-14 | Matsushita Electric Industrial Co., Ltd. | Image processor and monitoring system |
JP2002051331A (ja) * | 2000-05-24 | 2002-02-15 | Matsushita Electric Ind Co Ltd | 描画装置 |
JP2001334869A (ja) * | 2000-05-25 | 2001-12-04 | Matsushita Electric Ind Co Ltd | 運転支援装置 |
JP2001339715A (ja) * | 2000-05-25 | 2001-12-07 | Matsushita Electric Ind Co Ltd | 運転支援装置 |
US6912001B2 (en) | 2000-05-26 | 2005-06-28 | Matsushita Electric Industrial Co., Ltd. | Image processor and monitoring system |
JP2001334870A (ja) * | 2000-05-26 | 2001-12-04 | Matsushita Electric Ind Co Ltd | 画像処理装置および監視システム |
JP2002010115A (ja) * | 2000-06-16 | 2002-01-11 | Toyota Central Res & Dev Lab Inc | 車載撮像装置 |
US6999602B2 (en) | 2000-06-30 | 2006-02-14 | Matsushita Electric Industrial Co., Ltd. | Image generation for assistance of drivers of vehicles |
JP2002083284A (ja) * | 2000-06-30 | 2002-03-22 | Matsushita Electric Ind Co Ltd | 描画装置 |
JP2002114098A (ja) * | 2000-06-30 | 2002-04-16 | Matsushita Electric Ind Co Ltd | 描画装置 |
JP2002027446A (ja) * | 2000-07-04 | 2002-01-25 | Matsushita Electric Ind Co Ltd | 監視システム |
JP2002029350A (ja) * | 2000-07-12 | 2002-01-29 | Nissan Motor Co Ltd | 駐車支援装置 |
US7266219B2 (en) | 2000-07-19 | 2007-09-04 | Matsushita Electric Industrial Co., Ltd. | Monitoring system |
WO2002007443A1 (fr) * | 2000-07-19 | 2002-01-24 | Matsushita Electric Industrial Co., Ltd. | Systeme de controle |
JP2003158736A (ja) * | 2000-07-19 | 2003-05-30 | Matsushita Electric Ind Co Ltd | 監視システム |
EP1303140A1 (en) * | 2000-07-19 | 2003-04-16 | Matsushita Electric Industrial Co., Ltd. | Monitoring system |
JP2002125224A (ja) * | 2000-07-19 | 2002-04-26 | Matsushita Electric Ind Co Ltd | 監視システム |
EP1303140A4 (en) * | 2000-07-19 | 2007-01-17 | Matsushita Electric Ind Co Ltd | MONITORING SYSTEM |
JP2002092598A (ja) * | 2000-09-14 | 2002-03-29 | Nissan Motor Co Ltd | 駐車時の自車位置検出装置 |
JP2002117496A (ja) * | 2000-10-12 | 2002-04-19 | Matsushita Electric Ind Co Ltd | 車載後方確認支援装置と車載ナビゲーション装置 |
JP2002166802A (ja) * | 2000-11-30 | 2002-06-11 | Toyota Motor Corp | 車両周辺モニタ装置 |
JP2002262156A (ja) * | 2000-12-26 | 2002-09-13 | Matsushita Electric Ind Co Ltd | カメラ装置、カメラシステムおよび画像処理方法 |
JP2012019552A (ja) * | 2001-03-28 | 2012-01-26 | Panasonic Corp | 運転支援装置 |
JP2002359838A (ja) * | 2001-03-28 | 2002-12-13 | Matsushita Electric Ind Co Ltd | 運転支援装置 |
DE10296593B4 (de) * | 2001-03-28 | 2017-02-02 | Panasonic Intellectual Property Management Co., Ltd. | Fahrunterstützungsvorrichtung |
WO2002080557A1 (en) * | 2001-03-28 | 2002-10-10 | Matsushita Electric Industrial Co., Ltd. | Drive supporting device |
US7218758B2 (en) | 2001-03-28 | 2007-05-15 | Matsushita Electric Industrial Co., Ltd. | Drive supporting device |
EP1383332A4 (en) * | 2001-04-24 | 2006-03-29 | Matsushita Electric Ind Co Ltd | METHOD AND DEVICE FOR DISPLAYING A RECORD IMAGE OF A CAMERA INSTALLED IN A VEHICLE |
WO2002089485A1 (fr) * | 2001-04-24 | 2002-11-07 | Matsushita Electric Industrial Co., Ltd. | Procede et dispositif pour la presentation d'une image de camera embarquee a bord d'un vehicule |
WO2002089484A1 (fr) * | 2001-04-24 | 2002-11-07 | Matsushita Electric Industrial Co., Ltd. | Procede et appareil de synthese et d'affichage d'images issues de cameras disposees dans un vehicule |
US7139412B2 (en) | 2001-04-24 | 2006-11-21 | Matsushita Electric Industrial Co., Ltd. | Image synthesis display method and apparatus for vehicle camera |
EP1383332A1 (en) * | 2001-04-24 | 2004-01-21 | Matsushita Electric Industrial Co., Ltd. | Method and apparatus for displaying pickup image of camera installed in vehicle |
JP2002316602A (ja) * | 2001-04-24 | 2002-10-29 | Matsushita Electric Ind Co Ltd | 車載カメラの撮像画像表示方法及びその装置 |
JP2002324235A (ja) * | 2001-04-24 | 2002-11-08 | Matsushita Electric Ind Co Ltd | 車載カメラの画像合成表示方法及びその装置 |
JP2002326540A (ja) * | 2001-04-28 | 2002-11-12 | Setsuo Kuroki | 視認カメラ装着車 |
JP2002369187A (ja) * | 2001-06-11 | 2002-12-20 | Clarion Co Ltd | パノラマ画像表示方法及びパノラマ画像表示装置 |
JP2002373327A (ja) * | 2001-06-13 | 2002-12-26 | Denso Corp | 車両周辺画像処理装置及び記録媒体 |
JP2003030627A (ja) * | 2001-07-16 | 2003-01-31 | Denso Corp | 車両周辺画像処理装置及び記録媒体 |
JP2003061084A (ja) * | 2001-08-09 | 2003-02-28 | Matsushita Electric Ind Co Ltd | 運転支援表示装置 |
WO2003034738A1 (fr) | 2001-10-10 | 2003-04-24 | Matsushita Electric Industrial Co., Ltd. | Unite de traitement d'images |
JP2003244688A (ja) * | 2001-12-12 | 2003-08-29 | Equos Research Co Ltd | 車両の画像処理装置 |
JP2003256874A (ja) * | 2002-03-04 | 2003-09-12 | Matsushita Electric Ind Co Ltd | 画像合成変換装置 |
US7538798B2 (en) | 2002-03-04 | 2009-05-26 | Panasonic Corporation | Image combination/conversion apparatus |
JP2003264827A (ja) * | 2003-01-14 | 2003-09-19 | Matsushita Electric Ind Co Ltd | 画像処理装置 |
JP2004289386A (ja) * | 2003-03-20 | 2004-10-14 | Clarion Co Ltd | 画像表示方法、画像表示装置及び画像処理装置 |
JP2005258792A (ja) * | 2004-03-11 | 2005-09-22 | Olympus Corp | 画像生成装置、画像生成方法、および画像生成プログラム |
JP2005269010A (ja) * | 2004-03-17 | 2005-09-29 | Olympus Corp | 画像生成装置、画像生成プログラム、及び画像生成方法 |
JP2008502228A (ja) * | 2004-06-01 | 2008-01-24 | エル‐3 コミュニケーションズ コーポレイション | ビデオフラッシュライトを実行する方法およびシステム |
US8063936B2 (en) | 2004-06-01 | 2011-11-22 | L-3 Communications Corporation | Modular immersive surveillance processing system and method |
JP2008502229A (ja) * | 2004-06-01 | 2008-01-24 | エル‐3 コミュニケーションズ コーポレイション | ビデオフラッシュライト/視覚警報 |
JP2006054662A (ja) * | 2004-08-11 | 2006-02-23 | Mitsubishi Electric Corp | 運転支援装置 |
JPWO2006087993A1 (ja) * | 2005-02-15 | 2008-07-03 | 松下電器産業株式会社 | 周辺監視装置および周辺監視方法 |
WO2006087993A1 (ja) * | 2005-02-15 | 2006-08-24 | Matsushita Electric Industrial Co., Ltd. | 周辺監視装置および周辺監視方法 |
US8139114B2 (en) | 2005-02-15 | 2012-03-20 | Panasonic Corporation | Surroundings monitoring apparatus and surroundings monitoring method for reducing distortion caused by camera position displacement |
JP4601505B2 (ja) * | 2005-07-20 | 2010-12-22 | アルパイン株式会社 | トップビュー画像生成装置及びトップビュー画像表示方法 |
JP2007028363A (ja) * | 2005-07-20 | 2007-02-01 | Alpine Electronics Inc | トップビュー画像生成装置及びトップビュー画像表示方法 |
JP4677847B2 (ja) * | 2005-08-01 | 2011-04-27 | 日産自動車株式会社 | 車両用周囲監視装置及び車両周囲監視方法 |
JP2007043318A (ja) * | 2005-08-01 | 2007-02-15 | Nissan Motor Co Ltd | 車両用周囲監視装置及び車両周囲監視方法 |
JP2007124609A (ja) * | 2005-09-28 | 2007-05-17 | Nissan Motor Co Ltd | 車両周囲映像提供装置 |
US8174576B2 (en) | 2005-09-28 | 2012-05-08 | Nissan Motor Co., Ltd. | Vehicle periphery video providing apparatus and method |
US8345095B2 (en) | 2005-10-07 | 2013-01-01 | Nissan Motor Co., Ltd. | Blind spot image display apparatus and method thereof for vehicle |
JP2007104538A (ja) * | 2005-10-07 | 2007-04-19 | Nissan Motor Co Ltd | 車両用死角映像表示装置 |
JP2007174113A (ja) * | 2005-12-20 | 2007-07-05 | Sumitomo Electric Ind Ltd | 障害物検出システム及び障害物検出方法 |
JP2008022454A (ja) * | 2006-07-14 | 2008-01-31 | Sumitomo Electric Ind Ltd | 障害物検出システム及び障害物検出方法 |
JP2007102798A (ja) * | 2006-10-11 | 2007-04-19 | Denso Corp | 車両周辺監視システム |
US8094192B2 (en) | 2006-12-20 | 2012-01-10 | Aisin Aw Co., Ltd. | Driving support method and driving support apparatus |
JP2008174212A (ja) * | 2006-12-20 | 2008-07-31 | Aisin Aw Co Ltd | 運転支援方法及び運転支援装置 |
WO2008087707A1 (ja) * | 2007-01-16 | 2008-07-24 | Pioneer Corporation | 車両用画像処理装置及び車両用画像処理プログラム |
WO2008087709A1 (ja) * | 2007-01-16 | 2008-07-24 | Pioneer Corporation | 車両用画像処理装置、車両用撮像装置、車両用画像処理プログラム、車両用画像調整方法 |
JP4758481B2 (ja) * | 2007-01-16 | 2011-08-31 | パイオニア株式会社 | 車両用画像処理装置及び車両用画像処理プログラム |
JPWO2008087707A1 (ja) * | 2007-01-16 | 2010-05-06 | パイオニア株式会社 | 車両用画像処理装置及び車両用画像処理プログラム |
JP2008177856A (ja) * | 2007-01-18 | 2008-07-31 | Sanyo Electric Co Ltd | 俯瞰画像提供装置、車両、および俯瞰画像提供方法 |
JP2009118416A (ja) * | 2007-11-09 | 2009-05-28 | Alpine Electronics Inc | 車両周辺画像生成装置および車両周辺画像の歪み補正方法 |
WO2009116328A1 (ja) | 2008-03-19 | 2009-09-24 | 三洋電機株式会社 | 画像処理装置及び方法、運転支援システム、車両 |
JP2009294109A (ja) * | 2008-06-05 | 2009-12-17 | Fujitsu Ltd | キャリブレーション装置 |
US8319618B2 (en) | 2008-11-28 | 2012-11-27 | Fujitsu Limited | Image processing apparatus, image processing method, and recording medium |
JP2010136082A (ja) * | 2008-12-04 | 2010-06-17 | Alpine Electronics Inc | 車両周辺監視装置およびカメラ位置・姿勢判定方法 |
US8576285B2 (en) | 2009-03-25 | 2013-11-05 | Fujitsu Limited | In-vehicle image processing method and image processing apparatus |
US8009977B2 (en) | 2009-05-14 | 2011-08-30 | Fujitsu Ten Limited | On-vehicle lighting apparatus |
US8947533B2 (en) | 2010-01-22 | 2015-02-03 | Fujitsu Ten Limited | Parameter determining device, parameter determining system, parameter determining method, and recording medium |
JP2011151666A (ja) * | 2010-01-22 | 2011-08-04 | Fujitsu Ten Ltd | パラメータ取得装置、パラメータ取得システム、パラメータ取得方法、及び、プログラム |
WO2011090163A1 (ja) * | 2010-01-22 | 2011-07-28 | 富士通テン株式会社 | パラメータ決定装置、パラメータ決定システム、パラメータ決定方法、及び記録媒体 |
EP3413287A1 (en) | 2010-04-19 | 2018-12-12 | SMR Patents S.à.r.l. | Method for indicating to a driver of a vehicle the presence of an object at least temporarily moving relative to the vehicle |
WO2012002415A1 (ja) | 2010-06-29 | 2012-01-05 | クラリオン株式会社 | 画像のキャリブレーション方法および装置 |
US9030561B2 (en) | 2010-06-29 | 2015-05-12 | Clarion Co., Ltd. | Image calibration method and image calibration device |
JP2012019274A (ja) * | 2010-07-06 | 2012-01-26 | Nissan Motor Co Ltd | 車両用監視装置 |
JP2012065225A (ja) * | 2010-09-17 | 2012-03-29 | Panasonic Corp | 車載用画像処理装置、周辺監視装置、および、車両 |
US9591280B2 (en) | 2011-07-29 | 2017-03-07 | Fujitsu Limited | Image processing apparatus, image processing method, and computer-readable recording medium |
WO2013145880A1 (ja) * | 2012-03-30 | 2013-10-03 | 株式会社 日立製作所 | カメラキャリブレーション装置 |
JP2014064224A (ja) * | 2012-09-24 | 2014-04-10 | Clarion Co Ltd | カメラのキャリブレーション方法及び装置 |
JP2014085925A (ja) * | 2012-10-25 | 2014-05-12 | Fujitsu Ltd | 画像処理装置、方法、及びプログラム |
US9478061B2 (en) | 2012-10-25 | 2016-10-25 | Fujitsu Limited | Image processing apparatus and method that synthesizes an all-round image of a vehicle's surroundings |
EP3282419A1 (en) | 2012-10-25 | 2018-02-14 | Fujitsu Limited | Image processing apparatus and method |
US10621743B2 (en) | 2013-04-24 | 2020-04-14 | Sumitomo Heavy Industries, Ltd. | Processing-target image creating device, processing-target image creating method, and operation assisting system |
JP2016201585A (ja) * | 2015-04-07 | 2016-12-01 | 株式会社ソシオネクスト | 画像処理装置および画像処理装置の制御方法 |
JP2017034543A (ja) * | 2015-08-04 | 2017-02-09 | 株式会社デンソー | 車載表示制御装置、車載表示制御方法 |
US10464484B2 (en) | 2015-08-04 | 2019-11-05 | Denso Corporation | Apparatus for presenting support images to a driver and method thereof |
WO2017022496A1 (ja) * | 2015-08-04 | 2017-02-09 | 株式会社デンソー | 運転者に支援画像を提示する装置及びその方法 |
US11056001B2 (en) | 2016-06-07 | 2021-07-06 | Panasonic Intellectual Property Management Co., Ltd. | Image generating apparatus, image generating method, and recording medium |
US11948462B2 (en) | 2016-06-07 | 2024-04-02 | Panasonic Intellectual Property Management Co., Ltd. | Image generating apparatus, image generating method, and recording medium |
US11615709B2 (en) | 2016-06-07 | 2023-03-28 | Panasonic Intellectual Property Management Co., Ltd. | Image generating apparatus, image generating method, and recording medium |
US11183067B2 (en) | 2016-06-07 | 2021-11-23 | Panasonic Intellectual Property Management Co., Ltd. | Image generating apparatus, image generating method, and recording medium |
DE112017002951T5 (de) | 2016-06-13 | 2019-02-28 | Denso Corporation | Bilderzeugungsgerät und -programm |
US10645365B2 (en) | 2017-05-01 | 2020-05-05 | Panasonic Intellectual Property Management Co., Ltd. | Camera parameter set calculation apparatus, camera parameter set calculation method, and recording medium |
US10659677B2 (en) | 2017-07-21 | 2020-05-19 | Panasonic Intellectual Property Management Co., Ltd. | Camera parameter set calculation apparatus, camera parameter set calculation method, and recording medium |
JP2019024196A (ja) * | 2017-07-21 | 2019-02-14 | パナソニックIpマネジメント株式会社 | カメラパラメタセット算出装置、カメラパラメタセット算出方法及びプログラム |
CN109978753A (zh) * | 2017-12-28 | 2019-07-05 | 北京京东尚科信息技术有限公司 | 绘制全景热力图的方法和装置 |
CN109978753B (zh) * | 2017-12-28 | 2023-09-26 | 北京京东尚科信息技术有限公司 | 绘制全景热力图的方法和装置 |
JP2019204393A (ja) * | 2018-05-25 | 2019-11-28 | アルパイン株式会社 | 画像処理装置および画像処理方法 |
JP7021001B2 (ja) | 2018-05-25 | 2022-02-16 | アルパイン株式会社 | 画像処理装置および画像処理方法 |
JP2020119492A (ja) * | 2019-01-28 | 2020-08-06 | パナソニックIpマネジメント株式会社 | 画像合成装置、及び、制御方法 |
Also Published As
Publication number | Publication date |
---|---|
EP2267656A2 (en) | 2010-12-29 |
EP1115250B1 (en) | 2012-06-06 |
EP2259220A2 (en) | 2010-12-08 |
EP2259220A3 (en) | 2012-09-26 |
EP1115250A4 (en) | 2005-06-22 |
EP2309453A3 (en) | 2012-09-26 |
JP3286306B2 (ja) | 2002-05-27 |
EP1115250A1 (en) | 2001-07-11 |
EP2309453A2 (en) | 2011-04-13 |
US7307655B1 (en) | 2007-12-11 |
EP2267656A3 (en) | 2012-09-26 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
WO2000007373A1 (fr) | Procede et appareil d'affichage d'images | |
JP2002135765A (ja) | カメラキャリブレーション指示装置及びカメラキャリブレーション装置 | |
JP7245295B2 (ja) | 車両・被牽引車両コンビの周辺シーンを表示するための方法、並びに、装置 | |
JP3300334B2 (ja) | 画像処理装置および監視システム | |
US9858639B2 (en) | Imaging surface modeling for camera modeling and virtual view synthesis | |
CN1831479B (zh) | 驾驶辅助系统 | |
US20170234692A1 (en) | Birds eye view virtual imaging for real time composited wide field of view | |
US7554573B2 (en) | Drive assisting system | |
WO2000064175A1 (fr) | Dispositif de traitement d'images et systeme de surveillance | |
US20140114534A1 (en) | Dynamic rearview mirror display features | |
JP4248570B2 (ja) | 画像処理装置並びに視界支援装置及び方法 | |
EP2079053A1 (en) | Method and apparatus for calibrating a video display overlay | |
KR100494540B1 (ko) | 투시투영 화상 생성을 위한 시스템, 방법 및 프로그램, 및그 프로그램을 기억하는 기록 매체 | |
CN101252679B (zh) | 驾驶辅助方法及驾驶辅助装置 | |
CN102045546A (zh) | 一种全景泊车辅助系统 | |
JP6151535B2 (ja) | パラメータ取得装置、パラメータ取得方法及びプログラム | |
EP3326146B1 (en) | Rear cross traffic - quick looks | |
CN117611438A (zh) | 一种基于单目图像的2d车道线到3d车道线的重构方法 | |
JP7074546B2 (ja) | 画像処理装置および方法 | |
WO2010007960A1 (ja) | 車載用カメラの視点変換映像システム及び視点変換映像取得方法 | |
US20220222947A1 (en) | Method for generating an image of vehicle surroundings, and apparatus for generating an image of vehicle surroundings | |
CN112389459A (zh) | 基于全景环视的人机交互方法及装置 | |
Gao et al. | A low-complexity yet accurate calibration method for automotive augmented reality head-up displays | |
CN117011185A (zh) | 一种电子后视镜cms图像矫正方法、系统及电子后视镜 | |
CN117241003A (zh) | 全景图像处理方法、装置、存储介质及电子设备 |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
AK | Designated states | Kind code of ref document: A1; Designated state(s): CN JP KR US |
AL | Designated countries for regional patents | Kind code of ref document: A1; Designated state(s): AT BE CH CY DE DK ES FI FR GB GR IE IT LU MC NL PT SE |
DFPE | Request for preliminary examination filed prior to expiration of 19th month from priority date (pct application filed before 20040101) | ||
121 | Ep: the epo has been informed by wipo that ep was designated in this application | ||
WWE | Wipo information: entry into national phase | Ref document number: 09744787; Country of ref document: US |
WWE | Wipo information: entry into national phase | Ref document number: 1999933145; Country of ref document: EP |
WWP | Wipo information: published in national office | Ref document number: 1999933145; Country of ref document: EP |