WO2005084027A1 - Image generation device, image generation program, and image generation method - Google Patents

Image generation device, image generation program, and image generation method

Info

Publication number
WO2005084027A1
WO2005084027A1 PCT/JP2005/002150
Authority
WO
WIPO (PCT)
Prior art keywords
image
virtual viewpoint
display
viewpoint
space
Prior art date
Application number
PCT/JP2005/002150
Other languages
English (en)
Japanese (ja)
Inventor
Hidekazu Iwaki
Takashi Miyoshi
Akio Kosaka
Original Assignee
Olympus Corporation
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Olympus Corporation filed Critical Olympus Corporation
Publication of WO2005084027A1 publication Critical patent/WO2005084027A1/fr


Classifications

    • B - PERFORMING OPERATIONS; TRANSPORTING
    • B60 - VEHICLES IN GENERAL
    • B60R - VEHICLES, VEHICLE FITTINGS, OR VEHICLE PARTS, NOT OTHERWISE PROVIDED FOR
    • B60R1/00 - Optical viewing arrangements; real-time viewing arrangements for drivers or passengers using optical image capturing systems, e.g. cameras or video systems specially adapted for use in or on vehicles
    • B60R1/20 - Real-time viewing arrangements for drivers or passengers using optical image capturing systems
    • B60R1/22 - Real-time viewing arrangements for viewing an area outside the vehicle, e.g. the exterior of the vehicle
    • B60R1/23 - Real-time viewing arrangements with a predetermined field of view
    • B60R1/27 - Real-time viewing arrangements providing all-round vision, e.g. using omnidirectional cameras
    • B60R1/28 - Real-time viewing arrangements with an adjustable field of view
    • B60R11/00 - Arrangements for holding or mounting articles, not otherwise provided for
    • B60R11/02 - Arrangements for holding or mounting radio sets, television sets, telephones, or the like; arrangement of controls thereof
    • B60R21/00 - Arrangements or fittings on vehicles for protecting or preventing injuries to occupants or pedestrians in case of accidents or other traffic risks
    • G - PHYSICS
    • G06T1/00 - General purpose image data processing
    • G08G1/00 - Traffic control systems for road vehicles
    • G08G1/16 - Anti-collision systems
    • G08G1/165 - Anti-collision systems for passive traffic, e.g. including static obstacles, trees
    • G08G1/166 - Anti-collision systems for active traffic, e.g. moving vehicles, pedestrians, bikes
    • G08G1/168 - Driving aids for parking, e.g. acoustic or visual feedback on parking space
    • G09G5/00 - Control arrangements or circuits for visual indicators common to cathode-ray tube indicators and other visual indicators
    • G09G5/14 - Display of multiple viewports
    • G09G5/36 - Visual indicators characterised by the display of a graphic pattern, e.g. using an all-points-addressable [APA] memory
    • G09G5/377 - Details of the operation on graphic patterns for mixing or overlaying two or more graphic patterns
    • H - ELECTRICITY
    • H04N7/00 - Television systems
    • H04N7/18 - Closed-circuit television [CCTV] systems, i.e. systems in which the video signal is not broadcast

Definitions

  • Image generation device, image generation program, and image generation method
  • The present invention relates to an apparatus and a method that, rather than displaying a plurality of images captured by one or several cameras independently of each other, display them as a combined image so that the overall state of the captured area can be grasped intuitively.
  • The present invention is a technique suitably applicable, for example, to a monitoring device in a store and to a vehicle periphery monitoring device for assisting safety confirmation when driving a vehicle.
  • An image generation device that displays images captured by a plurality of cameras in an easily viewable manner has been disclosed (for example, Patent Document 1).
  • Patent Document 1 discloses an image generation device in which an area captured by a plurality of cameras (for example, the vicinity of a vehicle) is synthesized as one continuous image, and the synthesized image is displayed.
  • An appropriately set space model, or a space model set in accordance with the distance to an obstacle around the vehicle detected by the obstacle detection means, is created by the space model creation means.
  • The image of the periphery of the vehicle, input by a camera installed on the vehicle via the image input unit, is mapped onto the space model by the mapping unit.
  • One image viewed from the viewpoint determined by the viewpoint conversion unit is then synthesized from the mapped image data and displayed by the display unit.
  • Patent Document 1 Japanese Patent No. 3286306
  • Patent Document 2 Japanese Patent Application Laid-Open No. 05-265547
  • Patent Document 3 JP-A-06-266828
  • An image generation apparatus according to the present invention includes: space reconstruction means for mapping an input image from one or a plurality of cameras onto a predetermined space model in a three-dimensional space; viewpoint conversion means for generating a virtual viewpoint image, which is an image viewed from an arbitrary virtual viewpoint in the three-dimensional space, with reference to the mapped spatial data; and display control means for controlling display of a plurality of the virtual viewpoint images viewed from mutually different viewpoints.
  • Further, an image generation apparatus according to the present invention includes: space reconstruction means for mapping an input image from one or a plurality of cameras mounted on a vehicle onto a predetermined space model in a three-dimensional space; viewpoint conversion means for generating a virtual viewpoint image, which is an image viewed from an arbitrary virtual viewpoint in the three-dimensional space, with reference to the spatial data mapped by the space reconstruction means; display control means for controlling display of a plurality of the virtual viewpoint images viewed from mutually different viewpoints; and display form selection means for selecting a display form used by the display control means according to a state of the vehicle.
  • Further, an image generation apparatus according to the present invention includes: space reconstruction means for mapping an input image from one or a plurality of cameras mounted on a vehicle onto a predetermined space model in a three-dimensional space; viewpoint conversion means for generating a virtual viewpoint image, which is an image viewed from an arbitrary virtual viewpoint in the three-dimensional space, with reference to the spatial data mapped by the space reconstruction means; display control means for controlling display of a plurality of the virtual viewpoint images viewed from mutually different viewpoints; and display form selection means for selecting a display form used by the display control means according to an operation state of the vehicle.
  • Further, an image generation apparatus according to the present invention includes: space reconstruction means for mapping an input image from one or a plurality of cameras mounted on a vehicle onto a predetermined space model in a three-dimensional space; viewpoint conversion means for generating a virtual viewpoint image, which is an image viewed from an arbitrary virtual viewpoint in the three-dimensional space, with reference to the spatial data mapped by the space reconstruction means; display control means for controlling display of a plurality of the virtual viewpoint images viewed from mutually different viewpoints; and display form selection means for selecting a display form used by the display control means.
  • Further, an image generation device according to the present invention includes: space reconstruction means for mapping an input image from one or a plurality of cameras onto a predetermined space model in a three-dimensional space; viewpoint conversion means for generating a virtual viewpoint image, which is an image viewed from an arbitrary virtual viewpoint in the three-dimensional space, with reference to the mapped spatial data; color classification means for color-coding objects of the same kind captured in the virtual viewpoint image; and display control means for displaying the image converted by the color classification means.
  • Further, an image generation apparatus according to the present invention includes: space reconstruction means for mapping an input image from one or a plurality of cameras onto a predetermined space model in a three-dimensional space; viewpoint conversion means for generating a virtual viewpoint image, which is an image viewed from an arbitrary virtual viewpoint in the three-dimensional space, with reference to the mapped spatial data; and display control means for displaying the virtual viewpoint image. The viewpoint conversion means generates the virtual viewpoint image based on user information, which is information specific to the user or information on the state of the user.
  • A computer-readable recording medium on which an image generation program according to the present invention is recorded causes a computer to execute: a space reconstruction process of mapping an input image from one or a plurality of cameras onto a predetermined space model in a three-dimensional space; a viewpoint conversion process of generating a virtual viewpoint image, which is an image viewed from an arbitrary virtual viewpoint in the three-dimensional space, with reference to the spatial data mapped by the space reconstruction process; and a display control process of controlling display of a plurality of the virtual viewpoint images viewed from mutually different viewpoints.
  • Further, a computer-readable recording medium on which an image generation program according to the present invention is recorded causes a computer to execute: a space reconstruction process of mapping an input image from one or a plurality of cameras onto a predetermined space model in a three-dimensional space; a color-coding process of color-coding objects of the same kind captured in the image; and a display process of displaying the image converted by the color-coding process.
  • Further, a computer-readable recording medium on which an image generation program according to the present invention is recorded causes a computer to execute: a space reconstruction process of mapping an input image from one or a plurality of cameras onto a predetermined space model in a three-dimensional space; a viewpoint conversion process of generating a virtual viewpoint image, which is an image viewed from an arbitrary virtual viewpoint in the three-dimensional space, with reference to the spatial data mapped by the space reconstruction process; and a display control process of displaying the image converted by the viewpoint conversion process. The viewpoint conversion process generates the virtual viewpoint image based on user information, which is information specific to the user or information on the state of the user.
  • Further, a computer-readable recording medium on which an image generation program according to the present invention is recorded causes a computer to execute: a space reconstruction process of mapping an input image from one or a plurality of cameras mounted on a vehicle onto a predetermined space model in a three-dimensional space; a viewpoint conversion process of generating a virtual viewpoint image, which is an image viewed from an arbitrary virtual viewpoint in the three-dimensional space, with reference to the spatial data mapped by the space reconstruction process; a display control process of controlling display of a plurality of the virtual viewpoint images viewed from different viewpoints; and a display form selection process of selecting a display form used by the display control process according to a state of the vehicle.
  • Further, a computer-readable recording medium on which an image generation program according to the present invention is recorded causes a computer to execute: a space reconstruction process of mapping an input image from one or a plurality of cameras mounted on a vehicle onto a predetermined space model in a three-dimensional space; a viewpoint conversion process of generating a virtual viewpoint image, which is an image viewed from an arbitrary virtual viewpoint in the three-dimensional space, with reference to the spatial data mapped by the space reconstruction process; a display control process of controlling display of a plurality of the virtual viewpoint images viewed from different viewpoints; and a display form selection process of selecting a display form used by the display control process according to an operation state of the vehicle.
  • Further, a computer-readable recording medium on which an image generation program according to the present invention is recorded causes a computer to execute: a space reconstruction process of mapping an input image from one or a plurality of cameras mounted on a vehicle onto a predetermined space model in a three-dimensional space; and a viewpoint conversion process of generating a virtual viewpoint image, which is an image viewed from an arbitrary virtual viewpoint in the three-dimensional space, with reference to the spatial data mapped by the space reconstruction process.
  • In the image generation method according to the present invention, an input image from one or a plurality of cameras is mapped onto a predetermined space model in a three-dimensional space, a virtual viewpoint image, which is an image viewed from an arbitrary virtual viewpoint in the three-dimensional space, is generated with reference to the mapped spatial data, and display of a plurality of the virtual viewpoint images viewed from mutually different viewpoints is controlled.
  • Further, in the image generation method according to the present invention, an input image from one or a plurality of cameras is mapped onto a predetermined space model in a three-dimensional space, a virtual viewpoint image, which is an image viewed from an arbitrary virtual viewpoint in the three-dimensional space, is generated with reference to the mapped spatial data, objects of the same kind captured in the virtual viewpoint image are color-coded, and the color-coded image is displayed.
  • Further, the image generation method according to the present invention is a method in which an input image from one or a plurality of cameras is mapped onto a predetermined space model in a three-dimensional space, a virtual viewpoint image, which is an image viewed from an arbitrary virtual viewpoint in the three-dimensional space, is generated with reference to the mapped spatial data, and the virtual viewpoint image is displayed, wherein the virtual viewpoint image is generated based on user information, which is information specific to the user or information on the state of the user.
  • Further, in an image generation method according to the present invention, an input image from one or a plurality of cameras mounted on a vehicle is mapped onto a predetermined space model in a three-dimensional space, a virtual viewpoint image, which is an image viewed from an arbitrary virtual viewpoint in the three-dimensional space, is generated with reference to the mapped spatial data, display of a plurality of the virtual viewpoint images viewed from different viewpoints is controlled, and the display form is selected according to a state of the vehicle.
  • Further, in the image generation method according to the present invention, input images from a plurality of cameras mounted on a vehicle are mapped onto a predetermined space model in a three-dimensional space, a virtual viewpoint image, which is an image viewed from an arbitrary virtual viewpoint in the three-dimensional space, is generated with reference to the mapped spatial data, display of the virtual viewpoint images is controlled, and the display form is selected according to an operation state of the vehicle.
  • Further, in the image generation method according to the present invention, an input image from one or a plurality of cameras mounted on a vehicle is mapped onto a predetermined space model in a three-dimensional space, a virtual viewpoint image, which is an image viewed from an arbitrary virtual viewpoint in the three-dimensional space, is generated with reference to the mapped spatial data, display of a plurality of the virtual viewpoint images viewed from different viewpoints is controlled, and the display form is selected according to an operation state of the operator.
  • FIG. 1 is a diagram showing an image generation device according to a first embodiment.
  • FIG. 2 is a diagram showing a display flow of a virtual viewpoint image in the first embodiment.
  • FIG. 3 is a diagram showing a virtual viewpoint image superimposed and displayed in the first embodiment.
  • FIG. 4 is a diagram showing an example in which a plurality of virtual viewpoint images according to the first embodiment are panorama-synthesized.
  • FIG. 5 is a diagram in which four virtual viewpoint images according to the first embodiment are displayed side by side.
  • FIG. 6 is a diagram showing the relationship between the switching of the operation mode and the display mode in the first embodiment.
  • FIG. 7 is a diagram showing a correspondence relationship between a vehicle state and a display mode in the second embodiment.
  • FIG. 8 is a diagram showing an example of a mode of an operation state of a vehicle according to a third embodiment.
  • FIG. 9A is a diagram showing buttons used for mode switching in the third embodiment.
  • FIG. 9B is a diagram showing a mode button whose arrangement has been changed by an arrangement change (automatic change / manual change) function by learning according to the third embodiment.
  • FIG. 10 is a diagram illustrating an image generation device according to a fourth embodiment.
  • FIG. 11 is a diagram showing a processing flow in a fourth embodiment.
  • FIG. 12 is a diagram showing an example of displaying both a wide-angle virtual viewpoint image and a virtual viewpoint image in which an obstacle is enlarged in proximity to the wide-angle virtual viewpoint image in the fifth embodiment.
  • FIG. 13 is a diagram illustrating an image generation device according to a sixth embodiment.
  • FIG. 14 is a diagram showing three virtual viewpoint images superimposed on a part of a car navigation image in the seventh embodiment.
  • FIG. 15 is a configuration block diagram of a hardware environment of the image generation device according to the first to eighth embodiments.
  • In Patent Document 1, an image of an area (for example, near a vehicle) captured by a plurality of cameras is synthesized as one continuous image, and the synthesized image is mapped into a virtual three-dimensional space model.
  • Its main theme is how to generate an image (virtual viewpoint image) in which the viewpoint on the mapped data is virtually changed in three dimensions; however, the display method and display form are not made specific enough to improve the usability of the user interface.
  • FIG. 1 shows an image generating apparatus 10000 according to an embodiment of the present invention.
  • An image generation apparatus 10000 according to the present invention includes a plurality of cameras 101, a camera parameter table 103, a space reconstruction means 104, a spatial data buffer 105, a viewpoint conversion means 106, a display control means 10001, a display form selection means 10002, and a display means 107.
  • the plurality of cameras 101 are provided in a state suitable for grasping the status of the monitoring target area.
  • The camera 101 is, for example, one of a plurality of television cameras that capture images of the space to be monitored, such as the surroundings of the vehicle. Usually, it is preferable to use a camera having a large angle of view so that a wide field of view can be obtained.
  • For the cameras, a known arrangement such as that disclosed in Patent Document 1 may be used.
  • the camera parameter table 103 stores camera parameters indicating the characteristics of the camera 101.
  • The image generation device 10000 also includes a calibration means (not shown) for performing camera calibration.
  • Camera calibration is the process of determining and correcting the camera parameters that represent the characteristics of a camera 101 placed in the three-dimensional space, such as the camera mounting position, mounting angle, lens distortion correction value, and lens focal length in that space.
  • The calibration means and the camera parameter table 103 are described in detail in, for example, Patent Document 1.
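As an illustration of what the camera parameter table holds, the sketch below models one camera's parameters and a simple pinhole projection in Python. The field names and the downward-looking, distortion-free camera are simplifying assumptions for illustration, not details from the patent:

```python
from dataclasses import dataclass

@dataclass
class CameraParameters:
    # Mounting position in the vehicle's 3-D coordinate system (metres)
    x: float
    y: float
    z: float
    fx: float  # focal length in pixels (horizontal)
    fy: float  # focal length in pixels (vertical)
    cx: float  # principal point (horizontal)
    cy: float  # principal point (vertical)

def project(cam: CameraParameters, px: float, py: float, pz: float):
    """Project world point (px, py, pz) into the image of a camera that
    looks straight down from its mounting position.  A full calibration
    would also apply the mounting rotation and the lens distortion
    correction value."""
    depth = cam.z - pz           # distance along the viewing axis
    if depth <= 0:
        return None              # point is at or above the camera
    u = cam.fx * (px - cam.x) / depth + cam.cx
    v = cam.fy * (py - cam.y) / depth + cam.cy
    return (u, v)
```

A point directly below the camera lands on the principal point; lateral offsets scale with the focal length divided by depth.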
  • The space reconstruction means 104 creates spatial data in which an input image from the camera 101 is mapped onto a three-dimensional space model based on the camera parameters. That is, based on the camera parameters calculated by the calibration means (not shown), the space reconstruction means 104 associates each pixel constituting the input image from the camera 101 with a point in the three-dimensional space, calculates where each object included in the captured image exists in the three-dimensional space, and stores the spatial data obtained as a result of the calculation in the spatial data buffer 105.
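The pixel-to-space mapping performed by the space reconstruction means 104 can be sketched as follows, assuming the simplest possible space model (a flat ground plane) and a downward-looking pinhole camera; a real implementation would apply the full camera parameters, including rotation and distortion correction:

```python
def map_pixel_to_ground(u, v, fx, fy, cx, cy, cam_height):
    """Back-project pixel (u, v) onto the ground plane z = 0 for a
    camera at height cam_height looking straight down.  Each pixel ray
    is intersected with the plane, giving the 3-D point the pixel was
    imaged from."""
    x = cam_height * (u - cx) / fx
    y = cam_height * (v - cy) / fy
    return (x, y, 0.0)

def reconstruct_space(image, fx, fy, cx, cy, cam_height):
    """Build spatial data: every pixel of `image` (a dict mapping
    pixel -> colour) becomes a colour sample at a 3-D point, ready to
    be stored in the spatial data buffer."""
    return [(map_pixel_to_ground(u, v, fx, fy, cx, cy, cam_height), colour)
            for (u, v), colour in image.items()]
```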
  • the spatial data buffer 105 temporarily stores the spatial data created by the spatial reconstruction means 104. This spatial data buffer 105 is also described in detail in Patent Document 1, for example.
  • The viewpoint conversion means 106 creates an image viewed from an arbitrary viewpoint with reference to the spatial data. That is, referring to the spatial data created by the space reconstruction means 104, it creates an image as if it had been taken by a camera installed at that arbitrary viewpoint.
  • This viewpoint conversion means 106 may have the configuration described in detail in Patent Document 1, for example.
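Conceptually, the viewpoint conversion means re-projects the buffered spatial data into a virtual camera. Below is a minimal sketch under the same simplifying assumptions as above (an overhead virtual camera, no occlusion handling or pixel interpolation):

```python
def render_virtual_view(spatial_data, fx, fy, cx, cy, view_height,
                        width, height):
    """Project colour samples at 3-D points into a virtual camera
    looking straight down from (0, 0, view_height).  Returns a sparse
    image as a dict mapping pixel -> colour; samples falling outside
    the virtual image are discarded."""
    image = {}
    for (x, y, z), colour in spatial_data:
        depth = view_height - z
        if depth <= 0:
            continue                      # sample at or above the camera
        u = round(fx * x / depth + cx)
        v = round(fy * y / depth + cy)
        if 0 <= u < width and 0 <= v < height:
            image[(u, v)] = colour
    return image
```

Because the spatial data is viewpoint-independent, the same buffer can be rendered repeatedly from different virtual viewpoints, which is what makes the multi-view display forms below possible.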
  • the display control unit 10001 controls the display mode when displaying the image converted by the viewpoint conversion unit 106.
  • the display control unit 10001 controls the display of an image by using the selection operation of the display mode selection unit 10002 as a trigger.
  • a plurality of images converted by the viewpoint conversion unit 106 are displayed in a superimposed manner, displayed as one continuous image, or displayed side by side.
  • In this case, a plurality of images from different viewpoints can be displayed at the same time; for example, a virtual viewpoint image as seen in a rear-view mirror and a virtual viewpoint image from a bird's-eye viewpoint can be displayed simultaneously. In this way, images having different contents can be shown on a divided screen, displayed in a superimposed manner, displayed side by side, and so on.
  • The display form selection means 10002 is used to instruct a change of the viewpoint and a change of the angle of view. These instructions are transmitted to the viewpoint conversion means 106 via the display control means 10001 (or may be transmitted directly from the display form selection means 10002 to the viewpoint conversion means 106), and a new virtual viewpoint image is created accordingly.
  • the display mode selection unit 10002 may instruct not only the change of the viewpoint and the change of the angle of view but also the change of the zoom, focus, exposure, shutter speed, and the like. Further, the display mode selection unit 10002 can also select a display mode as described later.
  • the display unit 107 is, for example, a display or the like, and displays an image controlled by the display control unit 10001.
  • FIG. 2 shows a display flow of the virtual viewpoint image in the present embodiment.
  • the respective fields of view of a plurality of cameras are integrated by the following procedure and combined as one image.
  • First, the space reconstruction means 104 calculates the correspondence between each pixel constituting the image obtained from the camera 101 and a point in the three-dimensional coordinate system, and creates spatial data. This calculation is performed for all pixels of the image obtained from each camera 101 (S1).
  • For this calculation, a known embodiment disclosed in Patent Document 1 can be applied.
  • a desired viewpoint is designated by the viewpoint conversion means 106 (S2).
  • Then, the viewpoint conversion means 106 reproduces, from the spatial data, the image seen from the viewpoint designated in S2, and the display control means 10001 controls the display form of the reproduced image (S3). The image is then output to the display means 107 and displayed (S4).
  • FIG. 3 shows a virtual viewpoint image superimposed and displayed in the present embodiment.
  • In the figure, the virtual viewpoint images 10010 and 10011 are superimposed; that is, the virtual viewpoint image 10011 can be displayed as a child screen within the virtual viewpoint image 10010 using the "Picture in Picture" function.
  • two screens are superimposed in the figure, the present invention is not limited to this, and a plurality of screens may be superimposed.
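The superimposed display can be sketched as copying a child frame into a region of the parent frame. The frame representation (lists of pixel rows) and the placement arguments are illustrative:

```python
def picture_in_picture(parent, child, top, left):
    """Return a copy of `parent` (a list of pixel rows) with `child`
    overlaid so that its top-left corner lands at (top, left).
    Child pixels falling outside the parent are clipped."""
    out = [row[:] for row in parent]       # leave the parent untouched
    for dy, row in enumerate(child):
        for dx, pixel in enumerate(row):
            y, x = top + dy, left + dx
            if 0 <= y < len(out) and 0 <= x < len(out[0]):
                out[y][x] = pixel
    return out
```

Superimposing several child screens is just repeated application of the same operation.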
  • FIG. 4 shows an example in which a plurality of virtual viewpoint images according to the present embodiment are subjected to panoramic synthesis.
  • In the figure, the two virtual viewpoint images 10020 and 10021 are overlapped at the portion where their angles of view overlap (the portion where the boundary regions coincide in the image) and combined into a single continuous image (hereinafter referred to as a seamless image).
  • Although two screens are combined in the figure, the present invention is not limited to this, and a plurality of overlapping screens having different angles of view may be combined to form a seamless image.
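One simple way to merge the overlapping angle-of-view region into a seamless image is a linear cross-fade across the overlap. The fixed overlap width and grayscale pixel values below are simplifying assumptions; a production stitcher would also align the images geometrically first:

```python
def panorama_stitch(left_img, right_img, overlap):
    """Combine two equally tall grayscale images (lists of rows) whose
    last/first `overlap` columns show the same scene, cross-fading
    through the overlap so that no visible seam remains."""
    width_left = len(left_img[0])
    stitched = []
    for lrow, rrow in zip(left_img, right_img):
        row = lrow[:width_left - overlap]          # left-only part
        for i in range(overlap):                   # blended overlap
            a = (i + 1) / (overlap + 1)            # weight ramps 0 -> 1
            row.append((1 - a) * lrow[width_left - overlap + i]
                       + a * rrow[i])
        row.extend(rrow[overlap:])                 # right-only part
        stitched.append(row)
    return stitched
```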
  • FIG. 5 shows a diagram in which four virtual viewpoint images 10030, 10031, 10032, and 10033 according to the present embodiment are displayed side by side.
  • Unlike FIG. 4, the virtual viewpoint images displayed here need not be related to each other. Although four screens are arranged in the figure, the present invention is not limited to this, and any number of screens may be arranged.
  • The display forms in FIGS. 3 to 5 are controlled by the display control means 10001, and the display form can be selected by the display form selection means 10002.
  • FIG. 6 shows the relationship between the switching of the operation mode and the display mode in the present embodiment.
  • The driving modes include, for example, a right-turn mode, a left-turn mode, a forward mode, a back mode, a high-speed running mode, and the like.
  • the viewpoint, angle of view, display mode (superimposed display (see Fig. 3), seamless display (see Fig. 4), side-by-side display (see Fig. 5), etc.) are determined according to each driving mode.
  • For example, a virtual viewpoint image having a wide angle of view from a rightward viewpoint may be displayed superimposed on the car navigation image.
  • Alternatively, the virtual viewpoint image of the rear side may be superimposed on the seamless image of the front side and displayed.
  • To switch the display in this way, the display form selection means 10002 detects the driving mode.
  • The state of the vehicle at that time can be detected from the gear, the speed, the turn signal, the steering angle, and the like.
  • Patent Document 1 discloses that a joystick is used to arbitrarily adjust a viewpoint.
  • an appropriate virtual viewpoint image can be displayed more easily by switching to a preset viewpoint, angle of view, display mode, or the like in accordance with the driving operation.
  • zoom, focus, exposure, shutter speed, and the like may be added to the conditions of the displayed image by switching.
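The switching described above amounts to a lookup from the detected driving mode to preset display settings. The preset values in the sketch below are illustrative placeholders, not values from the patent:

```python
# Illustrative presets: viewpoint, angle of view (degrees), and display
# form per driving mode.  Real presets would be tuned per vehicle and
# could also carry zoom, focus, exposure, and shutter-speed settings.
DISPLAY_PRESETS = {
    "right_turn": {"viewpoint": "right",     "angle_of_view": 120, "form": "superimposed"},
    "left_turn":  {"viewpoint": "left",      "angle_of_view": 120, "form": "superimposed"},
    "forward":    {"viewpoint": "front",     "angle_of_view": 90,  "form": "seamless"},
    "back":       {"viewpoint": "rear",      "angle_of_view": 150, "form": "superimposed"},
    "high_speed": {"viewpoint": "front_far", "angle_of_view": 60,  "form": "side_by_side"},
}

def select_display_form(driving_mode):
    """Return the preset for the detected driving mode, falling back to
    the forward preset for unrecognised modes."""
    return DISPLAY_PRESETS.get(driving_mode, DISPLAY_PRESETS["forward"])
```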
  • In the present embodiment, the virtual viewpoint image is generated based on the invention of Patent Document 1, but the present invention is not limited to this; any known technique may be used as long as a virtual viewpoint image is obtained.
  • control of the display mode of the virtual viewpoint image when the image generation device 10000 is mounted on a vehicle in a mode different from that of the first embodiment will be described.
  • The conceptual configuration of the image generation apparatus 10000 itself in the present embodiment is the same as that in FIG. 1.
  • the display mode selection unit 10002 selects the display mode according to the state of the vehicle, the operation status, and the like as described below. Note that this function may be added to the display mode selection unit 10002 of the first embodiment.
  • FIG. 7 shows the correspondence between the vehicle state and the display mode in the present embodiment.
  • Vehicles are equipped with sensors (temperature, humidity, pressure, illuminance, etc.), cameras (for in-vehicle use and body photography), and measuring instruments (existing instruments such as a tachometer, speedometer, coolant temperature gauge, oil pressure gauge, fuel gauge, etc. may be used.)
  • the vehicle state "running steering angle" indicates that the display mode changes according to the degree of the steering angle during running. For example, when the steering wheel is turned by a predetermined steering angle or more, an image in the direction of the steering angle (hereinafter, "image" refers to an image such as a virtual viewpoint image) is displayed on the display.
  • the vehicle state "speed" indicates that the display mode changes in accordance with the degree of the speed. For example, a distant image (for example, linked to the safe stopping distance) is displayed when the speed increases, and an image of the entire surroundings is displayed when the vehicle runs at low speed or starts.
  • the vehicle state "acceleration" indicates that the display mode changes according to the degree of the acceleration. For example, a backward image is displayed during deceleration, and a far forward image is displayed during acceleration.
  • the vehicle state “gear” indicates that the display mode changes according to the gear position (e.g., 1st, 2nd, 3rd, ..., reverse). For example, the rear image is displayed while backing.
  • the vehicle state “wiper” indicates that the display mode changes according to the operating state of the wiper (for example, whether or not the wiper is operating, the operating speed of the wiper, and the like). For example, an image in rain mode (a virtual viewpoint image on which processing such as specular reflection component removal and water droplet removal has been performed) is displayed in conjunction with the wiper.
  • the vehicle state "headlight, vehicle interior light ON/OFF, brightness, etc." indicates that the display mode changes according to the state of the headlights, the vehicle interior lights, and the like. For example, the brightness of the liquid-crystal display is adjusted.
  • the volume of voice guidance is adjusted according to the stereo volume.
  • the display mode may be changed according to the stereo sound volume.
  • the vehicle state "dirt" indicates that the display mode changes according to the degree of dirt on the vehicle. For example, a warning about dirt on the vehicle (for example, on the front, rear, bumper, roof, bonnet, doors, windows, wheels, or tires, and especially on the camera light-receiving part, where dirt may interfere with the system) is displayed.
  • the vehicle state “temperature, humidity” indicates that the display mode changes according to the temperature and humidity inside or outside the vehicle. For example, a warning is displayed for a place where the temperature or humidity inside the vehicle or on the road surface is abnormally high (or low, for example, when the temperature outside the vehicle is low and the road surface is covered with ice). Thereby, for example, slip prevention due to freezing can be achieved.
  • the vehicle state “oil amount” indicates that the display mode changes according to the remaining amount of oil. For example, when there is a possibility of an oil leak, an image behind or directly below the vehicle is displayed so that it is possible to confirm whether oil is leaking.
  • the vehicle state "number of passengers, riding position, and weight of loaded luggage" indicates that the display mode changes according to the number of passengers, the sitting position of the occupants, and the weight of loaded luggage. For example, a warning correcting the safe stopping distance (specifically, displaying the stopping distance superimposed on the video) is displayed.
  • the vehicle state "open/close window" indicates that the display mode changes according to the open/close state of the windows. For example, the surrounding image is displayed so that the user can confirm that opening or closing poses no danger, and a window being opened or closed can be monitored for a protruding hand or head.
  • the display related to the vehicle state is not limited to those described above; for example, the braking distance calculated in consideration of the vehicle state, the road surface state, and the weather may be displayed as a bar-graph-like pattern in the traveling direction of the own vehicle.
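  • the state-to-display correspondence of FIG. 7 can be sketched as a small rule table. This is a minimal illustration only; every state name, threshold, and display action below is an assumption for illustration, not a value from the embodiment:

```python
# Rule-table sketch: vehicle state -> list of triggered display actions.
def choose_display(state: dict) -> list:
    """Return the display actions triggered by the given vehicle state."""
    actions = []
    if state.get("wiper_on"):
        actions.append("rain_mode_image")        # e.g. reflection/water-droplet removal
    if abs(state.get("steering_angle_deg", 0.0)) >= 30.0:
        actions.append("steering_direction_image")
    if state.get("gear") == "reverse":
        actions.append("rear_image")
    if state.get("speed_kmh", 0.0) >= 80.0:
        actions.append("far_forward_image")      # linked to the safe stopping distance
    elif state.get("speed_kmh", 0.0) <= 10.0:
        actions.append("all_around_image")       # low speed or starting
    if state.get("outside_temp_c", 20.0) <= 0.0:
        actions.append("icy_road_warning")       # freezing road surface
    return actions
```

Several rules can fire at once (e.g. wipers on while backing), matching the idea that the modes are freely combinable.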
  • a preset display mode can be selected according to the vehicle state as described above, so that an appropriate virtual viewpoint image can be displayed more easily.
  • This embodiment is a modification of the second embodiment.
  • a display mode is selected for each state of each part of the vehicle.
  • a macro operation state of the vehicle is detected, and a display mode according to the operation state is selected. That is, in the present embodiment, the display mode selection unit 10002 focuses on the operation status of the vehicle and selects the display mode corresponding to that status. Note that the image generation device in the present embodiment is the same as in the first or second embodiment.
  • FIG. 8 shows an example of a mode of the operation state of the vehicle in the present embodiment.
  • the operation status modes of the present embodiment include "right turn mode", "left turn mode", "surrounding monitoring mode at start", "in-vehicle monitoring mode", "high-speed driving mode", "backward monitoring mode", "rainy driving mode", "parallel parking mode", and "garage parking mode". Each mode will now be described.
  • the "right turn mode” displays an image in a direction in which the vehicle turns to the front. Specifically, when the vehicle makes a right turn, an image in front and an image in the right turn direction are displayed. “Left turn mode” displays images in the forward and ⁇ directions. More specifically, when the vehicle is turning left, an image in front and an image in the direction of left turn are displayed.
  • the “starting surrounding monitoring mode” displays a monitoring video around the vehicle when the vehicle starts.
  • the “in-vehicle monitoring mode” displays a monitoring image of the inside of the vehicle.
  • the “high-speed running mode” displays an image far ahead in the high-speed running mode.
  • a video image is displayed so that the driver can confirm whether sudden braking is possible, that is, whether there is enough distance from the following vehicle for the vehicle to stop under sudden braking.
  • Such a mode change may be automatically recognized by the image generation device 10000 according to the operation of the vehicle, or may be set by the user. In addition, these modes can be freely combined.
  • FIGS. 9A and 9B show an example of a setting format in the case where the user can set the above mode in the present embodiment.
  • the "right turn” / "left turn mode” button is It is displayed as a button for “surrounding monitoring at start-up and in-vehicle monitoring mode”, a button for “far and highway at high speed”, and a button for “rain”. Then, by pressing each selection button, the mode is switched to the mode.
  • the display mode (arrangement mode) of the buttons can be changed.
  • variations of the display form include "selection by selection button", "arrangement of modes according to the situation", and "display of frequently used functions (modes) at the top by the learning function".
  • "Selection by selection button" is the normal arrangement of selection buttons, set by default. This arrangement can be changed arbitrarily by the user, so the selection buttons can be arranged according to the user's own preference.
  • "Arrangement of modes according to the situation" means that the arrangement of the selection buttons is changed according to the operation state of the vehicle, the surrounding environment, the weather, and the like. That is, by detecting the operation of the vehicle, the surrounding environment, the weather, and the like, the arrangement of the selection buttons is changed, and a selection button more suitable for the situation is displayed, for example, at the top. The arrangement of the selection buttons may also be changed according to the situation of each part as in the second embodiment.
  • "Display of frequently used functions (modes) at the top by the learning function" means that the user's selection history of the selection buttons is recorded sequentially, the selection frequency of each button is calculated statistically from this history, and the selection buttons are arranged in order of selection frequency. More specifically, for example, as shown in FIGS. 9A and 9B,
  • a car navigation information display frame 10035, capable of superimposing images of the periphery and the course of the vehicle, is placed outside (in the present embodiment, at the lower part), and
  • the icons of the buttons "Back" 10037, "Left turn" 10038, and "Right turn" 10036 are adaptively arranged in order of selection frequency (in this embodiment, from left to right).
  • FIG. 9A shows the arrangement of the mode buttons before the arrangement is changed. For example, when the frequency of pressing "Back" 10037 in FIG. 9A increases, "Back" 10037, now the most frequently selected button, is placed at the left end as shown in FIG. 9B.
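  • the learning function described above can be sketched with a simple selection counter that re-arranges the button icons in descending order of selection frequency. The class name, button names, and tie-breaking rule (ties keep the default order) are illustrative assumptions:

```python
# Sketch of frequency-ordered button arrangement (names are illustrative).
from collections import Counter

class AdaptiveButtonBar:
    def __init__(self, buttons):
        self.buttons = list(buttons)   # default arrangement
        self.history = Counter()       # selection history per button

    def press(self, button: str) -> None:
        self.history[button] += 1      # record each selection sequentially

    def arrangement(self):
        # Most frequently selected first; sorted() is stable, so ties
        # keep the default left-to-right order.
        return sorted(self.buttons, key=lambda b: -self.history[b])

bar = AdaptiveButtonBar(["Right turn", "Left turn", "Back"])
for _ in range(3):
    bar.press("Back")
bar.press("Left turn")
```

After these presses, "Back" has the highest frequency and moves to the leftmost position, as in the FIG. 9A to FIG. 9B transition.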
  • the display mode can be changed according to the operation state of the vehicle and the user's preference. Therefore, it is possible to display an optimal virtual viewpoint image for the user.
  • in Patent Document 1, there is no particular mention of how an image subjected to image processing such as viewpoint conversion can be made easy to view or displayed clearly. Therefore, in the present embodiment, a clearer display is realized by a simple color-coded display.
  • FIG. 10 shows an image generation device 10000 according to the present embodiment.
  • the image generating apparatus 10000 of the figure is the same as the image generating apparatus 10000 of the first embodiment, except that a color coding / simplification unit 10040 and an object recognition unit 10041 are added.
  • although the display mode selection unit 10002 is removed in the present embodiment, it may be attached according to the application.
  • the object recognizing means 10041 is means for recognizing each object (object) captured in the captured image.
  • Methods for recognizing these objects include methods that use only images obtained from a single (monocular) camera, methods that use distance data obtained from a laser range finder together with camera images, and methods based on the stereo method.
  • the stereo method is a method in which the same object is imaged by a plurality of cameras, corresponding points in the imaged images are extracted, and a distance image is calculated by triangulation.
  • a device that obtains a range image using these methods and the like is referred to as a stereo sensor.
  • An image including the distance information and the luminance or color information acquired by such a stereo sensor is referred to as a stereo image or a stereoscopic image.
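  • the triangulation at the heart of the stereo method can be illustrated with the standard depth formula for a rectified, parallel camera pair, Z = f·B/d, where f is the focal length in pixels, B the baseline between the cameras, and d the disparity between corresponding points. The parameter values in the test are illustrative assumptions:

```python
# Sketch of stereo triangulation for a rectified parallel stereo pair.
def stereo_depth(disparity_px: float, focal_px: float, baseline_m: float) -> float:
    """Distance Z = f * B / d: larger disparity means a closer point."""
    if disparity_px <= 0:
        raise ValueError("corresponding point must have positive disparity")
    return focal_px * baseline_m / disparity_px
```

Applying this per matched pixel pair over the whole image yields the distance image (range image) that the stereo sensor provides.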
  • Patent Document 2 discloses a vehicular exterior monitoring device that obtains a distance distribution over the entire captured image and, based on the information of that distance distribution, reliably detects a three-dimensional object or road shape together with its accurate position and size.
  • the type of an object can be identified based on the shape, size, and position of the contour image of the detected object.
  • Patent Document 3 discloses a vehicle exterior monitoring device that reliably detects side walls, i.e., continuous three-dimensional objects forming road boundaries such as guardrails, shrubbery, and rows of pylons, and outputs the data in a form that is easy to process.
  • the stereo optical system captures an image of objects within its imaging range outside the vehicle; the image captured by the stereo optical system is processed to calculate the distance distribution over the entire image. Then, the three-dimensional position of each part of the subject corresponding to the distance distribution information is calculated, and the shape of the road and a plurality of three-dimensional objects are detected using the three-dimensional position information.
  • the object recognizing unit 10041 in the present embodiment can also recognize each object (object) captured in the image captured by the stereo image method as described above.
  • the processing itself can be performed by a conventional method, and a detailed description of the processing is omitted.
  • a plurality of cameras different from the camera 101 may be used, or the camera 101 may be shared; in the latter case, care must be taken that the synchronization among the plurality of images is not lost.
  • the object recognizing unit 10041 is provided with a table (association table 10042) that associates objects between the stereo image and the virtual viewpoint image, so that an object recognized by this unit can be colored consistently in the virtual viewpoint image.
  • the color coding / simplification means 10040 performs a process of performing color coding for each type of object such as a road surface, an obstacle, and a vehicle in the virtual viewpoint image.
  • FIG. 11 shows a processing flow in the present embodiment.
  • spatial data creation processing S11
  • viewpoint specification processing S12
  • S11 and S12 are the same as the processing of S1 and S2 in FIG. 2, respectively.
  • next, the objects are recognized (S13) and the recognized objects are classified by type. As described above, this processing can use the known methods exemplified in Patent Documents 2 and 3.
  • the virtual viewpoint image acquired in S12 is associated with the object recognized in S13 (S14). This association is performed using an association table.
  • the position of each pixel constituting the image captured by the camera 101 is generally represented as coordinates on the UV plane including the CCD image plane.
  • a point on the UV plane where the pixel captured by the camera 101 exists is associated with a point in the world coordinate system.
  • the processing is performed by the spatial reconstruction means 104.
  • the position of an object recognized by the stereo image camera can be specified on the world coordinate system via the UV coordinate system. This makes it possible to associate an object in the stereo image with the same object in the virtual viewpoint image.
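  • the association in S14 rests on projecting between world coordinates and the camera's UV image plane. A minimal pinhole-model sketch is shown below; the function name, the simple rotation/translation representation, and the parameter values in the test are illustrative assumptions, not the embodiment's actual calibration:

```python
# Sketch: project a 3-D world point into UV pixel coordinates (pinhole model).
def world_to_uv(point_w, rotation, translation, focal_px, cu, cv):
    """rotation: 3x3 matrix (nested lists), translation: length-3 vector.
    Returns (u, v) on the image plane with principal point (cu, cv)."""
    # world -> camera frame: p_cam = R * p_w + t
    cam = [sum(rotation[row][col] * point_w[col] for col in range(3))
           + translation[row] for row in range(3)]
    x, y, z = cam
    if z <= 0:
        raise ValueError("point is behind the camera")
    # perspective division onto the UV plane
    return (focal_px * x / z + cu, focal_px * y / z + cv)
```

Running this forward for a world point located by the stereo sensor gives its UV position, which is how a recognized object can be matched to pixels of the virtual viewpoint image via the association table.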
  • coloring is performed for each type of object in the virtual viewpoint image (S15).
  • the process is performed by the color coding / simplification means 10040, which color-codes each type of object, such as the road surface, obstacles, and vehicles, in the virtual viewpoint image associated in S14.
  • each object recognized by the object recognition means 10041 is colored differently according to the type of article such as a road or a vehicle classified in S13.
  • for example, the road surface in the virtual viewpoint image is colored gray, vehicles red, and other obstacles blue.
  • as for the coloring of each object in the virtual viewpoint image, the texture may be rendered monotone so that each object is displayed in a single color, or a semi-transparent color may be superimposed on the texture.
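  • the coloring step S15 can be sketched as painting each labelled pixel with its class color, either flat (monotone) or blended semi-transparently with the original texture. The label names, the gray/red/blue values, and the per-pixel list representation are illustrative assumptions:

```python
# Sketch of per-object color coding (flat or semi-transparent overlay).
LABEL_COLORS = {"road": (128, 128, 128),   # road surface -> gray
                "vehicle": (255, 0, 0),    # vehicle      -> red
                "obstacle": (0, 0, 255)}   # obstacle     -> blue

def colorize(pixels, labels, alpha=1.0):
    """pixels: list of (r, g, b); labels: per-pixel object type or None.
    alpha=1 gives a flat single-color rendering, alpha<1 a semi-transparent
    overlay on the original texture."""
    out = []
    for rgb, label in zip(pixels, labels):
        color = LABEL_COLORS.get(label)
        if color is None:              # unrecognized pixels keep their texture
            out.append(rgb)
        else:
            out.append(tuple(round((1 - alpha) * c0 + alpha * c1)
                             for c0, c1 in zip(rgb, color)))
    return out
```

With alpha below 1 the original texture stays visible under the class color, corresponding to the semi-transparent superimposed display mentioned above.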
  • the display mode of the image after the processing of S15 is controlled (S16).
  • the display mode is controlled in order to output to the output means 107.
  • a display mode may be selected as in the first embodiment.
  • the image is output to the display means 107 and displayed on the display means 107 (S17).
  • the above processing is performed based on the processing flow of FIG. 11, but is not limited to this; any flow that yields an equivalent result may be used.
  • a stereo sensor may be used at S11 and S12 to acquire the three-dimensional information at the same time.
  • the space model used in the space model generation of S11 may be a space model composed of five planes, a bowl-shaped space model, a space model configured by combining planes and curved surfaces, a space model into which a curved surface is introduced, or a combination thereof. Note that the space model is not limited to these, and is not particularly limited as long as it is a combination of planes, a curved surface, or a combination of planes and curved surfaces.
  • the image projected for each space model may be similarly colored for each object type.
  • the display is made clearer by color-coding etc. according to each object model of the space model.
  • objects in the projected image may not only be colored but also simplified, that is, displayed as figures abstracted into circles, squares, triangles, rectangles, trapezoids, or the like.
  • the figures are not limited to these, and are not particularly limited as long as they are simplified figures, marks, symbols, or the like.
  • the objects in the virtual viewpoint image are color-coded and simplified, so that an obstacle or the like can be easily recognized even during driving.
  • the virtual viewpoint image may be displayed as a wide-angle virtual viewpoint image and an enlarged virtual viewpoint image.
  • FIG. 12 shows an example in which a wide-angle virtual viewpoint image 10050 and a virtual viewpoint image 10051 in which an obstacle is enlarged in the vicinity thereof are displayed together in the present embodiment.
  • Patent Literature 1 merely discloses that an image from an arbitrary point can be displayed, with no consideration of viewpoint adjustment when the driver changes or the seating position differs. Therefore, in the present embodiment, by using a viewpoint preset for each driver, the displayed image can be viewed more easily.
  • FIG. 13 shows an image generating apparatus 10000 according to the present embodiment.
  • the image generating apparatus 10000 of the figure is obtained by adding a user information acquisition unit 10060 to the image generating apparatus 10000 of the first embodiment.
  • although the display form selection means 10002 is removed, it may be attached in accordance with the application.
  • a face image of a driver is obtained by a camera that monitors the inside of a vehicle, and a viewpoint is obtained by extracting an eyeball position from the face image by a general image processing technique.
  • the position of the virtual viewpoint for displaying the virtual viewpoint image can be customized based on the measurement result.
  • the driver's visual field may be measured from the face image, and a virtual visual point image having an angle of view corresponding to the visual field may be displayed.
  • the measurement of the visual field may be estimated based on the degree of opening of the eyes in the face image.
  • in another example, the user information acquisition unit 10060 may be configured to estimate the driver's posture or the like and calculate the user's viewpoint from it. For example, since the driver's head position can be measured from the driver's height (or sitting height) registered in advance and the current seat inclination angle, the approximate viewpoint position of the driver can be determined from this.
  • the viewpoint position may be calculated in this manner.
  • the viewpoint position or the angle of view may also be calculated in consideration of the position of the seat (forward/backward slide position, height, etc.) or whether the driver is wearing glasses.
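  • the head-position estimate described above can be sketched as a simple geometric calculation from the registered sitting height and the current seat settings. The 90% eye-level ratio, the seat-base height, and the recline geometry are all illustrative assumptions, not values from the embodiment:

```python
# Sketch: approximate driver eye position from sitting height and seat settings.
import math

def estimate_viewpoint(sitting_height_m: float, seat_slide_m: float,
                       seat_recline_deg: float, seat_base_height_m: float = 0.30):
    """Return (x, z): horizontal offset from the seat rail origin and eye height.
    Assumes eye level is ~90% of sitting height and the torso tilts with the
    seat back (both assumptions for illustration)."""
    torso = 0.9 * sitting_height_m
    recline = math.radians(seat_recline_deg)
    x = seat_slide_m + torso * math.sin(recline)   # head moves back as seat reclines
    z = seat_base_height_m + torso * math.cos(recline)
    return (x, z)
```

The resulting position can serve as the customized virtual viewpoint for that driver, to be refined further by face-image measurement when available.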
  • the user information obtained in this way by the user information obtaining means 10060 is transmitted to the display control means 10001, which then displays the virtual viewpoint image optimal for that user based on the information.
  • in this way, each driver can view the virtual viewpoint image without feeling uncomfortable.
  • a virtual viewpoint image seen from a virtual viewpoint is displayed so as to be superimposed on a part of the image of the car navigation system.
  • the virtual viewpoint image is placed at an appropriate position according to the map information. Display superimposed.
  • the main body of the image generating apparatus according to the present embodiment is the same as that of the first embodiment.
  • the car navigation image is a kind of virtual viewpoint image, and another virtual viewpoint image is superimposed on this image in the same manner as in the other embodiments.
  • FIG. 14 shows virtual viewpoint images 10071, 10072, and 10073 superimposed on a part of the car navigation image 10070 in the present embodiment.
  • Such control is performed by the display control means 10001.
  • the virtual viewpoint images 10071, 10072, and 10073 are displayed semi-transparently so as to be superimposed on the image 10070 of the car navigation system.
  • the virtual viewpoint image can be viewed while being navigated by the car navigation system.
  • since the virtual viewpoint images are controlled to be displayed at positions that do not interfere with the navigation, the user is not inconvenienced during navigation.
  • Patent Document 1 there is no mention of focus adjustment of a camera, and there is no particular consideration for appropriately displaying an image of an obstacle particularly when the camera is at a short distance.
  • the camera is provided with an AF (auto-focus) function, and when the stereo vision target is at close range, the focus of the lens is adjusted to the short-distance side. That is, the camera is set to what is generally called a macro mode, in which a large image is taken at a position close to the object. In this way, an image focused for three-dimensional reconstruction can be obtained at a short distance.
  • a high-precision, well-focused image can similarly be obtained for long distances, leading to improved long-range accuracy.
  • This embodiment can be combined with the first to seventh embodiments.
  • the present invention is not limited to this, and the respective embodiments of the first to eighth embodiments can be combined with each other depending on the application.
  • the space model and the screen display shown in FIGS. 3 to 5, 9A, 9B, 12 and 14 can be applied between the embodiments.
  • FIG. 15 is a configuration block diagram of a hardware environment of the image generation device 10000 according to the first to eighth embodiments.
  • an image generation device 10000 includes at least a control device 10080 such as a central processing unit (CPU) and a storage device 10081 such as a read only memory (ROM), a random access memory (RAM), or a large capacity storage device.
  • the devices connected to the input I/F include, for example, the camera 101, an in-vehicle camera, a stereo sensor and various sensors (see the second to fourth embodiments), input devices such as a keyboard and a mouse, portable storage medium reading devices for CD-ROMs, DVDs, and the like, and other peripheral devices.
  • the device connected to the communication I/F 10084 is, for example, a car navigation system, or a communication device connected to the Internet or GPS.
  • the communication medium may be a communication network such as the Internet, a LAN, a WAN, a dedicated line, a wired line, a wireless line, or the like.
  • as the storage device 10081, various types of storage devices such as a hard disk and a magnetic disk can be used; it stores the programs for the flows described in the above embodiments, the tables (for example, the table relating the vehicle state to the display form), various setting values, and the like. These programs are read by the control device 10080, and each process of the flow is executed.
  • these programs may also be provided by a program provider via the Internet through the communication I/F 10084 and stored in the storage device 10081. Further, a program may be stored in a portable storage medium that is sold and distributed; the portable storage medium in which the program is stored can then be set in the reading device and the program executed by the control device.
  • as the portable storage medium, various types of storage media such as a CD-ROM, DVD, flexible disk, optical disk, magneto-optical disk, and IC card can be used; the program stored in such a storage medium is read by the reading device.
  • as the input device, a keyboard, a mouse, an electronic camera, a microphone, a scanner, a sensor, a tablet, or the like can be used, and other peripheral devices can also be connected.
  • Patent Document 1 is incorporated into this specification by reference.

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • General Physics & Mathematics (AREA)
  • Physics & Mathematics (AREA)
  • Mechanical Engineering (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Hardware Design (AREA)
  • Signal Processing (AREA)
  • Image Processing (AREA)
  • Closed-Circuit Television Systems (AREA)
  • Traffic Control Systems (AREA)
  • Image Analysis (AREA)
  • Processing Or Creating Images (AREA)
  • Controls And Circuits For Display Device (AREA)

Abstract

The present invention relates to an image generation device comprising: space reconstruction means for projecting an input image from one or more cameras onto a three-dimensional space model; viewpoint conversion means for referencing the space data projected by the reconstruction means and generating a virtual viewpoint image, i.e., an image seen from an arbitrary virtual viewpoint in three-dimensional space; and display control means for the virtual viewpoint images seen from different viewpoints.
PCT/JP2005/002150 2004-02-26 2005-02-14 Dispositif de generation d'images, programme de generation d'images, et procede de generation d'images WO2005084027A1 (fr)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
JP2004-050784 2004-02-26
JP2004050784A JP2005242606A (ja) 2004-02-26 2004-02-26 画像生成装置、画像生成プログラム、及び画像生成方法

Publications (1)

Publication Number Publication Date
WO2005084027A1 true WO2005084027A1 (fr) 2005-09-09

Family

ID=34908603

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/JP2005/002150 WO2005084027A1 (fr) 2004-02-26 2005-02-14 Dispositif de generation d'images, programme de generation d'images, et procede de generation d'images

Country Status (2)

Country Link
JP (1) JP2005242606A (fr)
WO (1) WO2005084027A1 (fr)

Cited By (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2008087706A1 (fr) * 2007-01-16 2008-07-24 Pioneer Corporation Dispositif d'affichage pour véhicule, procédé d'affichage pour véhicule et programme d'affichage pour véhicule
WO2011129275A1 (fr) * 2010-04-12 2011-10-20 住友重機械工業株式会社 Dispositif de génération d'image cible de traitement, procédé de génération d'image cible de traitement et système d'aide à l'exploitation
JP2019079173A (ja) * 2017-10-23 2019-05-23 パナソニックIpマネジメント株式会社 3次元侵入検知システムおよび3次元侵入検知方法
JPWO2021079517A1 (fr) * 2019-10-25 2021-04-29
DE102018106039B4 (de) 2017-04-26 2024-01-04 Denso Ten Limited Bildreproduktionsvorrichtung, bildreproduktionssystem und bildreproduktionsverfahren
WO2024070203A1 (fr) * 2022-09-30 2024-04-04 株式会社東海理化電機製作所 Système de surveillance à distance

Families Citing this family (21)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2007318198A (ja) * 2006-05-23 2007-12-06 Alpine Electronics Inc 車両周辺画像生成装置および撮像装置の測光調整方法
JP4858017B2 (ja) * 2006-08-31 2012-01-18 パナソニック株式会社 運転支援装置
JP4808119B2 (ja) * 2006-09-28 2011-11-02 クラリオン株式会社 停車予測位置を表示するナビゲーション装置
JP4888831B2 (ja) * 2006-12-11 2012-02-29 株式会社デンソー 車両周辺監視装置
JP5277603B2 (ja) * 2007-10-09 2013-08-28 株式会社デンソー 画像表示装置
JP5102718B2 (ja) * 2008-08-13 2012-12-19 株式会社Ihi 植生検出装置および方法
US8583406B2 (en) 2008-11-06 2013-11-12 Hoya Lens Manufacturing Philippines Inc. Visual simulator for spectacle lens, visual simulation method for spectacle lens, and computer readable recording medium recording computer readable visual simulation program for spectacle lens
JP5494372B2 (ja) * 2010-09-08 2014-05-14 株式会社デンソー 車両用表示装置
US9911257B2 (en) * 2011-06-24 2018-03-06 Siemens Product Lifecycle Management Software Inc. Modeled physical environment for information delivery
JP6115278B2 (ja) * 2013-04-16 2017-04-19 株式会社デンソー 車両用表示装置
JP6287583B2 (ja) * 2014-05-27 2018-03-07 株式会社デンソー 運転支援装置
JP6565148B2 (ja) * 2014-09-05 2019-08-28 アイシン精機株式会社 画像表示制御装置および画像表示システム
US10134112B2 (en) * 2015-09-25 2018-11-20 Hand Held Products, Inc. System and process for displaying information from a mobile computer in a vehicle
JP6723820B2 (ja) 2016-05-18 2020-07-15 株式会社デンソーテン 画像生成装置、画像表示システムおよび画像表示方法
JP6812181B2 (ja) * 2016-09-27 2021-01-13 キヤノン株式会社 画像処理装置、画像処理方法、及び、プログラム
JPWO2018101227A1 (ja) * 2016-11-29 2019-10-31 シャープ株式会社 表示制御装置、ヘッドマウントディスプレイ、表示制御装置の制御方法、および制御プログラム
KR101855345B1 (ko) * 2016-12-30 2018-06-14 도로교통공단 시뮬레이션된 가상현실 영상물을 다중시점으로 분리표출하는 다중시각 디스플레이 장치 및 방법
JP6427258B1 (ja) * 2017-12-21 2018-11-21 キヤノン株式会社 表示制御装置、表示制御方法
JP7219262B2 (ja) * 2018-03-20 2023-02-07 住友建機株式会社 ショベル及びショベル用の表示装置
JP7353782B2 (ja) 2019-04-09 2023-10-02 キヤノン株式会社 情報処理装置、情報処理方法、及びプログラム
JP7378243B2 (ja) * 2019-08-23 2023-11-13 キヤノン株式会社 画像生成装置、画像表示装置および画像処理方法

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2000064175A1 (fr) * 1999-04-16 2000-10-26 Matsushita Electric Industrial Co., Ltd. Dispositif de traitement d'images et systeme de surveillance
JP2002083284A (ja) * 2000-06-30 2002-03-22 Matsushita Electric Ind Co Ltd 描画装置
JP2003116125A (ja) * 2001-10-03 2003-04-18 Auto Network Gijutsu Kenkyusho:Kk 車両周辺視認装置
JP2004048295A (ja) * 2002-07-10 2004-02-12 Toyota Motor Corp 画像処理装置、駐車支援装置、及び画像処理方法

Family Cites Families (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JPH05265547A (ja) * 1992-03-23 1993-10-15 Fuji Heavy Ind Ltd 車輌用車外監視装置
JP3324821B2 (ja) * 1993-03-12 2002-09-17 富士重工業株式会社 車輌用車外監視装置
JP3463144B2 (ja) * 1996-04-26 2003-11-05 アイシン・エィ・ダブリュ株式会社 建造物形状地図表示装置
JP3286306B2 (ja) * 1998-07-31 2002-05-27 松下電器産業株式会社 画像生成装置、画像生成方法
JP2001282824A (ja) * 2000-03-31 2001-10-12 Pioneer Electronic Corp メニュー表示システム

Cited By (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2008087706A1 (fr) * 2007-01-16 2008-07-24 Pioneer Corporation Vehicle display device, vehicle display method, and vehicle display program
JPWO2008087706A1 (ja) * 2007-01-16 2010-05-06 Pioneer Corporation Vehicle display device, vehicle display method, and vehicle display program
WO2011129275A1 (fr) * 2010-04-12 2011-10-20 Sumitomo Heavy Industries, Ltd. Processing-target image generation device, processing-target image generation method, and operation support system
JP2011223409A (ja) * 2010-04-12 2011-11-04 Sumitomo Heavy Ind Ltd Processing-target image generation device, processing-target image generation method, and operation support system
US8848981B2 (en) 2010-04-12 2014-09-30 Sumitomo Heavy Industries, Ltd. Processing-target image generation device, processing-target image generation method and operation support system
DE102018106039B4 (de) 2017-04-26 2024-01-04 Denso Ten Limited Image reproduction device, image reproduction system, and image reproduction method
JP2019079173A (ja) * 2017-10-23 2019-05-23 Panasonic Intellectual Property Management Co., Ltd. Three-dimensional intrusion detection system and three-dimensional intrusion detection method
JPWO2021079517A1 (fr) * 2019-10-25 2021-04-29
WO2021079517A1 (fr) * 2019-10-25 2021-04-29 NEC Corporation Image generation device, image generation method, and program
JP7351345B2 (ja) 2019-10-25 2023-09-27 NEC Corporation Image generation device, image generation method, and program
WO2024070203A1 (fr) * 2022-09-30 2024-04-04 Tokai Rika Co., Ltd. Remote monitoring system

Also Published As

Publication number Publication date
JP2005242606A (ja) 2005-09-08

Similar Documents

Publication Publication Date Title
WO2005084027A1 (fr) Image generation device, image generation program, and image generation method
JP6806156B2 (ja) Periphery monitoring device
US8754760B2 (en) Methods and apparatuses for informing an occupant of a vehicle of surroundings of the vehicle
JP6148887B2 (ja) Image processing device, image processing method, and image processing system
JP4323377B2 (ja) Image display device
CN104163133B (zh) Rear-view camera system using rear-view mirror position
EP2763407B1 (fr) Vehicle environment monitoring device
JP4364471B2 (ja) Vehicle image processing device
US8514282B2 (en) Vehicle periphery display device and method for vehicle periphery image
US9706175B2 (en) Image processing device, image processing system, and image processing method
US10647256B2 (en) Method for providing a rear mirror view of a surroundings of a vehicle
US20100054580A1 (en) Image generation device, image generation method, and image generation program
WO2012169355A1 (fr) Image generation device
JP5516998B2 (ja) Image generation device
EP2631696B1 (fr) Image generator
KR20120118073A (ko) Vehicle surroundings monitoring device
JP2004240480A (ja) Driving support device
JP5966513B2 (ja) Vehicle rear-side imaging device
JP3876761B2 (ja) Vehicle periphery monitoring device
JP2022095303A (ja) Surrounding image display device and display control method
JP5516997B2 (ja) Image generation device
JP2005269010A (ja) Image generation device, image generation program, and image generation method
JP4552525B2 (ja) Vehicle image processing device
JP2007302238A (ja) Vehicle image processing device
KR20180094717A (ko) Driving support device and system using AVM

Legal Events

Date Code Title Description
AK Designated states

Kind code of ref document: A1

Designated state(s): AE AG AL AM AT AU AZ BA BB BG BR BW BY BZ CA CH CN CO CR CU CZ DE DK DM DZ EC EE EG ES FI GB GD GE GH GM HR HU ID IL IN IS KE KG KP KR KZ LC LK LR LS LT LU LV MA MD MG MK MN MW MX MZ NA NI NO NZ OM PG PH PL PT RO RU SC SD SE SG SK SL SM SY TJ TM TN TR TT TZ UA UG US UZ VC VN YU ZA ZM ZW

AL Designated countries for regional patents

Kind code of ref document: A1

Designated state(s): BW GH GM KE LS MW MZ NA SD SL SZ TZ UG ZM ZW AM AZ BY KG KZ MD RU TJ TM AT BE BG CH CY CZ DE DK EE ES FI FR GB GR HU IE IS IT LT LU MC NL PL PT RO SE SI SK TR BF BJ CF CG CI CM GA GN GQ GW ML MR NE SN TD TG

121 Ep: the EPO has been informed by WIPO that EP was designated in this application
122 Ep: PCT application non-entry in European phase