WO2005124735A1 - Image Display System, Image Display Method, and Image Display Program - Google Patents
Image Display System, Image Display Method, and Image Display Program
- Publication number
- WO2005124735A1 (PCT/JP2005/010463; JP2005010463W)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- image
- display
- face
- data
- image data
- Prior art date
Links
Classifications
-
- G—PHYSICS
- G09—EDUCATION; CRYPTOGRAPHY; DISPLAY; ADVERTISING; SEALS
- G09G—ARRANGEMENTS OR CIRCUITS FOR CONTROL OF INDICATING DEVICES USING STATIC MEANS TO PRESENT VARIABLE INFORMATION
- G09G5/00—Control arrangements or circuits for visual indicators common to cathode-ray tube indicators and other visual indicators
- G09G5/36—Control arrangements or circuits for visual indicators common to cathode-ray tube indicators and other visual indicators characterised by the display of a graphic pattern, e.g. using an all-points-addressable [APA] memory
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N7/00—Television systems
- H04N7/18—Closed-circuit television [CCTV] systems, i.e. systems in which the video signal is not broadcast
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
-
- G—PHYSICS
- G09—EDUCATION; CRYPTOGRAPHY; DISPLAY; ADVERTISING; SEALS
- G09G—ARRANGEMENTS OR CIRCUITS FOR CONTROL OF INDICATING DEVICES USING STATIC MEANS TO PRESENT VARIABLE INFORMATION
- G09G5/00—Control arrangements or circuits for visual indicators common to cathode-ray tube indicators and other visual indicators
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N5/00—Details of television systems
- H04N5/222—Studio circuitry; Studio devices; Studio equipment
- H04N5/262—Studio circuits, e.g. for mixing, switching-over, change of character of image, other special effects ; Cameras specially adapted for the electronic generation of special effects
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N7/00—Television systems
- H04N7/18—Closed-circuit television [CCTV] systems, i.e. systems in which the video signal is not broadcast
- H04N7/181—Closed-circuit television [CCTV] systems, i.e. systems in which the video signal is not broadcast for receiving images from a plurality of remote sources
-
- G—PHYSICS
- G09—EDUCATION; CRYPTOGRAPHY; DISPLAY; ADVERTISING; SEALS
- G09G—ARRANGEMENTS OR CIRCUITS FOR CONTROL OF INDICATING DEVICES USING STATIC MEANS TO PRESENT VARIABLE INFORMATION
- G09G2320/00—Control of display operating conditions
- G09G2320/02—Improving the quality of display appearance
- G09G2320/0261—Improving the quality of display appearance in the context of movement of objects on the screen or movement of the observer relative to the screen
-
- G—PHYSICS
- G09—EDUCATION; CRYPTOGRAPHY; DISPLAY; ADVERTISING; SEALS
- G09G—ARRANGEMENTS OR CIRCUITS FOR CONTROL OF INDICATING DEVICES USING STATIC MEANS TO PRESENT VARIABLE INFORMATION
- G09G2340/00—Aspects of display data processing
- G09G2340/04—Changes in size, position or resolution of an image
- G09G2340/0407—Resolution change, inclusive of the use of different resolutions for different screen areas
- G09G2340/0414—Vertical resolution change
-
- G—PHYSICS
- G09—EDUCATION; CRYPTOGRAPHY; DISPLAY; ADVERTISING; SEALS
- G09G—ARRANGEMENTS OR CIRCUITS FOR CONTROL OF INDICATING DEVICES USING STATIC MEANS TO PRESENT VARIABLE INFORMATION
- G09G2340/00—Aspects of display data processing
- G09G2340/04—Changes in size, position or resolution of an image
- G09G2340/0407—Resolution change, inclusive of the use of different resolutions for different screen areas
- G09G2340/0421—Horizontal resolution change
-
- G—PHYSICS
- G09—EDUCATION; CRYPTOGRAPHY; DISPLAY; ADVERTISING; SEALS
- G09G—ARRANGEMENTS OR CIRCUITS FOR CONTROL OF INDICATING DEVICES USING STATIC MEANS TO PRESENT VARIABLE INFORMATION
- G09G2340/00—Aspects of display data processing
- G09G2340/14—Solving problems related to the presentation of information to be displayed
- G09G2340/145—Solving problems related to the presentation of information to be displayed related to small screens
Definitions
- Image display system, image display method, and image display program
- the present invention relates to an image display system, an image display method, and an image display program, and particularly to an image display system, an image display method, and an image display program used for monitoring.
- a special surveillance system is required when a person must visually capture a range beyond his or her field of view (hereinafter referred to as a wide area).
- the technology used in this special system captures an image of the surveillance target using an imaging device capable of photographing a wide range (hereinafter referred to as a wide-range imaging device), and fits the captured image within the field of view.
- one type of wide-range photographing apparatus, such as a panoramic camera, can photograph a wide area with a single device.
- another type combines images photographed by a plurality of photographing apparatuses to form a wide-range image. The observer can visually recognize a range beyond the field of view by viewing the wide-range image displayed on the display device.
- when a wide-range image is displayed in a reduced size, it may be difficult to obtain the information necessary for monitoring from the reduced image (hereinafter referred to as a reduced display image). In such a case, there is a technology that obtains a desired amount of information by increasing the resolution of a specific area of the reduced display image (hereinafter referred to as a specific area) or by enlarging its display.
- a visual information providing device is disclosed in Japanese Patent Application Laid-Open No. 5-139209.
- This visual information providing device includes a first generation unit, a first display unit, a second generation unit, a second display unit, a presentation unit, a switching signal generation unit, and an image control device.
- the first generation means generates first visual information.
- the first display means displays the first visual information as a first image.
- the second generation means generates second visual information including right-eye information and left-eye information.
- the second display means displays the second visual information as a three-dimensional second image.
- the presenting means can present the first and second images individually or in an overlapping manner.
- the switching signal generating means outputs a selection switching signal for selecting an image to be presented by the presenting means.
- the image control device controls the presenting means based on the switching signal.
- an eye direction (line-of-sight) detection sensor is required to specify the line of sight of the observer.
- one type of gaze detection sensor is attached to, for example, the observer's head, and specifies the line of sight by detecting the inclination of the head as it moves.
- another type is worn by the observer like glasses and specifies the viewpoint in response to the movement of the eyeballs. Techniques using the latter include, for example, the following.
- Japanese Patent Application Laid-Open No. Hei 9-305156 discloses an image display method and apparatus.
- this video display method generates and displays a fine video at high speed.
- an image is generated with normal accuracy over the entire image to be displayed, the line of sight is continuously detected using eye direction detecting means, and the position of the center of the visual field on the image is determined at each moment.
- the method is characterized in that the image near the center of the visual field is generated using at least one of higher resolution, which increases the number of display pixels, and higher reproducibility, which increases the number of display colors and gradations.
- an imaging device is disclosed in Japanese Patent Application Laid-Open No. 2000-59665.
- this imaging device includes three imaging units: one central imaging unit and two imaging units arranged on its left and right sides.
- the optical axes of the left and right imaging units intersect the optical axis of the central imaging unit at the same angle, and the three optical axes intersect at the same point.
- the three imaging units are located so that the distance from the intersection of the optical axes to the front principal point of each imaging unit is the same, with each unit positioned closer to the subject than the intersection of the optical axes.
- Japanese Patent Application Laid-Open No. 8-116556 discloses an image processing method and apparatus.
- this image processing method includes a multi-viewpoint image input step of inputting images obtained from a plurality of viewpoint positions arranged on a plurality of different straight lines; a viewpoint detection step of detecting the position and eye direction of an observer watching the image; an image reconstruction step of reconstructing, from the multi-viewpoint image data, the image viewed from the viewpoint position detected in the viewpoint detection step; and a step of outputting the reconstructed image via an image output device.
- An object of the present invention is to provide an image display system and an image display method for appropriately presenting a necessary amount of information on a display device from a wide-range image that does not fit in the human field of view or from images captured from multiple viewpoints.
- another object of the present invention is to provide an image display system and an image display method that, without requiring the person to wear a special device for detecting eye movement or head tilt, can appropriately present within the person's field of view a wide-range image or multi-viewpoint images that cannot otherwise be included in that field of view, and can appropriately extract the necessary amount of information from the image.
- another object of the present invention is to provide an image display system and an image display method that accommodate the unconscious viewpoint movements a human makes to intermittently look at areas other than the part being consciously gazed at, while stably displaying the area the human consciously needs.
- an image display system of the present invention includes an image display device, an image generation device, a face image photographing device, and a face front point detection device.
- the image generation device generates display image data of a display image displayed on the image display device.
- the display image has a plurality of regions.
- the face image capturing apparatus captures a face image of a human looking at a display image.
- the face front point detection device generates face image data from a face image, and detects a face front point as a point located in front of a human face on a display image based on the face image data.
- the image generation device specifies a specific region corresponding to the face front point from the plurality of regions, and generates display image data by increasing the amount of information provided by the image corresponding to the specific region.
- the face front point detection device detects a new face front point.
- when the new face front point transitions from the specific area to another area, it is preferable that the image generation device specifies the new area and generates new display image data by increasing the amount of information provided by the image corresponding to the new area.
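As a sketch, identifying the specific region from a face front point reduces to a point-in-rectangle lookup; the panel coordinates below are illustrative, not taken from the patent:

```python
def specific_region(point, regions):
    """Return the index of the region (x0, y0, x1, y1) containing
    the face front point, or None if the point misses every region."""
    x, y = point
    for i, (x0, y0, x1, y1) in enumerate(regions):
        if x0 <= x < x1 and y0 <= y < y1:
            return i
    return None

# Three side-by-side panels on a hypothetical 1200x400 display:
panels = [(0, 0, 400, 400), (400, 0, 800, 400), (800, 0, 1200, 400)]
print(specific_region((450, 200), panels))
```

Detecting a transition to a new area is then a matter of comparing the index for the new face front point with the previous one.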
- it is preferable that the image generation device increases the amount of information provided by the image by relatively increasing the display size of the image when displayed on the image display device.
- it is also preferable that the image generation device increases the amount of information provided by the image by relatively increasing the resolution of the image when displayed on the image display device.
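A minimal sketch of "relatively increasing the resolution": upscale the attended region's pixel grid by nearest-neighbour duplication. The patent does not specify a resampling method, so this is only a stand-in:

```python
def upscale(region, factor):
    """Nearest-neighbour upscale of a 2-D pixel grid (list of rows)."""
    out = []
    for row in region:
        wide = [px for px in row for _ in range(factor)]
        out.extend([list(wide) for _ in range(factor)])
    return out

attended = [[1, 2],
            [3, 4]]
enlarged = upscale(attended, 2)  # each source pixel now covers a 2x2 block
```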
- the image generation device configures, as the display image, a wide-range image captured by a wide-range imaging device that captures an image in a range exceeding the range of human vision.
- the image display device displays display image data composed of a wide-range image so as to correspond to a plurality of regions and to be within the range of human vision.
- it is preferable that the image generation device composes the display image by synthesizing a plurality of images shot by a plurality of shooting devices serving as the wide-range shooting device, each of the plurality of regions corresponding to one of the plurality of images, and that the image display device displays the display image data corresponding to the plurality of regions so as to fall within the range of the human visual field.
- the image generation device preferably includes an information storage unit.
- the information storage unit stores display device data indicating information of the image display device and face image photographing device data indicating information of the face image photographing device.
- preferably, the face front point detection device performs three-dimensional image processing based on the face image to generate the face image data, and detects the face front point based on the face image data, the display device data, and the face image photographing device data.
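One plausible geometric reading of this detection, assuming the display screen is the plane z = 0 in a shared coordinate system and that the three-dimensional image processing has already yielded the face position and facing direction, is a ray-plane intersection (the coordinates below are illustrative):

```python
def face_front_point(face_pos, face_dir):
    """Intersect the face-normal ray with the display plane z = 0.
    face_pos: (x, y, z) of the face, z > 0 in front of the screen.
    face_dir: direction the face points; a negative z component means
    'toward the screen'. Returns the (x, y) point on the display,
    or None if the face is turned away."""
    px, py, pz = face_pos
    dx, dy, dz = face_dir
    if dz >= 0:
        return None
    t = -pz / dz                       # ray parameter where z reaches 0
    return (px + t * dx, py + t * dy)

# A face 50 cm from the screen, looking straight ahead:
print(face_front_point((10.0, 20.0, 50.0), (0.0, 0.0, -1.0)))
```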
- an image display method includes: (a) a step of generating display image data of a display image displayed on an image display device, the display image having a plurality of regions; (b) a step of capturing a face image of a human viewing the display image; (c) a step of generating face image data from the face image and detecting, based on the face image data, a face front point as the point on the display image located in front of the human face; (d) a step of identifying, from the plurality of regions, a specific region corresponding to the face front point and increasing the amount of information provided by the image corresponding to the specific region to generate the display image data; and (e) a step of displaying the display image data on the image display device.
- the step (a) includes a step (a1) of composing, into the display image, a wide-range image captured by a wide-range image capturing apparatus that captures a range exceeding the range of human vision.
- the step (e) includes a step (e1) of displaying the display image data composed of the wide-range image so as to correspond to the plurality of regions and to fall within the range of human vision.
- the step (a1) includes a step (a11) of synthesizing, into the display image, a plurality of images photographed by a plurality of photographing devices serving as the wide-range photographing device.
- Each of the plurality of regions corresponds to each of the plurality of images.
- the step (e1) includes a step of displaying the display image data corresponding to the plurality of regions so as to fall within the range of human vision.
- the step (c) includes: (c1) a step of reading display device data indicating information of the image display device and face image shooting device data indicating information of the face image shooting device that shoots the face image; and (c2) a step of performing three-dimensional image processing based on the face image to generate the face image data, and detecting the face front point based on the face image data, the display device data, and the face image capturing device data.
- a program causes a computer to execute: (h) a step of generating display image data of a display image displayed on an image display device; (i) a step of generating face image data from a face image of a human viewing the display image, and detecting, based on the face image data, the point on the display image located in front of the human face as the face front point; (j) a step of specifying, from a plurality of regions, a specific region corresponding to the face front point, increasing the amount of information provided by the image corresponding to the specific region, and generating the display image data; and (k) a step of outputting the display image data to the image display device.
- the amount of information provided by the image is increased by relatively increasing the resolution of the image when displayed on the image display device.
- the step (h) includes a step (h1) of composing, into the display image, a wide-range image captured by a wide-range imaging device that captures a range exceeding the range of human vision.
- the step (k) includes a step (k1) of outputting the display image data to the image display device so that the display image data composed of the wide-range image is displayed corresponding to the plurality of regions and within the range of the human visual field.
- the step (h1) preferably includes a step (h11) of synthesizing, into the display image, a plurality of images photographed by a plurality of photographing devices serving as the wide-range photographing device. Each of the plurality of regions corresponds to one of the plurality of images.
- preferably, the step (k1) includes a step (k11) of outputting the display image data to the image display device so that the display image data is displayed corresponding to the plurality of regions and within the range of human sight.
- the step (i) includes: (i1) a step of reading display device data indicating information of the image display device and face image shooting device data indicating information of the face image shooting device that shoots the face image; and (i2) a step of performing 3D image processing based on the face image to generate the face image data, and detecting the face front point based on the face image data, the display device data, and the face image photographing device data.
- according to the present invention, an image display system is realized that appropriately presents a necessary amount of information on a display device from a wide-range image that does not fit in the human field of view or from images captured from multiple viewpoints.
- furthermore, without attaching to the person a special device for detecting eye movement or head tilt, a wide-range image or multi-viewpoint images that cannot fit in the person's field of view can be appropriately presented within that field of view, and the necessary amount of information can be appropriately extracted from the image.
- FIG. 1 is a conceptual diagram showing a configuration of an embodiment of an image display system according to the present invention.
- FIG. 2 is a conceptual diagram showing an operation of the embodiment of the image display system of the present invention.
- FIG. 3 is a block diagram showing a detailed configuration of an embodiment of the image display system of the present invention.
- FIG. 4 is a flowchart showing an operation of the embodiment of the image display system of the present invention.
- FIG. 5 is a diagram showing a configuration in a case where the image display system is applied to vehicle operation.
- FIG. 6 is a diagram exemplifying external image data captured by a first image capturing device to a third image capturing device.
- FIG. 7 is a diagram showing an operation of generating display image data from first to third image data.
- FIG. 8 is a conceptual diagram showing a configuration when the present embodiment is applied to wide area monitoring.
- FIG. 9 is a conceptual diagram showing an operation when the present invention is applied to a wide-area monitoring system.
- FIG. 1 is a conceptual diagram showing the configuration of an embodiment of the image display system of the present invention.
- the image display system 1 includes a monitoring target image capturing device 2, a face image capturing device 3, an image display device 4, and a computer 5.
- the monitoring target image capturing device 2 and the computer 5 are electrically connected to each other.
- the computer 5 receives data output from the monitoring target image capturing device 2.
- the face image capturing device 3 and the computer 5 are electrically connected to each other.
- the computer 5 receives data output from the face image capturing device 3.
- the image display device 4 is electrically connected to the computer 5; it receives the display image data transmitted from the computer 5 and displays an image.
- the connection of the monitoring target image capturing device 2, the face image capturing device 3, the image display device 4, and the computer 5 may be performed via a predetermined network.
- the observer 10 observes the image displayed on the image display device 4.
- the face front point P shown in FIG. 1 is the point on the screen of the image display device 4 that is located in front of the face of the observer 10.
- the observer 10 can use the image display system 1 without having detailed knowledge of the technology of the image display system 1.
- the monitoring target image capturing apparatus 2 captures a range that exceeds the range of human visibility (hereinafter referred to as a wide range).
- the monitoring target image capturing device 2 includes a plurality of capturing devices. As a result, it becomes possible to execute multi-view monitoring in which a specific point is observed from a plurality of points.
- the image of the observation target captured by the monitoring target image capturing device 2 is converted into observation image data and transmitted to the computer 5.
- the monitoring target image capturing device 2 of the present invention is not limited to this configuration. For example, when observing a wide area from a specific point, it may be constituted by a single photographing device such as a panoramic camera capable of photographing a wide area.
- the face image capturing device 3 captures a face image of the observer 10. Examples are optical cameras and CCD cameras.
- the face image photographing apparatus 3 converts the photographed face image into face image data and transmits the face image data to the computer 5. It is desirable that the face image photographing device 3 be installed in a place where its position relative to the face of the observer 10 can be specified. Alternatively, the installation position of the face image capturing device 3 may be specified by expressing its coordinates with a predetermined position on the image display device 4 as the origin.
- the computer 5 executes data processing based on the image data (observation image data / face image data) transmitted from the monitoring target image capturing device 2 and the face image capturing device 3. It is exemplified by workstations and personal computers.
- the computer 5 executes a predetermined program to perform image processing in response to the received image data (observation image data / face image data).
- the computer 5 outputs the display image data generated by the image processing to the image display device 4.
- the image display device 4 displays display image data transmitted from the computer 5. Examples are CRTs and liquid crystal displays.
- FIG. 2 is a conceptual diagram showing the operation of the embodiment of the image display system of the present invention.
- the monitoring target image capturing device 2 captures the observation target using the first to third three capturing devices.
- each imaging device has a horizontal viewing angle of approximately 90 degrees.
- observation at a horizontal angle of 270 degrees is possible with the three imaging devices described above.
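The 270-degree figure follows from simple arithmetic; the `overlap_deg` parameter below is an assumption covering the general case of overlapping fields, not a value from the patent:

```python
def coverage(per_camera_deg, n_cameras, overlap_deg=0):
    """Total horizontal coverage of n adjacent cameras whose
    neighbouring fields of view overlap by overlap_deg degrees."""
    return per_camera_deg * n_cameras - overlap_deg * (n_cameras - 1)

print(coverage(90, 3))  # three 90-degree cameras side by side
```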
- the monitoring target image capturing device 2 captures the observation target using the three capturing devices, obtaining a plurality of image data. The first image data G1, the second image data G2, and the third image data G3 indicate the images captured by the first to third imaging devices, respectively. The monitoring target image capturing device 2 transmits the plurality of pieces of captured image data to the computer 5.
- the computer 5 generates one piece of reference image data G0 based on the plurality of pieces of transmitted image data. Further, the face image capturing device 3 captures a face image of the observer 10 and transmits the face image data to the computer 5. The computer 5 performs a predetermined calculation in response to the received face image data, and calculates the face front point P.
- the computer 5 generates display image data based on the reference image data G0 and the face front point P, and transmits the display image data to the image display device 4.
- the computer 5 determines from the reference image data G0 and the face front point P that the observer 10 is paying attention to the first image data G1.
- the computer 5 generates display image data that can display the first image data G1 with high resolution and a large display area.
- the second image data G2 and the third image data G3 have a low resolution and a narrow display area.
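This uneven split of the screen might be sketched as a width allocation in which the attended panel takes a fixed larger share; `attended_share` is an assumed tuning parameter, not a value from the patent:

```python
def allocate_widths(n_panels, attended, screen_w, attended_share=0.6):
    """Give the attended panel a larger share of the screen width
    and split the remainder evenly among the other panels."""
    small = int(screen_w * (1 - attended_share) / (n_panels - 1))
    widths = [small] * n_panels
    widths[attended] = screen_w - small * (n_panels - 1)
    return widths

print(allocate_widths(3, 0, 1000))  # G1 attended
```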
- the computer 5 transmits the generated display image data to the image display device 4.
- the image display device 4 receives the display image data, and based on it displays on the display screen the display image 40 in which the first image data G1 is enlarged (displayed at high resolution).
- the display image 40 allows the observer 10 to observe an observation object existing in a range exceeding his or her own field of view.
- FIG. 3 is a block diagram showing a detailed configuration of the embodiment of the image display system of the present invention.
- the image display system 1 of the present embodiment includes the monitoring target image capturing device 2, the face image capturing device 3, the image display device 4, and the computer 5.
- the monitoring target image capturing device 2 includes a plurality of capturing devices (2-1 to 2-n).
- the monitoring target image capturing device 2 can perform monitoring over a wide range and from multiple viewpoints by installing the plurality of capturing devices (2-1 to 2-n) at arbitrary positions.
- the computer 5 includes a CPU 6, a memory 7, and a storage device 8.
- the computer 5 may be connected to a network (not shown).
- the CPU 6, the memory 7, and the storage device 8 are connected to each other via a bus 9.
- the CPU 6 is an arithmetic processing device provided in the computer 5.
- the CPU 6 executes commands for controlling devices built into or external to the computer 5, performs calculation processing on input data, and outputs the results.
- the memory 7 is an information storage device provided in the computer 5.
- the memory 7 is exemplified by a semiconductor storage device such as a RAM.
- the predetermined data is stored in response to a command from the CPU 6.
- the storage device 8 is a large-capacity storage device provided in the computer 5.
- the storage device 8 is exemplified by an HDD and has a nonvolatile storage area. It stores electronic data (11 to 14) and computer programs (21 to 24) used in this embodiment.
- the electronic data stored in the storage device 8 includes display device data 11, face image photographing device data 12, face posture initial data 13, and layout data 14.
- the display device data 11, the face image capturing device data 12, the face posture initial data 13, and the layout data 14 are used to execute the operation of the image display system 1.
- the display device data 11 is data indicating information on the image display device 4.
- the data is exemplified by information necessary for displaying an image on the display screen, such as the display screen size and displayable resolution of the image display device 4.
- the face image capturing device data 12 is data indicating an installation position of the face image capturing device 3.
- the data is used to calculate face posture data (data for identifying the line of sight of the observer 10) from the image data captured by the face image capturing device 3, and is exemplified by information about the installation position of the face image capturing device 3.
- the face posture initial data 13 is initial data used for calculating face posture data.
- the face image data in which the observer 10 is gazing at the left corner of the display screen of the image display device 4 is exemplified as the face posture initial data 13.
- the relative distance between the observer 10 and a specific point of the image display device 4 may be stored as the face posture initial data 13.
- the layout data 14 is data relating to a display layout when displaying the monitoring target image on the image display device 4.
- the monitoring target image (wide-range image or multi-viewpoint image) is generated by executing an image processing by the computer 5 on the image captured by the monitoring target image capturing device 2.
- the computer 5 specifies an area corresponding to the face front point P.
- the computer 5 generates the display data so that the specified area is displayed at high resolution. The display form on the image display device 4 can therefore be changed arbitrarily by changing the settings in the layout data 14.
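A hypothetical shape for the layout data 14 is sketched below; all field names and values are illustrative, since the patent does not define the data format:

```python
# Hypothetical layout record; nothing here is mandated by the patent.
layout_data = {
    "regions": 3,
    "attended": {"scale": 2.0, "resolution": "full"},
    "unattended": {"scale": 0.5, "resolution": "quarter"},
}

def region_settings(layout, is_attended):
    """Pick the display settings for one region of the display image."""
    return layout["attended"] if is_attended else layout["unattended"]

print(region_settings(layout_data, True)["scale"])
```

Changing the record's values then changes the display form without touching the generation code.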
- the computer programs stored in the storage device 8 include a reference image data generation program 21, a face posture data generation program 22, a face front point detection program 23, and a display image data generation program 24.
- Each of the reference image data generation program 21, the face posture data generation program 22, the face front point detection program 23, and the display image data generation program 24 is used to execute the operation of the image display system 1.
- the reference image data generation program 21 generates, based on the monitoring image data transmitted from the monitoring target image capturing apparatus 2 (e.g., G1 to G3 in FIG. 2), reference image data (e.g., G0 in FIG. 2) serving as the basis for generating the display image data. For example, when the images captured by the plurality of imaging devices (2-1 to 2-n) overlap, the reference image data generation program 21 generates reference image data corresponding to a seamless image without the overlap.
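Overlap removal can be sketched crudely as cropping the duplicated columns before concatenating the camera images left to right; a real system would blend the seam, so this is illustrative only:

```python
def stitch(images, overlap):
    """Concatenate same-height images left to right, dropping
    `overlap` columns from the left edge of each image after the
    first. Each image is a list of pixel rows."""
    rows = [list(r) for r in images[0]]
    for img in images[1:]:
        for y, row in enumerate(img):
            rows[y].extend(row[overlap:])
    return rows

g1 = [[1, 2, 3]]
g2 = [[3, 4, 5]]            # first column duplicates g1's last column
stitched = stitch([g1, g2], overlap=1)
print(stitched)
```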
- the face posture data generation program 22 generates the current face posture data of the observer 10 (data for specifying the viewing direction of the observer 10) based on the face image data transmitted from the face image photographing device 3, the face posture initial data 13, and the display device data 11.
- the face front point detection program 23 calculates the face front point P on the display screen of the image display device 4 based on the face posture data generated by the face posture data generation program 22, the display device data 11, and the face image photographing device data 12.
- the display image data generation program 24 generates a display based on the reference image data generated by the reference image data generation program 21, the face front point P calculated by the face front point detection program 23, and the layout data 14. Generate image data.
- the display image data generation program 24 may generate the display image data without using the layout data 14. Even in that case, the operation and effect of the present invention are not affected.
- FIG. 4 is a flowchart showing the operation of the embodiment of the image display system of the present invention. In the following operation, it is assumed that required data is stored in the storage device 8 in advance.
- the image display system 1 is activated, and the image display device 4 displays the monitoring target image.
- the displayed image is based on reference image data generated from images continuously photographed at predetermined time intervals by the monitoring target image photographing device 2 and subjected to predetermined image processing by the computer 5.
- the reference image data is continuously updated.
- the observer 10 looks at the displayed image with the naked eye.
- the face image capturing device 3 captures a face image of the observer 10.
- this imaging is performed at intervals short enough to detect the movement of the face front point of the observer 10.
- the face image capturing device 3 converts the captured face image into face image data in a format that can be processed, and transmits the converted data to the computer 5 via the network.
- in step S102, the computer 5 activates the face posture data generation program 22 in response to the face image data transmitted from the face image photographing device 3.
- the CPU 6 of the computer 5 reads the initial face posture data 13 and the display device data 11 in response to the activation of the face posture data generation program 22.
- the current face posture data of the observer 10 is calculated based on the initial face posture data 13, the face image data, and the display device data 11.
- An example of a technique for performing this processing is the face posture estimation technique described in "High-speed, high-accuracy face pose estimation using a 3D appearance model" (Transactions of the Institute of Electronics, Information and Communication Engineers 2004, D-12-99), which can be used here.
- with this technique, the face posture of the observer 10 with respect to the face image photographing device 3 can be calculated. Therefore, if the positional relationship between the face image capturing device 3 and the image display device 4 is obtained in advance, the face posture data of the observer 10 with respect to the image display device 4 can also be calculated.
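The calculation above, in which face posture data relative to the image display device 4 yields a point on the screen, amounts to a ray-plane intersection. The following is a minimal sketch, not the patent's implementation: it assumes the face posture has already been converted into the display's coordinate frame as a position and a facing direction, with the screen modeled as the plane z = 0 and pixel coordinates measured from one corner (function name and default screen size are illustrative).

```python
def face_front_point(face_pos, face_dir, screen_w=360, screen_h=240):
    """Intersect the line running from the face position along the
    facing direction with the display plane z = 0.

    Returns (x, y) in screen pixels, or None when the observer faces
    away from the screen or the intersection falls outside it.
    """
    px, py, pz = face_pos
    dx, dy, dz = face_dir
    if dz >= 0:                      # not facing toward the screen plane
        return None
    t = -pz / dz                     # ray parameter at which z becomes 0
    x, y = px + t * dx, py + t * dy
    if 0 <= x < screen_w and 0 <= y < screen_h:
        return (x, y)
    return None
```

A face directly in front of the screen center looking straight ahead maps to the center pixel; a strongly turned posture leaves the screen and yields None.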
- the computer 5 proceeds to the process of step S103 in response to the completion of the calculation of the face posture data.
- in step S103, the computer 5 activates the face front point detection program 23.
- the CPU 6 of the computer 5 further reads the face image capturing device data 12 in response to the activation of the face front point detection program 23. Then, the face front point P is specified based on the face posture data, the display device data 11, and the face image photographing device data 12. The computer 5 proceeds to the process of step S104 in response to the completion of the identification of the face front point P.
- in step S104, the computer 5 activates the display image data generation program 24.
- the CPU 6 of the computer 5, in response to the activation of the display image data generation program 24, generates display image data based on the reference image data (captured by the monitoring target image capturing device 2 and image-processed by the computer 5), the face front point P, and the layout data 14. The display image data is generated so that the display area corresponding to the face front point P (the high-resolution display area) is displayed at a high resolution.
- the computer 5 proceeds to the process of step S105 in response to the completion of the generation of the display image data.
- in step S105, it is determined whether or not the face front point P has moved during the execution of step S104. For example, the process of step S103 is re-executed, and if the previously identified face front point is the same as the currently identified one, it is determined that the face front point has not moved. When no movement of the face front point is detected (step S105: NO), the computer 5 transmits the display image data generated in step S104 to the image display device 4 via the network. If movement of the face front point is detected after the display image data is generated (step S105: YES), the process proceeds to step S106.
- in step S106, the computer 5 reads the data on the face front point P detected after the movement (calculated in step S105; hereinafter referred to as face front point data).
- the face front point data is, for example, coordinates on the screen of the image display device 4.
- in step S107, the computer 5 reads the display image data generated in step S104 in response to the completion of the reading of the face front point data. When both the face front point data and the display image data have been read, the process proceeds to step S108.
- in step S108, the computer 5 determines whether or not the face front point P falls within the high-resolution display area of the read display image data. If, in the current display image data, the area displayed at high resolution (the high-resolution display area) does not correspond to the face front point P (step S108: YES), the process returns to step S101 and new display image data is generated. If the face front point P has moved but has not left the high-resolution display area (step S108: NO), the computer 5 transmits the display image data generated in step S104 to the image display device 4 via the network, and the process proceeds to step S109.
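The decisions of steps S105 to S108 reduce to a single predicate: display image data must be regenerated only when the face front point has moved out of the current high-resolution display area. A sketch of that predicate (the function name and the rectangle convention are illustrative, not taken from the patent):

```python
def needs_regeneration(prev_point, new_point, high_res_area):
    """Mirror steps S105-S108: return True when new display image
    data must be generated, i.e. the face front point moved and now
    lies outside the high-resolution display area.

    Points are (x, y) screen coordinates, or None when undetected;
    high_res_area is (x0, y0, x1, y1), half-open on the right/bottom.
    """
    if new_point is None or new_point == prev_point:
        return False                 # S105: NO - no movement detected
    x, y = new_point
    x0, y0, x1, y1 = high_res_area
    inside = x0 <= x < x1 and y0 <= y < y1
    return not inside                # S108: YES only if P left the area
```

A movement inside the high-resolution area keeps the current display image data; only leaving the area triggers regeneration.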
- in step S109, the image display device 4 receives the display image data transmitted from the computer 5 and displays an image corresponding to the display image data on the display screen.
- in this way, the image display system 1 automatically follows the movement of the face front point of the observer 10 and can display the region corresponding to the point of interest of the observer 10 at a high resolution.
- note that steps S105 to S108 may be omitted. In that case, image display and updating become faster.
- FIG. 5 is a diagram showing a configuration when the image display system 1 is applied to the operation of a vehicle.
- the vehicle 30 is a moving body including the image display system 1.
- FIG. 5 shows the vehicle 30 entering the intersection P.
- a motorcycle B1 runs near the left front of the vehicle 30, and a car C1 is present on the road ahead to the right with respect to the traveling direction of the vehicle 30.
- the vehicle 30 includes a monitoring target image capturing device 2 (2-1 to 2-3), a face image capturing device 3, an image display device 4, and a computer 5. The observer 10 (the driver of the vehicle 30) operates the vehicle while watching the image displayed on the image display device 4.
- the first image capturing device 2-1 captures an external image in a range corresponding to the field of view R1.
- the second image capturing device 2-2 captures an external image in a range corresponding to the field of view R2.
- the third image capturing device 2-3 captures an external image in a range corresponding to the field of view R3.
- the face image capturing device 3 is installed inside the vehicle 30 and constantly captures a face image of the observer 10.
- the first image capturing device 2-1 to the third image capturing device 2-3 convert captured external images into electronic data. Then, the data is converted into a format that can be processed by the computer 5 and transmitted to the computer 5 via a network (not shown) inside the vehicle 30. Similarly, the face image photographing device 3 converts the photographed face image into electronic data. Then, the format is converted into a format that can be processed by the computer 5 and transmitted to the computer 5 via the network.
- the computer 5 executes image processing based on the received external image data and face image data to generate display image data. The generated display image data is transmitted to the image display device 4 via the above-described network.
- FIG. 6 is a diagram exemplifying external image data captured by the first image capturing device 2-1 to the third image capturing device 2-3.
- the first image data G1 indicates image data captured by the first image capturing device 2-1.
- the second image data G2 indicates image data captured by the second image capturing device 2-2, and the third image data G3 indicates image data captured by the third image capturing device 2-3.
- FIG. 7 is a diagram showing an operation of generating display image data 40 (40a, 40b) from the above-described first image data G1 to third image data G3.
- the reference external image data G0 is reference image data generated based on the first to third image data G1 to G3.
- the reference image data G0 shown in FIG. 7 is generated by combining the image data (G1 to G3), with the relative position of each image data (first image data G1 to third image data G3) specified. Note that, in order to reduce the information amount of the reference image data G0, reference image data in which the horizontal size of each image (G1 to G3) is reduced at a predetermined ratio may be generated.
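The combination just described (placing G1 to G3 side by side, optionally shrinking each image horizontally to limit the data amount) can be sketched as follows. This is an illustrative sketch, not the patent's implementation: images are represented as plain row-major lists of pixel values, and the name and shrink convention are assumptions.

```python
def make_reference_image(images, shrink=1):
    """Concatenate same-height camera images side by side into one
    reference image, keeping only every `shrink`-th column of each
    source image to reduce the information amount (shrink=1 keeps
    the full horizontal size).
    """
    rows = len(images[0])
    reference = []
    for r in range(rows):
        row = []
        for img in images:
            row.extend(img[r][::shrink])   # horizontal reduction per image
        reference.append(row)
    return reference
```

With shrink=2 each source image contributes every other column, halving the horizontal size of the reference image at the cost of detail.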
- the generated reference image data G0 is displayed on the image display device 4 when the face front point P is not detected.
- face front point data 50 is generated.
- the face front point data 50 is shown indicating the face front point P superimposed on the reference image data G0 to facilitate understanding of the present invention; the actual face front point data may be numerical data indicating coordinates on the screen.
- the computer 5 calculates the area corresponding to the point indicated by the coordinates. Then, the computer 5 generates display image data based on the calculated area and the face front point data 50 so that the image G1a corresponding to the above-described first image data G1 is displayed at a high resolution.
- the generated display image data is output to the image display device 4 via the network.
- the image display device 4 displays the display image 40a on the screen based on the display image data supplied from the computer 5.
- a case is considered in which it is determined that the position of the face front point P no longer lies within the area of the display image G1a because the observer 10 has moved his or her line of sight.
- the computer 5 monitors movement of the face front point of the observer 10 based on the face image data transmitted from the face image photographing device 3. As shown in FIG. 7, when the observer 10 moves his or her gaze, the coordinates in the face front point data 50 indicate a face front point P (5, 25) (hereinafter referred to as the "new face front point P"), and the computer 5 determines that the new face front point P does not exist in the area of the display image G1a.
- in response to this determination, the computer 5, based on the coordinates of the new face front point P (5, 25) and the display image data output to the image display device 4, generates new display image data so that the image G2a corresponding to the above-described second image data G2 is displayed at a high resolution. The generated new display image data is output to the image display device 4 via the network. The image display device 4 displays a new display image 40b on the screen based on the new display image data supplied from the computer 5.
- in this way, the observer 10 specifies the image of the attention portion merely by moving the face slightly, and can see the specified area at a high resolution.
- by using the present invention for vehicle operation, blind spots in the images ahead of and to the left and right of the observer 10 (the driver of the vehicle) can be reduced.
- for example, when the observer 10 turns the vehicle 30 to the left, the observer 10 must confirm that the motorcycle B1 running on the left side will not be caught in the turn. Normally, the observer 10 has to turn his or her face to see the motorcycle B1 on the left. Performing this left visual check while driving diverts attention from the road ahead, and accidents are often caused by forgetting the check. Conversely, if attention is fixed on the motorcycle B1 on the left, the car C1 coming from the right may be overlooked.
- with the present invention, the image display device 4 displays a wide-field image, covering the motorcycle B1 on the left to the car C1 on the right, in a range that can be seen as a whole even while the observer faces forward. Therefore, the observer 10 can quickly and easily judge the situation around the intersection.
- here, the second image data G2 needs to be displayed at a high resolution in order to accurately judge the situation and check the relative distance to the motorcycle B1 and its size. If the observer 10, looking at the display image G1a, notices the presence of the motorcycle B1, he or she only needs to move the face by about 20 to 30 degrees (or less) to see the same image as when performing the left visual check while traveling.
- further, since the car C1 on the right remains displayed (at a low resolution), its presence can be noticed even while the motorcycle B1 is being checked, so it is less likely to be overlooked.
- although the case where the image display system of the present invention is applied to a car traveling on a road has been described above, the present invention is not limited to vehicles that travel on roads. It is also possible to provide the same configuration as the image display system in a ship navigating on water, or the like.
- FIG. 8 is a conceptual diagram showing a configuration when the present embodiment is applied to wide area monitoring.
- the wide-area monitoring system includes a plurality of photographing devices (2-1 to 2-6) for photographing different points, a face image photographing device 3, an image display device 4, and a computer 5.
- each device denoted by the same reference numeral as the above-described device has the same configuration as that described above, and detailed description thereof will be omitted.
- the plurality of photographing devices (2-1 to 2-6) are composed of, for example, six surveillance cameras that photograph six different places.
- Each photographing device is connected to a computer 5 via a network (not shown).
- the plurality of photographing devices (2-1 to 2-6) generate image data of predetermined pixel sizes capturing the state of each point, and output the image data to the computer 5 via the network.
- the face image photographing device 3 and the image display device 4 are also connected to the computer 5 via the network.
- the observer 10 is a person who observes an image displayed on the image display device 4.
- the face image photographing device 3 photographs the face of the observer 10.
- the face image capturing device 3 is connected to the computer 5 and the image display device 4 via the above-described network.
- the face image of the observer 10 captured by the face image capturing device 3 is converted into face image data and input to the computer 5.
- the computer 5 detects the face front point P of the observer 10 based on the supplied face image data and generates face front point data.
- FIG. 9 is a conceptual diagram showing an operation when the present invention is applied to a wide area monitoring system.
- the photographing devices (2-1 to 2-6) installed at each photographing point convert the photographed images into electronic data and transmit them to the computer 5.
- the computer 5 receives the transmitted image data via the network, and activates the reference image data generation program 21 in response to the reception.
- the computer 5 generates the reference image data by executing the started reference image data generation program 21.
- the reference image data is generated by reducing the resolution of the images transmitted from the first image capturing device 2-1 to the sixth image capturing device 2-6 and integrating them into one image that can be displayed on the image display device 4.
- the computer 5 generates face posture data based on the face image data transmitted from the face image photographing device 3.
- the face posture data is generated by executing the face posture data generation program 22.
- the computer 5 uses the face posture data detected by the face posture data generation program 22 and the position information of the screen contained in the display device data 11 to determine the position of the face front point P (the point on the screen located in front of the face), and outputs it as the face front point P.
- display image data is generated by increasing the resolution of the surveillance camera image displayed in the area including the face front point P and decreasing the resolution of the other surveillance camera images.
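This per-camera resolution assignment can be sketched in one step: the camera whose display area contains the face front point P gets the high-resolution output size, and all others the low one. A sketch only; the function name is illustrative, and the default sizes anticipate the 240 × 160 / 120 × 80 example described below.

```python
def assign_resolutions(n_images, focus_index, high=(240, 160), low=(120, 80)):
    """Give the surveillance camera image shown in the area containing
    the face front point P the larger output size; all other camera
    images are reduced to the smaller size."""
    return [high if i == focus_index else low for i in range(n_images)]
```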
- the first image capturing device 2-1 to the sixth image capturing device 2-6 constituting the wide area monitoring system capture image data G1 to image data G6 at different points.
- the captured image data (G1 to G6) is transmitted to the computer 5 via the network.
- the pixel size of each transmitted image data is 360 × 240 pixels.
- the displayable pixel size of the image display device 4 is also 360 × 240 pixels.
- the computer 5 executes the reference image data generation program 21 to reduce and combine the images from the first image capturing device 2-1 to the sixth image capturing device 2-6. The computer 5 allocates 240 × 160 pixels to the area 41 and 120 × 80 pixels to each of the areas 42 to 46, as in the display image 40c, and displays the resulting image on the image display device 4.
- while the system is operating, the computer 5 continues to photograph the face image of the observer 10 using the face image photographing device 3.
- the computer 5 executes the face posture data generation program 22 and detects the three-dimensional position and orientation of the face of the observer 10 based on the face image data transmitted from the face image photographing device 3. Further, the computer 5 uses the face posture data thus calculated, the display device data 11, and the face image photographing device data 12 to determine the face front point P (the point at which a straight line extending from the position of the face in the facing direction intersects the screen of the image display device 4).
- the computer 5 determines which of the areas 41 to 46 includes the face front point P, sets the area including the face front point P to an area of 240 × 160 pixels, and generates new display image data. Except when the system is started, the area containing the face front point P is determined based on the display image data at the time of the previous display. For example, when the face front point P is within the area 46, the computer 5 generates the display image data with the area 46 set to a 240 × 160 pixel area at the lower left, as shown in the display image 40d; in this display image data, the areas 41 to 45 are areas of 120 × 80 pixels. That is, in the monitoring target image displayed on the image display device 4, the image data G1 to G6 are displayed at the resolutions of the areas 41 to 46, respectively.
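The area determination in this step is a point-in-rectangle lookup over the current layout. The sketch below assumes one possible arrangement of areas 41 to 46 for display image 40c; in the system the actual rectangles would come from the layout data 14, and the coordinates here are illustrative, chosen only to tile a 360 × 240 screen.

```python
# Assumed layout of display image 40c on a 360 x 240 screen:
# area 41 enlarged (240 x 160), areas 42-46 at 120 x 80 around it.
AREAS_40C = {
    41: (0, 0, 240, 160),
    42: (240, 0, 360, 80),
    43: (240, 80, 360, 160),
    44: (0, 160, 120, 240),
    45: (120, 160, 240, 240),
    46: (240, 160, 360, 240),
}

def area_containing(point, areas):
    """Return the number of the area whose rectangle (x0, y0, x1, y1)
    contains the face front point P, or None if P lies off-screen."""
    x, y = point
    for number, (x0, y0, x1, y1) in areas.items():
        if x0 <= x < x1 and y0 <= y < y1:
            return number
    return None
```

When the lookup returns an area other than the currently enlarged one, the layout is rebuilt with that area at 240 × 160, as in the change from display image 40c to 40d.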
- although the amount of resolution enlargement is fixed at a factor of two in this example, it can be changed arbitrarily by changing the setting of the layout data 14.
- it is also possible to change the amount of resolution change as necessary, for example by increasing it when the face is close to the display (the face appearing large in the face image) and reducing it when the face is far away.
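Such distance-dependent scaling might be sketched as a linear interpolation between two magnifications. The thresholds and scale values below are illustrative assumptions, not taken from the patent:

```python
def magnification_for_distance(distance_mm, near=400.0, far=1000.0,
                               max_scale=3.0, min_scale=1.5):
    """Vary the resolution enlargement with the observer's distance
    from the display (estimated, e.g., from the apparent face size):
    full magnification when close, falling linearly to the minimum
    when far away.
    """
    if distance_mm <= near:
        return max_scale
    if distance_mm >= far:
        return min_scale
    frac = (distance_mm - near) / (far - near)
    return max_scale - frac * (max_scale - min_scale)
```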
- in this way, the observer can constantly monitor a plurality of points at once, and an image showing a point suspected of an abnormality can be enlarged and displayed so as to follow the observer's line of sight.
- as described above, according to the present invention, the observer 10 can display and confirm the target area at a high resolution merely by moving the face slightly, without performing a switch operation or the like. Further, according to the present invention, the observer 10 can perform wide-area monitoring without wearing a special device.
Priority Applications (3)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US11/570,863 US7839423B2 (en) | 2004-06-18 | 2005-06-08 | Image display system with gaze directed zooming |
EP05748915A EP1768097A1 (en) | 2004-06-18 | 2005-06-08 | Image display system, image display method, and image display program |
JP2006514702A JP4952995B2 (ja) | 2004-06-18 | 2005-06-08 | 画像表示システム、画像表示方法および画像表示プログラム |
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
JP2004180485 | 2004-06-18 | ||
JP2004-180485 | 2004-06-18 |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2005124735A1 true WO2005124735A1 (ja) | 2005-12-29 |
Family
ID=35509944
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/JP2005/010463 WO2005124735A1 (ja) | 2004-06-18 | 2005-06-08 | 画像表示システム、画像表示方法および画像表示プログラム |
Country Status (6)
Country | Link |
---|---|
US (1) | US7839423B2 (ja) |
EP (1) | EP1768097A1 (ja) |
JP (1) | JP4952995B2 (ja) |
KR (1) | KR100911066B1 (ja) |
CN (1) | CN1969313A (ja) |
WO (1) | WO2005124735A1 (ja) |
Cited By (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP2007215063A (ja) * | 2006-02-13 | 2007-08-23 | Alpine Electronics Inc | 運転支援画像表示制御装置 |
JP2013081039A (ja) * | 2011-10-03 | 2013-05-02 | Hitachi Kokusai Electric Inc | 映像表示装置 |
WO2016075774A1 (ja) * | 2014-11-12 | 2016-05-19 | 三菱電機株式会社 | 表示制御装置および情報表示装置 |
WO2016129241A1 (ja) * | 2015-02-12 | 2016-08-18 | 株式会社デンソー | 表示制御装置及び表示システム |
Families Citing this family (14)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US9513699B2 (en) | 2007-10-24 | 2016-12-06 | Invention Science Fund I, LL | Method of selecting a second content based on a user's reaction to a first content |
US9582805B2 (en) | 2007-10-24 | 2017-02-28 | Invention Science Fund I, Llc | Returning a personalized advertisement |
US20090113297A1 (en) * | 2007-10-24 | 2009-04-30 | Searete Llc, A Limited Liability Corporation Of The State Of Delaware | Requesting a second content based on a user's reaction to a first content |
JP4743234B2 (ja) * | 2008-07-02 | 2011-08-10 | ソニー株式会社 | 表示装置及び表示方法 |
DE102009020328A1 (de) | 2009-05-07 | 2010-11-11 | Bayerische Motoren Werke Aktiengesellschaft | Verfahren zur Darstellung von unterschiedlich gut sichtbaren Objekten aus der Umgebung eines Fahrzeugs auf der Anzeige einer Anzeigevorrichtung |
FR2956364B1 (fr) * | 2010-02-18 | 2016-01-29 | Peugeot Citroen Automobiles Sa | Dispositif d'aide aux manoeuvres d'un vehicule par affichage de points de vue fonction de la position de la tete du conducteur |
US8463075B2 (en) * | 2010-08-11 | 2013-06-11 | International Business Machines Corporation | Dynamically resizing text area on a display device |
US10884577B2 (en) * | 2013-01-15 | 2021-01-05 | Poow Innovation Ltd. | Identification of dynamic icons based on eye movement |
US20150355463A1 (en) * | 2013-01-24 | 2015-12-10 | Sony Corporation | Image display apparatus, image display method, and image display system |
US11747895B2 (en) * | 2013-03-15 | 2023-09-05 | Intuitive Surgical Operations, Inc. | Robotic system providing user selectable actions associated with gaze tracking |
CN104228684B (zh) * | 2014-09-30 | 2017-02-15 | 吉林大学 | 一种消除汽车a柱盲区的方法 |
TWI547177B (zh) * | 2015-08-11 | 2016-08-21 | 晶睿通訊股份有限公司 | 視角切換方法及其攝影機 |
JP6449501B1 (ja) * | 2018-03-28 | 2019-01-09 | Eizo株式会社 | 表示システム及びプログラム |
CN113055587A (zh) * | 2019-12-27 | 2021-06-29 | 财团法人工业技术研究院 | 全景视频处理方法、全景视频处理装置与全景视频系统 |
Citations (9)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JPS63118988A (ja) * | 1986-09-12 | 1988-05-23 | ウエスチングハウス・エレクトリック・コーポレーション | 可変的に換算される表示を発生する方法及び装置 |
JPH01252993A (ja) * | 1988-04-01 | 1989-10-09 | Nippon Telegr & Teleph Corp <Ntt> | 画像表示方法 |
JPH03173000A (ja) * | 1989-11-30 | 1991-07-26 | Toshiba Corp | 画像表示方法及び画像表示装置 |
JPH08322796A (ja) * | 1995-05-29 | 1996-12-10 | Sharp Corp | 視線方向検出方法及び装置及びそれを含むマンマシンインターフェース装置 |
JPH09251342A (ja) * | 1996-03-15 | 1997-09-22 | Toshiba Corp | 注視箇所推定装置とその方法及びそれを使用した情報表示装置とその方法 |
JPH09304814A (ja) * | 1996-05-17 | 1997-11-28 | Kajima Corp | 作業機械の操作支援画像システム |
JPH1078845A (ja) * | 1996-06-25 | 1998-03-24 | Sun Microsyst Inc | 視標追跡主導型テキスト拡大の方法および装置 |
JP2001055100A (ja) * | 1999-08-18 | 2001-02-27 | Matsushita Electric Ind Co Ltd | 多機能車載カメラシステムと多機能車載カメラの画像表示方法 |
JP2005136561A (ja) * | 2003-10-29 | 2005-05-26 | Denso Corp | 車両周辺画像表示装置 |
Family Cites Families (14)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP3031013B2 (ja) | 1991-11-15 | 2000-04-10 | 日産自動車株式会社 | 視覚情報提供装置 |
JPH08116556A (ja) | 1994-10-14 | 1996-05-07 | Canon Inc | 画像処理方法および装置 |
US5850352A (en) * | 1995-03-31 | 1998-12-15 | The Regents Of The University Of California | Immersive video, including video hypermosaicing to generate from multiple video views of a scene a three-dimensional video mosaic from which diverse virtual video scene images are synthesized, including panoramic, scene interactive and stereoscopic images |
JPH0981309A (ja) * | 1995-09-13 | 1997-03-28 | Toshiba Corp | 入力装置 |
US5912721A (en) * | 1996-03-13 | 1999-06-15 | Kabushiki Kaisha Toshiba | Gaze detection apparatus and its method as well as information display apparatus |
JPH09305156A (ja) | 1996-05-16 | 1997-11-28 | Nippon Telegr & Teleph Corp <Ntt> | 映像表示方法及び装置 |
JP2000059665A (ja) | 1998-08-06 | 2000-02-25 | Mitsubishi Electric Corp | 撮像装置 |
US6292713B1 (en) * | 1999-05-20 | 2001-09-18 | Compaq Computer Corporation | Robotic telepresence system |
JP2000347692A (ja) * | 1999-06-07 | 2000-12-15 | Sanyo Electric Co Ltd | 人物検出方法、人物検出装置及びそれを用いた制御システム |
JP2003116125A (ja) * | 2001-10-03 | 2003-04-18 | Auto Network Gijutsu Kenkyusho:Kk | 車両周辺視認装置 |
JP3909251B2 (ja) * | 2002-02-13 | 2007-04-25 | アルパイン株式会社 | 視線を用いた画面制御装置 |
US6943799B2 (en) * | 2002-03-29 | 2005-09-13 | The Boeing Company | Gaze directed visual system |
US20050047629A1 (en) * | 2003-08-25 | 2005-03-03 | International Business Machines Corporation | System and method for selectively expanding or contracting a portion of a display using eye-gaze tracking |
US7561143B1 (en) * | 2004-03-19 | 2009-07-14 | The University of the Arts | Using gaze actions to interact with a display |
-
2005
- 2005-06-08 WO PCT/JP2005/010463 patent/WO2005124735A1/ja active Application Filing
- 2005-06-08 CN CNA2005800200774A patent/CN1969313A/zh active Pending
- 2005-06-08 KR KR1020087004520A patent/KR100911066B1/ko active IP Right Grant
- 2005-06-08 EP EP05748915A patent/EP1768097A1/en not_active Withdrawn
- 2005-06-08 JP JP2006514702A patent/JP4952995B2/ja active Active
- 2005-06-08 US US11/570,863 patent/US7839423B2/en active Active
Cited By (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP2007215063A (ja) * | 2006-02-13 | 2007-08-23 | Alpine Electronics Inc | 運転支援画像表示制御装置 |
JP4668803B2 (ja) * | 2006-02-13 | 2011-04-13 | アルパイン株式会社 | 運転支援画像表示制御装置 |
JP2013081039A (ja) * | 2011-10-03 | 2013-05-02 | Hitachi Kokusai Electric Inc | 映像表示装置 |
WO2016075774A1 (ja) * | 2014-11-12 | 2016-05-19 | 三菱電機株式会社 | 表示制御装置および情報表示装置 |
US10417993B2 (en) | 2014-11-12 | 2019-09-17 | Mitsubishi Electric Corporation | Display control device and information display device |
WO2016129241A1 (ja) * | 2015-02-12 | 2016-08-18 | 株式会社デンソー | 表示制御装置及び表示システム |
Also Published As
Publication number | Publication date |
---|---|
US7839423B2 (en) | 2010-11-23 |
EP1768097A1 (en) | 2007-03-28 |
KR100911066B1 (ko) | 2009-08-06 |
JP4952995B2 (ja) | 2012-06-13 |
CN1969313A (zh) | 2007-05-23 |
US20080036790A1 (en) | 2008-02-14 |
KR20080024545A (ko) | 2008-03-18 |
JPWO2005124735A1 (ja) | 2008-07-31 |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
AK | Designated states |
Kind code of ref document: A1 Designated state(s): AE AG AL AM AT AU AZ BA BB BG BR BW BY BZ CA CH CN CO CR CU CZ DE DK DM DZ EC EE EG ES FI GB GD GE GH GM HR HU ID IL IN IS JP KE KG KM KP KR KZ LC LK LR LS LT LU LV MA MD MG MK MN MW MX MZ NA NG NI NO NZ OM PG PH PL PT RO RU SC SD SE SG SK SL SM SY TJ TM TN TR TT TZ UA UG US UZ VC VN YU ZA ZM ZW |
|
AL | Designated countries for regional patents |
Kind code of ref document: A1 Designated state(s): GM KE LS MW MZ NA SD SL SZ TZ UG ZM ZW AM AZ BY KG KZ MD RU TJ TM AT BE BG CH CY CZ DE DK EE ES FI FR GB GR HU IE IS IT LT LU MC NL PL PT RO SE SI SK TR BF BJ CF CG CI CM GA GN GQ GW ML MR NE SN TD TG |
|
121 | Ep: the epo has been informed by wipo that ep was designated in this application | ||
DPEN | Request for preliminary examination filed prior to expiration of 19th month from priority date (pct application filed from 20040101) | ||
WWE | Wipo information: entry into national phase |
Ref document number: 2006514702 Country of ref document: JP |
|
WWE | Wipo information: entry into national phase |
Ref document number: 1020067026496 Country of ref document: KR |
|
WWE | Wipo information: entry into national phase |
Ref document number: 2005748915 Country of ref document: EP Ref document number: 200580020077.4 Country of ref document: CN |
|
NENP | Non-entry into the national phase |
Ref country code: DE |
|
WWW | Wipo information: withdrawn in national office |
Ref document number: DE |
|
WWE | Wipo information: entry into national phase |
Ref document number: 11570863 Country of ref document: US |
|
WWE | Wipo information: entry into national phase |
Ref document number: 11570863 Country of ref document: US |
|
WWP | Wipo information: published in national office |
Ref document number: 1020067026496 Country of ref document: KR |
|
WWP | Wipo information: published in national office |
Ref document number: 2005748915 Country of ref document: EP |
|
WWP | Wipo information: published in national office |
Ref document number: 11570863 Country of ref document: US |