WO2014148394A1 - Image display device and image display method - Google Patents
- Publication number
- WO2014148394A1 (application PCT/JP2014/056952)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- image
- model
- background image
- dimensional model
- image display
- Prior art date
Classifications
- H04N13/275 — Image signal generators from 3D object models, e.g. computer-generated stereoscopic image signals
- H04N13/243 — Image signal generators using stereoscopic image cameras using three or more 2D image sensors
- G06T15/20 — Perspective computation
- G06T3/00 — Geometric image transformations in the plane of the image
- H04N13/257 — Colour aspects
- H04N13/296 — Synchronisation thereof; Control thereof
- H04N7/183 — Closed-circuit television [CCTV] systems for receiving images from a single remote source
- B60R2300/304 — Vehicle viewing arrangements characterised by image processing using merged images, e.g. merging camera image with stored images
- B60R2300/50 — Vehicle viewing arrangements characterised by the display information being shared, e.g. external display or data transfer to a centralised traffic controller
- B60R2300/60 — Vehicle viewing arrangements characterised by monitoring and displaying vehicle exterior scenes from a transformed perspective
Definitions
- The present invention relates to an image display device that converts an image captured by a camera into an image as it would appear from a viewpoint different from that of the camera (hereinafter referred to as "virtual viewpoint") and displays it.
- Such a device converts an image taken by one or more cameras into an image viewed from a designated virtual viewpoint (hereinafter referred to as "virtual viewpoint image").
- In one known technique, an image around a vehicle is captured by one or more cameras attached to the vehicle, and texture mapping onto a space model in three-dimensional space is performed based on the information of the captured images to generate an image of a car model (hereinafter referred to as "mapping"); a change in a movable member such as a front door of the car is further detected, and, according to the detected amount of change of the movable member, the position of the virtual viewpoint with respect to the car model in three-dimensional space is changed and the detected movable member is displayed deformed.
- In such conversion, however, the three-dimensional information of the object may be lost.
- An example in which the converted image is distorted when an image of a tall object is converted into a virtual viewpoint image will be described with reference to FIG.
- The vehicle 700 traveling on the roadway 10 is photographed by the monitoring camera 600, which is attached at a position from which it captures the vehicle 700 diagonally from the front, and the image captured by the monitoring camera 600 is converted into an image viewed from the virtual viewpoint 610 located directly above the vehicle 700.
- Since the virtual viewpoint 610 is positioned directly above the vehicle 700, the image of the vehicle 700 captured by the monitoring camera 600 must be converted into an image seen from directly above.
- When the image captured by the monitoring camera 600 is converted into an image viewed from the virtual viewpoint 610, the resulting image 710 becomes longer in proportion to the height of the vehicle 700. That is, because the rear face of the vehicle 700 cannot be seen from the monitoring camera 600, the apparent length of the vehicle equals the length of the projection 720 of the vehicle 700 onto the roadway 10 as seen from the monitoring camera 600.
- This projection, and therefore the image 710, lengthens in proportion to the height of the vehicle 700.
- As a result, even with the monitoring camera 600 installed at the same position, the distortion of the image 710 viewed from the virtual viewpoint 610 increases as the height of the vehicle 700 increases.
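The geometry behind this stretching can be illustrated with a small calculation. The sketch below is not from the publication: it assumes a box-shaped vehicle of given length and height whose near edge is at horizontal distance d0 from the point on the road directly below the camera, and computes the length of the projection 720 cast on the road plane.

```python
def projected_length(d0, length, height, cam_h):
    """Length of the vehicle's projection on the road as seen from a
    camera at height cam_h (hypothetical units; the vehicle occupies the
    horizontal span [d0, d0 + length]).  The ray from the camera through
    the rear-top corner meets the road at
    (d0 + length) * cam_h / (cam_h - height)."""
    if height >= cam_h:
        raise ValueError("vehicle must be lower than the camera")
    far_edge = (d0 + length) * cam_h / (cam_h - height)
    return far_edge - d0

# A flat road marking (height 0) projects to its true length:
print(projected_length(20.0, 4.5, 0.0, 10.0))  # 4.5
# A 3 m tall vehicle's projection is stretched, which is why the
# converted image 710 grows with vehicle height:
print(projected_length(20.0, 4.5, 3.0, 10.0))  # 15.0
```

As the vehicle height approaches the camera height the stretching diverges, consistent with the observation that distortion increases with the height of the vehicle 700.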
- the present invention has been made in view of such a situation, and an object thereof is to provide an image display device capable of solving the above-described problems.
- An image display device of the present invention comprises: background image acquisition means for extracting a background image from an image; virtual object model extraction means for extracting a virtual object model from the image and the background image; object three-dimensional model generation means for generating an object three-dimensional model from the virtual object model; background image viewpoint conversion means for converting the viewpoint of the background image; object three-dimensional model viewpoint conversion means for converting the viewpoint of the object three-dimensional model; and synthesis means for synthesizing the viewpoint-converted background image and the viewpoint-converted object three-dimensional model.
- the object three-dimensional model generation means of the image display device of the present invention is characterized by comprising mapping means for mapping the image to the virtual object model.
- The mapping means of the image display device of the present invention is characterized in that mapping is performed using images taken at different angles by at least two cameras. Further, the mapping means is characterized in that, when areas to be mapped overlap, mapping is performed using the image having the larger number of pixels.
- The virtual object model extraction means of the image display device of the present invention comprises feature extraction means for extracting features of the virtual object model, and object three-dimensional model selection means for selecting the object three-dimensional model based on the features extracted by the feature extraction means.
- The virtual object model extraction means further comprises speed measurement means for measuring the speed of the object corresponding to the virtual object model, and the object three-dimensional model selection means uses the measured speed to select the object three-dimensional model to be combined with the background image.
- The image display device of the present invention is characterized in that, when the object three-dimensional model is combined with the background image, the object three-dimensional model is oriented to match the direction of the region in which it is placed.
- The present invention provides an image display device capable of converting an image of an object captured by a camera into an image viewed from a virtual viewpoint and displaying it without distortion.
- FIG. 5 shows a flowchart of the viewpoint conversion processing, a flowchart of the background/virtual object model synthesis processing, and a flowchart of the image display processing according to the embodiment of the present invention. Further figures show an example of displaying the image of a vehicle seen from the virtual viewpoint together with the image of the actual vehicle, an example of displaying the two simultaneously, and an example in which the image of a vehicle is distorted when converted into a virtual viewpoint image by the conventional method.
- Embodiments relate to an image display system that converts an image of a camera installed to capture a vehicle traveling on a roadway into a virtual viewpoint image and displays the image.
- In an area where surveillance cameras are installed on the front, rear, left, and right sides (hereinafter referred to as "object three-dimensional model creation area"), the region of the captured image in which an object is present (hereinafter referred to as "object region") is extracted, and a three-dimensional model (hereinafter referred to as "object three-dimensional model") is generated from the object region.
- The image of the object three-dimensional model and the image of the area other than the object region (hereinafter referred to as "background image") are each converted to the virtual viewpoint and then combined, so that the image of the vehicle is converted and displayed without distortion.
- the roadway 10 is divided by a center line 11, and the upper side of the center line 11 is an up lane 12 and the lower side of the center line 11 is a down lane 13 in the drawing of FIG.
- An object three-dimensional model creation area 14 is provided in the up lane 12, and an object three-dimensional model creation area 15 is provided in the down lane 13.
- vehicle monitoring cameras 105 to 107 are installed along the roadway 10 at almost equal intervals.
- For the object three-dimensional model creation area 15 in the down lane 13, a front monitoring camera 108 that photographs the front of a passing vehicle, a rear monitoring camera 109 that photographs its rear, a side monitoring camera 110 that photographs the side facing the center line 11, and a side monitoring camera 111 that photographs the roadside-facing side are installed.
- vehicle monitoring cameras 112 to 114 are installed along the roadway 10 at almost equal intervals.
- the image display system 20 includes a monitoring camera 100 (a general term for monitoring cameras in the embodiment including the monitoring cameras 101 to 114), a network 200, an image display device 300, a monitor 400, and a mouse 500.
- the image display device 300, the monitor 400, and the mouse 500 are installed in the monitoring center 20C where the monitoring staff is stationed.
- the surveillance camera 100 captures an image of a vehicle traveling on the roadway 10, converts the captured image into image data, and transmits the image data to the image display device 300 via the network 200.
- the network 200 connects the monitoring camera 100 installed at a location away from the monitoring center 20C and the image display device 300 installed in the monitoring center 20C.
- When receiving image data from the monitoring camera 100, the image display device 300 generates a virtual viewpoint image, converts the image data of the virtual viewpoint image into a displayable image signal, and outputs the image signal to the monitor 400. The image display device 300 also transmits operation data for operating the orientation of the monitoring camera 100 to the monitoring camera 100. The configuration of the image display device 300 will be described later.
- The monitor 400 displays the image of the image signal input from the image display device 300. The mouse 500 converts an operation performed by the monitoring staff into operation data and outputs the operation data to the image display device 300.
- the image display apparatus 300 includes a communication I / F unit 310, a control unit 320, a memory unit 330, an HDD unit 340, an image display I / F unit 350, an operation input I / F unit 360, and a data bus 370.
- The control unit 320 includes a background image creation processing unit 321, a camera image input processing unit 322, an object extraction processing unit 323, a mapping processing unit 324, an object three-dimensional model selection processing unit 325, a viewpoint conversion processing unit 326, a background/object three-dimensional model synthesis processing unit 327, and an image display processing unit 328.
- the memory unit 330 includes an input image registration area 331, a speed measurement image registration area 332, a virtual object model registration area 333, and an object three-dimensional model registration area 334.
- the HDD unit 340 includes a background image registration area 341.
- the communication I / F unit 310 is connected to the network 200, receives image data transmitted from the monitoring camera 100, and stores it in the input image registration area 331 of the memory unit 330. In addition, when an operation signal for operating the monitoring camera 100 is input from the control unit 320, the communication I / F unit 310 converts the operation signal into operation data and transmits the operation data to the monitoring camera 100.
- the control unit 320 includes control means such as a CPU, and comprehensively controls the image display apparatus 300.
- The control unit 320 processes operation data for operating the monitoring camera 100 input from the operation input I/F unit 360 and transmits it to the corresponding monitoring camera 100 via the communication I/F unit 310.
- the memory unit 330 stores a program for realizing the basic functions of the image display device 300, a program executed by the control unit 320, and data used in these programs.
- the input image registration area 331 is an area for registering image data received from the monitoring camera 100.
- the speed measurement image registration area 332 is an area for registering image data stored for speed measurement from images taken by the vehicle monitoring cameras 105 to 107 and the vehicle monitoring cameras 112 to 114.
- The virtual object model registration area 333 is an area for registering virtual object models extracted from the images of the front monitoring camera 101, the rear monitoring camera 102, and the side monitoring cameras 103 and 104 installed in the object three-dimensional model creation area 14, and of the cameras installed in the object three-dimensional model creation area 15.
- the object 3D model registration area 334 is an area for registering the object 3D model generated by mapping the virtual object model.
- the HDD (hard disk drive) unit 340 stores a program executed by the control unit 320 and data used in the program.
- the background image registration area 341 includes a front monitoring camera 101, a rear monitoring camera 102, side monitoring cameras 103 and 104, and vehicle monitoring cameras 105 to 107, a front monitoring camera 108, a rear monitoring camera 109, and side monitoring cameras 110 and 111, And an area for registering image data of a background image taken by the vehicle monitoring cameras 112 to 114.
- When the image display I/F unit 350 receives an image signal from the image display processing unit 328, it outputs the image signal to the monitor 400.
- The operation input I/F unit 360 receives an operation signal from the mouse 500, converts it into operation data that can be analyzed by the control unit 320, and outputs the operation data to the control unit 320.
- the data bus 370 connects the units 310 to 360 to exchange data.
- a procedure for generating a virtual viewpoint image by the image display device 300 will be described with reference to FIG.
- a description will be given using an example in which a virtual viewpoint image is generated by combining a three-dimensional model image and a background image captured by the vehicle monitoring cameras 105 to 107 installed on the roadway 10.
- the procedure for generating the virtual viewpoint image is performed in the order of step S100 to step S800.
- Step S100: The background image creation processing unit 321 shown in FIG. 2 creates a background image in which no object such as a vehicle appears from the images taken by the front monitoring camera 101, the rear monitoring camera 102, and the side monitoring cameras 103 and 104 installed in the object three-dimensional model creation area 14 shown in FIG., and by the vehicle monitoring cameras 105 to 107, and stores the image data of the background image in the background image registration area 341 of the HDD unit 340.
- Step S200: The camera image input processing unit 322 inputs the image data of the images photographed by the front monitoring camera 101, the rear monitoring camera 102, and the side monitoring cameras 103 and 104 installed in the object three-dimensional model creation area 14 shown in FIG., and by the vehicle monitoring cameras 105 to 107, and stores them in the input image registration area 331.
- The camera image input processing unit 322 also stores, at a predetermined cycle, image data of images taken by the vehicle monitoring camera 105 in the speed measurement image registration area 332 in order to measure the speed of vehicles traveling on the roadway 10.
- Step S300: The object extraction processing unit 323 takes the image data of the front monitoring camera 101, the rear monitoring camera 102, and the side monitoring cameras 103 and 104 from the input image registration area 331, and extracts the background image data of those cameras from the background image registration area 341.
- the object extraction processing unit 323 extracts the virtual object model illustrated in FIG. 3 by comparing the input image data and the background image data, and stores the virtual object model in the virtual object model registration area 333.
- Step S400: The mapping processing unit 324 extracts the pixels of the image data captured by the front monitoring camera 101, the rear monitoring camera 102, and the side monitoring cameras 103 and 104 from the input image registration area 331, and takes the virtual object model out of the virtual object model registration area 333.
- The mapping processing unit 324 generates the three-dimensional body of the object shown in FIG. 3 (hereinafter referred to as "object three-dimensional model") by mapping the pixels of the image data in the input image registration area 331 onto the virtual object model, and stores it in the object three-dimensional model registration area 334 of the memory unit 330.
- Step S500 the object 3D model selection processing unit 325 selects an object 3D model corresponding to the vehicle in the image displayed on the monitor 400 from the object 3D model registration area 334.
- Step S600: The viewpoint conversion processing unit 326 separately converts into virtual viewpoint images both the object three-dimensional model selected by the object three-dimensional model selection processing unit 325 and the background image of the vehicle monitoring cameras 105 to 107 registered in the background image registration area 341 of the HDD unit 340.
- Step S700 the background / object three-dimensional model synthesis processing unit 327 synthesizes the background image and the object three-dimensional model so that the object three-dimensional model is located at the position where the object is present in the background image.
- Step S800: The image display processing unit 328 converts the image data of the image in which the object three-dimensional model is combined with the background image into an image signal that can be displayed by the monitor 400, and outputs it to the image display I/F unit 350.
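As a reading aid, the flow of steps S100 to S800 can be sketched as a chain of stand-in functions. Everything below is a toy assumption (pixels represented as strings, a trivial placeholder "viewpoint conversion"); it only mirrors the order of the processing units, not their actual algorithms.

```python
# Hypothetical stand-ins for the processing units of image display device 300.
def s100_create_background(images):   return images["empty_road"]
def s200_input_camera_images(images): return images["cameras"]
def s300_extract_object(inp, bg):     return [px for px in inp if px not in bg]
def s400_map_to_3d_model(region):     return {"faces": region}
def s500_select_model(models):        return models[0]
def s600_convert_viewpoint(pixels):   return list(reversed(pixels))  # placeholder
def s700_synthesize(bg, obj):         return bg + obj
def s800_display(img):                return img

def generate_virtual_viewpoint_image(images):
    bg = s100_create_background(images)              # S100
    inp = s200_input_camera_images(images)           # S200
    region = s300_extract_object(inp, bg)            # S300
    model = s400_map_to_3d_model(region)             # S400
    chosen = s500_select_model([model])              # S500
    bg_v = s600_convert_viewpoint(bg)                # S600 (background)
    obj_v = s600_convert_viewpoint(chosen["faces"])  # S600 (object model)
    return s800_display(s700_synthesize(bg_v, obj_v))  # S700 + S800

frame = {"empty_road": ["road", "line"], "cameras": ["road", "line", "car"]}
print(generate_virtual_viewpoint_image(frame))  # ['line', 'road', 'car']
```

The point of the structure is that the background and the object model travel through step S600 separately and only meet again in step S700, which is what avoids the height-dependent distortion of the conventional method.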
- the operation input I / F unit 360 outputs operation data of the background image creation request to the control unit 320.
- the control unit 320 activates the background image creation processing unit 321.
- the background image creation processing unit 321 starts background image creation processing.
- Step S110: The background image creation processing unit 321 analyzes the operation data of the background image creation request and determines whether the request is "background image registration", "background image copy", or "background image update". If it is "background image registration", the process proceeds to step S120; if it is "background image copy", to step S130; if it is "background image update", to step S140.
- Step S120: When the request is "background image registration", the background image creation processing unit 321 causes the front monitoring camera 101, the rear monitoring camera 102, the side monitoring cameras 103 and 104, and the vehicle monitoring cameras 105 to 107 to photograph images of only the background in which no vehicle appears, and registers the image data of the captured images in the background image registration area 341 of the HDD unit 340. The background image creation processing unit 321 then ends the background image creation processing.
- Step S130: When the request is "background image copy", the background image creation processing unit 321 copies image data of an image in which no object such as a vehicle appears from the input image registration area 331 to the background image registration area 341.
- the background image creation processing unit 321 ends the background image creation processing.
- Step S140: When the request is "background image update", the background image creation processing unit 321 reads the image data of the input image from the input image registration area 331 at regular intervals and the background image data from the background image registration area 341, and updates the background image data in the background image registration area 341 with a weighted average of the two.
- The update of the background image data stops when a request to stop "background image update" is received.
- the background image creation processing unit 321 ends the background image creation processing.
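The weighted-average update of step S140 behaves like an exponential moving average over incoming frames. A minimal sketch, assuming per-pixel grayscale values and a hypothetical blending weight (the patent does not specify the weights):

```python
def update_background(background, frame, alpha=0.05):
    """Weighted-average background update: each new frame is blended into
    the stored background.  alpha is a hypothetical weight; a small value
    makes transient objects (vehicles) fade out of the background."""
    return [(1.0 - alpha) * b + alpha * f for b, f in zip(background, frame)]

bg = [100.0, 100.0, 100.0]            # toy 3-pixel background row
for _ in range(100):                  # repeated frames of a brighter scene
    bg = update_background(bg, [120.0, 120.0, 120.0])
print([round(p, 1) for p in bg])      # converges toward 120.0
```

Because the weight on any single frame decays geometrically, a vehicle that passes through a few frames barely disturbs the stored background, while slow lighting changes are tracked.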
- When the communication I/F unit 310 receives image data from the monitoring camera 100, it outputs an image data reception notification to the control unit 320.
- the control unit 320 activates the camera image input processing unit 322.
- the camera image input processing unit 322 starts camera image input processing.
- Step S210 First, the camera image input processing unit 322 inputs image data of an image captured by the monitoring camera 100.
- Step S220 Next, the camera image input processing unit 322 registers the image data in the input image registration area 331.
- Step S230: The camera image input processing unit 322 determines whether the input image data is a frame to be collected for speed measurement. If it is (Yes in step S230), the process proceeds to step S240; if it is not (No in step S230), the camera image input processing is terminated.
- Step S240: The camera image input processing unit 322 registers the image data of the frame collected for speed measurement in the speed measurement image registration area 332. Thereafter, the camera image input processing unit 322 ends the camera image input processing.
- the operation input I / F unit 360 outputs operation data of the virtual viewpoint image generation request to the control unit 320.
- the control unit 320 activates the object extraction processing unit 323.
- the object extraction processing unit 323 starts object extraction processing.
- Step S310 the object extraction processing unit 323 performs a real space object extraction process of extracting an image object as an object in the real space. Details of the real space object extraction processing will be described later.
- Step S320 the object extraction processing unit 323 calculates the size of the object in the real space extracted by the real space object extraction process.
- Step S330 the object extraction processing unit 323 determines the size of the virtual object model from the size of the object in the real space calculated in Step S320.
- Step S311: The object extraction processing unit 323 takes the image data of the front monitoring camera 101, the rear monitoring camera 102, and the side monitoring cameras 103 and 104 from the input image registration area 331 and the corresponding background image data from the background image registration area 341, and extracts the object image by subtracting the background image from the input image.
- Step S312: When the object image obtained in step S311 has 256 gradations, the object extraction processing unit 323 binarizes it by setting pixels at or above a threshold to gradation 255 and pixels below the threshold to gradation 0.
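Steps S311 and S312 amount to background subtraction followed by thresholding. A toy one-dimensional sketch (the threshold value is an assumption, not from the publication):

```python
def extract_and_binarize(input_img, background_img, threshold=30):
    """Sketch of steps S311-S312: subtract the background image from the
    input image, then binarize the 256-gradation difference so that values
    at or above `threshold` become 255 and the rest become 0."""
    diff = [abs(i - b) for i, b in zip(input_img, background_img)]
    return [255 if d >= threshold else 0 for d in diff]

road = [90, 90, 90, 90, 90]          # toy one-row background
frame = [90, 92, 200, 210, 90]       # same row with a vehicle over pixels 2-3
print(extract_and_binarize(frame, road))  # [0, 0, 255, 255, 0]
```

The white (255) pixels then form the object regions that the labeling of step S313 groups into individual objects.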
- Step S313: The object extraction processing unit 323 performs labeling, assigning a label to each connected object in the object image binarized in step S312.
- Step S314: The object extraction processing unit 323 calculates, for each object labeled in step S313, its size (start point coordinates, width, height) and its area (the number of white pixels among the binarized pixels).
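Steps S313 and S314 can be approximated with a BFS flood fill over the binarized image, with 255 marking white pixels. This is an illustrative stand-in, not the patent's own labeling algorithm.

```python
from collections import deque

def label_objects(binary):
    """Label 4-connected white (255) regions and return, per label, the
    start point, width, height, and area (white-pixel count)."""
    h, w = len(binary), len(binary[0])
    labels = [[0] * w for _ in range(h)]
    stats, next_label = {}, 1
    for y in range(h):
        for x in range(w):
            if binary[y][x] == 255 and labels[y][x] == 0:
                q = deque([(y, x)])
                labels[y][x] = next_label
                xs, ys, area = [x], [y], 0
                while q:
                    cy, cx = q.popleft()
                    area += 1
                    xs.append(cx); ys.append(cy)
                    for ny, nx in ((cy-1, cx), (cy+1, cx), (cy, cx-1), (cy, cx+1)):
                        if 0 <= ny < h and 0 <= nx < w and \
                           binary[ny][nx] == 255 and labels[ny][nx] == 0:
                            labels[ny][nx] = next_label
                            q.append((ny, nx))
                stats[next_label] = {"start": (min(xs), min(ys)),
                                     "width": max(xs) - min(xs) + 1,
                                     "height": max(ys) - min(ys) + 1,
                                     "area": area}
                next_label += 1
    return stats

grid = [[0, 255, 255, 0],
        [0, 255, 255, 0],
        [0,   0,   0, 255]]
print(label_objects(grid))
```

On this toy grid the fill finds a 2 x 2 object of area 4 and an isolated single white pixel, each with its own label and bounding box.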
- Step S315: The object extraction processing unit 323 extracts the color of each object by calculating, over the white pixel portion of the binarized object image of each labeled object, the color histogram of the image together with its highest-frequency and lowest-frequency colors.
- Step S316: The object extraction processing unit 323 extracts the shape of each object by calculating, over the white pixel portion of the binarized object image of each labeled object, its center of gravity, perimeter, circularity, Euler number, moments, and number of corners.
- Step S317: The object extraction processing unit 323 takes image data from the input image registration area 331 and speed measurement image data from the speed measurement image registration area 332, and calculates the optical flow between the two images.
- The start point and end point on the image obtained from the optical flow are converted into real space coordinates, and the speed of the object is measured by calculating the moving distance from those real space coordinates.
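Once the optical-flow endpoints are in real space, the speed measurement of step S317 reduces to distance over elapsed time. A minimal sketch with hypothetical coordinates in metres and an assumed frame interval:

```python
import math

def measure_speed(start_rs, end_rs, frame_interval_s):
    """Speed from optical-flow endpoints already converted to real-space
    coordinates (metres): Euclidean moving distance divided by the time
    between the two frames.  Units and interval are assumptions."""
    dist = math.hypot(end_rs[0] - start_rs[0], end_rs[1] - start_rs[1])
    return dist / frame_interval_s

# A vehicle moving 3 m along the lane between frames 0.25 s apart:
print(measure_speed((10.0, 2.0), (13.0, 2.0), 0.25))  # 12.0 m/s (43.2 km/h)
```

The accuracy of the result therefore depends mainly on how precisely the image coordinates are converted to real space, which is the role of step S318.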
- Step S318: The object extraction processing unit 323 converts the coordinates of the object image into real space coordinates using information on the depression angle, height, and focal length of the front monitoring camera 101, the rear monitoring camera 102, and the side monitoring cameras 103 and 104 associated with the input image registration area 331. Thereafter, the object extraction processing unit 323 ends the object extraction processing.
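For points on the road plane, one way the conversion of step S318 can work is a simple pinhole model using exactly the three parameters named above (focal length in pixels, camera height in metres, depression angle of the optical axis). This is an illustrative reconstruction, not taken from the publication.

```python
import math

def pixel_to_ground_distance(v_px, focal_px, cam_height, depression_deg):
    """For a pixel v_px below the image centre, the viewing ray leaves the
    camera at the depression angle plus atan(v_px / focal_px); intersecting
    that ray with the road plane gives the horizontal ground distance."""
    angle = math.radians(depression_deg) + math.atan2(v_px, focal_px)
    return cam_height / math.tan(angle)

# Camera 10 m up, looking down 45 degrees: the image centre (v_px = 0)
# maps to a point 10 m out on the road.
print(round(pixel_to_ground_distance(0, 1000.0, 10.0, 45.0), 3))
```

Pixels lower in the image (larger v_px) correspond to steeper rays and therefore to ground points closer to the camera, which is the monotonic relationship the real-space conversion relies on.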
- the control unit 320 activates the mapping processing unit 324 when the object extraction processing unit 323 finishes the object extraction processing.
- the mapping processing unit 324 starts mapping processing.
- Step S410: On the front face of the virtual object model, the mapping processing unit 324 extracts, from the object region extracted from the image of the front monitoring camera 101, the predetermined pasting area whose pixel density is at or above a certain level, enlarges or reduces it according to the size of the virtual object model, and pastes it on the front portion of the virtual object model.
- For the enlargement/reduction, bilinear interpolation can be used, for example, to prevent deterioration of pixel density.
- Step S420: On the back face of the virtual object model, the mapping processing unit 324 extracts, from the object region extracted from the image of the rear monitoring camera 102, the predetermined pasting area whose pixel density is at or above a certain level, enlarges or reduces it according to the size of the virtual object model, and pastes it on the back portion of the virtual object model.
- For the enlargement/reduction, bilinear interpolation can be used, for example, to prevent deterioration of pixel density.
- the mapping processing unit 324 is a predetermined pasting area among the object areas extracted from the images of the side monitoring cameras 103 and 104 on the side surface of the virtual object model, and the density of the pixels is not less than a certain value. Is mapped by pasting an image on the side surface portion of the virtual object model after extracting the portion, and expanding / reducing according to the size of the virtual object model.
- enlarging / reducing for example, bilinear interpolation can be used to prevent deterioration of pixel density due to enlarging / reducing.
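The bilinear interpolation mentioned above as one way to avoid pixel-density degradation when resizing the pasted regions can be sketched as follows (a plain-Python illustration; a production implementation would use an image library):

```python
def bilinear_resize(src, out_h, out_w):
    """Resize a 2-D grey-level image (list of rows) with bilinear
    interpolation: each output pixel is a distance-weighted average
    of its four nearest source pixels."""
    in_h, in_w = len(src), len(src[0])
    out = [[0.0] * out_w for _ in range(out_h)]
    for i in range(out_h):
        for j in range(out_w):
            # Map the output pixel back into source coordinates.
            y = i * (in_h - 1) / max(out_h - 1, 1)
            x = j * (in_w - 1) / max(out_w - 1, 1)
            y0, x0 = int(y), int(x)
            y1, x1 = min(y0 + 1, in_h - 1), min(x0 + 1, in_w - 1)
            dy, dx = y - y0, x - x0
            out[i][j] = (src[y0][x0] * (1 - dy) * (1 - dx)
                         + src[y0][x1] * (1 - dy) * dx
                         + src[y1][x0] * dy * (1 - dx)
                         + src[y1][x1] * dy * dx)
    return out
```

Enlarging a 2x2 gradient to 3x3 keeps the corner values and places the average of the four corners in the centre, which is the smoothing behaviour the text relies on.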
- (Step S440) The mapping processing unit 324 pastes the top face of the virtual object model using whichever of the front monitoring camera 101 and the rear monitoring camera 102 provides the higher image resolution. When pasting the image of the rear monitoring camera 102, if its pasting region overlaps that of the front monitoring camera 101, the unit determines whether the rear image has the larger number of pixels in the overlapping region. If it does, the top face is created by enlarging or reducing that region as-is to the size of the virtual object model and pasting it onto the top face of the model. If it has fewer pixels, the overlapping part is removed from the extracted region, the remainder is enlarged or reduced to the size of the virtual object model, and the result is pasted onto the top face of the model.
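The overlap rule of step S440 — in the overlapping pasting area, keep the image with the larger number of pixels — can be sketched as follows. The data layout (dicts mapping top-face positions to intensities) is an assumption made for illustration only.

```python
def assign_top_pixels(front_px, rear_px):
    """Assign each top-face pixel to a source camera (step S440 sketch).

    front_px / rear_px map (row, col) positions on the model's top face
    to intensity values taken from the front and rear camera images.
    In the overlap, the image with the larger pixel count (i.e. the
    denser sampling of the top face) supplies the pixels.
    """
    overlap = front_px.keys() & rear_px.keys()
    use_front = len(front_px) >= len(rear_px)
    top = {}
    top.update(rear_px)
    top.update(front_px)            # front wins the overlap by default
    if not use_front:
        for p in overlap:           # rear is denser: let it win instead
            top[p] = rear_px[p]
    return top
```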
- (Step S450) The mapping processing unit 324 registers the object 3D model, generated by mapping onto the virtual object model, in the object 3D model registration area 334. The mapping processing unit 324 then ends the mapping process.
- The control unit 320 then activates the object 3D model selection processing unit 325.
- The object 3D model selection processing unit 325 starts the object 3D model selection process.
- (Step S510) First, the object 3D model selection processing unit 325 compares the size, shape, and color of the object image captured by the vehicle monitoring cameras 105 to 107 with those of the object 3D models registered in the object 3D model registration area 334. When higher selection accuracy is required, the coordinates of the object image detected by the object extraction process (hereinafter, the "object position") and the measured speed of the object are also compared.
- (Step S520) The object 3D model selection processing unit 325 selects, on the basis of the comparison result, the object 3D model to be combined with the background image of the vehicle monitoring camera 105, and then ends the object 3D model selection process.
- In addition to the size, shape, color, object position, and object speed, information such as the time at which the object 3D model was registered can also be used for the selection.
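Steps S510/S520 compare size, shape, and color and, for higher accuracy, object position and speed. One plausible realization, not spelled out in the patent, is a weighted nearest-neighbour match over such feature vectors:

```python
def select_model(candidates, observed, weights=None):
    """Pick the registered 3D model that best matches an observed object.

    candidates -- list of dicts with feature keys such as 'size',
                  'color', 'position', 'speed' (tuples of floats)
    observed   -- dict with the same keys, measured from the camera image
    The candidate with the smallest weighted Euclidean distance over
    the observed features wins (a sketch of steps S510/S520).
    """
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5

    weights = weights or {k: 1.0 for k in observed}

    def score(cand):
        return sum(weights[k] * dist(cand[k], observed[k]) for k in observed)

    return min(candidates, key=score)
```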
- The control unit 320 then activates the viewpoint conversion processing unit 326.
- The viewpoint conversion processing unit 326 starts the viewpoint conversion process.
- (Step S610) First, the viewpoint conversion processing unit 326 converts the selected object 3D model from real-space coordinates into the coordinates of the background image of the vehicle monitoring camera 105.
- (Step S620) The viewpoint conversion processing unit 326 rotates the coordinates so that the coordinate-converted object 3D model is seen from the designated virtual viewpoint.
- (Step S630) The viewpoint conversion processing unit 326 rotates the coordinates so that the background image of the vehicle monitoring camera 105 is seen from the designated virtual viewpoint. The viewpoint conversion processing unit 326 then ends the viewpoint conversion process.
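The coordinate rotations of steps S620 and S630 amount to applying a rotation matrix to every point of the model and the background. A sketch for rotation about the vertical axis follows; the axis choice and angle convention are illustrative, since the patent does not fix them.

```python
import math

def rotate_y(points, angle_deg):
    """Rotate 3D points (x, y, z) about the Y (vertical) axis by
    angle_deg — the kind of coordinate rotation steps S620/S630 apply
    to present the model and background from a virtual viewpoint."""
    a = math.radians(angle_deg)
    c, s = math.cos(a), math.sin(a)
    return [(c * x + s * z, y, -s * x + c * z) for x, y, z in points]
```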
- When the viewpoint conversion processing unit 326 finishes the viewpoint conversion process, the control unit 320 activates the background/object 3D model synthesis processing unit 327.
- The background/object 3D model synthesis processing unit 327 starts the background/object 3D model synthesis process.
- (Step S710) First, the background/object 3D model synthesis processing unit 327 extracts, from the object 3D model, the display surface to be pasted onto the background image.
- (Step S720) The background/object 3D model synthesis processing unit 327 enlarges or reduces the display surface of the object 3D model so that it matches the size of the object in the image captured by the vehicle monitoring camera 105.
- (Step S730) The background/object 3D model synthesis processing unit 327 composites the display surface of the object 3D model at the position where the object was present in the original background image. The background/object 3D model synthesis processing unit 327 then ends the background/object 3D model synthesis process.
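Step S730 pastes the already-scaled display surface back into the background at the object's original position. A minimal sketch, treating images as nested lists and using None as a transparency marker (both assumptions made for illustration):

```python
def composite(background, surface, top, left):
    """Paste the (already scaled) display surface of the object model
    into the background image at the position where the object was
    (step S730 sketch). None entries in `surface` are transparent."""
    out = [row[:] for row in background]   # leave the input untouched
    for i, row in enumerate(surface):
        for j, px in enumerate(row):
            if px is not None:
                out[top + i][left + j] = px
    return out
```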
- When the background/object 3D model synthesis processing unit 327 finishes the synthesis process, the control unit 320 activates the image display processing unit 328.
- The image display processing unit 328 starts the image display process.
- (Step S810) First, the image display processing unit 328 converts the image data of the background image combined with the display surface of the object 3D model into a displayable image signal and outputs it to the monitor 400 through the image display output I/F unit 350. The image of the vehicle monitoring camera 105, converted to the designated virtual viewpoint, is thereby displayed on the monitor 400. The image display processing unit 328 then ends the image display process.
- FIG. 8A shows the image obtained when the virtual viewpoint is moved onto the vehicle monitoring cameras 112 to 114 to view the virtual viewpoint image created from the monitoring cameras 110 and 111.
- FIG. 8B shows the images of the vehicle monitoring cameras 105 to 107 on the up lane 12 (the images of the vehicle monitoring cameras 112 to 114 on the down lane 13 are similar and are not shown).
- Each vehicle object is arranged with the front of the vehicle facing the traveling direction of the up lane in the virtual viewpoint image and the rear of the following vehicle facing the direction opposite to the traveling direction.
- In this way, the orientation of the 3D model can be set correctly.
- The entire images of the vehicle monitoring cameras 105 to 107 on the up lane 12 and of the vehicle monitoring cameras 112 to 114 on the down lane 13 are converted into virtual viewpoint images.
- In any of the images of the vehicle monitoring cameras 105 to 107 and 112 to 114, the vehicle image shown in the image and the object 3D model can be displayed simultaneously. That is, the object 3D model can be converted into an icon, and clicking this icon displays the object 3D model in a separate window.
- Because the object 3D model can be viewed from a virtual viewpoint at any angle and enlarged or reduced, the vehicle's license plate or driver can be confirmed easily.
- Although in the embodiment the object 3D model creation area 14 is provided before the position of the vehicle monitoring cameras 105 to 107 on the up lane 12, and the object 3D model creation area 15 before the position of the vehicle monitoring cameras 112 to 114 on the down lane 13, the present invention is not limited to this; the areas may instead be provided behind, or midway along, the positions of those cameras.
- As described above, the image display device of the present invention separates the background image and the object image from the image captured by the camera, generates a virtual object model from images captured from the front, rear, left, and right of the object, and generates an object 3D model by mapping the pixels of the actual object image onto this virtual object model.
- The object 3D model to be combined with the background image is selected from the models generated in this way on the basis of information such as their size, shape, color, object position, and speed. The selected object 3D model and the background image are then separately converted into images seen from the designated virtual viewpoint and combined. By converting an actual image into a virtual viewpoint image in this way, the object image can be displayed without distortion.
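The background/object separation summarized above can be illustrated by simple frame differencing against the stored background image. The patent's actual extraction method may be more elaborate; the threshold here is an arbitrary illustrative value.

```python
def extract_object_mask(frame, background, threshold=30):
    """Mark pixels that differ from the stored background image by more
    than `threshold` as object pixels — a minimal sketch of the
    background/object separation described in the summary."""
    return [[abs(f - b) > threshold for f, b in zip(fr, br)]
            for fr, br in zip(frame, background)]
```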
- (1) The image display device of the present invention includes background image acquisition means for extracting a background image from an image, virtual object model extraction means for extracting a virtual object model from the image and the background image, object 3D model generation means for generating an object 3D model from the virtual object model, background image viewpoint conversion means for viewpoint-converting the background image, object 3D model viewpoint conversion means for viewpoint-converting the object 3D model, and synthesis means for combining the viewpoint-converted background image and the object 3D model.
- (2) In the image display device of (1), the object 3D model generation means includes mapping means for mapping the image onto the virtual object model.
- (3) In the image display device of (2), the mapping means performs mapping using the images captured at different angles by at least two cameras.
- (4) In the image display device of (3), when the regions to be mapped by the images overlap, the mapping means performs mapping using the image with the larger number of pixels.
- (5) In the image display device of any of (1) to (4), the virtual object model extraction means includes feature extraction means for extracting features of the virtual object model, and object 3D model selection means for selecting the object 3D model according to the features extracted by the feature extraction means.
- (6) In the image display device of (5), the virtual object model extraction means includes speed measurement means for measuring the speed of the object corresponding to the virtual object model, and the object 3D model selection means selects, using the speed, the object 3D model to be combined with the background image.
- (7) In the image display device of (5) or (6), the virtual object model extraction means includes position detection means for detecting the position of the object corresponding to the virtual object model, and the object 3D model selection means selects, using the position of the object, the object 3D model to be combined with the background image.
- (8) In the image display device of any of (1) to (7), when the object 3D model is combined with the background image, the object 3D model is set in the same orientation as that of the region in which it was placed.
- (9) The image display method of the present invention includes a background image acquisition step of extracting a background image from an image, a virtual object model extraction step of extracting a virtual object model from the image and the background image, an object 3D model generation step of generating an object 3D model from the virtual object model, a background image viewpoint conversion step of viewpoint-converting the background image, an object 3D model viewpoint conversion step of viewpoint-converting the object 3D model, and a synthesis step of combining the viewpoint-converted background image and the object 3D model.
- (10) In the image display method of (9), the object 3D model generation step includes a mapping step of mapping the image onto the virtual object model.
- (11) In the image display method of (10), the mapping step performs mapping using the images captured at different angles by at least two cameras.
- (12) In the image display method of (11), when the regions to be mapped by the images overlap, the mapping step performs mapping using the image with the larger number of pixels.
- (13) In the image display method of any of (9) to (12), the virtual object model extraction step includes a feature extraction step of extracting features of the virtual object model, and an object 3D model selection step of selecting the object 3D model according to the features extracted in the feature extraction step.
- (14) In the image display method of (13), the virtual object model extraction step includes a speed measurement step of measuring the speed of the object corresponding to the virtual object model, and the object 3D model selection step selects, using the speed, the object 3D model to be combined with the background image.
- (15) In the image display method of (13) or (14), the virtual object model extraction step includes a position detection step of detecting the position of the object corresponding to the virtual object model, and the object 3D model selection step selects, using the position of the object, the object 3D model to be combined with the background image.
- (16) In the image display method of any of (9) to (15), when the object 3D model is combined with the background image, the object 3D model is set in the same orientation as that of the region in which it was placed.
- (17) An image display program of the present invention causes a computer to execute the image display method of any of (9) to (16).
- In this way, when an image captured by a camera is converted into a virtual viewpoint image, the object can be converted and displayed without distortion.
- The present invention can be applied to an apparatus that converts an image into a virtual viewpoint image.
Landscapes
- Engineering & Computer Science (AREA)
- Multimedia (AREA)
- Signal Processing (AREA)
- Theoretical Computer Science (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Computer Graphics (AREA)
- Geometry (AREA)
- Computing Systems (AREA)
- Closed-Circuit Television Systems (AREA)
- Image Processing (AREA)
- Mechanical Engineering (AREA)
- Image Analysis (AREA)
- Processing Or Creating Images (AREA)
Abstract
Description
(1): The image display device of the present invention is characterized by comprising: background image acquisition means for extracting a background image from an image; virtual object model extraction means for extracting a virtual object model from the image and the background image; object 3D model generation means for generating an object 3D model from the virtual object model; background image viewpoint conversion means for viewpoint-converting the background image; object 3D model viewpoint conversion means for viewpoint-converting the object 3D model; and synthesis means for combining the viewpoint-converted background image and the object 3D model.
(2): The object 3D model generation means of the image display device of (1) is characterized by comprising mapping means for mapping the image onto the virtual object model.
(3): The mapping means of the image display device of (2) is characterized by performing mapping using the images captured at different angles by at least two cameras.
(4): The mapping means of the image display device of (3) is characterized by performing mapping using the image with the larger number of pixels when the regions to be mapped by the images overlap.
(5): The virtual object model extraction means of the image display device of any of (1) to (4) is characterized by comprising feature extraction means for extracting features of the virtual object model, and object 3D model selection means for selecting the object 3D model according to the features extracted by the feature extraction means.
(6): The virtual object model extraction means of the image display device of (5) is characterized by comprising speed measurement means for measuring the speed of the object corresponding to the virtual object model, and the object 3D model selection means selects, using the speed, the object 3D model to be combined with the background image.
(7): The virtual object model extraction means of the image display device of (5) or (6) is characterized by comprising position detection means for detecting the position of the object corresponding to the virtual object model, and the object 3D model selection means selects, using the position of the object, the object 3D model to be combined with the background image.
(8): The image display device of any of (1) to (7) is characterized in that, when the object 3D model is combined with the background image, the object 3D model is set in the same orientation as that of the region in which it was placed.
(9) The image display method of the present invention is characterized by comprising: a background image acquisition step of extracting a background image from an image; a virtual object model extraction step of extracting a virtual object model from the image and the background image; an object 3D model generation step of generating an object 3D model from the virtual object model; a background image viewpoint conversion step of viewpoint-converting the background image; an object 3D model viewpoint conversion step of viewpoint-converting the object 3D model; and a synthesis step of combining the viewpoint-converted background image and the object 3D model.
(10): The object 3D model generation step of the image display method of (9) is characterized by comprising a mapping step of mapping the image onto the virtual object model.
(11): The mapping step of the image display method of (10) is characterized by performing mapping using the images captured at different angles by at least two cameras.
(12): The mapping step of the image display method of (11) is characterized by performing mapping using the image with the larger number of pixels when the regions to be mapped by the images overlap.
(13): The virtual object model extraction step of the image display method of any of (9) to (12) is characterized by comprising a feature extraction step of extracting features of the virtual object model, and an object 3D model selection step of selecting the object 3D model according to the features extracted in the feature extraction step.
(14): The virtual object model extraction step of the image display method of (13) is characterized by comprising a speed measurement step of measuring the speed of the object corresponding to the virtual object model, and the object 3D model selection step selects, using the speed, the object 3D model to be combined with the background image.
(15): The virtual object model extraction step of the image display method of (13) or (14) is characterized by comprising a position detection step of detecting the position of the object corresponding to the virtual object model, and the object 3D model selection step selects, using the position of the object, the object 3D model to be combined with the background image.
(16): The image display method of any of (9) to (15) is characterized in that, when the object 3D model is combined with the background image, the object 3D model is set in the same orientation as that of the region in which it was placed.
(17): An image display program causes a computer to execute the image display method of any of (9) to (16).
11・・・・・・・Center line
12・・・・・・・Up lane
13・・・・・・・Down lane
14・・・・・・・Object 3D model creation area
15・・・・・・・Object 3D model creation area
20・・・・・・・Image display system
20C・・・・・・Monitoring center
100・・・・・・・Monitoring camera
101・・・・・・・Front monitoring camera
102・・・・・・・Rear monitoring camera
103, 104・・・Side monitoring cameras
105~107・・・Vehicle monitoring cameras
108・・・・・・・Front monitoring camera
109・・・・・・・Rear monitoring camera
110, 111・・・Side monitoring cameras
112~114・・・Vehicle monitoring cameras
200・・・・・・・Network
300・・・・・・・Image display device
310・・・・・・・Communication I/F unit
320・・・・・・・Control unit
321・・・・・・・Background image creation processing unit
322・・・・・・・Camera image input processing unit
323・・・・・・・Object extraction processing unit
324・・・・・・・Mapping processing unit
325・・・・・・・Object 3D model selection processing unit
326・・・・・・・Viewpoint conversion processing unit
327・・・・・・・Background/object 3D model synthesis processing unit
328・・・・・・・Image display processing unit
330・・・・・・・Memory unit
331・・・・・・・Input image registration area
332・・・・・・・Speed measurement image registration area
333・・・・・・・Virtual object model registration area
334・・・・・・・Object 3D model registration area
340・・・・・・・HDD unit
341・・・・・・・Background image registration area
350・・・・・・・Image display I/F unit
360・・・・・・・Operation input I/F unit
400・・・・・・・Monitor
500・・・・・・・Mouse
600・・・・・・・Monitoring camera
610・・・・・・・Virtual viewpoint
700・・・・・・・Vehicle
710・・・・・・・Virtual viewpoint image
720・・・・・・・Projection onto the roadway as seen from the monitoring camera
Claims (14)
- An image display device comprising: background image acquisition means for extracting a background image from an image; virtual object model extraction means for extracting a virtual object model from the image and the background image; object 3D model generation means for generating an object 3D model from the virtual object model; background image viewpoint conversion means for viewpoint-converting the background image; object 3D model viewpoint conversion means for viewpoint-converting the object 3D model; and synthesis means for combining the viewpoint-converted background image and the object 3D model.
- The image display device according to claim 1, wherein the object 3D model generation means comprises mapping means for mapping the image onto the virtual object model.
- The image display device according to claim 2, wherein the mapping means performs mapping using the images captured at different angles by at least two cameras.
- The image display device according to claim 3, wherein, when the regions to be mapped by the images overlap, the mapping means performs mapping using the image with the larger number of pixels.
- The image display device according to any one of claims 1 to 4, wherein the virtual object model extraction means comprises: feature extraction means for extracting features of the virtual object model; and object 3D model selection means for selecting the object 3D model according to the features extracted by the feature extraction means.
- The image display device according to claim 5, wherein the virtual object model extraction means comprises speed measurement means for measuring a speed of an object corresponding to the virtual object model, and the object 3D model selection means selects, using the speed, the object 3D model to be combined with the background image.
- The image display device according to any one of claims 1 to 6, wherein, when the object 3D model is combined with the background image, the object 3D model is set in the same orientation as that of the region in which it was placed.
- An image display method causing a computer to execute: background image acquisition processing for extracting a background image from an image; virtual object model extraction processing for extracting a virtual object model from the image and the background image; object 3D model generation processing for generating an object 3D model from the virtual object model; background image viewpoint conversion processing for viewpoint-converting the background image; object 3D model viewpoint conversion processing for viewpoint-converting the object 3D model; and synthesis processing for combining the viewpoint-converted background image and the object 3D model.
- The image display method according to claim 8, wherein the object 3D model generation processing comprises mapping processing for mapping the image onto the virtual object model.
- The image display method according to claim 9, wherein the mapping processing performs mapping using the images captured at different angles by at least two cameras.
- The image display method according to claim 10, wherein, when the regions to be mapped by the images overlap, the mapping processing performs mapping using the image with the larger number of pixels.
- The image display method according to any one of claims 9 to 11, wherein the virtual object model extraction processing comprises: feature extraction processing for extracting features of the virtual object model; and object 3D model selection processing for selecting the object 3D model according to the features extracted by the feature extraction processing.
- The image display method according to claim 12, wherein the virtual object model extraction processing comprises speed measurement processing for measuring a speed of an object corresponding to the virtual object model, and the object 3D model selection processing selects, using the speed, the object 3D model to be combined with the background image.
- The image display method according to any one of claims 9 to 13, wherein, when the object 3D model is combined with the background image, the object 3D model is set in the same orientation as that of the region in which it was placed.
Priority Applications (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
JP2015506745A JP5865547B2 (ja) | 2013-03-19 | 2014-03-14 | 画像表示装置および画像表示方法 |
US14/766,845 US9877011B2 (en) | 2013-03-19 | 2014-03-14 | Image display apparatus and image display method |
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
JP2013-056729 | 2013-03-19 | ||
JP2013056729 | 2013-03-19 |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2014148394A1 true WO2014148394A1 (ja) | 2014-09-25 |
Family
ID=51580078
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/JP2014/056952 WO2014148394A1 (ja) | 2013-03-19 | 2014-03-14 | 画像表示装置および画像表示方法 |
Country Status (3)
Country | Link |
---|---|
US (1) | US9877011B2 (ja) |
JP (1) | JP5865547B2 (ja) |
WO (1) | WO2014148394A1 (ja) |
Cited By (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN105882528A (zh) * | 2016-04-25 | 2016-08-24 | 北京小米移动软件有限公司 | 平衡车的路况共享方法、装置及平衡车 |
CN105915885A (zh) * | 2016-03-02 | 2016-08-31 | 优势拓展(北京)科技有限公司 | 鱼眼图像的3d交互显示方法和系统 |
JP2018022387A (ja) * | 2016-08-04 | 2018-02-08 | Kddi株式会社 | オブジェクトのビルボードを生成するビルボード生成装置、方法及びプログラム |
JP7058806B1 (ja) * | 2021-02-22 | 2022-04-22 | 三菱電機株式会社 | 映像監視装置、映像監視システム、映像監視方法、及び映像監視プログラム |
Families Citing this family (7)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN104469167B (zh) * | 2014-12-26 | 2017-10-13 | 小米科技有限责任公司 | 自动对焦方法及装置 |
CN108369746A (zh) * | 2015-12-21 | 2018-08-03 | 罗伯特·博世有限公司 | 用于多相机交通工具系统的动态图像融合 |
WO2017208503A1 (ja) * | 2016-05-30 | 2017-12-07 | 三菱電機株式会社 | 地図データ更新装置、地図データ更新方法および地図データ更新プログラム |
JP6482580B2 (ja) * | 2017-02-10 | 2019-03-13 | キヤノン株式会社 | 情報処理装置、情報処理方法、およびプログラム |
US11173785B2 (en) * | 2017-12-01 | 2021-11-16 | Caterpillar Inc. | Operator assistance vision system |
FR3080701B1 (fr) * | 2018-04-26 | 2020-05-15 | Transdev Group | Systeme de surveillance de la circulation routiere avec affichage d'une image virtuelle d'objets mobiles evoluant dans une portion d'infrastructure routiere |
US11315334B1 (en) * | 2021-02-09 | 2022-04-26 | Varjo Technologies Oy | Display apparatuses and methods incorporating image masking |
Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP2001084408A (ja) * | 1999-09-13 | 2001-03-30 | Sanyo Electric Co Ltd | 3次元データ加工装置及び方法並びに記録媒体 |
JP2002222488A (ja) * | 2001-01-29 | 2002-08-09 | Matsushita Electric Ind Co Ltd | 交通監視装置 |
JP2005268847A (ja) * | 2004-03-16 | 2005-09-29 | Olympus Corp | 画像生成装置、画像生成方法、および画像生成プログラム |
JP2008217220A (ja) * | 2007-03-01 | 2008-09-18 | Hitachi Ltd | 画像検索方法及び画像検索システム |
JP2011221686A (ja) * | 2010-04-07 | 2011-11-04 | Nissan Motor Co Ltd | 画像情報提供装置および画像情報提供方法 |
Family Cites Families (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP2006050263A (ja) | 2004-08-04 | 2006-02-16 | Olympus Corp | 画像生成方法および装置 |
JP2010019703A (ja) * | 2008-07-10 | 2010-01-28 | Toyota Motor Corp | 移動体用測位装置 |
JP5891388B2 (ja) * | 2011-03-31 | 2016-03-23 | パナソニックIpマネジメント株式会社 | 立体視画像の描画を行う画像描画装置、画像描画方法、画像描画プログラム |
-
2014
- 2014-03-14 JP JP2015506745A patent/JP5865547B2/ja active Active
- 2014-03-14 WO PCT/JP2014/056952 patent/WO2014148394A1/ja active Application Filing
- 2014-03-14 US US14/766,845 patent/US9877011B2/en active Active
Patent Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP2001084408A (ja) * | 1999-09-13 | 2001-03-30 | Sanyo Electric Co Ltd | 3次元データ加工装置及び方法並びに記録媒体 |
JP2002222488A (ja) * | 2001-01-29 | 2002-08-09 | Matsushita Electric Ind Co Ltd | 交通監視装置 |
JP2005268847A (ja) * | 2004-03-16 | 2005-09-29 | Olympus Corp | 画像生成装置、画像生成方法、および画像生成プログラム |
JP2008217220A (ja) * | 2007-03-01 | 2008-09-18 | Hitachi Ltd | 画像検索方法及び画像検索システム |
JP2011221686A (ja) * | 2010-04-07 | 2011-11-04 | Nissan Motor Co Ltd | 画像情報提供装置および画像情報提供方法 |
Cited By (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN105915885A (zh) * | 2016-03-02 | 2016-08-31 | 优势拓展(北京)科技有限公司 | 鱼眼图像的3d交互显示方法和系统 |
CN105882528A (zh) * | 2016-04-25 | 2016-08-24 | 北京小米移动软件有限公司 | 平衡车的路况共享方法、装置及平衡车 |
JP2018022387A (ja) * | 2016-08-04 | 2018-02-08 | Kddi株式会社 | オブジェクトのビルボードを生成するビルボード生成装置、方法及びプログラム |
JP7058806B1 (ja) * | 2021-02-22 | 2022-04-22 | 三菱電機株式会社 | 映像監視装置、映像監視システム、映像監視方法、及び映像監視プログラム |
WO2022176189A1 (ja) * | 2021-02-22 | 2022-08-25 | 三菱電機株式会社 | 映像監視装置、映像監視システム、映像監視方法、及び映像監視プログラム |
Also Published As
Publication number | Publication date |
---|---|
US9877011B2 (en) | 2018-01-23 |
JPWO2014148394A1 (ja) | 2017-02-16 |
US20160065944A1 (en) | 2016-03-03 |
JP5865547B2 (ja) | 2016-02-17 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
JP5865547B2 (ja) | 画像表示装置および画像表示方法 | |
KR101393881B1 (ko) | 차량의 주차구획 인식방법 | |
JP4970926B2 (ja) | 車両周辺監視装置 | |
JP4832321B2 (ja) | カメラ姿勢推定装置、車両、およびカメラ姿勢推定方法 | |
US11430228B2 (en) | Dynamic driving metric output generation using computer vision methods | |
WO2016199244A1 (ja) | 物体認識装置及び物体認識システム | |
JP5299296B2 (ja) | 車両周辺画像表示装置及び車両周辺画像表示方法 | |
JP7072641B2 (ja) | 路面検出装置、路面検出装置を利用した画像表示装置、路面検出装置を利用した障害物検知装置、路面検出方法、路面検出方法を利用した画像表示方法、および路面検出方法を利用した障害物検知方法 | |
JP2008186246A (ja) | 移動物体認識装置 | |
JP2010136289A (ja) | 運転支援装置及び運転支援方法 | |
JP5825713B2 (ja) | 車両用危険場面再現装置 | |
WO2020105499A1 (ja) | 画像処理装置、および画像処理方法、並びにプログラム | |
KR20130120041A (ko) | 차선 검출장치 및 그 방법 | |
KR20190134303A (ko) | 영상 인식 장치 및 그 방법 | |
Olaverri-Monreal et al. | Tailigator: Cooperative system for safety distance observance | |
TWI621073B (zh) | Road lane detection system and method thereof | |
US20240071104A1 (en) | Image processing device, image processing method, and recording medium | |
JP2009077022A (ja) | 運転支援システム及び車両 | |
US20210323471A1 (en) | Method and arrangement for generating a representation of surroundings of a vehicle, and vehicle having such an arrangement | |
JP4847303B2 (ja) | 障害物検出方法、障害物検出プログラムおよび障害物検出装置 | |
US12026960B2 (en) | Dynamic driving metric output generation using computer vision methods | |
KR102264254B1 (ko) | 검지 영역 설정을 위한 영상 분석 장치, 시스템 및 이를 위한 방법 | |
JP7252775B2 (ja) | 映像解析支援装置及び方法 | |
JP2013142668A (ja) | 位置推定装置及び位置推定方法 | |
US20240137477A1 (en) | Method and apparatus for generating 3d image by recording digital content |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
121 | Ep: the epo has been informed by wipo that ep was designated in this application |
Ref document number: 14768362 Country of ref document: EP Kind code of ref document: A1 |
|
ENP | Entry into the national phase |
Ref document number: 2015506745 Country of ref document: JP Kind code of ref document: A |
|
WWE | Wipo information: entry into national phase |
Ref document number: 14766845 Country of ref document: US |
|
NENP | Non-entry into the national phase |
Ref country code: DE |
|
122 | Ep: pct application non-entry in european phase |
Ref document number: 14768362 Country of ref document: EP Kind code of ref document: A1 |