WO2022070618A1 - Image processing device and image display device - Google Patents
- Publication number: WO2022070618A1 (application PCT/JP2021/029499)
- Authority: WIPO (PCT)
Classifications
- G06T1/00 — no: G06T1/00 General purpose image data processing
- G06T19/00 Manipulating 3D models or images for computer graphics
- G09B5/02 Electrically-operated educational appliances with visual presentation of the material to be studied, e.g. using film strip
- G09B9/042 Simulators for teaching control of land vehicles, providing simulation in a real vehicle
- G09B9/052 Simulators for teaching control of land vehicles, characterised by provision for recording or measuring trainee's performance
- G09B9/058 Simulators for teaching control of land vehicles, for teaching control of cycles or motorcycles
Definitions
- the present invention mainly relates to an image processing device.
- Patent Document 1 describes an image processing technique in which a virtual viewpoint is provided on an image showing a state of the vehicle and its surroundings, and the image can be visually recognized while changing the virtual viewpoint.
- Patent Document 1 cites anti-theft monitoring as an example use of such a technique.
- An exemplary object of the present invention is to make it possible to diversify the uses of images obtained by the above image processing technology relatively easily.
- One aspect of the present invention relates to an image processing device. The image processing device comprises: information acquisition means for acquiring driver posture information indicating the posture of the driver of a vehicle and vehicle state information indicating the state of the vehicle; first image generation means for generating a first composite image by superimposing, for a certain vehicle and its driver, a first driver image based on the driver posture information and a first vehicle image based on the vehicle state information; second image generation means for generating a second composite image by superimposing, for a vehicle traveling independently of the certain vehicle and for its driver, a second driver image based on the driver posture information and a second vehicle image based on the vehicle state information; and associating means for associating the first composite image with the second composite image based on predetermined information.
- Brief description of the drawings: a schematic diagram showing a configuration example of an image display system; a schematic diagram showing a configuration example of a vehicle; a flowchart showing an example of an image processing method; a schematic diagram showing a driver image; a schematic diagram showing a vehicle image; a schematic diagram showing a composite image; a figure showing an example of the composite image at a certain virtual viewpoint; and schematic diagrams showing other configuration examples of the image display system.
- FIG. 1 is a schematic diagram showing a configuration example of an image display system SY according to an embodiment.
- the image display system SY includes a vehicle 1, an image processing device 2, and a terminal 3, and in the present embodiment, these are capable of intercommunication via the network N.
- The vehicle 1 is a straddle-type vehicle in this embodiment.
- A straddle-type vehicle is one in which the driver sits astride the vehicle body; the concept includes typical two-wheeled vehicles (including scooter-type vehicles), three-wheeled vehicles (one front wheel and two rear wheels, or two front wheels and one rear wheel), and all-terrain vehicles (ATVs) such as four-wheeled buggies.
- As another embodiment, the vehicle 1 may be a passenger-type vehicle.
- the vehicle 1 includes an image pickup device 11A, an image pickup device 11B, a vehicle state detection device 12, a vehicle position identification device 13, and a communication device 14.
- A plurality of image pickup devices 11A are provided on the periphery of the vehicle body so as to capture images showing the state around the vehicle 1, and they are arranged so that their imaging regions collectively cover the entire area around the vehicle 1. That is, the plurality of image pickup devices 11A are provided so that the imaging regions of any two adjacent devices 11A partially overlap each other.
- In the figure, the directivity direction of each image pickup device 11A is shown schematically by a broken line, but the actual detection range of the image pickup device 11A is wider than shown.
- The image pickup devices 11B are provided in front of and behind the driver's seat so that the driver can be imaged from the front and from the rear, respectively.
- In the figure, the directivity direction of each image pickup device 11B is likewise shown schematically by a broken line, but the actual detection range is wider than shown. As described in detail later, this makes it possible to image the posture (including behavior, etc.) and the appearance (including clothing, etc.) of the driver.
- A known camera incorporating a CCD or CMOS image sensor or the like may be used for the image pickup devices 11A and 11B.
- In the present embodiment, monocular cameras are used in order to reduce the cost of the image pickup devices 11A and 11B.
- the vehicle state detection device 12 is provided at each part of the vehicle body so that the state of the vehicle 1 can be detected.
- The state of the vehicle 1 includes the vehicle speed, the steering angle, the posture of the vehicle body, and the state of the lamp bodies (headlight, taillight, turn signals, etc.).
- the vehicle state detection device 12 may be expressed as a state detection device or simply a detection device.
- Vehicle speed is detected, for example, based on the number of wheel revolutions per unit time, which can be achieved by using a known speed sensor.
- the steering angle is detected, for example, based on the orientation of the steering wheel with respect to the vehicle body (or the orientation of the handlebar with respect to the vehicle body), which can be achieved by using a known steering angle sensor.
- the posture of the vehicle body is detected, for example, based on the orientation of the vehicle body with respect to the direction of gravity, which can be achieved by using a known acceleration sensor.
- the state of the lamp body is detected based on, for example, the conduction state of the light source, which can be realized by using a known ammeter.
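As a rough illustration, the detected quantities above can be collected into a single vehicle-state record. The following sketch is not part of the patent; all field names, units, and the wheel-circumference parameter are assumptions for illustration:

```python
from dataclasses import dataclass

@dataclass
class VehicleState:
    """Quantities reported by the vehicle state detection device 12
    (illustrative field names and units; the patent does not fix them)."""
    speed_kmh: float           # derived from wheel revolutions per unit time
    steering_angle_deg: float  # steering-wheel / handlebar orientation
    body_roll_deg: float       # vehicle-body posture relative to gravity
    lamps_on: dict             # e.g. {"headlight": True, "winker_left": False}

def wheel_speed_kmh(revs_per_s: float, wheel_circumference_m: float) -> float:
    # vehicle speed from wheel revolutions per unit time: m/s -> km/h
    return revs_per_s * wheel_circumference_m * 3.6

state = VehicleState(wheel_speed_kmh(10.0, 1.9), -4.0, 12.5,
                     {"headlight": True})
```

Bundling the readings this way mirrors how the detection results are later transmitted as a signal group.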
- the vehicle position specifying device 13 specifies the position of the vehicle 1 on the traveling path.
- The traveling path is the road on which the vehicle 1 is actually traveling, and the vehicle position specifying device 13 specifies the position of the vehicle 1 on map data.
- a GPS (Global Positioning System) sensor can be typically used as the vehicle position specifying device 13.
- the vehicle position specifying device 13 may be expressed as a position specifying device or simply a specifying device.
- the communication device 14 transmits the image pickup result by the image pickup devices 11A and 11B, the detection result by the vehicle state detection device 12, and the identification result by the vehicle position identification device 13 to the image processing device 2 via the network N.
- the communication device 14 may be expressed as a transmission / reception device or the like, or may be simply referred to as a transmission device in the present embodiment.
- The image pickup result of the image pickup devices 11A provides information indicating the state around the vehicle 1 (hereinafter referred to as vehicle peripheral information 90A).
- The vehicle peripheral information 90A is image information or image data.
- The image pickup result of the image pickup devices 11B provides information indicating the posture of the driver (hereinafter referred to as driver posture information 90B).
- The driver posture information 90B is image information or image data.
- The detection result of the vehicle state detection device 12 provides information indicating the state of the vehicle 1 (hereinafter referred to as vehicle state information 90C).
- The vehicle state information 90C is a signal group indicating the vehicle speed, the steering angle, the posture of the vehicle body, and the state of the lamp bodies.
- The identification result of the vehicle position specifying device 13 provides information indicating the position of the vehicle 1 on the traveling path (hereinafter referred to as vehicle position information 90D).
- The vehicle position information 90D can be acquired based on GPS and is a signal group indicating coordinates on map data.
- the above-mentioned information 90A to 90D are transmitted from the vehicle 1 to the image processing device 2 via the network N, as shown in FIG.
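The four items 90A to 90D travel together from the vehicle 1 to the image processing device 2. A minimal sketch of such a payload follows; the class, field names, and types are assumptions for illustration, not part of the patent:

```python
from dataclasses import dataclass
from typing import List, Tuple

@dataclass
class VehiclePayload:
    """Illustrative bundle of the information transmitted over network N."""
    surroundings_images: List[bytes]  # 90A: frames from the devices 11A
    driver_images: List[bytes]        # 90B: frames from the devices 11B
    vehicle_state: dict               # 90C: speed, steering, posture, lamps
    position: Tuple[float, float]     # 90D: coordinates on map data

payload = VehiclePayload(
    surroundings_images=[b"\x00"],    # placeholder image data
    driver_images=[b"\x00"],
    vehicle_state={"speed_kmh": 40.0},
    position=(35.6586, 139.7454),
)
```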
- the image processing device 2 includes a communication unit 21 and a calculation unit 22.
- the communication unit 21 enables the image processing device 2 to communicate with each of the vehicle 1 and the terminal 3 via the network N.
- the calculation unit 22 performs predetermined calculation processing including image processing, although the details will be described later.
- The calculation unit 22 is a processor including a CPU and a memory, and its functions are realized by executing a predetermined program; the program may be read via a network or from a storage medium and executed on a computer.
- As another embodiment, the calculation unit 22 may be composed of a semiconductor device such as a PLD (programmable logic device) or an ASIC (application-specific integrated circuit). That is, the functions of the calculation unit 22 can be realized by either hardware or software.
- the terminal 3 is a mobile terminal (for example, a smartphone) in the present embodiment, and includes a communication unit 31, an operation unit 32, and a display unit 33.
- the user of the terminal 3 may be the driver of the vehicle 1 or a third party different from the driver.
- the communication unit 31 enables the terminal 3 to communicate with the image processing device 2 via the network N.
- the operation unit 32 can receive an operation input by a user, and a known operation panel such as a touch sensor type operation panel or a button / switch type operation panel can be used for the operation unit 32.
- the display unit 33 can display an image, and a known display such as a liquid crystal display or an organic EL display can be used for the display unit 33.
- the operation unit 32 and the display unit 33 are integrally provided (touch panel type display), but may be individually provided as another embodiment.
- As described above, the vehicle 1 can communicate with the image processing device 2, and transmits the image pickup results of the image pickup devices 11A and 11B, the detection result of the vehicle state detection device 12, and the identification result of the vehicle position specifying device 13 to the image processing device 2.
- the image processing device 2 performs predetermined image processing by the calculation unit 22 to generate a composite image (hereinafter referred to as a composite image 90X), and transmits the composite image 90X to the terminal 3.
- the terminal 3 functions as an image display device, and the user can visually recognize the composite image 90X on the display unit 33 while inputting an operation to the operation unit 32 using the terminal 3.
- FIG. 3 is a flowchart showing an example of an image processing method for generating a composite image 90X.
- the content of this flowchart is mainly executed by the calculation unit 22, and the outline thereof is that the composite image 90X is generated based on the above-mentioned information 90A to 90D.
- In the present embodiment, this flowchart is executed after the vehicle 1 has been used (while the vehicle is not running), but it may instead be executed while the vehicle 1 is in use (while running).
- In step S1000, the vehicle peripheral information 90A is acquired.
- the vehicle peripheral information 90A is obtained by the plurality of image pickup devices 11A, and the plurality of image pickup devices 11A are provided in the vehicle body peripheral portion so that the image pickup region thereof includes the entire area around the vehicle 1.
- the vehicle peripheral information 90A shows the state of the entire area around the vehicle 1 and can be obtained as image information or image data showing a so-called panoramic view (360-degree panoramic view).
- an image showing the state around the vehicle 1 corresponding to the vehicle peripheral information 90A is generated as the vehicle peripheral image 90P. Therefore, the image processing using the vehicle peripheral information 90A can be performed relatively easily by using the spherical coordinate system.
- the driver posture information 90B is acquired.
- The driver posture information 90B is obtained by the pair of image pickup devices 11B provided in front of and behind the driver's seat, which are arranged so that the driver can be imaged from the front and from the rear. The driver posture information 90B is therefore obtained as image information or image data showing the driver's driving mode, such as the driver's posture (in the case of a still image) or behavior (in the case of a moving image), and may additionally indicate the driver's appearance (for example, physique and clothing such as wear and helmet).
- an image of the driver corresponding to the driver posture information 90B is generated as the driver image 90Q. Therefore, the image processing for the driver posture information 90B can be performed relatively easily by using a three-dimensional coordinate system based on a predetermined human body model.
- the vehicle state information 90C is acquired.
- the vehicle state information 90C is obtained by the vehicle state detection device 12 provided at each part of the vehicle body so that the state of the vehicle 1 can be detected, and the state of the vehicle 1 is the vehicle speed, the steering angle, and the posture of the vehicle body. , And the state of the lamp body.
- an image of the vehicle 1 corresponding to the vehicle state information 90C is generated as the vehicle image 90R. Therefore, the image processing for the vehicle image 90R can be performed relatively easily by using the three-dimensional coordinate system based on the corresponding vehicle model.
- images 90P to 90R are generated based on the above information 90A to 90C, respectively. That is, the vehicle peripheral image 90P is generated based on the vehicle peripheral information 90A, the driver image 90Q is generated based on the driver posture information 90B, and the vehicle image 90R is generated based on the vehicle state information 90C.
- In step S1040, the images 90P, 90Q, and 90R are superimposed to generate the composite image 90X.
- Here, the vehicle peripheral image 90P is processed in a spherical coordinate system,
- while the driver image 90Q and the vehicle image 90R are processed in a three-dimensional coordinate system.
- The three-dimensional coordinate system can typically be expressed as coordinates (x, y, z), where x is the distance from the coordinate center to the target in the vehicle-body front-rear direction, y is the distance in the vehicle-body left-right direction, and z is the distance in the vehicle-body up-down direction.
- The spherical coordinate system can typically be expressed as coordinates (r, θ, φ), where r is the distance from the coordinate center to the target, θ is the angle between the vehicle-body up-down direction and the line connecting the coordinate center and the target, and φ is the angle between the vehicle-body front-rear direction and that line.
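The two coordinate systems described above are related by the standard spherical-to-Cartesian conversion. The following sketch (function names are illustrative; the axis conventions follow the text: x front-rear, y left-right, z up-down, θ measured from the up-down axis, φ from the front-rear axis) shows the mapping both ways:

```python
import math

def spherical_to_cartesian(r, theta, phi):
    """(r, theta, phi) -> (x, y, z) under the conventions in the text."""
    x = r * math.sin(theta) * math.cos(phi)
    y = r * math.sin(theta) * math.sin(phi)
    z = r * math.cos(theta)
    return x, y, z

def cartesian_to_spherical(x, y, z):
    """(x, y, z) -> (r, theta, phi); inverse of the function above."""
    r = math.sqrt(x * x + y * y + z * z)
    theta = math.acos(z / r) if r else 0.0
    phi = math.atan2(y, x)
    return r, theta, phi
```

Such a conversion would allow a point drawn in one of the two systems to be superimposed on content processed in the other.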
- FIG. 4A is a schematic diagram showing a vehicle peripheral image 90P.
- The vehicle peripheral image 90P is processed in the spherical coordinate system and is drawn at a distance r from the coordinate center.
- That is, the vehicle peripheral image 90P, as a panoramic view, is drawn on the inner wall of a sphere of radius r.
- The radius r may be set so that this sphere lies outside the vehicle 1.
- FIG. 4B is a schematic diagram showing a driver image 90Q.
- The driver image 90Q is processed in the three-dimensional coordinate system; based on a predetermined human body model, it can depict, for example, the head, shoulders, torso (chest and abdomen), waist, arms (upper arms and forearms), hands, legs (thighs and lower legs), and feet. Clothing may additionally be depicted.
- FIG. 4C is a schematic diagram showing a vehicle image 90R.
- The vehicle image 90R is processed in the three-dimensional coordinate system; for example, an image of the vehicle 1 in the state indicated by the vehicle state information 90C (vehicle speed, steering angle, vehicle-body posture, and lamp state) can be depicted. For the vehicle 1 in cornering, for example, the vehicle image 90R can be drawn with the vehicle body tilted.
- Incidentally, the image pickup results of the image pickup devices 11A and 11B may be corrected according to the degree of inclination of the vehicle body. For example, when the vehicle peripheral image 90P is captured by the image pickup device 11A while the vehicle body is tilted at an inclination angle θ1, the image 90P can be rotated by the angle θ1 to compensate.
- This correction processing of the image pickup results of the image pickup devices 11A and 11B may be performed in the image processing device 2 or in the vehicle 1.
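The lean-angle correction above amounts to a rotation of image coordinates by the opposite of the roll angle θ1. A minimal sketch follows (a real implementation would rotate pixels about the image center with interpolation; the function name and point representation are assumptions):

```python
import math

def roll_correct(points, theta1):
    """Rotate 2D image points by -theta1 (radians) to undo a vehicle-body
    lean of theta1, so the horizon in the peripheral image stays level."""
    c, s = math.cos(-theta1), math.sin(-theta1)
    return [(c * u - s * v, s * u + c * v) for u, v in points]
```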
- FIG. 4D is a schematic diagram showing a composite image 90X.
- the images 90P to 90R may be combined so that the coordinate centers, distances, and directions match.
- the center of coordinates is a position directly above the seat in this embodiment, but as another embodiment, it may be at another position (for example, any position of the vehicle body).
- In step S1050, the vehicle position information 90D is acquired, and the vehicle position information 90D, which indicates the position of the vehicle 1 on the traveling path, is given to the composite image 90X, to the images 90P to 90R constituting it, and to the information 90A to 90C used to generate them.
- Thereby, a composite image 90X (or the information 90A to 90C and/or the images 90P to 90R used in the process of generating it) can be associated with another composite image 90X (or the information 90A to 90C and/or the images 90P to 90R used in the process of generating it).
- In step S1060, the composite image 90X is transmitted to the terminal 3.
- the user of the terminal 3 can display the composite image 90X on the display unit 33 from a viewpoint from an arbitrary position (hereinafter referred to as a virtual viewpoint) by inputting an operation to the operation unit 32.
- the user can also zoom in or out of the composite image 90X by inputting an operation to the operation unit 32.
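Viewing the composite image from a virtual viewpoint with zoom can be illustrated by a simple pinhole projection plus a scale factor. This sketch is only an illustration; the fixed viewing axis (+x), the function name, and the zoom parameter are assumptions, not the patent's method:

```python
def project(point, zoom=1.0):
    """Pinhole projection of a 3D point (x front-rear, y left-right,
    z up-down) onto an image plane, for a virtual viewpoint at the
    coordinate center looking along +x; zoom scales the result,
    mimicking zoom in / zoom out."""
    x, y, z = point
    if x <= 0:
        return None  # the point is behind the virtual viewpoint
    return (zoom * y / x, zoom * z / x)
```

Changing the virtual viewpoint would correspond to rotating the scene before projecting; zooming corresponds to changing the scale factor.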
- the calculation unit 22 functions as an information acquisition unit for acquiring vehicle peripheral information 90A, driver posture information 90B, and vehicle state information 90C.
- the calculation unit 22 functions as an image generation unit that generates a vehicle peripheral image 90P, a driver image 90Q, and a vehicle image 90R, and superimposes these images 90P to 90R to generate a composite image 90X.
- In step S1050, the calculation unit 22 functions as an association unit that, using the vehicle position information 90D, associates a certain composite image 90X with one of a plurality of composite images 90X generated in the past. Further, in step S1060, the calculation unit 22 functions as a transmission unit that transmits the composite image 90X.
- Note that the acquisition of the information 90A to 90D here means that the CPU of the calculation unit 22 reads the information 90A to 90D from the memory; the image processing device 2 can receive the information 90A to 90D collectively from the vehicle 1 before S1000.
- FIG. 5A shows an example of the composite image 90X in the case of a touch-panel display in which the operation unit 32 and the display unit 33 are integrally provided, and FIG. 5B shows another example of the composite image 90X (from a virtual viewpoint different from that of FIG. 5A).
- The display unit 33 displays, as part of the operation unit 32, icons 8a and 8b for changing the virtual viewpoint, an icon 8c for zooming in, and an icon 8d for zooming out.
- By performing predetermined operation inputs (for example, a tap, swipe, or flick operation) on these icons 8a and the like, the user can visually recognize the state of the vehicle 1 and its surroundings from a desired virtual viewpoint.
- When the virtual viewpoint is changed, the size of the vehicle image 90R in the composite image 90X is changed, and the size of the driver image 90Q is changed accordingly, so that any sense of incongruity caused by the change in appearance of the composite image 90X may be reduced.
- the size of the vehicle peripheral image 90P in the composite image 90X may be maintained when the virtual viewpoint is changed.
- the vehicle peripheral image 90P can be clearly displayed by using the image pickup apparatus 11A having a relatively large number of pixels.
- The images 90P to 90R are required to show the state at substantially the same time. For this reason, in the present embodiment, the vehicle position information 90D is added to the images 90P to 90R and to the information 90A to 90C used to generate them.
- As another embodiment, the images 90P to 90R and the information 90A to 90C may instead be associated with attribute information indicating timing (at what timing an image was captured, or at what timing an image was generated based on the acquired information).
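Association by such timing attributes can be sketched as pairing frames whose timestamps nearly coincide. This is an illustrative implementation only; the function name, tolerance, and frame representation are assumptions:

```python
def associate_by_time(frames_a, frames_b, tolerance=0.05):
    """Pair frames from two sources whose timestamps differ by at most
    `tolerance` seconds. Frames are (timestamp, payload) tuples sorted
    by timestamp; a single forward pass suffices."""
    pairs, j = [], 0
    for t_a, a in frames_a:
        # advance past frames of b that are too old to match t_a
        while j < len(frames_b) and frames_b[j][0] < t_a - tolerance:
            j += 1
        if j < len(frames_b) and abs(frames_b[j][0] - t_a) <= tolerance:
            pairs.append((a, frames_b[j][1]))
    return pairs
```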
- the three-dimensional coordinate system can be typically represented by coordinates (x, y, z), and the spherical coordinate system can be typically represented by coordinates (r, ⁇ , ⁇ ). Therefore, as another embodiment, the driver image 90Q and the vehicle image 90R may be processed in the spherical coordinate system as in the vehicle peripheral image 90P by using a known coordinate transformation. Alternatively, the vehicle peripheral image 90P may be processed in the three-dimensional coordinate system as in the driver image 90Q and the vehicle image 90R.
- In the present embodiment, a monocular camera is used as the image pickup device 11A, but a stereo (compound-eye) camera can be used instead.
- In that case, the image pickup target can be imaged together with distance information, so that the vehicle peripheral image 90P can be processed in the three-dimensional coordinate system relatively easily.
- The user (for example, the driver) can use the composite image 90X for various purposes; as an example, it can also be used for practicing driving operations.
- FIG. 7 is a schematic diagram showing a management mode of the composite image 90X obtained as described above.
- The composite images 90X are managed by identifiers that distinguish, for example, the vehicle 1 and its driver, and are stored in the database DB for each identifier.
- a moving image showing a traveling mode by a certain vehicle 1a and a certain driver Ua is stored in the database DBaa as a plurality of composite images 90X.
- the moving image showing the traveling mode by a certain vehicle 1a and another driver Ub is stored in the database DBab as a plurality of composite images 90X.
- the moving image showing the driving mode by the other vehicle 1b and a certain driver Ua is stored in the database DBba as a plurality of composite images 90X. Further, the moving image showing the driving mode by the other vehicle 1b and the other driver Ub is stored in the database DBbb as a plurality of composite images 90X.
- the above database DBaa and the like can be managed for each identifier, but in the following, it is simply referred to as a database DB.
- The number of composite images 90X stored in each database DB typically corresponds to the duration and frame rate of the moving image.
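The relation between clip duration, frame rate, and stored image count can be written down directly (an illustrative helper; the name is an assumption):

```python
def frame_count(duration_s: float, fps: float) -> int:
    """Number of composite images stored for a moving image of the given
    duration (seconds) at the given frame rate (frames per second)."""
    return int(duration_s * fps)
```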
- The calculation unit 22 can combine, for comparison, any two of the composite images 90X stored in the database DB that correspond to different identifiers.
- That is, the calculation unit 22 can superimpose a composite image corresponding to one identifier (referred to as "composite image 91X" for distinction) and a composite image corresponding to another identifier (referred to as "composite image 92X" for distinction) to generate a comparative composite image 93X.
- When the composite images 91X and 92X differ, the comparative composite image 93X may be displayed on the display unit 33 so that the differing portion is emphasized.
- The emphasis may be applied to the portion concerned in at least one of the composite images 91X and 92X within the comparative composite image 93X.
- Known emphasis modes such as coloring or highlighting of contours can be adopted. Such a display mode makes it easier for the user to visually recognize the difference between the composite images 91X and 92X.
- The composite image 91X is generated by superimposing a vehicle peripheral image (referred to as "vehicle peripheral image 91P" for distinction), a driver image ("driver image 91Q"), and a vehicle image ("vehicle image 91R"). Likewise, the vehicle peripheral information, driver posture information, and vehicle state information used to generate the images 91P, 91Q, and 91R are referred to as "vehicle peripheral information 91A", "driver posture information 91B", and "vehicle state information 91C", respectively, and the vehicle position information given to the composite image 91X, the images 91P to 91R, and the information 91A to 91C is referred to as "vehicle position information 91D".
- Similarly, the composite image 92X is generated by superimposing a vehicle peripheral image ("vehicle peripheral image 92P"), a driver image ("driver image 92Q"), and a vehicle image ("vehicle image 92R"); the corresponding information items are referred to as "vehicle peripheral information 92A", "driver posture information 92B", "vehicle state information 92C", and "vehicle position information 92D".
- the comparative composite image 93X is generated by synthesizing the composite images 91X and 92X when the vehicle position information 91D and 92D indicate the same position.
- This makes it possible to display a certain vehicle 1a and its driver Ua, and the vehicle 1b and its driver Ub, which traveled independently of the vehicle 1a, side by side (as if running in parallel) on a single screen.
- The background image of the comparative composite image 93X is formed from the vehicle peripheral images 91P and 92P, but both images are not needed to form the background. Therefore, when generating the comparative composite image 93X, one of the images 91P and 92P is omitted (denoted "91P/92P" in FIG. 8); for example, the image 91P can be adopted as the background image.
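The layering described above, in which one of the vehicle peripheral images 91P and 92P serves as the background and the driver and vehicle images of both composites are drawn on top, could be sketched as follows. This is an illustration only: the patent does not disclose an implementation, and the dictionary-based pixel representation and function names are assumptions.

```python
# Illustrative sketch only; not the patent's implementation.
# Images are modeled as {(x, y): (r, g, b, a)} dictionaries of RGBA pixels.

def over(background, layer):
    """Alpha-over compositing: draw `layer` on top of `background`."""
    out = dict(background)
    for pos, (r, g, b, a) in layer.items():
        if pos in out and a < 255:
            br, bg, bb, ba = out[pos]
            inv = 255 - a
            out[pos] = ((r * a + br * inv) // 255,
                        (g * a + bg * inv) // 255,
                        (b * a + bb * inv) // 255,
                        max(a, ba))
        else:
            out[pos] = (r, g, b, a)
    return out

def comparative_composite(peripheral_91p, peripheral_92p,
                          driver_91q, vehicle_91r,
                          driver_92q, vehicle_92r,
                          background="91P"):
    """Build a comparative composite image 93X from the layers of 91X and 92X.

    Only one peripheral image is used as the background (the other is
    redundant, as the text notes); the choice can be switched by the caller.
    """
    base = peripheral_91p if background == "91P" else peripheral_92p
    img = dict(base)
    for layer in (vehicle_91r, driver_91q, vehicle_92r, driver_92q):
        img = over(img, layer)
    return img
```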
- the vehicle peripheral image 90P may be omitted in S1040 (see FIG. 3) that generates the composite image 90X.
- Alternatively, the vehicle peripheral image 91P may be omitted when the composite image 91X is generated, the vehicle peripheral image 92P may be omitted when the composite image 92X is generated, and one of the images 91P and 92P may be generated only when the comparative composite image 93X is generated.
- the calculation unit 22 can switch from one of the vehicle peripheral images 91P and 92P to the other when forming the background image of the comparative composite image 93X, and the background image can be arbitrarily selected.
- both the vehicle peripheral images 91P and 92P may be omitted when the comparative composite image 93X is generated.
- In that case, an image different from the images 91P and 92P (for example, a monochromatic image, an image showing the scenery of another area, a virtual reality image, etc.) may be used as the background image.
- the background image used as the vehicle peripheral image does not have to be an image that actually shows the state around the vehicle 1.
- In other words, the calculation unit 22 can generate two or more vehicle peripheral images, including the vehicle peripheral images 91P and 92P, and, when generating the comparative composite image 93X, can switch the background image from one of the two or more vehicle peripheral images to another. As a result, any one of the two or more vehicle peripheral images can be used as the background image.
- FIG. 9 is a schematic diagram showing the comparative composite image 93X.
- In the comparative composite image 93X, a certain traveling mode e1 and a traveling mode e2 independent of the traveling mode e1 are displayed side by side or superimposed on a single screen on the display unit 33 of the terminal 3.
- The mode e1 shows, for example, a traveling mode based on the database DBaa (the traveling mode of the vehicle 1a and its driver Ua).
- The mode e2 shows, for example, a traveling mode based on the database DBbb (the traveling mode of the vehicle 1b and its driver Ub, which traveled independently of the vehicle 1a).
- By referring to the comparative composite image 93X as the user of the terminal 3, the driver Ua can compare his/her own driving operation with that of the driver Ub, and can utilize the image for practicing driving operations.
- the image processing device 2 (mainly the calculation unit 22) acquires the driver posture information 91B and 92B, the vehicle state information 91C and 92C, and the vehicle position information 91D and 92D.
- the image processing device 2 superimposes the driver image 91Q based on the driver posture information 91B and the vehicle image 91R based on the vehicle state information 91C to generate a composite image 91X for a certain vehicle and its driver.
- The image processing device 2 generates a composite image 92X for a vehicle that traveled independently of the certain vehicle and its driver by superimposing a driver image 92Q based on the driver posture information 92B and a vehicle image 92R based on the vehicle state information 92C.
- The image processing device 2 then associates the composite image 91X with the composite image 92X based on the vehicle position information 91D for the certain vehicle and the vehicle position information 92D for the vehicle that traveled independently of it.
- the two driving modes can be compared on a single screen (typically in a moving image) based on the composite images 91X and 92X.
- the composite images 91X and 92X are transmitted to the terminal 3, and the above two traveling modes are displayed on a single screen as the comparative composite image 93X in the terminal 3.
- The user can refer to the comparative composite image 93X, in which the composite image 91X showing his/her own traveling mode and the composite image 92X showing another traveling mode are displayed superimposed, and can use the image 93X for practicing driving operations.
- In the above embodiment, the comparative composite image 93X displays, side by side, the traveling mode of a certain vehicle 1a and its driver Ua and the traveling mode of the vehicle 1b and its driver Ub, which traveled independently of the vehicle 1a (see FIG. 7).
- the two traveling modes displayed side by side in the comparative composite image 93X may be vehicles 1 traveling independently of each other, and need not be vehicles and / or drivers different from each other. That is, the vehicles 1 to be compared may be different vehicles that have traveled substantially at the same time, or may be the same vehicle that has traveled in different periods.
- the user can compare his / her own driving mode with the driving mode of another person, and can also compare his / her current driving mode with his / her past driving mode.
- The user can also compare his/her current or past traveling mode when using a certain vehicle (for example, vehicle 1a) with his/her current or past traveling mode when using another vehicle (for example, vehicle 1b). Therefore, the management of the composite images 90X described with reference to FIG. 7 may be performed for each run performed independently, and the identifier required for this management can be determined based on various attribute information, such as the vehicle 1, its driver, the time period of travel, and the place of travel.
- When the vehicle position information 91D and 92D indicate the same position, the composite images 91X and 92X are regarded as showing traveling at substantially the same position and are combined to generate the comparative composite image 93X. Therefore, the vehicle position information 91D and 92D may indicate the same position at least in the vehicle-body front-rear direction (the traveling direction of the vehicle 1).
- The two traveling modes e1 and e2 displayed side by side in the comparative composite image 93X are preferably displayed close enough to each other that the user referring to them can make an appropriate comparison. Therefore, in the comparative composite image 93X, the distance between the traveling modes e1 and e2 (the distance in the vehicle-body left-right direction) may be fixed at a value specified by the user. In this case, on the moving image, the traveling modes e1 and e2 are displayed with a constant distance between them.
- The distance between the traveling modes e1 and e2 can be arbitrarily set by the user, for example, in the range of 0 to 1 m.
- When the distance between the traveling modes e1 and e2 is 0 m, they are displayed overlapping each other.
- In this case, the user who refers to the traveling modes e1 and e2 can compare their differences in detail.
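The user-set lateral gap could be applied as a simple symmetric offset of the two modes from a common reference line. The function below is a hypothetical sketch (the patent does not specify how the gap is applied); the 0 to 1 m bound mirrors the example range in the text.

```python
# Illustrative sketch: symmetric lateral placement of traveling modes e1 and e2.

def lateral_offsets(gap_m):
    """Return the left-right offsets (in meters) for modes e1 and e2.

    A gap of 0 makes the two modes overlap exactly; the 0-1 m bound
    follows the example range given in the text.
    """
    if not 0.0 <= gap_m <= 1.0:
        raise ValueError("gap must be between 0 and 1 m")
    return (-gap_m / 2.0, gap_m / 2.0)
```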
- the difference between the traveling modes e1 and e2 may be visually displayed on the display unit 33 of the terminal 3 by characters, symbols, icons and the like for notifying the user of the difference.
- Further, a guide (e.g., an audio guide) indicating the above difference by sound may be provided.
- The display of the comparative composite image 93X may be partially omitted at the user's request; that is, some of the driver images 91Q and 92Q and the vehicle images 91R and 92R in the composite images 91X and 92X forming the comparative composite image 93X may be omitted.
- For example, when the driver images 91Q and 92Q are omitted and the vehicle images 91R and 92R are displayed, the user can compare the inclination of the vehicle body during cornering of the vehicle 1.
- Conversely, when the driver images 91Q and 92Q are displayed and the vehicle images 91R and 92R are omitted, the user can compare the drivers' postures during cornering of the vehicle 1.
- the posture of the vehicle body and / or the driver may change depending on the vehicle speed of the vehicle 1 and the curvature of the cornering (turning radius). Therefore, the composite image 92X that can be associated with the composite image 91X may be limited based on predetermined conditions, for example, based on the vehicle speed, the place of travel, and the like.
- the embodiment in which the user utilizes the comparative composite image 93X for practicing the driving operation is exemplified, but the use thereof is not limited to the embodiment exemplified here.
- the content of the embodiment can be applied to generate a virtual motorcycle race moving image by generating CG (computer graphics) showing an embodiment in which a plurality of vehicles 1 are traveling substantially simultaneously.
- the vehicle position information 91D and 92D may allow a predetermined amount of positional deviation in the front-rear direction of the vehicle body. This permissible range can be arbitrarily set by the user, for example, 0 to 5 [m (meters)].
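One way to realize the association under such a tolerance is to match each frame of one run to the nearest frame of the other run along the travel path. The patent does not specify a matching algorithm, so the sketch below is an assumption: positions are taken as distances along the path in the vehicle-body front-rear direction, and the tolerance corresponds to the permissible deviation of 0 to 5 m mentioned above.

```python
# Illustrative sketch: pairing frames of composites 91X and 92X by the
# along-path positions in the vehicle position information 91D/92D.

import bisect

def associate_frames(run_a, run_b, tolerance_m=0.0):
    """run_a, run_b: lists of (position_m, frame_id) sorted by position.

    Returns (frame_a, frame_b) pairs whose along-path positions differ
    by at most tolerance_m.
    """
    positions_b = [p for p, _ in run_b]
    pairs = []
    for pos_a, frame_a in run_a:
        i = bisect.bisect_left(positions_b, pos_a)
        # examine the nearest neighbours on both sides of the insertion point
        for j in (i - 1, i):
            if 0 <= j < len(run_b) and abs(run_b[j][0] - pos_a) <= tolerance_m:
                pairs.append((frame_a, run_b[j][1]))
                break
    return pairs
```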
- The number of composite images 90X used to generate the comparative composite image 93X may be three or more, and the comparative composite image 93X may also be referred to by various other expressions, such as a composite image for viewing, a composite image for advertisement, or simply a composite image.
- the driver images 91Q and 92Q may not reflect the posture of the driver, and the driver posture information 91B and 92B may be simply expressed as the driver information.
- In the image display system SY described above (see FIG. 1), the functions of the image processing device 2 are realized in a place different from the vehicle 1 (for example, a server), and the display of the composite image 90X and the change of the virtual viewpoint are performed by the terminal 3; however, the configuration is not limited to this aspect.
- FIG. 6A shows a configuration example of the image display system SYa.
- the image processing device 2 is mounted on the vehicle 1.
- The transmission of the composite image 90X from the vehicle 1 to the terminal 3 may be performed via the network N, or by a known communication means (for example, Bluetooth (registered trademark)).
- FIG. 6B shows a configuration example of the image display system SYb.
- The image processing device 2 may be provided in the terminal 3. That is, the terminal 3 may receive the information 90A to 90D from the vehicle 1 via the network N or the like, generate the composite image 90X based on the information 90A to 90D, and display it on the display unit 33.
- As another embodiment, the terminal 3 may be an in-vehicle monitor (for example, a car navigation system). In this case, the driver can visually recognize the surroundings from a desired virtual viewpoint while driving the vehicle 1.
- In the above description, each element is denoted by a name related to its functional aspect for ease of understanding; however, each element is not limited to one having the contents described in the embodiment as its main function, and may instead provide them auxiliarily.
- The first aspect relates to an image processing device (for example, 2). The image processing device comprises: an information acquisition means (for example, S1010, S1020, S1050) for acquiring driver information (for example, 90B, 91B, 92B) indicating the driver of a vehicle and vehicle state information (for example, 90C, 91C, 92C) indicating the state of the vehicle; a first image generation means (for example, S1040) that, for a certain vehicle and its driver, superimposes a first driver image (for example, 91Q) based on the driver information and a first vehicle image (for example, 91R) based on the vehicle state information to generate a first composite image (for example, 91X); a second image generation means (for example, S1040) that, for a vehicle that traveled independently of the certain vehicle and its driver, superimposes a second driver image (for example, 92Q) based on the driver information and a second vehicle image (for example, 92R) based on the vehicle state information to generate a second composite image (for example, 92X); and an association means (for example, S1050) that associates the first composite image with the second composite image based on predetermined information.
- With this configuration, based on the first and second composite images, the traveling mode of a certain first vehicle and the traveling mode of a second vehicle that traveled independently of the first vehicle can be compared on a single screen (typically in a moving image).
- The first and second vehicles need only have traveled independently of each other; they may be the same vehicle traveling in different periods, or different vehicles traveling substantially at the same time.
- In the second aspect, the vehicle state information includes information indicating the vehicle speed, the steering angle, the posture of the vehicle body, and/or the state of the lamp body. As a result, the first and second vehicle images can show the vehicle speed, the steering angle, the posture of the vehicle body, and/or the state of the lamp body.
- In the third aspect, the information acquisition means further acquires vehicle position information (for example, 90D, 91D, 92D) indicating the position of the vehicle on the travel path, and the predetermined information is the vehicle position information about the certain vehicle and the vehicle position information about a vehicle that traveled independently of the certain vehicle. This makes it possible to appropriately associate the first and second composite images.
- In the fourth aspect, the information acquisition means acquires the vehicle position information based on GPS (Global Positioning System). This makes it possible to display a certain vehicle and its driver, and a vehicle and its driver that traveled independently of it, side by side (as if running in parallel) on a single screen.
- In the fifth aspect, the information acquisition means further acquires vehicle peripheral information (for example, 90A, 91A, 92A) indicating the state around the vehicle, the image processing device further comprises a third image generation means (for example, S1000) for generating a vehicle peripheral image (for example, 90P, 91P, 92P) based on the vehicle peripheral information, and the predetermined information is the vehicle peripheral image. This makes it possible to display a certain vehicle and its driver, and a vehicle and its driver that traveled independently of the certain vehicle, with the vehicle peripheral image as a background image.
- In the sixth aspect, the third image generation means can generate two or more vehicle peripheral images, including the vehicle peripheral image based on the vehicle peripheral information about the certain vehicle and the vehicle peripheral image based on the vehicle peripheral information about a vehicle that traveled independently of the certain vehicle, and the association means can switch the associated vehicle peripheral image from one of the two or more vehicle peripheral images to another. As a result, any one of the two or more vehicle peripheral images can be used as the background image.
- In the seventh aspect, the third image generation means processes the vehicle peripheral image in a spherical coordinate system (see FIGS. 4A and 4D). This makes it possible to process the vehicle peripheral image relatively easily.
- In the eighth aspect, the first image generation means processes the first vehicle image in a three-dimensional coordinate system, and the second image generation means processes the second vehicle image in a three-dimensional coordinate system. This makes it possible to process the first and second vehicle images relatively easily.
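A common step when combining a surround image handled in spherical coordinates with vehicle images handled in a three-dimensional Cartesian system is converting 3-D points into spherical coordinates. The conversion below is standard mathematics and only an illustration; the patent does not disclose its coordinate conventions, so the polar-from-+z convention here is an assumption.

```python
# Illustrative sketch: Cartesian (x, y, z) to spherical (r, theta, phi).

import math

def to_spherical(x, y, z):
    """Return (radius, polar angle from +z, azimuth) for a 3-D point."""
    r = math.sqrt(x * x + y * y + z * z)
    theta = math.acos(z / r)   # polar angle, 0 at the +z axis
    phi = math.atan2(y, x)     # azimuth in the x-y plane
    return r, theta, phi
```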
- In the ninth aspect, the driver information is driver posture information (for example, 90B, 91B, 92B) indicating the posture of the driver of the vehicle. This allows the user to compare, for example, the drivers' postures during cornering based on the first and second composite images.
- The tenth aspect relates to an image display device (for example, 3). The image display device comprises the above-described image processing device (for example, 2) and a display (for example, 33) that uses the first composite image and the second composite image to display a certain vehicle and its driver, and a vehicle and its driver that traveled independently of the certain vehicle, side by side or superimposed on a single screen. This makes it possible to refer to the traveling mode of a certain first vehicle and the traveling mode of a second vehicle that traveled independently of the first vehicle while comparing them.
- In the eleventh aspect, the image processing device causes the display to display at least one of the first composite image and the second composite image so as to emphasize a portion where a difference exceeding a reference exists between them. This makes it easier for the user to visually recognize the difference between the first composite image and the second composite image.
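The emphasis of the eleventh aspect could, for instance, be driven by a per-pixel comparison against a reference threshold. The sketch below is hypothetical (the patent does not say how the difference is measured) and uses a simplified grayscale pixel-dictionary representation.

```python
# Illustrative sketch: find pixels where the first and second composite images
# differ by more than a reference value, for highlighting on the display.
# Images are modeled as {(x, y): grayscale_value} dictionaries.

def emphasized_pixels(img1, img2, reference):
    """Return the set of pixel positions whose difference exceeds `reference`."""
    return {pos for pos in img1.keys() & img2.keys()
            if abs(img1[pos] - img2[pos]) > reference}
```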
- The twelfth aspect relates to an image display device (for example, 3) capable of communicating with the above-described image processing device (for example, 2), and comprises a display (for example, 33) that displays the first composite image and the second composite image side by side or superimposed on a single screen. This makes it possible to refer to the traveling mode of a certain first vehicle and the traveling mode of a second vehicle that traveled independently of the first vehicle while comparing them.
- The thirteenth aspect further comprises an operation unit (for example, 32) that receives an operation input for changing the display content of the display to a viewpoint from an arbitrary position. This allows the user to visually recognize the traveling modes of the first and second vehicles from an arbitrary viewpoint.
- In the embodiment, the display and the operation unit are realized by a touch-panel display in which they are integrated, but as another embodiment they may be configured separately.
Landscapes
- Engineering & Computer Science (AREA)
- Theoretical Computer Science (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Business, Economics & Management (AREA)
- Educational Administration (AREA)
- Educational Technology (AREA)
- Aviation & Aerospace Engineering (AREA)
- Computer Graphics (AREA)
- Computer Hardware Design (AREA)
- General Engineering & Computer Science (AREA)
- Software Systems (AREA)
- Closed-Circuit Television Systems (AREA)
- Image Processing (AREA)
- Controls And Circuits For Display Device (AREA)
- Ultra Sonic Daignosis Equipment (AREA)
Abstract
Description
FIG. 1 is a schematic diagram showing a configuration example of an image display system SY according to the embodiment. The image display system SY includes a vehicle 1, an image processing device 2, and a terminal 3; in this embodiment, these can communicate with one another via a network N.
FIG. 3 is a flowchart showing an example of an image processing method for generating the composite image 90X. The contents of this flowchart are executed mainly by the calculation unit 22; in outline, the composite image 90X is generated based on the above-described information 90A to 90D. This flowchart is assumed to be executed after the vehicle 1 has been used (when not traveling), but it may also be executed while the vehicle 1 is in use (while traveling).
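The flow just described, generating a composite image 90X from the information 90A to 90D, can be summarized as the sketch below. This is an assumption-laden illustration: the rendering callables are placeholders, and the step comments map the reference numbers S1000 to S1050 quoted elsewhere in the document onto the sketch rather than reproducing the actual flowchart.

```python
# Illustrative sketch of the FIG. 3 flow; not the patent's implementation.

def generate_composite_90x(info_90a, info_90b, info_90c, info_90d,
                           render_peripheral, render_driver, render_vehicle):
    """Generate a composite image 90X from the information 90A to 90D.

    The render_* arguments are placeholder callables standing in for the
    unspecified image-generation steps.
    """
    peripheral_90p = render_peripheral(info_90a)  # cf. S1000: vehicle peripheral image 90P
    driver_90q = render_driver(info_90b)          # driver image 90Q from posture info 90B
    vehicle_90r = render_vehicle(info_90c)        # vehicle image 90R from state info 90C
    # cf. S1040: superimpose the layers; the position info 90D is kept with the
    # result so that composites can later be associated (cf. S1050).
    return {"layers": [peripheral_90p, vehicle_90r, driver_90q],
            "position": info_90d}
```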
FIG. 5A shows an example of the composite image 90X in the case of a touch-panel display in which the operation unit 32 and the display unit 33 are integrally provided, and FIG. 5B shows another example of the composite image 90X (an example from a virtual viewpoint different from that of FIG. 5A). As part of the operation unit 32, the display unit 33 displays icons 8a and 8b for changing the virtual viewpoint, an icon 8c for zooming in, and an icon 8d for zooming out. By performing predetermined operation inputs (for example, tap, swipe, or flick operations) on these icons 8a and so on, the user can visually recognize the vehicle 1 and its surroundings from a desired virtual viewpoint.
FIG. 7 is a schematic diagram showing how the composite images 90X obtained as described above are managed. The composite images 90X are managed by identifiers distinguishable, for example, by the vehicle 1, its driver, and so on, and are stored in a database DB for each identifier. For example, a moving image showing the traveling mode of a certain vehicle 1a driven by a certain driver Ua is stored in a database DBaa as a plurality of composite images 90X. A moving image showing the traveling mode of the vehicle 1a driven by another driver Ub is stored in a database DBab as a plurality of composite images 90X. A moving image showing the traveling mode of another vehicle 1b driven by the driver Ua is stored in a database DBba as a plurality of composite images 90X, and a moving image showing the traveling mode of the vehicle 1b driven by the driver Ub is stored in a database DBbb as a plurality of composite images 90X. The same applies to the other databases DBac, DBbc, and so on.
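The identifier-keyed management of FIG. 7 resembles a mapping from (vehicle, driver) pairs to frame sequences. The class below is a minimal sketch under that assumption; the name CompositeStore and its methods are invented for illustration and do not appear in the document.

```python
# Illustrative sketch: composite-image sequences 90X keyed by identifiers
# such as (vehicle, driver), mirroring the databases DBaa, DBab, ... of FIG. 7.

from collections import defaultdict

class CompositeStore:
    def __init__(self):
        # identifier (vehicle_id, driver_id) -> ordered list of frames
        self._db = defaultdict(list)

    def add(self, vehicle_id, driver_id, frame):
        """Append one composite-image frame to the run for this identifier."""
        self._db[(vehicle_id, driver_id)].append(frame)

    def run(self, vehicle_id, driver_id):
        """Return the stored frame sequence (empty if none recorded)."""
        return self._db[(vehicle_id, driver_id)]
```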
図8に例示されるように、演算部22は、データベースDBに格納されている合成画像90Xのうち、互いに異なる識別子に対応する任意の2つを比較用に合成することができる。例えば、演算部22は、或る識別子に対応する合成画像(区別のため「合成画像91X」とする。)と、他の識別子に対応する合成画像(区別のため「合成画像92X」とする。)と、を重ね合わせて、比較用合成画像93Xを生成することができる。
Claims (13)
- An image processing device characterized by comprising: an information acquisition means for acquiring driver information indicating the driver of a vehicle and vehicle state information indicating the state of the vehicle; a first image generation means for generating, for a certain vehicle and its driver, a first composite image by superimposing a first driver image based on the driver information and a first vehicle image based on the vehicle state information; a second image generation means for generating, for a vehicle that traveled independently of the certain vehicle and its driver, a second composite image by superimposing a second driver image based on the driver information and a second vehicle image based on the vehicle state information; and an association means for associating the first composite image with the second composite image based on predetermined information.
- The image processing device according to claim 1, characterized in that the vehicle state information includes information indicating a vehicle speed, a steering angle, a posture of the vehicle body, and/or a state of a lamp body.
- The image processing device according to claim 1 or 2, characterized in that the information acquisition means further acquires vehicle position information indicating the position of the vehicle on a travel path, and the predetermined information is the vehicle position information about the certain vehicle and the vehicle position information about the vehicle that traveled independently of the certain vehicle.
- The image processing device according to claim 3, characterized in that the information acquisition means acquires the vehicle position information based on GPS (Global Positioning System).
- The image processing device according to any one of claims 1 to 4, characterized in that the information acquisition means further acquires vehicle peripheral information indicating the state around the vehicle, the image processing device further comprises a third image generation means for generating a vehicle peripheral image based on the vehicle peripheral information, and the predetermined information is the vehicle peripheral image.
- The image processing device according to claim 5, characterized in that the third image generation means can generate two or more vehicle peripheral images, including the vehicle peripheral image based on the vehicle peripheral information about the certain vehicle and the vehicle peripheral image based on the vehicle peripheral information about the vehicle that traveled independently of the certain vehicle, and the association means can switch the associated vehicle peripheral image from one of the two or more vehicle peripheral images to another.
- The image processing device according to claim 5 or 6, characterized in that the third image generation means processes the vehicle peripheral image in a spherical coordinate system.
- The image processing device according to claim 7, characterized in that the first image generation means processes the first vehicle image in a three-dimensional coordinate system, and the second image generation means processes the second vehicle image in a three-dimensional coordinate system.
- The image processing device according to any one of claims 1 to 8, characterized in that the driver information is driver posture information indicating the posture of the driver of the vehicle.
- An image display device characterized by comprising: the image processing device according to any one of claims 1 to 9; and a display that uses the first composite image and the second composite image to display the certain vehicle and its driver, and the vehicle and its driver that traveled independently of the certain vehicle, side by side or superimposed on a single screen.
- The image display device according to claim 10, characterized in that the image processing device causes the display to display at least one of the first composite image and the second composite image so as to emphasize a portion where a difference exceeding a reference exists between them.
- An image display device capable of communicating with the image processing device according to any one of claims 1 to 9, characterized by comprising a display that displays the first composite image and the second composite image side by side or superimposed on a single screen.
- The image display device according to any one of claims 10 to 12, characterized by further comprising an operation unit that receives an operation input for changing the display content of the display to a viewpoint from an arbitrary position.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
JP2022553516A JP7377372B2 (ja) | 2020-09-30 | 2021-08-10 | 画像処理装置および画像表示装置 |
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
JP2020-165999 | 2020-09-30 | ||
JP2020165999 | 2020-09-30 |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2022070618A1 true WO2022070618A1 (ja) | 2022-04-07 |
Family
ID=80949896
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/JP2021/029499 WO2022070618A1 (ja) | 2020-09-30 | 2021-08-10 | 画像処理装置および画像表示装置 |
Country Status (3)
Country | Link |
---|---|
JP (1) | JP7377372B2 (ja) |
TW (1) | TWI789030B (ja) |
WO (1) | WO2022070618A1 (ja) |
Cited By (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US12097804B2 (en) * | 2020-03-31 | 2024-09-24 | Honda Motor Co., Ltd. | Image processing device, vehicle, image processing method, and storage medium |
Citations (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP2012143447A (ja) * | 2011-01-13 | 2012-08-02 | Sharp Corp | ネットワークシステム、コントロール方法、コントローラ、およびコントロールプログラム |
US20170103160A1 (en) * | 2015-10-12 | 2017-04-13 | Milsco Manufacturing Company, A Unit Of Jason Incorporated | Customer Comfort Optimization Method, Apparatus, and System |
WO2020100334A1 (ja) * | 2018-11-15 | 2020-05-22 | ヤマハ発動機株式会社 | 鞍乗型車両走行データ処理装置、鞍乗型車両走行データ処理方法および鞍乗型車両走行データ処理プログラム |
Family Cites Families (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
TWM518183U (zh) * | 2015-09-18 | 2016-03-01 | Hua Chuang Automobile Information Technical Ct Co Ltd | 三維行車影像校正裝置 |
US20170300186A1 (en) * | 2016-04-18 | 2017-10-19 | Peter Kuhar | Systems and methods for health management |
2021
- 2021-08-10 WO PCT/JP2021/029499 patent/WO2022070618A1/ja active Application Filing
- 2021-08-10 JP JP2022553516A patent/JP7377372B2/ja active Active
- 2021-09-27 TW TW110135717A patent/TWI789030B/zh active
Patent Citations (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP2012143447A (ja) * | 2011-01-13 | 2012-08-02 | Sharp Corp | ネットワークシステム、コントロール方法、コントローラ、およびコントロールプログラム |
US20170103160A1 (en) * | 2015-10-12 | 2017-04-13 | Milsco Manufacturing Company, A Unit Of Jason Incorporated | Customer Comfort Optimization Method, Apparatus, and System |
WO2020100334A1 (ja) * | 2018-11-15 | 2020-05-22 | ヤマハ発動機株式会社 | 鞍乗型車両走行データ処理装置、鞍乗型車両走行データ処理方法および鞍乗型車両走行データ処理プログラム |
Cited By (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US12097804B2 (en) * | 2020-03-31 | 2024-09-24 | Honda Motor Co., Ltd. | Image processing device, vehicle, image processing method, and storage medium |
Also Published As
Publication number | Publication date |
---|---|
TWI789030B (zh) | 2023-01-01 |
JP7377372B2 (ja) | 2023-11-09 |
TW202215365A (zh) | 2022-04-16 |
JPWO2022070618A1 (ja) | 2022-04-07 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
US11127373B2 (en) | Augmented reality wearable system for vehicle occupants | |
JP5267660B2 (ja) | 画像処理装置、画像処理プログラム、画像処理方法 | |
JP6177872B2 (ja) | 入出力装置、入出力プログラム、および入出力方法 | |
US11610342B2 (en) | Integrated augmented reality system for sharing of augmented reality content between vehicle occupants | |
CN102291541A (zh) | 一种车辆虚拟合成显示系统 | |
EP3333808B1 (en) | Information processing device | |
WO2020125006A1 (zh) | 增强现实显示设备及应用增强现实显示设备的交互方法 | |
US9994157B2 (en) | Periphery monitoring apparatus and periphery monitoring system | |
CN106339980A (zh) | 基于汽车的vr显示装置、方法及汽车 | |
WO2022070618A1 (ja) | 画像処理装置および画像表示装置 | |
JPWO2014128752A1 (ja) | 表示制御装置、表示制御プログラム、および表示制御方法 | |
JP2009192448A (ja) | 情報表示装置及び情報提供システム | |
WO2014128751A1 (ja) | ヘッドマウントディスプレイ装置、ヘッドマウントディスプレイ用プログラム、およびヘッドマウントディスプレイ方法 | |
US20130265331A1 (en) | Virtual Reality Telescopic Observation System of Intelligent Electronic Device and Method Thereof | |
US20230065018A1 (en) | Method for operating data glasses in a motor vehicle and system of a motor vehicle and data glasses | |
CN108351736B (zh) | 可穿戴显示器、图像显示装置和图像显示系统 | |
CN117916706A (zh) | 在行驶期间在机动车中运行智能眼镜的方法、可相应运行的智能眼镜及机动车 | |
KR20200032547A (ko) | 자율주행차량용 ar게임 장치 및 그 방법 | |
JP6250025B2 (ja) | 入出力装置、入出力プログラム、および入出力方法 | |
Rao et al. | AR-IVI—implementation of in-vehicle augmented reality | |
JP6624758B2 (ja) | 画像表示装置および画像表示方法 | |
WO2021199318A1 (ja) | 画像処理装置、車両、画像処理方法およびプログラム | |
CN208207372U (zh) | 增强现实眼镜和系统 | |
JP6007773B2 (ja) | 画像データ変換装置並びにナビゲーションシステムおよびカメラ装置並びに車両 | |
US12039898B2 (en) | Image display system |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
121 | Ep: the epo has been informed by wipo that ep was designated in this application |
Ref document number: 21874921 Country of ref document: EP Kind code of ref document: A1 |
|
WWE | Wipo information: entry into national phase |
Ref document number: 202317014889 Country of ref document: IN |
|
ENP | Entry into the national phase |
Ref document number: 2022553516 Country of ref document: JP Kind code of ref document: A |
|
NENP | Non-entry into the national phase |
Ref country code: DE |
|
122 | Ep: pct application non-entry in european phase |
Ref document number: 21874921 Country of ref document: EP Kind code of ref document: A1 |