WO2024070608A1 - Information processing device, information processing method, and program - Google Patents


Info

Publication number
WO2024070608A1
Authority
WO
WIPO (PCT)
Prior art keywords
vehicle
driving
simulation
data
unit
Application number
PCT/JP2023/032950
Other languages
French (fr)
Japanese (ja)
Inventor
達也 山崎
巨成 高橋
辰志 梨子田
Original Assignee
Sony Group Corporation (ソニーグループ株式会社)
Application filed by Sony Group Corporation (ソニーグループ株式会社)
Publication of WO2024070608A1


Classifications

    • G: PHYSICS
    • G09: EDUCATION; CRYPTOGRAPHY; DISPLAY; ADVERTISING; SEALS
    • G09B: EDUCATIONAL OR DEMONSTRATION APPLIANCES; APPLIANCES FOR TEACHING, OR COMMUNICATING WITH, THE BLIND, DEAF OR MUTE; MODELS; PLANETARIA; GLOBES; MAPS; DIAGRAMS
    • G09B 9/00: Simulators for teaching or training purposes
    • G09B 9/02: Simulators for teaching or training purposes for teaching control of vehicles or other craft
    • G09B 9/04: Simulators for teaching or training purposes for teaching control of vehicles or other craft for teaching control of land vehicles
    • G09B 9/042: Simulators for teaching or training purposes for teaching control of land vehicles providing simulation in a real vehicle
    • G09B 9/05: Simulators for teaching or training purposes for teaching control of land vehicles, the view from a vehicle being simulated
    • G09B 9/052: Simulators for teaching or training purposes for teaching control of land vehicles, characterised by provision for recording or measuring trainee's performance

Definitions

  • the present disclosure relates to an information processing device, an information processing method, and a program, and in particular to an information processing device, an information processing method, and a program that are capable of implementing a driving simulation using a vehicle used in normal driving.
  • Patent Document 1 discloses a racing game device in which a racing game is played using an operating unit that has a steering wheel, accelerator, and brake.
  • This disclosure has been made in light of these circumstances, and makes it possible to realize a driving simulation using a vehicle used in normal driving.
  • the information processing device of the first aspect of the present disclosure includes a communication unit that communicates with other information processing devices, an operation data acquisition unit that acquires driving operation data indicating the amount of operation of driving operations for a plurality of types of operation systems that operate the vehicle, a vehicle operation control unit that supplies control signals to a plurality of types of drive systems that drive the vehicle to control the operation of the vehicle, a driving mode control unit that supplies control data to the vehicle operation control unit for controlling the driving of the vehicle according to the driving operation data when in a driving mode in which the vehicle is used to drive, a simulation mode control unit that supplies control data to the vehicle operation control unit for controlling the behavior of the vehicle so as to reproduce the behavior of the vehicle in a virtual space determined according to the driving operation data based on vehicle setting parameters acquired from the other information processing device via the communication unit when in a simulation mode in which a driving simulation is performed using the vehicle, and a stopped state determination unit that determines whether to transition to the simulation mode according to the stopped state of the vehicle when an operation instructing switching from the driving mode to the simulation mode is performed.
  • the information processing method or program of the first aspect of the present disclosure includes communicating with another information processing device, acquiring driving operation data indicating the amount of driving operation for a plurality of types of operating systems that operate a vehicle, supplying control signals to a plurality of types of drive systems that drive the vehicle to control the operation of the vehicle, controlling the operation of the vehicle with control data that controls the running of the vehicle according to the driving operation data when in a driving mode in which the vehicle is used to run, controlling the operation of the vehicle with control data that controls the behavior of the vehicle so as to reproduce the behavior of the vehicle in a virtual space determined according to the driving operation data based on vehicle setting parameters acquired by communication from the other information processing device when in a simulation mode in which a driving simulation is performed using the vehicle, and determining whether to transition to the simulation mode according to the stopped state of the vehicle when an operation instructing a switch from the driving mode to the simulation mode is performed.
  • communication is performed with another information processing device, driving operation data indicating the amount of driving operation for multiple types of operating systems that operate the vehicle is obtained, control signals are supplied to multiple types of drive systems that drive the vehicle to control the operation of the vehicle, and in a driving mode in which the vehicle is used to drive, the operation of the vehicle is controlled by control data that controls the driving of the vehicle in accordance with the driving operation data, and in a simulation mode in which a driving simulation is performed using the vehicle, the operation of the vehicle is controlled by control data that controls the behavior of the vehicle so as to reproduce the behavior of the vehicle in a virtual space determined in accordance with the driving operation data based on vehicle setting parameters obtained by communication from the other information processing device, and when an operation is performed to instruct switching from the driving mode to the simulation mode, it is determined whether or not to transition to the simulation mode in accordance with the stopped state of the vehicle.
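The stopped-state gating described in this aspect can be sketched roughly as follows. The class and parameter names (DrivingModeController, speed_kmh, parking_brake_on) and the concrete stop criteria are illustrative assumptions; the disclosure only specifies that the transition to the simulation mode depends on the stopped state of the vehicle.

```python
# Rough sketch of the stopped-state determination: a request to switch from
# the driving mode to the simulation mode is honored only when the vehicle
# is confirmed to be stopped. All names and criteria here are hypothetical.

DRIVING, SIMULATION = "driving", "simulation"

class DrivingModeController:
    def __init__(self):
        self.mode = DRIVING

    def request_simulation_mode(self, speed_kmh, parking_brake_on):
        """Transition to simulation mode only if the stopped state is confirmed."""
        if self.mode == DRIVING and speed_kmh == 0.0 and parking_brake_on:
            self.mode = SIMULATION
        return self.mode

ctrl = DrivingModeController()
print(ctrl.request_simulation_mode(30.0, False))  # "driving": vehicle still moving
print(ctrl.request_simulation_mode(0.0, True))    # "simulation": stopped state confirmed
```

A real controller would presumably consult further sources (shift position, wheel-speed sensors, charging-cable connection) before allowing the switch.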
  • the information processing device includes a communication unit that communicates with other information processing devices mounted on the vehicle, a virtual space generation unit that performs a driving simulation in which the vehicle virtually travels in a virtual space according to driving operation data acquired from the vehicle via the communication unit, and generates a simulation image by capturing images of the virtual space in all directions centered on the vehicle with a virtual camera arranged in the virtual space so as to correspond to a predetermined position of the vehicle, and an image conversion processing unit that performs image conversion processing on the simulation image based on vehicle position and attitude data indicating the position and attitude of the vehicle and occupant position and attitude data including at least the viewpoint positions of the occupants of the vehicle.
  • the information processing method or program of the second aspect of the present disclosure includes communicating with another information processing device mounted on the vehicle, performing a driving simulation in which the vehicle virtually travels in a virtual space in accordance with driving operation data acquired from the vehicle through the communication, capturing images of the virtual space in all directions from the vehicle with a virtual camera disposed in the virtual space so as to correspond to a predetermined position of the vehicle, and generating a simulation image, and performing an image conversion process on the simulation image based on vehicle position and attitude data indicating the position and attitude of the vehicle and occupant position and attitude data including at least the viewpoint positions of the occupants of the vehicle.
  • communication is performed with another information processing device mounted on the vehicle, and a driving simulation is performed in which the vehicle is virtually driven in a virtual space according to driving operation data acquired from the vehicle through communication, and a simulation image is generated by capturing images of the virtual space in all directions centered on the vehicle with a virtual camera placed in the virtual space corresponding to a predetermined position of the vehicle, and an image conversion process is performed on the simulation image based on vehicle position and attitude data indicating the position and attitude of the vehicle and occupant position and attitude data including at least the viewpoint positions of the vehicle occupants.
  • FIG. 1 is a block diagram showing a configuration example of an embodiment of a driving simulation system to which the present technology is applied.
  • FIG. 2 is a diagram illustrating a configuration example of a vehicle that employs a by-wire system.
  • FIG. 3 is a diagram illustrating an example of the configuration of a garage device.
  • FIG. 4 is a diagram illustrating an example of the internal configuration of a vehicle.
  • FIG. 5 is a diagram illustrating an example of a method for identifying the viewpoint of an occupant.
  • FIG. 6 is a block diagram showing a configuration example of a garage system control device.
  • FIG. 7 is a diagram illustrating an example in which a virtual plane viewed from a viewpoint position is displayed on a garage display.
  • FIG. 8 is a block diagram showing an example of the configuration of an operation system and a drive system of the in-vehicle control device.
  • FIG. 9 is a block diagram showing a configuration example of a video system and an audio system of the in-vehicle control device.
  • FIG. 10 is a diagram showing an example of a data structure of vehicle parameters and vehicle setting parameters.
  • FIG. 11 is a diagram showing the behavior of a vehicle in a spin state.
  • FIG. 12 is a diagram showing an example of vehicle parameters and vehicle setting parameters.
  • FIG. 13 is a diagram illustrating suspension setting data.
  • FIG. 14 is a diagram illustrating seat setting data.
  • FIG. 15 is a flowchart illustrating an initial setup process on the garage system side.
  • FIG. 16 is a flowchart illustrating an initial setup process on the vehicle side.
  • FIG. 17 is a flowchart illustrating a driving simulation operation process on the garage system side.
  • FIG. 18 is a flowchart illustrating a driving simulation operation process on the vehicle side.
  • FIG. 19 is a flowchart illustrating a vehicle position and attitude recognition process.
  • FIG. 20 is a flowchart illustrating an occupant position and attitude recognition process.
  • FIG. 21 is a flowchart illustrating a reflection control process.
  • FIG. 22 is a block diagram showing an example of the configuration of a training system.
  • FIG. 23 is a flowchart illustrating unsupervised training.
  • FIG. 24 is a flowchart illustrating supervised training.
  • FIG. 25 is a flowchart illustrating learning.
  • FIG. 26 is a flowchart illustrating training conducted one-on-one between a teacher and a student, the teacher being a human.
  • FIG. 27 is a flowchart illustrating training in which the teacher is a human and the teacher and students are one-to-many.
  • FIG. 28 is a flowchart illustrating training in which the teacher is an AI and the teacher and students are one-to-one or one-to-many.
  • FIG. 29 is a flowchart illustrating personal optimization of a vehicle.
  • FIG. 30 is a block diagram showing an example of the configuration of an embodiment of a computer to which the present technology is applied.
  • FIG. 1 is a block diagram showing an example of the configuration of an embodiment of a driving simulation system to which the present technology is applied.
  • the driving simulation system 11 shown in FIG. 1 is composed of a garage system 12 and a vehicle 13, and can realize a driving simulation using the vehicle 13 that the user uses during normal driving.
  • the garage system 12 is composed of a garage device 14, a charging device 15, a garage system control device 16, and a home server 17.
  • the garage device 14 has the function of displaying images used in a driving simulation (hereinafter referred to as simulation images) surrounding the vehicle 13, in addition to the function of a general garage for storing the vehicle 13, as described below with reference to FIG. 3.
  • note that the garage device 14 is described here as being installed inside a garage, but it may also be installed in a parking lot, a parking space, or the like.
  • the charging device 15 charges the vehicle 13 by connecting a charging and communication cable to the vehicle 13, and enables high-speed IP (Internet Protocol) communication between the garage system 12 and the vehicle 13.
  • the garage system control device 16 controls the garage system 12 when performing a driving simulation, using data acquired via an external network and data stored in the home server 17.
  • the detailed configuration of the garage system control device 16 will be described later with reference to FIG. 6.
  • the home server 17 obtains data necessary for the garage system control device 16 to control the garage system 12 via an external network and provides it to the garage system control device 16.
  • the vehicle 13 employs a by-wire system that is configured to detect driving operations on the operating system using sensors and electrically transmit signals corresponding to the driving operations to the drive system via signal lines, rather than physically transmitting operations on the operating system to the drive system.
  • the vehicle 13 is also configured with an in-vehicle control device 18 and a power source 19.
  • the in-vehicle control device 18 communicates with the garage system control device 16 and controls the vehicle 13 when performing a driving simulation.
  • the detailed configuration of the in-vehicle control device 18 will be described later with reference to Figures 8 and 9.
  • the power source 19 is an electric motor that uses electricity as an energy source.
  • the power source 19 may instead be an internal combustion engine that uses gasoline as fuel, or, if the vehicle 13 is a hybrid vehicle, a combination of an internal combustion engine and an electric motor.
  • the driving simulation system 11 is configured in this manner, and when the vehicle 13 is stored in the garage device 14, the driving mode in which normal driving is performed using the vehicle 13 can be switched to a simulation mode in which a driving simulation is performed using the vehicle 13.
  • the vehicle 13 is configured with a steering wheel 21, accelerator pedal 22, brake pedal 23, axle steering unit 24, throttle motor 25, brake 26, steering wheel rotation sensor 27, accelerator position sensor 28, brake position sensor 29, axle drive mechanism 30, throttle drive mechanism 31, and brake drive mechanism 32.
  • the vehicle 13 is configured such that the operating system, such as the steering wheel 21, accelerator pedal 22, and brake pedal 23, and the drive system, such as the axle steering unit 24, throttle motor 25, and brake 26, are electrically connected to the in-vehicle control device 18 via signal lines, and are physically separated from each other.
  • the steering wheel rotation sensor 27 detects the rotation of the steering wheel 21 in response to the rotation operation by the driver of the vehicle 13, and outputs a steering wheel rotation signal indicating the amount of rotation of the steering wheel 21 to the in-vehicle control device 18.
  • the accelerator position sensor 28 detects the position of the accelerator pedal 22 according to the depression operation by the driver of the vehicle 13, and outputs an accelerator position signal indicating the amount of depression of the accelerator pedal 22 to the in-vehicle control device 18.
  • the brake position sensor 29 detects the position of the brake pedal 23 according to the depression operation by the driver of the vehicle 13, and outputs a brake position signal indicating the amount of depression of the brake pedal 23 to the in-vehicle control device 18.
  • the axle drive mechanism 30 drives the axle steering unit 24 according to an axle control signal supplied from the in-vehicle control device 18 based on the steering wheel rotation signal output from the steering wheel rotation sensor 27. This allows the axle steering unit 24 to steer the tires of the vehicle 13 to an angle that corresponds to the rotation operation of the steering wheel 21 by the driver of the vehicle 13.
  • the throttle drive mechanism 31 drives the throttle motor 25 in accordance with a throttle control signal supplied from the in-vehicle control device 18 based on the accelerator position signal output from the accelerator position sensor 28. This allows the throttle motor 25 to drive the vehicle 13 so as to accelerate in response to the driver of the vehicle 13 depressing the accelerator pedal 22.
  • the brake drive mechanism 32 drives the brake 26 in accordance with a brake control signal supplied from the in-vehicle control device 18 based on the brake position signal output from the brake position sensor 29. This allows the brake 26 to drive the vehicle 13 so as to decelerate in response to the driver of the vehicle 13 depressing the brake pedal 23.
  • when in the driving mode, the vehicle 13 configured in this manner can travel by operating the drive system, such as the axle steering unit 24, throttle motor 25, and brake 26, under the control of the in-vehicle control device 18 in response to the operation of the steering wheel 21, accelerator pedal 22, brake pedal 23, and other operating systems by the driver of the vehicle 13.
  • when in the simulation mode, the vehicle 13 supplies driving operation data indicating the operation by the driver of the vehicle 13 of the operating systems such as the steering wheel 21, accelerator pedal 22, and brake pedal 23 to the garage system control device 16 to execute a driving simulation.
  • at this time, the drive system, such as the axle steering unit 24, throttle motor 25, and brake 26, does not operate and remains stopped.
  • in this way, the operating systems and drive systems of the vehicle 13 are connected by the by-wire system, and the vehicle is configured such that the drive system does not operate even if the operating system is operated in the simulation mode. It is sufficient that at least one corresponding pair of an operating system and a drive system be connected by the by-wire system. For example, if the steering wheel 21 and the axle steering unit 24 are connected by the by-wire system, tire wear caused by steering the tires while the vehicle 13 is stopped can be avoided.
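As a rough sketch of this by-wire routing (the function and field names here are hypothetical, not from the disclosure): in the driving mode, operation amounts become drive-system control signals, while in the simulation mode the same data is forwarded to the simulator and the drive system is left stopped.

```python
# Illustrative sketch of by-wire routing. In driving mode, sensed operation
# amounts are turned into drive-system control signals; in simulation mode,
# the same readings go to the simulator and no drive signals are emitted,
# so e.g. the tires never steer while the vehicle is stopped.

def route_driving_operations(mode, accel_position, brake_position, steering_angle):
    operation_data = {
        "accel": accel_position,
        "brake": brake_position,
        "steering": steering_angle,
    }
    if mode == "driving":
        # Driving mode: operation data drives the throttle, brake, and axle.
        return {"drive_signals": operation_data, "to_simulator": None}
    # Simulation mode: drive system stays stopped; data feeds the simulation.
    return {"drive_signals": None, "to_simulator": operation_data}

print(route_driving_operations("simulation", 0.4, 0.0, -0.2))
```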
  • the vehicle 13 is configured with a steering wheel reaction force drive mechanism 33, an accelerator reaction force drive mechanism 34, a brake reaction force drive mechanism 35, a suspension drive mechanism 36, and a seat drive mechanism 37.
  • the steering wheel reaction force drive mechanism 33 generates a steering wheel reaction force in the steering wheel 21 that resists the arm force of the driver who is rotating the steering wheel 21 in accordance with a steering wheel reaction force control signal supplied from the in-vehicle control device 18.
  • the accelerator reaction force drive mechanism 34 generates an accelerator reaction force in the accelerator pedal 22 that resists the leg force of the driver who is depressing the accelerator pedal 22 in accordance with an accelerator reaction force control signal supplied from the in-vehicle control device 18.
  • the brake reaction force drive mechanism 35 generates a brake reaction force in the brake pedal 23 that resists the leg force of the driver who is depressing the brake pedal 23 in accordance with a brake reaction force control signal supplied from the in-vehicle control device 18.
  • the suspension drive mechanism 36 drives a hydro-pneumatic suspension (not shown) that can adjust the height of the vehicle 13 by adjusting air pressure or hydraulic pressure, and actively controls the height of each of the four wheels of the vehicle 13 independently.
  • the seat drive mechanism 37 drives actuators (not shown) built into the backrest and seat surface of each seat of the vehicle 13 to perform various adjustment operations on the seat surface and backrest at a controlled speed over time.
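Since the suspension drive mechanism 36 controls the height of each of the four wheels independently, one plausible way to reproduce a simulated body attitude is to convert roll and pitch angles into per-wheel height offsets. The following is a hedged sketch under stated assumptions; the geometry values, sign conventions, and small-angle model are illustrative and not taken from the disclosure.

```python
import math

# Hedged sketch: converting a simulated body roll/pitch into height offsets
# for four independently height-adjustable wheels. Track, wheelbase, and the
# sign conventions are illustrative assumptions.

def wheel_height_offsets(roll_rad, pitch_rad, track=1.6, wheelbase=2.7):
    """Height offsets (m) for FL, FR, RL, RR; positive pitch lifts the front."""
    half_t, half_w = track / 2.0, wheelbase / 2.0
    wheels = [("FL", half_w, half_t), ("FR", half_w, -half_t),
              ("RL", -half_w, half_t), ("RR", -half_w, -half_t)]
    offsets = {}
    for name, x, y in wheels:
        # Lift at each contact point = longitudinal lever * pitch + lateral lever * roll.
        offsets[name] = x * math.sin(pitch_rad) + y * math.sin(roll_rad)
    return offsets

print(wheel_height_offsets(0.0, 0.05))  # pure pitch: front pair up, rear pair down
```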
  • A of FIG. 3 shows a schematic configuration example of the garage device 14 viewed from the side with the vehicle 13 stored, and B of FIG. 3 shows a schematic configuration example of the garage device 14 viewed from the top with the vehicle 13 stored. In this way, a driving simulation can be performed with the vehicle 13 stored in the garage device 14.
  • the garage device 14 is configured with a garage display 41-1 for the ceiling, a garage display 41-2 for the front, a garage display 41-3 for the right side, a garage display 41-4 for the left side, a garage display 41-5 for the rear, a garage display 41-6 for the floor, a garage camera 42, and garage sensors 43-1 and 43-2.
  • the garage display 41-1 for the ceiling surface is installed so as to cover the entire ceiling wall surface of the garage device 14, and displays a simulation image for the ceiling surface supplied from the garage system control device 16 when a driving simulation is performed.
  • the front garage display 41-2 is installed so as to cover the entire front wall of the garage device 14, and displays the front simulation image supplied from the garage system control device 16 when the driving simulation is being performed.
  • the garage display 41-3 for the right side is installed so as to cover the entire right side wall of the garage device 14, and displays the simulation image for the right side supplied from the garage system control device 16 when the driving simulation is performed.
  • the garage display 41-4 for the left side is installed so as to cover the entire left side wall of the garage device 14, and displays the simulation image for the left side supplied from the garage system control device 16 when the driving simulation is being performed.
  • the rear garage display 41-5 is installed to cover the entire rear wall of the garage device 14, and displays the rear simulation image supplied from the garage system control device 16 when the driving simulation is being performed.
  • the garage display 41-6 for the floor surface is installed so as to cover the entire floor surface of the garage device 14, and displays a simulation image for the floor supplied from the garage system control device 16 when a driving simulation is being performed.
  • note that, when there is no need to distinguish between the garage displays 41-1 to 41-6, they will be referred to simply as the garage display 41. Each garage display 41 uses a display unit that employs a light-emitting method such as LED (Light Emitting Diode) or OLED (Organic Light Emitting Diode), and multiple display units can be tiled for use.
  • the garage camera 42 photographs the vehicle 13 stored in the garage device 14 to recognize the position and posture of the vehicle 13, and supplies the image data to the garage system control device 16.
  • the garage sensors 43-1 and 43-2 supply sensor data obtained by detecting the positions of multiple markers (e.g., retroreflective materials) attached to the body of the vehicle 13 to the garage system control device 16. Note that, when there is no need to distinguish between the garage sensors 43-1 and 43-2, they will be referred to simply as garage sensor 43 below. Also, in the example shown in FIG. 3, two garage sensors 43-1 and 43-2 are shown, but the position and attitude of the vehicle 13 may be detected by one garage sensor 43 or three or more garage sensors 43. Alternatively, the relative relationship between the position and attitude of the vehicle 13 and the garage device 14 may be detected by an external sensor (not shown) provided on the vehicle 13.
  • an external sensor not shown
  • the driving simulation system 11 may recognize the position and attitude of the vehicle 13 by using a positioning sensor (e.g., a piezoelectric element) embedded in the floor of the garage device 14 to detect the four points where the tires of the vehicle 13 are in contact.
  • a positioning sensor e.g., a piezoelectric element
  • the driving simulation system 11 may recognize the position and attitude of the vehicle 13 by acquiring shape data such as a point cloud or mesh of the vehicle 13 using a measuring device that combines an RGB camera, a depth sensor, etc.
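Whatever sensing method is used, the marker or shape data must be turned into a position and attitude of the vehicle 13. As one hedged illustration (the disclosure does not specify an algorithm), a planar pose can be recovered from corresponding reference and observed marker positions with a closed-form 2D rigid registration:

```python
import math

# Hedged sketch: estimating the vehicle's planar position and heading from
# detected marker positions. The 2D Kabsch-style closed form below is an
# illustrative assumption, not the patent's method.

def estimate_pose_2d(ref_pts, obs_pts):
    """Return (theta, tx, ty) of the rigid transform mapping ref_pts onto obs_pts."""
    n = len(ref_pts)
    rcx = sum(p[0] for p in ref_pts) / n
    rcy = sum(p[1] for p in ref_pts) / n
    ocx = sum(p[0] for p in obs_pts) / n
    ocy = sum(p[1] for p in obs_pts) / n
    # Accumulate cross and dot products of the centered point sets;
    # atan2 of their sums gives the best-fit rotation angle.
    s_cross = s_dot = 0.0
    for (rx, ry), (ox, oy) in zip(ref_pts, obs_pts):
        rx, ry, ox, oy = rx - rcx, ry - rcy, ox - ocx, oy - ocy
        s_cross += rx * oy - ry * ox
        s_dot += rx * ox + ry * oy
    theta = math.atan2(s_cross, s_dot)
    # Translation maps the rotated reference centroid onto the observed one.
    tx = ocx - (rcx * math.cos(theta) - rcy * math.sin(theta))
    ty = ocy - (rcx * math.sin(theta) + rcy * math.cos(theta))
    return theta, tx, ty
```

With three or more non-collinear markers on the body, this yields the heading and offset of the vehicle relative to the garage floor plane; a full 3D pose would use the same idea with 3x3 rotations.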
  • the garage device 14 configured in this manner can display a simulation image seen from the viewpoint position P of a passenger inside the vehicle 13 on a garage display 41 arranged to surround the periphery of the vehicle 13 when a driving simulation is being performed.
  • FIG. 4A shows an example of the internal configuration of vehicle 13 as viewed forward from the driver's seat and passenger seat
  • FIG. 4B shows an example of the internal configuration of vehicle 13 as viewed forward from the rear seat.
  • the interior of the vehicle 13 is provided with a navigation display 51, side mirror displays 52L and 52R, an instrument panel display 53, a rearview mirror display 54, rear seat displays 55L and 55R, an in-vehicle camera 56, in-vehicle sensors 57-1 and 57-2, and speakers 58-1 and 58-2.
  • the navigation display 51 displays a navigation image based on the video stream for navigation supplied from the in-vehicle control device 18, for example, by placing a vehicle mark indicating the driving position of the vehicle 13 on a map image and displaying a navigation image in which the vehicle mark moves on the map image.
  • in the driving mode, the navigation display 51 displays an actual navigation image according to the position information of the vehicle 13, and in the simulation mode, it displays a virtual navigation image according to the driving position in a virtual space based on a driving simulation.
  • Side mirror displays 52L and 52R display, for example, side mirror images taken toward the rear of vehicle 13 by side mirror cameras installed on the left and right sides of vehicle 13, based on the side mirror image streams supplied from in-vehicle control device 18.
  • in the driving mode, side mirror displays 52L and 52R display actual side mirror images taken by side mirror cameras installed on vehicle 13, and in the simulation mode, display virtual side mirror images taken by side mirror cameras placed in a virtual space based on a driving simulation.
  • the instrument panel display 53 displays an instrument panel image showing various instrument data such as the vehicle 13's driving speed and engine RPM, based on the video stream for the instrument panel supplied from the in-vehicle control device 18.
  • in the driving mode, the instrument panel display 53 displays an actual instrument panel image based on the driving of the vehicle 13, and in the simulation mode, displays a virtual instrument panel image based on a driving simulation.
  • the rearview mirror display 54 displays, for example, a rearview mirror image taken toward the rear of the vehicle 13 by a rearview mirror camera installed on the rear of the vehicle 13, based on the rearview mirror image stream supplied from the in-vehicle control device 18.
  • in the driving mode, the rearview mirror display 54 displays an actual rearview mirror image taken by a rearview mirror camera installed on the vehicle 13, and in the simulation mode, it displays a virtual rearview mirror image taken by a rearview mirror camera placed in a virtual space based on the driving simulation.
  • the rear seat displays 55L and 55R display rear seat viewpoint images, which are images captured by an external camera installed outside the vehicle 13 and are assumed to be seen through the rear seat displays 55L and 55R from the viewpoint of a passenger in the rear seat, based on the rear seat video stream supplied from the in-vehicle control device 18.
  • in the driving mode, the rear seat displays 55L and 55R display actual rear seat viewpoint images captured by an external camera installed on the vehicle 13, and in the simulation mode, display virtual rear seat viewpoint images captured by an external camera placed in a virtual space based on the driving simulation.
  • the in-vehicle camera 56 photographs the occupants of the vehicle 13 to recognize their positions and postures, and supplies the image data to the in-vehicle control device 18.
  • the in-vehicle sensors 57-1 and 57-2 detect the distance (depth) to the occupants and supply the sensor data obtained to the in-vehicle control device 18. Note that, when there is no need to distinguish between the in-vehicle sensors 57-1 and 57-2, hereinafter they will be referred to simply as the in-vehicle sensor 57. Also, in the example shown in FIG. 4, two in-vehicle sensors 57-1 and 57-2 are illustrated, but the position and posture of the occupants may be detected by one in-vehicle sensor 57 or three or more in-vehicle sensors 57.
  • Speakers 58-1 and 58-2 output audio based on an audio signal supplied from in-vehicle control device 18. For example, in driving mode, speakers 58-1 and 58-2 output audio played by an audio system installed in vehicle 13, and in simulation mode, output object audio emitted from objects in a virtual space based on a driving simulation.
  • the garage system control device 16 recognizes the position and orientation of the vehicle 13 relative to each garage display 41 of the garage device 14 based on the image data supplied from the garage camera 42 and the sensor data supplied from the garage sensor 43. This allows the garage system control device 16 to obtain position and orientation data of the vehicle 13 represented by a rectangular parallelepiped space C inscribed by the vehicle 13, relative to the front garage display 41-2, for example, as shown in A of FIG. 5.
  • the in-vehicle control device 18 also recognizes the position and posture of the occupant relative to the vehicle 13, as well as the occupant's viewpoint position P and line of sight direction, based on the video data supplied from the in-vehicle camera 56 and the sensor data supplied from the in-vehicle sensor 57.
  • the in-vehicle control device 18 acquires occupant position and posture data that includes, in addition to information indicating the occupant's position and posture, information indicating the occupant's viewpoint position and line of sight direction, and supplies this data to the garage system control device 16. This allows the garage system control device 16 to recognize the position, posture, viewpoint position, and line of sight direction of the occupant relative to the rectangular parallelepiped space C in which the vehicle 13 is inscribed, as shown in FIG. 5B.
  • the garage system control device 16 can, for example, determine the occupant's viewpoint position P relative to the front garage display 41-2 based on the rectangular parallelepiped space C relative to the front garage display 41-2 and the occupant's viewpoint position P relative to the rectangular parallelepiped space C.
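The pose composition described above, vehicle relative to display combined with occupant viewpoint relative to vehicle, can be sketched with homogeneous transforms. All numeric pose values and names below are illustrative assumptions, not values from the embodiment:

```python
import numpy as np

def pose_matrix(yaw_deg, tx, ty, tz):
    """4x4 homogeneous transform: rotation about the vertical axis plus translation."""
    th = np.radians(yaw_deg)
    m = np.eye(4)
    m[0, 0], m[0, 1] = np.cos(th), -np.sin(th)
    m[1, 0], m[1, 1] = np.sin(th), np.cos(th)
    m[:3, 3] = (tx, ty, tz)
    return m

# Pose of the rectangular parallelepiped space C (the vehicle) relative to the
# front garage display, and the occupant viewpoint P relative to C (hypothetical values).
display_T_vehicle = pose_matrix(yaw_deg=0.0, tx=0.0, ty=3.0, tz=0.0)
vehicle_T_viewpoint = pose_matrix(yaw_deg=0.0, tx=0.4, ty=-0.5, tz=1.2)

# Composing the two transforms yields the viewpoint P relative to the display.
display_T_viewpoint = display_T_vehicle @ vehicle_T_viewpoint
viewpoint_in_display = display_T_viewpoint[:3, 3]
```

With identity rotations, the composed translation is simply the sum of the two offsets, which matches the intuition of "viewpoint in C, then C in front of the display".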
  • the position and orientation data of the vehicle 13 and the position and orientation data of the occupants may be identified by methods other than those described here.
  • the position and orientation data of the vehicle 13 may be expressed in a vehicle coordinate system centered on a part of the vehicle 13 (such as the point where the front right tire touches the ground).
  • in that case, the position and orientation data of the vehicle 13 includes position information of the part of the vehicle 13 that serves as the origin of the vehicle coordinate system.
  • FIG. 6 is a block diagram showing an example configuration of the garage system control device 16.
  • the garage system control device 16 is configured with a sensor data acquisition unit 61, a video data acquisition unit 62, a vehicle position and attitude recognition unit 63, a communication unit 64, a virtual space storage unit 65, a 3DCG storage unit 66, a 3DCG generation unit 67, an object audio storage unit 68, an object audio generation unit 69, a user setting acquisition unit 70, a virtual space generation unit 71, a signal multiplexing unit 72, a video conversion processing unit 73, a ceiling surface video transmission unit 74, a front surface video transmission unit 75, a right side surface video transmission unit 76, a left side surface video transmission unit 77, a rear surface video transmission unit 78, and a floor surface video transmission unit 79.
  • the sensor data acquisition unit 61 acquires the sensor data supplied from the garage sensor 43 and supplies it to the vehicle position and attitude recognition unit 63.
  • the video data acquisition unit 62 acquires the video data provided by the garage camera 42 and provides it to the vehicle position and attitude recognition unit 63.
  • the vehicle position and attitude recognition unit 63 recognizes the position and attitude of the vehicle 13 stored in the garage device 14 based on the sensor data supplied from the sensor data acquisition unit 61 and the video data supplied from the video data acquisition unit 62, and acquires position and attitude data of the vehicle 13.
  • the vehicle position and attitude recognition unit 63 can acquire position and attitude data of the vehicle 13 represented by the rectangular parallelepiped space C in which the vehicle 13 is inscribed as described above with reference to FIG. 5.
  • the vehicle position and attitude recognition unit 63 then supplies the position and attitude data of the vehicle 13 to the video conversion processing unit 73.
  • the communication unit 64 communicates with the in-vehicle control device 18 and an external network.
  • the communication unit 64 receives vehicle parameters transmitted from the in-vehicle control device 18, supplies position and posture data of occupants in the vehicle 13 contained in the vehicle parameters to the video conversion processing unit 73, and supplies driving operation data contained in the vehicle parameters to the virtual space generation unit 71.
  • the communication unit 64 also transmits to the vehicle 13 the vehicle setting parameters supplied from the virtual space generation unit 71 and the video and audio stream for the vehicle 13 supplied from the signal multiplexing unit 72.
  • the virtual space storage unit 65 stores virtual space data consisting of the shapes and textures of roads, buildings, etc. that constitute the virtual space in which the vehicle 13 virtually travels during the driving simulation.
  • the 3DCG memory unit 66 stores 3DCG (3-Dimensional Computer Graphics) data consisting of shapes and textures representing various three-dimensional objects (e.g., other vehicles, pedestrians, traffic lights, etc.) that are placed in the virtual space in which the vehicle 13 virtually travels during the driving simulation.
  • the 3DCG generation unit 67 reads 3DCG data of objects to be placed near the vehicle 13 during the driving simulation from the 3DCG storage unit 66, generates the objects, and supplies them to the virtual space generation unit 71.
  • the object audio storage unit 68 stores audio data representing audio (e.g., the sounds of other vehicles traveling, pedestrian footsteps, traffic light melodies, etc.) emitted from various three-dimensional objects placed in the virtual space in which the vehicle 13 travels during the driving simulation.
  • the object audio generation unit 69 reads audio data corresponding to objects placed near the vehicle 13 in the driving simulation from the object audio storage unit 68, generates the audio, and supplies it to the virtual space generation unit 71.
  • the user setting acquisition unit 70 acquires and stores user setting values corresponding to the user's operational input, and supplies the user setting values to the virtual space generation unit 71.
  • the user setting acquisition unit 70 acquires and stores the user setting value that instructs the start of a driving simulation, and supplies the user setting value to the virtual space generation unit 71.
  • the virtual space generation unit 71 reads the virtual space data from the virtual space storage unit 65 to generate a virtual space, places the objects supplied from the 3DCG generation unit 67 within the virtual space, and sets the audio source supplied from the object audio generation unit 69 in correspondence with the position of each object.
  • the virtual space generation unit 71 performs a driving simulation in which the vehicle 13 virtually runs in the virtual space according to the driving operation data supplied from the communication unit 64.
  • the virtual space generation unit 71 determines the behavior of the vehicle 13 in the virtual space, generates vehicle setting parameters (suspension setting data, seat setting data, and reaction force setting data) for controlling the behavior of the vehicle 13 so as to reproduce that behavior, and supplies these to the communication unit 64.
  • the virtual space generation unit 71 generates a video stream for the vehicle 13, an audio stream for the vehicle 13, and a video stream for the garage in accordance with the driving position of the vehicle 13 determined by the driving operation data.
  • the virtual space generation unit 71 generates a video stream for the vehicle 13 corresponding to the navigation image displayed on the navigation display 51 by placing a vehicle mark indicating the driving position of the vehicle 13 on a map image in the virtual space and moving the vehicle mark on the map image in accordance with the driving operation data.
  • the virtual space generation unit 71 also generates a video stream for the vehicle 13 corresponding to the side mirror images displayed on the side mirror displays 52L and 52R by placing virtual cameras in the virtual space corresponding to the side mirror cameras installed on the left and right sides of the vehicle 13 and photographing the virtual space with each virtual camera.
  • the virtual space generation unit 71 also generates a video stream for the vehicle 13 corresponding to the instrument panel image displayed on the instrument panel display 53, reflecting various instrument data, such as driving speed and engine speed, derived from the driving operation data.
  • the virtual space generation unit 71 also generates a video stream for the vehicle 13, which serves as the rearview mirror image displayed on the rearview mirror display 54, by placing a virtual camera in the virtual space corresponding to the rearview mirror camera installed on the rear of the vehicle 13 and capturing an image of the virtual space with the virtual camera.
  • the virtual space generation unit 71 also generates a video stream for the vehicle 13, which serves as the rear seat viewpoint image displayed on the rear seat displays 55L and 55R, by placing a virtual camera in the virtual space corresponding to the viewpoint of the rear seat passengers and capturing an image of the virtual space seen through the rear seat displays 55L and 55R with the virtual camera.
  • the virtual space generation unit 71 then supplies the video stream for the vehicle 13 generated in this manner to the signal multiplexing unit 72.
  • the virtual space generation unit 71 also generates an audio stream for the vehicle 13 so that object audio is emitted from each object, with the positions of objects placed near the vehicle 13 traveling in the virtual space in accordance with the driving operation data being set as audio source positions, and supplies the audio stream to the video conversion processing unit 73.
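The idea of treating each nearby object's position in the virtual space as an audio source position can be illustrated with a simple inverse-distance gain model. The attenuation law, names, and coordinates below are assumptions for illustration only:

```python
import math

def object_audio_gains(vehicle_pos, objects, ref_dist=1.0):
    """Per-object playback gain from an inverse-distance law, with each
    object's position in the virtual space used as its audio source position."""
    gains = {}
    for name, pos in objects.items():
        d = math.dist(vehicle_pos, pos)
        # Clamp so that sources closer than the reference distance play at full gain.
        gains[name] = min(1.0, ref_dist / max(d, 1e-6))
    return gains

gains = object_audio_gains(
    vehicle_pos=(0.0, 0.0),
    objects={"other_vehicle": (3.0, 4.0), "traffic_light": (0.0, 0.5)},
)
```

Here the other vehicle 5 m away is attenuated to a gain of 0.2, while the traffic light inside the reference distance is clamped to full gain.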
  • the virtual space generation unit 71 also places a virtual camera in the virtual space so as to correspond to a predetermined position of the vehicle 13 (for example, the center position of the vehicle 13), and uses the virtual camera to capture images of the virtual space in all directions centered on the vehicle 13, thereby generating a video stream for the garage, and supplies this to the video conversion processing unit 73.
  • the signal multiplexing unit 72 multiplexes the video stream for the vehicle 13 and the audio stream for the vehicle 13 supplied from the virtual space generating unit 71 to generate a video/audio stream for the vehicle 13 and supplies it to the communication unit 64.
  • the video conversion processing unit 73 performs video conversion processing on the video stream for the garage supplied from the virtual space generation unit 71, based on the position and orientation data of the vehicle 13 supplied from the vehicle position and attitude recognition unit 63 and the position and orientation data of the occupants in the vehicle 13 supplied from the communication unit 64.
  • the video conversion processing unit 73 has a viewpoint detection unit 81 and a geometric transformation unit 82.
  • the viewpoint detection unit 81 sets a rectangular parallelepiped space C in which the vehicle 13 is inscribed at the position of the vehicle 13 relative to the garage display 41 based on the position and orientation data of the vehicle 13.
  • the viewpoint detection unit 81 sets the position, orientation, viewpoint, and line of sight of the occupant in the rectangular parallelepiped space C based on the position and orientation data of the occupant inside the vehicle 13, thereby being able to determine the position, orientation, viewpoint, and line of sight of the occupant relative to the garage display 41.
  • This allows the viewpoint detection unit 81 to supply information about the viewpoint and line of sight of the occupant relative to the garage display 41 to the geometric transformation unit 82.
  • the geometric transformation unit 82 performs a geometric transformation on the garage video stream supplied from the virtual space generation unit 71 based on the position of each garage display 41 and the occupant's viewpoint and line of sight relative to the garage display 41. That is, the geometric transformation unit 82 performs a geometric transformation on the garage video stream so as to project the garage video stream using each garage display 41 as a projection surface, based on the occupant's viewpoint and line of sight identified by the viewpoint detection unit 81, thereby generating a video stream for each garage display 41. In this way, a spherical garage video stream centered on the vehicle 13 is projected onto, for example, the ceiling garage display 41-1, thereby generating a planar ceiling video stream. Similarly, planar video streams are generated for each of the other garage displays 41.
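The projection performed by the geometric transformation unit 82 amounts to intersecting, for each scene point, the ray from the occupant's viewpoint with the plane of a garage display, so that the scene looks perspective-correct from that viewpoint. A minimal sketch, with hypothetical coordinates and names:

```python
import numpy as np

def project_to_display(viewpoint, point, plane_point, plane_normal):
    """Intersect the ray from the occupant viewpoint through a scene point
    with the display plane, giving the position on the display where that
    point must be drawn for correct perspective from the viewpoint."""
    v = np.asarray(viewpoint, float)
    d = np.asarray(point, float) - v          # ray direction
    n = np.asarray(plane_normal, float)
    denom = d @ n
    if abs(denom) < 1e-9:
        return None                           # ray parallel to the display plane
    t = ((np.asarray(plane_point, float) - v) @ n) / denom
    return v + t * d

# Front display plane at y = 3 (normal along +y); viewpoint and scene point are illustrative.
hit = project_to_display(viewpoint=(0, 0, 1.2), point=(1, 6, 1.2),
                         plane_point=(0, 3, 0), plane_normal=(0, 1, 0))
```

Repeating this for every pixel of a display yields the planar video stream for that display from the omnidirectional garage video stream.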
  • the ceiling surface video transmission unit 74 transmits the ceiling surface video stream supplied from the video conversion processing unit 73 to the ceiling surface garage display 41-1, causing the ceiling surface garage display 41-1 to display the ceiling surface simulation video.
  • the front surface video transmission unit 75 transmits the front surface video stream supplied from the video conversion processing unit 73 to the front garage display 41-2, causing the front garage display 41-2 to display the front surface simulation video.
  • the right side surface video transmission unit 76 transmits the right side video stream supplied from the video conversion processing unit 73 to the right side garage display 41-3, causing the right side garage display 41-3 to display the right side simulation video.
  • the left side surface video transmission unit 77 transmits the left side video stream supplied from the video conversion processing unit 73 to the left side garage display 41-4, causing the left side garage display 41-4 to display the left side simulation video.
  • the rear surface video transmission unit 78 transmits the rear surface video stream supplied from the video conversion processing unit 73 to the rear surface garage display 41-5, causing the rear surface garage display 41-5 to display the rear surface simulation video.
  • the floor surface video transmission unit 79 transmits the floor surface video stream supplied from the video conversion processing unit 73 to the floor surface garage display 41-6, causing the floor surface garage display 41-6 to display the floor surface simulation video.
  • FIG. 8 is a block diagram showing an example of the configuration of the operation system and drive system of the in-vehicle control device 18.
  • the in-vehicle control device 18 is configured to include a sensor data acquisition unit 91, a video data acquisition unit 92, a sensor data acquisition unit 93, a video data acquisition unit 94, a vehicle interior environment recognition unit 95, a vehicle external environment recognition unit 96, a communication unit 97, a user setting acquisition unit 98, a stopped state determination unit 99, a brake position data acquisition unit 100, a steering wheel rotation data acquisition unit 101, an accelerator position data acquisition unit 102, a driving operation detection unit 103, a driving mode control unit 104, a simulation mode control unit 105, an axle direction control unit 106, a throttle control unit 107, a brake control unit 108, a steering wheel reaction force control unit 109, a suspension control unit 110, a brake reaction force control unit 111, an accelerator reaction force control unit 112, and a seat control unit 113.
  • the sensor data acquisition unit 91 acquires sensor data provided from the in-vehicle sensor 57 and provides it to the vehicle interior environment recognition unit 95.
  • the video data acquisition unit 92 acquires video data provided by the in-vehicle camera 56 and provides it to the vehicle interior environment recognition unit 95.
  • the sensor data acquisition unit 93 acquires sensor data provided from an external sensor (not shown) installed outside the vehicle 13 and provides it to the vehicle external environment recognition unit 96.
  • the video data acquisition unit 94 acquires video data provided from an external camera (not shown) installed outside the vehicle 13 and provides it to the vehicle external environment recognition unit 96.
  • the vehicle interior environment recognition unit 95 generates position and orientation data of the occupants in the vehicle 13 based on the sensor data supplied from the sensor data acquisition unit 91 and the video data supplied from the video data acquisition unit 92, and supplies the data to the communication unit 97.
  • the position and orientation data of the occupants in the vehicle 13 includes information indicating the viewpoint position and line of sight direction of the occupants in the vehicle 13 relative to the rectangular parallelepiped space C in which the vehicle 13 is inscribed, as described above with reference to FIG. 5.
  • the vehicle external environment recognition unit 96 generates vehicle external environment data based on the sensor data supplied from the sensor data acquisition unit 93 and the video data supplied from the video data acquisition unit 94, and supplies the data to the stopped state determination unit 99 and the driving mode control unit 104.
  • the vehicle external environment data includes information indicating the distance to objects around the vehicle 13, and when the vehicle 13 is stored in the garage device 14, includes information indicating the distance to the garage display 41.
  • the communication unit 97 communicates with the garage system control device 16 and the external network. For example, the communication unit 97 transmits a stopped state notification supplied from the stopped state determination unit 99 to the garage system control device 16, and receives a garage state notification transmitted from the garage system control device 16 and supplies it to the stopped state determination unit 99. The communication unit 97 also transmits vehicle parameters to the garage system control device 16, including position and posture data of the occupants in the vehicle 13 supplied from the vehicle interior environment recognition unit 95 and driving operation data supplied from the driving operation detection unit 103. The communication unit 97 then receives vehicle setting parameters transmitted from the garage system control device 16 and supplies them to the simulation mode control unit 105.
  • the user setting acquisition unit 98 acquires and stores a user setting value corresponding to the user's operational input, and supplies the user setting value to the stopped state determination unit 99.
  • the user setting acquisition unit 98 acquires and stores a user setting value instructing switching from the driving mode to the simulation mode, and supplies the user setting value to the stopped state determination unit 99.
  • the stopped state determination unit 99 determines whether or not to transition to the simulation mode according to the vehicle external environment data supplied from the vehicle external environment recognition unit 96.
  • the stopped state determination unit 99 also transmits a stopped state notification to the garage system control device 16 via the communication unit 97, and receives a garage state notification transmitted from the garage system control device 16 via the communication unit 97.
  • the stopped state determination unit 99 also supplies a stopped state signal to the driving mode control unit 104 and the simulation mode control unit 105.
  • for example, when the vehicle 13 is stopped, the stopped state determination unit 99 determines to transition to the simulation mode.
  • the stopped state determination unit 99 may also determine that the vehicle will transition to the simulation mode when the vehicle 13 is stopped and the gear selector of the vehicle 13 is in the parking range.
  • the stopped state determination unit 99 may make a determination based on the position information of the vehicle 13 using, for example, GPS (Global Positioning System) or the like.
  • the stopped state determination unit 99 stores in advance the position information of a location where a driving simulation may be performed (for example, the installation location of the garage device 14), and can determine to transition to simulation mode when the vehicle 13 is stopped at a location where a driving simulation may be performed based on the position information of the vehicle 13.
  • the stopped state determination unit 99 may also make a determination using, for example, the distance or relative position from the garage display 41.
  • the garage state notification includes the distance or relative position of the vehicle 13 from the garage display 41.
  • the stopped state determination unit 99 can then determine to transition to simulation mode when the distance or relative position of the vehicle 13 from the garage display 41 is equal to or less than a predetermined threshold, that is, when the vehicle 13 is stopped at a predetermined position within the garage device 14.
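The conditions discussed above (vehicle stopped, gear selector in the parking range, and the vehicle at the predetermined position within the garage device, i.e. within a threshold distance of the garage display) can be combined into a single predicate. The threshold value and all names below are hypothetical:

```python
def should_enter_simulation(speed_kmh, gear, distance_to_display_m,
                            max_distance_m=0.5):
    """Permit the transition to simulation mode only when the vehicle is
    stopped, the gear selector is in the parking range, and the distance
    to the garage display is at or below the predetermined threshold."""
    return (speed_kmh == 0.0
            and gear == "P"
            and distance_to_display_m <= max_distance_m)
```

With this check, a vehicle merely paused at a traffic light (gear still in "D", or far from any garage display) is rejected, which is exactly the mistaken-operation case the determination is meant to guard against.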
  • because the stopped state determination unit 99 determines whether or not to transition to the simulation mode in this way, it is possible to avoid a situation in which the vehicle 13 can no longer be driven in an unexpected location. For example, even if an operation to start the simulation mode is mistakenly performed while the vehicle is temporarily stopped during a drive, a situation in which the vehicle 13 suddenly becomes inoperable can be avoided.
  • a message stating "Cannot switch to simulation mode" may be displayed on the display inside the vehicle 13 based on the result of the judgment by the stopped state judgment unit 99.
  • the reason for this may also be displayed.
  • the reason may be that the vehicle 13 is not stopped, the gear selector of the vehicle 13 is not in the parking range, the position of the vehicle 13 is not in an appropriate location (such as a garage, a parking lot, a parking space, or a predetermined position relative to the driving simulation system 11), etc.
  • the brake position data acquisition unit 100 acquires brake position data indicating the amount of depression of the brake pedal 23 by the driver of the vehicle 13 based on the brake position signal supplied from the brake position sensor 29 in FIG. 2, and supplies the data to the driving operation detection unit 103.
  • the steering wheel rotation data acquisition unit 101 acquires steering wheel rotation data indicating the amount of rotation of the steering wheel 21 by the driver of the vehicle 13 based on the steering wheel rotation signal supplied from the steering wheel rotation sensor 27 in FIG. 2, and supplies the data to the driving operation detection unit 103.
  • the accelerator position data acquisition unit 102 acquires accelerator position data indicating the amount of depression of the accelerator pedal 22 by the driver of the vehicle 13 based on the accelerator position signal supplied from the accelerator position sensor 28 in FIG. 2, and supplies this data to the driving operation detection unit 103.
  • the driving operation detection unit 103 detects the driver's driving operation of the vehicle 13 based on the brake position data supplied from the brake position data acquisition unit 100, the steering wheel rotation data supplied from the steering wheel rotation data acquisition unit 101, and the accelerator position data supplied from the accelerator position data acquisition unit 102. The driving operation detection unit 103 then supplies driving operation data indicating the content of the driving operation (brake position data, steering wheel rotation data, and accelerator position data) to the driving mode control unit 104 and the communication unit 97.
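The bundling of brake position, steering wheel rotation, and accelerator position into driving operation data might look like the following sketch; the field names and the clamping of pedal values to their valid range are assumptions for illustration:

```python
from dataclasses import dataclass

@dataclass
class DrivingOperationData:
    """Driving operation data bundling the three readings that the driving
    operation detection unit forwards (field names are illustrative)."""
    brake_position: float         # brake pedal depression, 0.0 to 1.0
    steering_rotation: float      # steering wheel rotation in degrees
    accelerator_position: float   # accelerator pedal depression, 0.0 to 1.0

def detect_driving_operation(brake, steering, accelerator):
    # Clamp pedal values to their valid range before forwarding.
    clamp = lambda x: max(0.0, min(1.0, x))
    return DrivingOperationData(clamp(brake), steering, clamp(accelerator))

op = detect_driving_operation(brake=0.0, steering=-15.0, accelerator=1.2)
```

The same structure would be serialized into the vehicle parameters sent to the garage system control device and, in driving mode, consumed directly by the driving mode control unit.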
  • the driving mode control unit 104 controls the driving of the vehicle 13 according to the driving operation data supplied from the driving operation detection unit 103.
  • the driving mode control unit 104 generates axle direction control data for controlling the axle direction of the tires of the vehicle 13 according to the steering wheel rotation data, and supplies the data to the axle direction control unit 106.
  • the driving mode control unit 104 also generates throttle control data for controlling the acceleration of the vehicle 13 according to the accelerator position data, and supplies the data to the throttle control unit 107.
  • the driving mode control unit 104 also generates brake control data for controlling the deceleration of the vehicle 13 according to the brake position data, and supplies the data to the brake control unit 108.
  • the driving mode control unit 104 generates steering reaction force control data, suspension control data, brake reaction force control data, accelerator reaction force control data, and seat control data according to the driving operation data supplied from the driving operation detection unit 103 and the vehicle external environment data supplied from the vehicle external environment recognition unit 96.
  • the driving mode control unit 104 then supplies each of the control data to the steering reaction force control unit 109, the suspension control unit 110, the brake reaction force control unit 111, the accelerator reaction force control unit 112, and the seat control unit 113.
  • the simulation mode control unit 105 controls the behavior of the vehicle 13 in a virtual space according to the vehicle setting parameters supplied from the communication unit 97.
  • the vehicle setting parameters include suspension setting data, seat setting data, and reaction force setting data (data values for steering reaction force, brake reaction force, and accelerator reaction force). Therefore, the simulation mode control unit 105 generates suspension control data that controls the height of the suspension of each of the four wheels of the vehicle 13 according to the suspension setting data, and supplies it to the suspension control unit 110. The simulation mode control unit 105 also generates seat control data that controls the inclination of the seat cushion and backrest according to the seat setting data, and supplies it to the seat control unit 113.
  • the simulation mode control unit 105 also generates steering reaction force control data for controlling the steering reaction force generated in the steering wheel 21 according to the steering reaction force data value in the reaction force setting data, and supplies this to the steering reaction force control unit 109.
  • the simulation mode control unit 105 also generates brake reaction force control data for controlling the brake reaction force generated in the brake pedal 23 according to the brake reaction force data value in the reaction force setting data, and supplies this to the brake reaction force control unit 111.
  • the simulation mode control unit 105 also generates accelerator reaction force control data for controlling the accelerator reaction force generated in the accelerator pedal 22 according to the accelerator reaction force data value in the reaction force setting data, and supplies this to the accelerator reaction force control unit 112.
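The fan-out of vehicle setting parameters (suspension setting data, seat setting data, and reaction force setting data) into per-actuator control data described in the preceding paragraphs can be sketched as a simple mapping; the dictionary keys and numeric values are illustrative assumptions:

```python
def make_control_data(vehicle_setting_parameters):
    """Expand vehicle setting parameters into per-actuator control data,
    mirroring how the simulation mode control unit fans the settings out
    to the suspension, seat, and reaction force control units."""
    p = vehicle_setting_parameters
    return {
        "suspension": dict(p["suspension"]),  # per-wheel suspension heights
        "seat": {"cushion_tilt": p["seat"]["cushion"],
                 "backrest_tilt": p["seat"]["backrest"]},
        "steering_reaction": p["reaction"]["steering"],
        "brake_reaction": p["reaction"]["brake"],
        "accelerator_reaction": p["reaction"]["accelerator"],
    }

params = {
    "suspension": {"fl": 0.10, "fr": 0.10, "rl": 0.12, "rr": 0.12},
    "seat": {"cushion": 2.0, "backrest": -3.0},
    "reaction": {"steering": 0.6, "brake": 0.8, "accelerator": 0.3},
}
control = make_control_data(params)
```

Each entry of the returned structure would then be handed to the corresponding control unit (suspension control unit 110, seat control unit 113, and the three reaction force control units).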
  • the axle direction control unit 106 generates an axle direction control signal according to the axle direction control data supplied from the driving mode control unit 104, and supplies it to the axle direction drive mechanism 30 in FIG. 3.
  • the throttle control unit 107 generates a throttle control signal according to the throttle control data supplied from the driving mode control unit 104, and supplies it to the throttle drive mechanism 31 in FIG. 3.
  • the brake control unit 108 generates a brake control signal according to the brake control data supplied from the driving mode control unit 104, and supplies it to the brake drive mechanism 32 in FIG. 3.
  • the steering reaction force control unit 109 generates a steering reaction force control signal according to steering reaction force control data supplied from the driving mode control unit 104 or the simulation mode control unit 105, and supplies it to the steering reaction force drive mechanism 33 in FIG. 3.
  • the suspension control unit 110 generates a suspension control signal according to the suspension control data supplied from the driving mode control unit 104 or the simulation mode control unit 105, and supplies it to the suspension drive mechanism 36 in FIG. 3.
  • the brake reaction force control unit 111 generates a brake reaction force control signal according to the brake reaction force control data supplied from the driving mode control unit 104 or the simulation mode control unit 105, and supplies it to the brake reaction force drive mechanism 35 in FIG. 3.
  • the accelerator reaction force control unit 112 generates an accelerator reaction force control signal according to the accelerator reaction force control data supplied from the driving mode control unit 104 or the simulation mode control unit 105, and supplies it to the accelerator reaction force drive mechanism 34 in FIG. 3.
  • the seat control unit 113 generates a seat control signal according to the seat control data supplied from the driving mode control unit 104 or the simulation mode control unit 105, and supplies it to the seat drive mechanism 37 in FIG. 3.
  • FIG. 9 is a block diagram showing an example configuration of the video and audio systems of the in-vehicle control device 18.
  • the in-vehicle control device 18 is configured with a user setting acquisition unit 121, a communication unit 122, a video acquisition unit 123, an audio acquisition unit 124, a signal separation unit 125, a video input switching unit 126, an audio input switching unit 127, a video signal processing unit 128, a navigation video transmission unit 129, a side mirror video transmission unit 130, an instrument panel video transmission unit 131, a rearview mirror video transmission unit 132, a rear seat video transmission unit 133, an audio signal processing unit 134, and an analog amplification unit 135.
  • the user setting acquisition unit 121 acquires and stores a user setting value corresponding to the user's operational input, and supplies the user setting value to the communication unit 122, the video input switching unit 126, and the audio input switching unit 127.
  • the user setting acquisition unit 121 acquires and stores a user setting value instructing switching from the driving mode to the simulation mode, and supplies the user setting value to the communication unit 122, the video input switching unit 126, and the audio input switching unit 127.
  • the communication unit 122 communicates with the garage system control device 16 and an external network. For example, the communication unit 122 transmits to the garage system control device 16 a user setting value, supplied from the user setting acquisition unit 121, that instructs switching from the driving mode to the simulation mode. The communication unit 122 also receives the video and audio stream for the vehicle 13 transmitted from the garage system control device 16, and supplies it to the signal separation unit 125.
  • the video acquisition unit 123 acquires videos to be displayed on the navigation display 51, the side mirror displays 52L and 52R, the instrument panel display 53, the rearview mirror display 54, and the rear seat displays 55L and 55R, and supplies these video streams to the video input switching unit 126.
  • For example, the video acquisition unit 123 acquires navigation videos according to the position information of the vehicle 13, side mirror videos captured by side mirror cameras installed on the left and right sides of the vehicle 13, instrument panel videos based on the traveling speed and engine RPM of the vehicle 13, rearview mirror videos captured by the rearview mirror camera of the vehicle 13, rear seat viewpoint videos based on videos captured by an external camera of the vehicle 13, and the like.
  • the audio acquisition unit 124 acquires audio played on the audio system installed in the vehicle 13 during driving mode, and supplies the audio stream to the audio input switching unit 127.
  • the signal separation unit 125 separates the video and audio stream for vehicle 13 supplied from the communication unit 122 into a video stream for vehicle 13 and an audio stream for vehicle 13. The signal separation unit 125 then supplies the video stream for vehicle 13 to the video input switching unit 126, and supplies the audio stream for vehicle 13 to the audio input switching unit 127.
  • the video input switching unit 126 switches the video stream supplied to the video signal processing unit 128 in accordance with the user setting value supplied from the user setting acquisition unit 121. For example, when a user setting value instructing switching from the driving mode to the simulation mode is supplied to the video input switching unit 126, the video input switching unit 126 switches so that the video stream for the vehicle 13 supplied from the signal separation unit 125 is supplied to the video signal processing unit 128. On the other hand, when a user setting value instructing switching from the simulation mode to the driving mode is supplied to the video input switching unit 126, the video input switching unit 126 switches so that the video stream supplied from the video acquisition unit 123 is supplied to the video signal processing unit 128.
  • the audio input switching unit 127 switches the audio stream supplied to the audio signal processing unit 134 in accordance with the user setting value supplied from the user setting acquisition unit 121. For example, when a user setting value instructing switching from the driving mode to the simulation mode is supplied to the audio input switching unit 127, the audio input switching unit 127 switches so that the audio stream for the vehicle 13 supplied from the signal separation unit 125 is supplied to the audio signal processing unit 134. On the other hand, when a user setting value instructing switching from the simulation mode to the driving mode is supplied to the audio input switching unit 127, the audio input switching unit 127 switches so that the audio stream supplied from the audio acquisition unit 124 is supplied to the audio signal processing unit 134.
  • the video signal processing unit 128 performs signal processing on the video streams supplied from the video input switching unit 126 to obtain a video stream for navigation, a video stream for side mirrors, a video stream for the instrument panel, a video stream for the rearview mirror, and a video stream for the rear seats.
  • the video signal processing unit 128 then supplies the video stream for navigation to the navigation video transmission unit 129, the video stream for the side mirrors to the side mirror video transmission unit 130, the video stream for the instrument panel to the instrument panel video transmission unit 131, the video stream for the rearview mirror to the rearview mirror video transmission unit 132, and the video stream for the rear seats to the rear seat video transmission unit 133.
  • the navigation video transmission unit 129 transmits the video stream for navigation provided by the video signal processing unit 128 to the navigation display 51, causing the navigation display 51 to display the navigation video.
  • the side mirror video transmission unit 130 transmits the video stream for the side mirrors supplied from the video signal processing unit 128 to the side mirror displays 52L and 52R, causing the side mirror displays 52L and 52R to display the side mirror video.
  • the instrument panel video transmission unit 131 transmits the video stream for the instrument panel supplied from the video signal processing unit 128 to the instrument panel display 53, causing the instrument panel display 53 to display the instrument panel video.
  • the rearview mirror video transmission unit 132 transmits the video stream for the rearview mirror supplied from the video signal processing unit 128 to the rearview mirror display 54, causing the rearview mirror display 54 to display the rearview mirror video.
  • the rear seat video transmission unit 133 transmits the rear seat video stream supplied from the video signal processing unit 128 to the rear seat displays 55L and 55R, and displays the rear seat viewpoint video on the rear seat displays 55L and 55R.
  • the audio signal processing unit 134 performs signal processing on the audio stream supplied from the audio input switching unit 127 to obtain an audio signal and supply it to the analog amplification unit 135.
  • the analog amplification unit 135 amplifies the audio signal supplied from the audio signal processing unit 134 and supplies it to the speakers 58-1 and 58-2, causing the speakers 58-1 and 58-2 to output audio.
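The mode-dependent routing performed by the video input switching unit 126 (and, analogously, the audio input switching unit 127) can be sketched as follows. This is an illustrative Python sketch, not code from the application; the class name, mode constants, and stream arguments are hypothetical.

```python
# Hypothetical sketch: in driving mode the locally acquired stream
# (from the video acquisition unit 123) is passed on; in simulation
# mode the stream separated out of the garage transmission (by the
# signal separation unit 125) is passed on instead.

DRIVING_MODE = "driving"
SIMULATION_MODE = "simulation"

class VideoInputSwitch:
    def __init__(self):
        self.mode = DRIVING_MODE  # the vehicle starts in driving mode

    def apply_user_setting(self, requested_mode):
        """Mirror of the user setting value supplied by unit 121."""
        if requested_mode in (DRIVING_MODE, SIMULATION_MODE):
            self.mode = requested_mode

    def select(self, local_stream, garage_stream):
        """Return the stream to feed the video signal processing unit 128."""
        if self.mode == SIMULATION_MODE:
            return garage_stream   # stream from the signal separation unit 125
        return local_stream        # stream from the video acquisition unit 123

switch = VideoInputSwitch()
assert switch.select("local", "garage") == "local"
switch.apply_user_setting(SIMULATION_MODE)
assert switch.select("local", "garage") == "garage"
```

The same selector, fed audio streams instead of video streams, models the audio input switching unit 127.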
  • A in FIG. 10 shows an example of the data structure of vehicle parameters transmitted from the in-vehicle control device 18 to the garage system control device 16.
  • B in FIG. 10 shows an example of the data structure of vehicle setting parameters transmitted from the garage system control device 16 to the in-vehicle control device 18.
  • the vehicle parameters are configured by sequentially arranging steering wheel rotation data, brake position data, accelerator position data, and other data following a header section in which all common data for the vehicle parameters is stored. Furthermore, each of the steering wheel rotation data, brake position data, and accelerator position data has a header section and a data storage section.
  • Each data storage section stores one or more data values associated with each time, and in the example shown in A of FIG. 10, x data values (for example, data value 1a to data value xa at time a) are stored associated with each time from time a to time y.
  • the vehicle setting parameters are configured by sequentially arranging suspension setting data, seat setting data, reaction force setting data, and other data following a header section in which all common data for the vehicle setting parameters is stored. Furthermore, the suspension setting data, seat setting data, and reaction force setting data each have a header section and a data storage section. Each data storage section stores one or more data values associated with each time. In the example shown in B in FIG. 10, x data values (for example, data value 1a to data value xa at time a) are stored in association with each time from time a to time y.
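The layout described for A and B of FIG. 10 — a common header followed by per-item sections, each with its own header and a time-indexed data storage section — might be modeled as below. This is a hedged Python sketch; the dictionary field names are assumptions made for illustration, not names from the application.

```python
# Illustrative model of the FIG. 10 layout: a common header, then one
# section per item (steering wheel rotation, brake position, accelerator
# position, ...), each section holding its own header plus a data storage
# section that maps each time to x data values.

def build_section(name, samples):
    """samples: {time: [data value 1, ..., data value x]}"""
    first = next(iter(samples.values()))
    return {"header": {"name": name, "num_values": len(first)},
            "data": samples}

def build_vehicle_parameters(steering, brake, accelerator):
    return {
        "header": {"type": "vehicle_parameters"},  # data common to all sections
        "sections": [
            build_section("steering_wheel_rotation", steering),
            build_section("brake_position", brake),
            build_section("accelerator_position", accelerator),
        ],
    }

params = build_vehicle_parameters(
    steering={"time_a": [0.0], "time_b": [90.0]},  # one value per time
    brake={"time_a": [0.0], "time_b": [0.2]},
    accelerator={"time_a": [0.5], "time_b": [0.0]},
)
assert params["sections"][0]["data"]["time_b"] == [90.0]
```

The vehicle setting parameters of B in FIG. 10 follow the same shape, with suspension, seat, and reaction force sections in place of the driving operation sections.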
  • FIG. 12 shows an example of vehicle parameters and vehicle setting parameters that are generated at times T1 to T4 for a vehicle 13 that has experienced a spinning behavior due to excessive steering as shown in FIG. 11.
  • the vehicle parameters input to the garage system control device 16 store one data value each for the accelerator position data, brake position data, and steering wheel rotation data for each time T.
  • the data value Front RIGHT is suspension setting data for controlling the suspension height of the right front wheel
  • the data value Front LEFT is suspension setting data for controlling the suspension height of the left front wheel
  • the data value Rear RIGHT is suspension setting data for controlling the suspension height of the right rear wheel
  • the data value Rear LEFT is suspension setting data for controlling the suspension height of the left rear wheel.
  • the data value Seat Control 1 is seat setting data for controlling the height of the front end of the seat cushion
  • the data value Seat Control 2 is seat setting data for controlling the height of the rear end of the seat cushion
  • the data value Seat Control 3 is seat setting data for controlling the forward or rearward tilt of the seat back.
  • the seat setting data is set so that no control is performed on the seat cushion and the seat back during constant speed driving.
  • the seat setting data is set so that the height of the front end of the seat cushion is lowered, the height of the rear end of the seat cushion is raised, and the seat back is tilted forward during braking.
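The seat setting rules just described — no control at constant speed; front edge down, rear edge up, and seat back forward under braking — can be written out as a small lookup. A minimal sketch, assuming illustrative normalized values; only the Seat Control 1/2/3 names come from the text above.

```python
# Hedged sketch of generating seat setting data per vehicle behavior:
# Seat Control 1 = front-end height of the seat cushion,
# Seat Control 2 = rear-end height of the seat cushion,
# Seat Control 3 = forward/rearward tilt of the seat back.
# Magnitudes are arbitrary illustrative units; signs follow the text.

def seat_setting_data(behavior):
    if behavior == "constant_speed":
        # no control is performed on the seat cushion or seat back
        return {"Seat Control 1": 0.0, "Seat Control 2": 0.0, "Seat Control 3": 0.0}
    if behavior == "braking":
        # front end lowered, rear end raised, seat back tilted forward
        return {"Seat Control 1": -1.0, "Seat Control 2": 1.0, "Seat Control 3": -1.0}
    raise ValueError(f"unknown behavior: {behavior}")

assert seat_setting_data("braking")["Seat Control 1"] < 0
```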
  • the driving simulation system 11 can recreate the ride experience of the vehicle 13 using the actual vehicle 13 itself, and can therefore be used to learn correct driving techniques under specific conditions.
  • the garage display 41 in the garage device 14 presents an image that mimics the scenery outside the vehicle 13. Specifically, an image of the scenery moving in a certain direction when driving at 40 km/h is displayed on the garage display 41-1 for the ceiling surface, the garage display 41-2 for the front, the garage display 41-3 for the right side, the garage display 41-4 for the left side, and the garage display 41-5 for the rear, thereby creating a state in which the occupants of the vehicle 13 visually feel as if they are driving in that scene.
  • the side mirror displays 52L and 52R and the rearview mirror display 54, which correspond to the side mirrors and rearview mirror mounted on the vehicle 13, also display images as if the vehicle were being driven, in the same way as the garage display 41.
  • the occupants of the vehicle 13 can then follow the instructions of the driving simulation and actually operate the steering wheel, accelerator, and brakes in a given scene, for example, in their own actual vehicle 13.
  • Since the vehicle 13 itself employs a by-wire system, the wheels are not actually steered even if the steering wheel is operated, and no braking force or movement of the brake pad pistons occurs even if the brake is operated; instead, driving operation data indicating the amount of each operation is sent to the garage system control device 16.
  • the garage system control device 16 generates and displays, not only on the garage display 41 but also on the side mirror displays 52L and 52R and the rearview mirror display 54 mounted on the vehicle, images that make the occupants of the vehicle 13 feel as if they were driving, according to the amount of operation obtained from the driving operation data. To do this, the garage system control device 16 creates a scene in which the driving takes place in a virtual space, places a virtual camera in the virtual space at a position corresponding to the line of sight of the occupants, and can appropriately display the images captured there on the garage display 41, the side mirror displays 52L and 52R, and the rearview mirror display 54. In order to generate an appropriate display image, position and orientation data of the vehicle 13, and of the occupants in the vehicle 13, with respect to the position of the garage display 41 are required.
  • the vehicle 13 is equipped with a suspension function that allows the height of each of the four wheels to be changed independently, so that the height of each of the four wheels can be changed according to information from the driving simulation.
  • This makes it possible to reproduce rolling when going around a curve, or diving when braking rapidly, and to provide the occupants with a more realistic situation, for example, by expressing a state of slipping on a rainy day.
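One way to picture how simulated vehicle behavior maps onto the four independently adjustable suspension heights is the sketch below. The formulas, signs, and gain are assumptions chosen for illustration (the application does not specify them); the per-wheel names mirror the FIG. 12 data values.

```python
# Illustrative mapping (assumed formulas, not from the application) from
# simulated lateral/longitudinal acceleration to the four suspension
# heights, reproducing roll in a curve and nose dive under braking.

def suspension_setting_data(lat_acc_g, lon_acc_g, gain_mm_per_g=20.0):
    """lat_acc_g > 0: turning so the body rolls toward the right side;
    lon_acc_g < 0: braking, so the nose dives. Heights in millimeters."""
    roll = gain_mm_per_g * lat_acc_g   # lowers one side, raises the other
    dive = gain_mm_per_g * lon_acc_g   # braking lowers the front, raises the rear
    return {
        "Front RIGHT": -roll + dive,
        "Front LEFT":  +roll + dive,
        "Rear RIGHT":  -roll - dive,
        "Rear LEFT":   +roll - dive,
    }

heights = suspension_setting_data(lat_acc_g=0.0, lon_acc_g=-0.5)  # hard braking
assert heights["Front RIGHT"] < 0 and heights["Rear RIGHT"] > 0   # nose dives
```

Scaling the gain down (or randomizing it) could likewise approximate the reduced grip of a slippery, rainy-day surface mentioned above.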
  • By equipping the seats in which the occupants sit with various adjustment mechanisms, it is possible to move them appropriately according to the situation and, by working in coordination with the control of the suspension height function, to make the occupants feel the acceleration.
  • the garage system control device 16 can record the various operations performed by the occupant as an operation log, and by analyzing this log on a server or the like, it is possible to perform a detailed analysis of the problems with the occupant's driving technique. Such a log can also be obtained during actual driving, but in that case the state of the driving scene must be recognized accurately, and accurately judging the driving technique is expected to be difficult.
  • the driving simulation system 11 can envision in advance the scenes to be presented in the driving simulation, and can obtain data on how the occupants will drive in those scenes or in specific conditions, making it possible to make more accurate judgments.
  • Since the driving simulation uses the vehicle 13 that the driver owns rather than a general driving simulator that uses a separate device, more accurate data can be collected.
  • the second use case for driving simulation is to help people learn ideal driving techniques for specific situations, such as avoiding danger.
  • the steering wheel 21, accelerator pedal 22, brake pedal 23, etc. of the vehicle 13 are by-wire, and each of them can also generate a reaction force against the corresponding operation using an actuator or the like, which can be used effectively for learning driving techniques.
  • For example, a large reaction force can be applied to the steering wheel 21 to allow the occupant to experience correct steering operation.
  • In this way, the occupant can experience correct driving operation and can learn, for example, the correct driving operation for heavy rain.
  • Since the experience can be carried out in one's own vehicle 13, extremely efficient learning of driving techniques can be expected.
  • the teacher data can be changed depending on the skill level of the occupant's driving operation technique.
  • FIG. 15 is a flowchart explaining the initial setup process on the garage system 12 side.
  • For example, when the garage system control device 16 is started, the initial setup process on the garage system 12 side is started.
  • In step S11, the communication unit 64 of the garage system control device 16 determines whether communication is possible with the communication unit 97 and the communication unit 122 of the in-vehicle control device 18, and waits until it determines that such communication is possible. Then, if the communication unit 64 determines that communication with the communication unit 97 and the communication unit 122 of the in-vehicle control device 18 is possible, the process proceeds to step S12.
  • In step S12, the communication unit 64 of the garage system control device 16 starts communication with the communication unit 97 and the communication unit 122 of the in-vehicle control device 18.
  • In step S13, the vehicle position and attitude recognition unit 63 of the garage system control device 16 executes a vehicle position and attitude recognition process (see FIG. 19) to recognize the position and attitude of the vehicle 13, and obtains position and attitude data of the vehicle 13.
  • In step S14, the communication unit 64 of the garage system control device 16 receives the vehicle data (various data related to the vehicle 13 required to perform the driving simulation) sent from the communication unit 97 of the in-vehicle control device 18, and supplies it to the video conversion processing unit 73.
  • In step S15, the communication unit 64 of the garage system control device 16 receives the position and posture data of the occupants in the vehicle 13 transmitted from the communication unit 97 of the in-vehicle control device 18, and supplies it to the video conversion processing unit 73.
  • In step S16, the communication unit 64 of the garage system control device 16 transmits setup parameters for setting up the display system, audio system, etc. of the vehicle 13 to the in-vehicle control device 18.
  • In step S17, the communication unit 64 of the garage system control device 16 determines whether the setup of the vehicle 13 is complete, and waits until it determines that the setup of the vehicle 13 is complete. For example, when the communication unit 64 receives the vehicle setup completion notification sent from the communication unit 122 of the in-vehicle control device 18 in step S32 of FIG. 16 described below, it determines that the setup of the vehicle 13 is complete, and the initial setup process on the garage system 12 side is terminated.
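The garage-side sequence of steps S11 to S17 can be condensed into the sketch below. This is a hedged illustration only: the `link` object, its methods, and the message names are hypothetical stand-ins for the communication between unit 64 and units 97/122, not APIs from the application.

```python
# Hypothetical sketch of the garage-side initial setup flow (S11 to S17).

def garage_initial_setup(link, recognize_vehicle_pose):
    while not link.can_communicate():                    # S11: wait for vehicle
        pass
    link.open()                                          # S12: start communication
    vehicle_pose = recognize_vehicle_pose()              # S13: position/attitude
    vehicle_data = link.receive("vehicle_data")          # S14
    occupant_pose = link.receive("occupant_pose")        # S15
    link.send("setup_parameters",                        # S16: display/audio setup
              {"display": True, "audio": True})
    while not link.receive("setup_complete"):            # S17: wait for completion
        pass
    return vehicle_pose, vehicle_data, occupant_pose

class FakeVehicleLink:
    """Vehicle side that answers immediately; for illustration only."""
    def __init__(self):
        self.sent = {}
        self.replies = {"vehicle_data": {"model": "vehicle 13"},
                        "occupant_pose": {"driver": "seated"},
                        "setup_complete": True}
    def can_communicate(self):
        return True
    def open(self):
        pass
    def receive(self, name):
        return self.replies[name]
    def send(self, name, payload):
        self.sent[name] = payload

pose, data, occ = garage_initial_setup(FakeVehicleLink(), lambda: (0, 0, 0))
assert data == {"model": "vehicle 13"}
```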
  • FIG. 16 is a flowchart explaining the initial setup process on the vehicle 13 side.
  • When the in-vehicle control device 18 is started, the initial setup process on the vehicle 13 side is started, and at the start of the process, the vehicle 13 is set to the driving mode.
  • In step S21, the vehicle stop state determination unit 99 determines whether or not to switch from the driving mode to the simulation mode. For example, when a user setting value instructing the switch from the driving mode to the simulation mode is supplied from the user setting acquisition unit 98, the vehicle stop state determination unit 99 can determine that the switch from the driving mode to the simulation mode is to be performed.
  • If the vehicle stop state determination unit 99 determines in step S21 that the mode should not be switched from the driving mode to the simulation mode, the process proceeds to step S22, and the vehicle 13 continues in the driving mode. After that, the process returns to step S21, and the same process is repeated.
  • On the other hand, if the vehicle stop state determination unit 99 determines in step S21 that the mode should be switched from the driving mode to the simulation mode, the process proceeds to step S23.
  • In step S23, the vehicle stop state determination unit 99 determines whether the vehicle 13 is stopped based on the vehicle external environment data supplied from the vehicle external environment recognition unit 96, and waits until it determines that the vehicle 13 is stopped. Then, if the vehicle stop state determination unit 99 determines that the vehicle 13 is stopped, the process proceeds to step S24.
  • In step S24, the communication unit 97 and the communication unit 122 of the in-vehicle control device 18 determine whether communication with the communication unit 64 of the garage system control device 16 is possible, and wait until it is determined that communication with the communication unit 64 of the garage system control device 16 is possible. Then, if the communication unit 97 and the communication unit 122 determine that communication with the communication unit 64 of the garage system control device 16 is possible, the process proceeds to step S25.
  • In step S25, the communication unit 97 and the communication unit 122 of the in-vehicle control device 18 start communication with the communication unit 64 of the garage system control device 16.
  • In step S26, the communication unit 97 of the in-vehicle control device 18 transmits the vehicle data to the communication unit 64 of the garage system control device 16.
  • In step S27, the vehicle interior environment recognition unit 95 executes an occupant position and posture recognition process (see FIG. 20) to recognize the positions and postures of occupants inside the vehicle 13, and obtains position and posture data of the occupants inside the vehicle 13.
  • In step S28, the vehicle interior environment recognition unit 95 supplies the position and posture data of the occupants inside the vehicle 13 acquired in the occupant position and posture recognition process in step S27 to the communication unit 97, and the communication unit 97 transmits the position and posture data of the occupants inside the vehicle 13 to the garage system control device 16.
  • In step S29, the communication unit 122 of the in-vehicle control device 18 receives the setup parameters sent from the garage system control device 16 in step S16 of FIG. 15.
  • In step S30, the communication unit 122 of the in-vehicle control device 18 supplies the setup parameters received in step S29 to the display system, audio system, etc. of the vehicle 13, and executes their setup.
  • In step S31, switching to the by-wire system is performed, and control is shifted from the driving mode control unit 104 to the simulation mode control unit 105.
  • In step S32, the communication unit 97 and the communication unit 122 of the in-vehicle control device 18 send a vehicle setup completion notification to the garage system control device 16 to notify it that the setup of the vehicle 13 has been completed, and the initial setup process on the vehicle 13 side is terminated.
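The vehicle-side gating described above — a user request to enter simulation mode is honored only after the vehicle stop state determination unit confirms the vehicle is stationary — can be sketched as a small state machine. All names here are illustrative assumptions, not identifiers from the application.

```python
# Hedged sketch of the vehicle-side mode switch (steps S21, S23, S31):
# the requested switch to simulation mode takes effect only once the
# vehicle is determined to be stopped.

class ModeController:
    def __init__(self):
        self.mode = "driving"        # the vehicle starts in driving mode
        self.requested = None

    def request_simulation(self):
        """S21: user setting value instructing the switch."""
        self.requested = "simulation"

    def update(self, vehicle_speed_kmh):
        # S23: wait until the vehicle is stopped before switching over
        if self.requested == "simulation" and vehicle_speed_kmh == 0:
            self.mode = "simulation"  # S31: control shifts to the
            self.requested = None     # simulation mode control unit 105
        return self.mode

ctrl = ModeController()
ctrl.request_simulation()
assert ctrl.update(vehicle_speed_kmh=30) == "driving"   # still moving
assert ctrl.update(vehicle_speed_kmh=0) == "simulation" # now stopped
```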
  • FIG. 17 is a flowchart explaining the driving simulation operation processing on the garage system 12 side.
  • When the initial setup process of FIG. 15 is completed, the driving simulation operation process on the garage system 12 side is started.
  • In step S41, the communication unit 64 of the garage system control device 16 determines whether or not the vehicle parameters have been received, and waits until it determines that the vehicle parameters have been received. For example, when the communication unit 64 receives the vehicle parameters transmitted from the communication unit 97 of the in-vehicle control device 18 in step S69 of FIG. 18 described below, it supplies the driving operation data contained in the vehicle parameters to the virtual space generation unit 71, and supplies the position and posture data of the occupants in the vehicle 13 contained in the vehicle parameters to the video conversion processing unit 73. The communication unit 64 then determines that the vehicle parameters have been received, and the process proceeds to step S42.
  • In step S42, the virtual space generation unit 71 generates suspension setting data as described above with reference to FIG. 13, based on the behavior of the vehicle 13 (cornering, acceleration, deceleration, etc.) determined by the driving simulation according to the driving operation data supplied in step S41.
  • In step S43, the virtual space generation unit 71 generates seat setting data as described above with reference to FIG. 14, based on the behavior of the vehicle 13 (driving at a constant speed, braking, etc.) determined by the driving simulation according to the driving operation data supplied in step S41.
  • In step S44, the virtual space generation unit 71 calculates the steering reaction force generated in the steering wheel 21, the accelerator reaction force generated in the accelerator pedal 22, and the brake reaction force generated in the brake pedal 23, based on the behavior of the vehicle 13 determined by the driving simulation in accordance with the driving operation data supplied in step S41.
  • the virtual space generation unit 71 then generates reaction force setting data in which the steering reaction force, accelerator reaction force, and brake reaction force are stored.
  • In step S45, the virtual space generation unit 71 supplies the vehicle setting parameters, which are composed of the suspension setting data generated in step S42, the seat setting data generated in step S43, and the reaction force setting data generated in step S44, to the communication unit 64.
  • the communication unit 64 then transmits the vehicle setting parameters to the in-vehicle control device 18.
  • In step S46, the virtual space generation unit 71 generates rendering parameters based on the steering wheel rotation data included in the driving operation data supplied from the communication unit 64 in step S41.
  • In step S47, the virtual space generation unit 71 generates rendering parameters based on the brake position data included in the driving operation data supplied from the communication unit 64 in step S41.
  • In step S48, the virtual space generation unit 71 generates rendering parameters based on the accelerator position data included in the driving operation data supplied from the communication unit 64 in step S41.
  • In step S49, the virtual space generation unit 71 reads the virtual space data from the virtual space storage unit 65 and places the objects supplied from the 3DCG generation unit 67 in the virtual space it generates.
  • The virtual space generation unit 71 then renders the virtual space based on the rendering parameters generated in steps S46 to S48, and generates a video stream for the vehicle 13 and a video stream for the garage.
  • In step S50, the virtual space generation unit 71 outputs the video stream for the vehicle 13 generated in step S49 to the signal multiplexing unit 72.
  • the video stream for the vehicle 13 is then transmitted to the vehicle 13 via the signal multiplexing unit 72 and the communication unit 64.
  • In step S51, the virtual space generation unit 71 outputs the video stream for the garage generated in step S49 to the video conversion processing unit 73.
  • The video conversion processing unit 73 then performs video conversion processing based on the position and orientation data of the vehicle 13 supplied from the vehicle position and attitude recognition unit 63 and the position and posture data of the occupants in the vehicle 13 supplied from the communication unit 64 in step S41.
  • In step S52, the video conversion processing unit 73 outputs the video stream for the garage that has been subjected to the video conversion processing in step S51. That is, the video conversion processing unit 73 supplies the video stream for the ceiling surface to the video transmission unit 74 for the ceiling surface, and causes the simulation video for the ceiling surface to be displayed on the garage display 41-1 for the ceiling surface.
  • Similarly, the video conversion processing unit 73 causes the simulation video for the front surface to be displayed on the garage display 41-2 for the front surface, the simulation video for the right side surface to be displayed on the garage display 41-3 for the right side surface, the simulation video for the left side surface to be displayed on the garage display 41-4 for the left side surface, the simulation video for the rear surface to be displayed on the garage display 41-5 for the rear surface, and the simulation video for the floor surface to be displayed on the garage display 41-6 for the floor surface.
  • the process returns to step S41, and the same process is repeated.
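One iteration of the garage-side loop (steps S41 to S52) can be summarized as below. Every callable passed in is a hypothetical placeholder for the corresponding unit of the garage system control device 16; this is a sketch of the data flow, not the application's code.

```python
# Hedged sketch of one garage-side simulation step (S41 to S52).

def simulation_step(vehicle_params, simulate, render, send):
    ops = vehicle_params["driving_operation"]                # S41: received data
    behavior = simulate(ops)                                 # behavior of vehicle 13
    setting_params = {"suspension": behavior["suspension"],  # S42
                      "seat": behavior["seat"],              # S43
                      "reaction_force": behavior["reaction_force"]}  # S44
    send("vehicle_setting_parameters", setting_params)       # S45: to the vehicle
    vehicle_video, garage_video = render(ops)                # S46 to S49: rendering
    send("vehicle_video", vehicle_video)                     # S50: to the vehicle
    return garage_video                                      # S51/S52: to displays

sent = []
garage = simulation_step(
    {"driving_operation": {"steering": 90.0, "brake": 0.0, "accelerator": 0.3}},
    simulate=lambda ops: {"suspension": "roll", "seat": "tilt",
                          "reaction_force": "light"},
    render=lambda ops: ("vehicle frames", "garage frames"),
    send=lambda name, payload: sent.append(name),
)
assert sent == ["vehicle_setting_parameters", "vehicle_video"]
assert garage == "garage frames"
```

Calling `simulation_step` in a loop mirrors the flowchart's return to step S41 after each pass.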
  • FIG. 18 is a flowchart explaining the driving simulation operation process on the vehicle 13 side.
  • When the initial setup process of FIG. 16 is completed, the driving simulation operation process on the vehicle 13 side is started.
  • In step S61, the steering wheel rotation data acquisition unit 101 determines whether or not the driver of the vehicle 13 has rotated the steering wheel 21, and waits until it determines that the driver of the vehicle 13 has rotated the steering wheel 21. Then, if the steering wheel rotation data acquisition unit 101 determines in step S61 that the driver of the vehicle 13 has rotated the steering wheel 21, the process proceeds to step S62.
  • In step S62, the steering wheel rotation data acquisition unit 101 acquires steering wheel rotation data in accordance with the rotation operation of the steering wheel 21 by the driver of the vehicle 13, and supplies the data to the driving operation detection unit 103.
  • In step S63, the brake position data acquisition unit 100 determines whether or not the driver of the vehicle 13 has depressed the brake pedal 23, and waits until it determines that the driver of the vehicle 13 has depressed the brake pedal 23. Then, if the brake position data acquisition unit 100 determines in step S63 that the driver of the vehicle 13 has depressed the brake pedal 23, the process proceeds to step S64.
  • In step S64, the brake position data acquisition unit 100 acquires brake position data according to the depression of the brake pedal 23 by the driver of the vehicle 13, and supplies the data to the driving operation detection unit 103.
  • In step S65, the accelerator position data acquisition unit 102 determines whether or not the driver of the vehicle 13 has depressed the accelerator pedal 22, and waits until it determines that the driver of the vehicle 13 has depressed the accelerator pedal 22. Then, if the accelerator position data acquisition unit 102 determines in step S65 that the driver of the vehicle 13 has depressed the accelerator pedal 22, the process proceeds to step S66.
  • In step S66, the accelerator position data acquisition unit 102 acquires accelerator position data according to the depression of the accelerator pedal 22 by the driver of the vehicle 13, and supplies the data to the driving operation detection unit 103.
  • In step S67, the vehicle interior environment recognition unit 95 determines whether or not it has recognized the position and posture of the occupants inside the vehicle 13, and waits until it determines that it has recognized the position and posture of the occupants inside the vehicle 13. Then, if the vehicle interior environment recognition unit 95 determines in step S67 that it has recognized the position and posture of the occupants inside the vehicle 13, the process proceeds to step S68.
  • In step S68, the vehicle interior environment recognition unit 95 generates position and posture data of the occupants inside the vehicle 13.
  • In step S69, the driving operation detection unit 103 supplies driving operation data (brake position data, steering wheel rotation data, and accelerator position data) indicating the content of the driving operation of the driver of the vehicle 13 to the communication unit 97.
  • the vehicle interior environment recognition unit 95 supplies position and posture data of the occupants in the vehicle 13 to the communication unit 97.
  • the communication unit 97 transmits vehicle parameters including the driving operation data and the position and posture data of the occupants in the vehicle 13 to the garage system control device 16.
  • In step S70, the communication unit 97 determines whether the vehicle setting parameters have been received, and waits until it determines that the vehicle setting parameters have been received. Then, when the communication unit 64 of the garage system control device 16 transmits the vehicle setting parameters in step S45 of FIG. 17 described above, the communication unit 97 supplies the vehicle setting parameters to the simulation mode control unit 105 and determines that the vehicle setting parameters have been received, and the process proceeds to step S71.
  • In step S71, the simulation mode control unit 105 generates suspension control data in accordance with the suspension setting data included in the vehicle setting parameters supplied from the communication unit 97, and supplies the data to the suspension control unit 110. This enables the suspension control unit 110 to control the suspension height as described with reference to FIG. 13, and reproduce the behavior of the vehicle 13 in the virtual space.
  • In step S72, the simulation mode control unit 105 generates seat control data according to the seat setting data included in the vehicle setting parameters supplied from the communication unit 97, and supplies the generated data to the seat control unit 113. This enables the seat control unit 113 to control the height of the seat cushion and the inclination of the seat back as described with reference to FIG. 14, and reproduce the behavior of the vehicle 13 in the virtual space.
  • In step S73, the simulation mode control unit 105 generates steering reaction force control data in accordance with the steering reaction force data value included in the vehicle setting parameters supplied from the communication unit 97, and supplies the generated steering reaction force control data to the steering reaction force control unit 109.
  • This enables the steering reaction force control unit 109 to generate a steering reaction force in the steering wheel 21 that resists the arm force of the driver who is turning the steering wheel 21, thereby reproducing the behavior of the vehicle 13 in the virtual space.
  • step S74 the simulation mode control unit 105 generates brake reaction force control data in accordance with the brake reaction force data value included in the vehicle setting parameters supplied from the communication unit 97, and supplies the generated data to the brake reaction force control unit 111.
  • This enables the brake reaction force control unit 111 to generate a brake reaction force in the brake pedal 23 that resists the leg force of the driver who depresses the brake pedal 23, thereby reproducing the behavior of the vehicle 13 in the virtual space.
  • step S75 the video signal processing unit 128 determines whether a video stream for the vehicle 13 has been supplied, and waits until it is determined that a video stream for the vehicle 13 has been supplied. Then, when a video stream for the vehicle 13 is transmitted from the garage system control device 16 in step S50 of FIG. 17 described above, the video stream for the vehicle 13 is supplied to the video signal processing unit 128 via the communication unit 122, the signal separation unit 125, and the video input switching unit 126. As a result, the video signal processing unit 128 determines that a video stream for the vehicle 13 has been supplied, and the process proceeds to step S76.
  • step S76 the video signal processing unit 128 performs signal processing on the video stream for the vehicle 13, and supplies the corresponding video streams to the navigation video transmission unit 129, the side mirror video transmission unit 130, the instrument panel video transmission unit 131, the rearview mirror video transmission unit 132, and the rear seat video transmission unit 133.
  • the corresponding images are displayed on the navigation display 51, the side mirror displays 52L and 52R, the instrument panel display 53, the rearview mirror display 54, and the rear seat displays 55L and 55R. Then, the process returns to step S61, and the same process is repeated.
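  • The parameter-application steps S71 through S74 can be sketched roughly as follows. This is a minimal illustration only: the dictionary field names, units, and values are assumptions for the sketch, not the actual data format exchanged between the communication unit 97 and the simulation mode control unit 105.

```python
# Illustrative sketch of steps S71-S74: turning received vehicle setting
# parameters into per-actuator control data. Field names and units are
# hypothetical.

def generate_control_data(vehicle_setting_params: dict) -> dict:
    """Map vehicle setting parameters to control data for each drive unit."""
    control = {}
    # S71: suspension height from the suspension setting data (mm).
    control["suspension"] = {
        "height_mm": vehicle_setting_params["suspension"]["height_mm"]
    }
    # S72: seat cushion firmness and inclination from the seat setting data.
    control["seat"] = {
        "cushion_level": vehicle_setting_params["seat"]["cushion_level"],
        "incline_deg": vehicle_setting_params["seat"]["incline_deg"],
    }
    # S73: steering reaction force resisting the driver's arm force.
    control["steering_reaction"] = {
        "torque_nm": vehicle_setting_params["steering_reaction_nm"]
    }
    # S74: brake reaction force resisting the driver's leg force.
    control["brake_reaction"] = {
        "force_n": vehicle_setting_params["brake_reaction_n"]
    }
    return control

params = {
    "suspension": {"height_mm": 120},
    "seat": {"cushion_level": 3, "incline_deg": 15},
    "steering_reaction_nm": 2.5,
    "brake_reaction_n": 180.0,
}
data = generate_control_data(params)
```

  • Each sub-dictionary of `data` would then be supplied to the corresponding control unit (110, 113, 109, and 111 respectively).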
  • FIG. 19 is a flowchart explaining the vehicle position and attitude recognition process performed in step S13 of FIG. 15.
  • In step S81, the garage sensor 43 and the garage camera 42 are set up.
  • In step S82, the garage sensor 43 and the garage camera 42 start sensing the position and attitude of the vehicle 13 and output sensor data and video data.
  • In step S83, the sensor data acquisition unit 61 acquires the sensor data output from the garage sensor 43 and supplies it to the vehicle position and attitude recognition unit 63, and the video data acquisition unit 62 acquires the video data output from the garage camera 42 and supplies it to the vehicle position and attitude recognition unit 63.
  • In step S84, the vehicle position and attitude recognition unit 63 calculates the position and attitude of the vehicle 13 based on the sensor data and video data supplied in step S83. As a result, the vehicle position and attitude recognition unit 63 acquires the position and attitude data of the vehicle 13, and the vehicle position and attitude recognition process ends.
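  • The fusion of sensor data and video data in step S84 can be sketched as follows. The fixed 60/40 weighting is purely an illustrative assumption; the actual estimation method used by the vehicle position and attitude recognition unit 63 is not specified here.

```python
# Minimal sketch of step S84: estimate the vehicle pose by fusing a
# range-sensor estimate with a camera-based estimate. The weighting is
# a hypothetical stand-in for the real recognition algorithm.

def fuse_pose(sensor_pose, camera_pose, w_sensor=0.6):
    """Weighted fusion of two (x_m, y_m, yaw_deg) pose estimates."""
    w_cam = 1.0 - w_sensor
    return tuple(w_sensor * s + w_cam * c
                 for s, c in zip(sensor_pose, camera_pose))

# Sensor says (1.0 m, 2.0 m, 90 deg); camera says (1.2 m, 2.2 m, 88 deg).
pose = fuse_pose((1.0, 2.0, 90.0), (1.2, 2.2, 88.0))
```

  • The same pattern applies to the occupant position and posture recognition of FIG. 20, with the in-vehicle sensor 57 and in-vehicle camera 56 as the two sources.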
  • FIG. 20 is a flowchart explaining the occupant position and attitude recognition process performed in step S27 of FIG. 16.
  • In step S91, the in-vehicle sensor 57 and the in-vehicle camera 56 are set up.
  • In step S92, the in-vehicle sensor 57 and the in-vehicle camera 56 start sensing the positions and postures of the occupants in the vehicle 13 and output sensor data and video data.
  • In step S93, the sensor data acquisition unit 91 acquires the sensor data output from the in-vehicle sensor 57 and supplies it to the vehicle interior environment recognition unit 95, and the video data acquisition unit 92 acquires the video data output from the in-vehicle camera 56 and supplies it to the vehicle interior environment recognition unit 95.
  • In step S94, the vehicle interior environment recognition unit 95 calculates the positions and postures of the occupants in the vehicle 13 based on the sensor data and video data supplied in step S93. As a result, the vehicle interior environment recognition unit 95 acquires the position and posture data of the occupants in the vehicle 13, and the occupant position and posture recognition process ends.
  • FIG. 21 is a flowchart explaining the reflection control process that controls the reflections on the body of the vehicle 13.
  • In step S101, the virtual space generation unit 71 acquires the position and orientation data of the vehicle 13 supplied from the vehicle position and attitude recognition unit 63.
  • In step S102, the virtual space generation unit 71 places a virtual light source in the virtual space.
  • In step S103, the virtual space generation unit 71 places a 3DCG object corresponding to the vehicle 13 in the virtual space based on the position and orientation data acquired in step S101.
  • In step S104, the virtual space generation unit 71 sets the reflectance of the body surface of the 3DCG object corresponding to the vehicle 13 in the virtual space.
  • In step S105, the virtual space generation unit 71 performs ray tracing such that light is irradiated from the virtual light source placed in step S102 onto the 3DCG object corresponding to the vehicle 13 placed in step S103, and is reflected on the body surface with the reflectance set in step S104.
  • In step S106, the virtual space generation unit 71 acquires the position and posture data of the occupants in the vehicle 13 supplied from the communication unit 64.
  • In step S107, the virtual space generation unit 71 places a virtual camera at a position in the virtual space corresponding to the viewpoint position P of the occupant in the vehicle 13.
  • In step S108, the virtual space generation unit 71 identifies, from the ray tracing results obtained using the virtual camera placed in step S107, the light rays on the body of the vehicle 13 that can be observed by the occupants in the vehicle 13 in the virtual space.
  • In step S109, the virtual space generation unit 71 identifies the body of the vehicle 13 in the virtual space on which the light rays identified in step S108 are reflected, and the coordinates on that body.
  • In step S110, the vehicle position and attitude recognition unit 63 identifies, on the vehicle 13 in the real space, the body and the body coordinates identified in the virtual space in step S109, based on the sensor data supplied from the sensor data acquisition unit 61 and the video data supplied from the video data acquisition unit 62.
  • In step S111, the vehicle position and attitude recognition unit 63 determines the position of the garage display 41 in the vertical direction relative to the coordinates determined in step S110.
  • In step S112, the virtual space generation unit 71 identifies, within the virtual space, the position and orientation of the garage display 41 in the real space identified in step S111.
  • In step S113, the virtual space generation unit 71 identifies the rendering content for the garage display 41 identified in the virtual space in step S112.
  • In step S114, the virtual space generation unit 71 displays the rendering content identified in step S113 on the garage display 41 identified in step S112, and the reflection control process ends.
  • The reflection control process described above makes it possible to reproduce the shine of the body surface of the vehicle 13.
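  • The mirror-reflection core of steps S105 to S108 can be sketched with the standard reflection formula r = d - 2(d·n)n: a light ray hitting the body surface is reflected about the surface normal, and the reflected direction is compared against the direction toward the occupant's viewpoint P. The specific vectors and the alignment threshold below are illustrative assumptions, not parameters of the virtual space generation unit 71.

```python
# Sketch of the ray-tracing core of the reflection control process:
# reflect a ray off the body surface and test whether the glint points
# at the occupant's viewpoint P. Geometry values are illustrative.
import math

def reflect(d, n):
    """Reflect direction d about the unit surface normal n: r = d - 2(d.n)n."""
    dot = sum(di * ni for di, ni in zip(d, n))
    return tuple(di - 2.0 * dot * ni for di, ni in zip(d, n))

def normalize(v):
    length = math.sqrt(sum(c * c for c in v))
    return tuple(c / length for c in v)

# A ray travelling straight down hits a horizontal body panel (normal up).
incoming = (0.0, 0.0, -1.0)
normal = (0.0, 0.0, 1.0)
reflected = reflect(incoming, normal)

# Viewpoint P directly above the panel: cosine of the angle between the
# reflected ray and the direction to P decides observability.
to_viewpoint = normalize((0.0, 0.0, 2.0))
alignment = sum(r * v for r, v in zip(reflected, to_viewpoint))
observable = alignment > 0.99
```

  • When `observable` holds for a body coordinate, the corresponding rendering content would be identified for the garage display 41, as in steps S109 to S113.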
  • FIG. 22 is a block diagram showing an example configuration of a training system.
  • The training system 201 is composed of a cloud system 212 constructed on a network 211 and multiple client garage systems 213 connected to the network 211.
  • N client garage systems 213-1 to 213-N are connected to the network 211. Hereinafter, when there is no need to distinguish between the client garage systems 213-1 to 213-N, they are simply referred to as client garage systems 213.
  • The cloud system 212 is composed of a database 214 and a server device 215, and the server device 215 can use various data registered in the database 214 to provide training to the multiple client garage systems 213.
  • The server device 215 includes a processor 221, an input/output interface 222, an operating system 223, and a memory 224, and application software 225 is stored in the memory 224.
  • The processor 221 reads the application software 225 from the memory 224 and executes it to perform training.
  • Similarly, the client garage system 213 includes a processor 231, an input/output interface 232, an operating system 233, and a memory 234, and application software 235 is stored in the memory 234.
  • The processor 231 reads the application software 235 from the memory 234 and executes it to perform training.
  • The client garage system 213 corresponds to the driving simulation system 11 in FIG. 1 and includes each of the blocks included in the driving simulation system 11.
  • The training configurations implemented by the training system 201 are divided into unsupervised training and supervised training.
  • FIG. 23 is a flowchart explaining unsupervised training.
  • In step S201, the client garage system 213 acquires the user's driving operations (steering wheel rotation data, accelerator position data, and brake position data) on the steering wheel 21, the accelerator pedal 22, and the brake pedal 23.
  • In step S202, the client garage system 213 executes a driving simulation (the driving simulation operation process of FIGS. 17 and 18 described above) according to each driving operation of the user acquired in step S201.
  • In step S203, the client garage system 213 displays the simulation image generated in the driving simulation executed in step S202 on the garage display 41.
  • In step S204, the client garage system 213 controls the corresponding drive mechanisms according to each piece of control data (steering reaction force control data, suspension control data, brake reaction force control data, accelerator reaction force control data, and seat control data) generated in the driving simulation executed in step S202.
  • In step S205, the client garage system 213 determines whether or not to end the driving simulation, for example, according to the user's operational input.
  • If, in step S205, the client garage system 213 determines not to end the driving simulation, the process returns to step S201, and the same processing is repeated thereafter. In this case, the user can perform each driving operation while being aware of the results of the previous driving simulation.
  • On the other hand, if the client garage system 213 determines that the driving simulation should be ended, the unsupervised training ends.
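  • The unsupervised training loop of steps S201 to S205 can be sketched as follows. The callables are hypothetical stand-ins for the client garage system 213; only the loop structure (acquire, simulate, present, repeat until the user ends) reflects the flowchart.

```python
# Sketch of the FIG. 23 loop: acquire operations (S201), simulate (S202),
# present video and drive-mechanism feedback (S203/S204), and repeat
# until the user chooses to end (S205). All callables are hypothetical.

def run_unsupervised_training(get_operations, simulate, present, should_end):
    """Run the training loop; returns how many simulation steps were run."""
    steps = 0
    while not should_end():          # S205: end check
        ops = get_operations()       # S201: wheel / accelerator / brake
        result = simulate(ops)       # S202: driving simulation
        present(result)              # S203/S204: video + reaction forces
        steps += 1
    return steps

# Tiny usage example: end after three iterations.
frames = []
counter = {"n": 0}

def should_end():
    counter["n"] += 1
    return counter["n"] > 3

steps = run_unsupervised_training(
    lambda: {"steer": 0.1}, lambda ops: ops, frames.append, should_end)
```

  • Because each iteration presents the previous result before the next operation is acquired, the user can adjust their driving while being aware of the preceding simulation outcome, as the flowchart notes.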
  • FIG. 24 is a flowchart explaining supervised training.
  • In step S211, the client garage system 213 acquires the user's driving operations on the steering wheel 21, the accelerator pedal 22, and the brake pedal 23.
  • In step S212, the client garage system 213 executes a driving simulation according to each of the user's driving operations acquired in step S211. The client garage system 213 then generates the difference between the user's operations and the teacher data.
  • In step S213, the client garage system 213 displays the simulation image generated in the driving simulation executed in step S212 on the garage display 41. At this time, the client garage system 213 outputs the difference between the user's operations and the teacher data generated in step S212.
  • In step S214, the client garage system 213 controls the corresponding drive mechanisms according to the control data generated in the driving simulation executed in step S212. At this time, the client garage system 213 notifies the user of the difference between the user's operations and the teacher data generated in step S212.
  • In step S215, the client garage system 213 determines whether or not to end the driving simulation, for example, according to the user's operational input.
  • If, in step S215, the client garage system 213 determines not to end the driving simulation, the process returns to step S211, and the same processing is repeated thereafter. In this case, the user can perform each driving operation while being aware of the results of the previous driving simulation.
  • On the other hand, if the client garage system 213 determines that the driving simulation should be ended, the supervised training ends.
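  • The teacher-data comparison generated in step S212 can be sketched as a per-axis signed difference between the user's operations and the teacher data. The axis names and units are illustrative assumptions, not the actual format of the teacher data.

```python
# Sketch of the step S212 comparison: signed difference (user - teacher)
# for each operation axis. Axis names and units are hypothetical.

def operation_difference(user_ops: dict, teacher_ops: dict) -> dict:
    """Return user-minus-teacher differences for shared operation axes."""
    return {axis: user_ops[axis] - teacher_ops[axis]
            for axis in teacher_ops if axis in user_ops}

diff = operation_difference(
    {"steering_deg": 12.0, "accelerator": 0.35, "brake": 0.0},
    {"steering_deg": 10.0, "accelerator": 0.30, "brake": 0.0},
)
# Here the user steers 2 degrees and accelerates 0.05 more than the teacher.
```

  • A difference of this form could then be output alongside the simulation image (step S213) and notified through the drive mechanisms (step S214).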
  • FIG. 25 is a flowchart explaining learning.
  • In step S221, the client garage system 213 selects learning according to the user's operational input.
  • In step S222, the client garage system 213 starts learning and performs a driving simulation according to the teacher's ideal driving pattern.
  • In step S223, the client garage system 213 displays the simulation video generated in the driving simulation executed in step S222 on the garage display 41, and outputs the audio generated together with the simulation video.
  • In step S224, the client garage system 213 controls the corresponding drive mechanisms according to the control data generated in the driving simulation executed in step S222, and the learning process then ends.
  • In this way, students can experience the teacher's model driving, including driving by an AI, without actually driving.
  • That is, students can learn ideal driving patterns for steering, braking, accelerating, and so on while sitting in the driver's seat of their own vehicle 13 and experiencing the reaction forces (the teacher's operations) applied to them.
  • In training, by contrast, the student performs the operations and receives operational assistance and advice from the teacher. For example, when the student's operation deviates from the ideal driving pattern, the optimal way of returning to the ideal driving pattern depends on the course, road surface, and speed conditions, and the training measures for preventing deviation also differ, as described below with reference to FIGS. 26 to 28.
  • FIG. 26 is a flowchart explaining training conducted one-on-one between a teacher and a student, where the teacher is a human.
  • In step S231, the client garage system 213 selects a training session according to the operational input of the user (student).
  • In step S232, the client garage system 213 connects the operation systems and communication paths of the teacher and the student via the network 211.
  • In step S233, the client garage system 213 executes a driving simulation according to the teacher's ideal driving pattern.
  • In step S234, the client garage system 213 displays the simulation video generated in the driving simulation executed in step S233 on the garage display 41, and outputs the audio generated together with the simulation video.
  • In step S235, the client garage system 213 controls the corresponding drive mechanisms according to the control data generated in the driving simulation executed in step S233.
  • In step S236, if there is a deviation from the teacher's ideal driving pattern, the teacher performs auxiliary operations via the network 211, and the client garage system 213 controls the corresponding drive mechanisms according to each piece of control data based on the auxiliary operations.
  • In step S237, the client garage system 213 displays the simulation video generated in the driving simulation according to the teacher's auxiliary operations in step S236 on the garage display 41, and outputs the audio generated together with the simulation video.
  • In step S238, the client garage system 213 acquires the user's driving operations on the steering wheel 21, the accelerator pedal 22, and the brake pedal 23.
  • In step S239, the client garage system 213 determines whether the training is complete. If it determines that the training is not complete, the process returns to step S233, and the same processing is repeated. On the other hand, if the client garage system 213 determines in step S239 that the training is complete, the process proceeds to step S240.
  • In step S240, the client garage system 213 records the evaluation and scoring of the operation log, linked to the user ID.
  • In step S241, if there are any concerns, the client garage system 213 adjusts the software in response to the teacher's operation. At this time, the teacher may prepare an adjustment request to be sent to the dealer if necessary.
  • In step S242, the client garage system 213 conducts communication with the teacher, and the process then ends.
  • In this way, the training system 201 allows a human teacher and a student to train one-on-one. That is, the operation systems and communication paths of the teacher and the student are connected via the network 211. Then, while the student is driving, the teacher can remotely provide assistance and advice on steering, braking, accelerator operation, and other driving-related actions via the network 211. This allows the user to train individually in the driver's seat of their own vehicle 13.
  • FIG. 27 is a flowchart explaining training in which the teacher is a human and the teacher and students are one-to-many.
  • In step S251, the client garage system 213 selects a training session according to the operational input of the user (student).
  • In step S252, the client garage system 213 connects the operation systems and communication paths of the teacher and the students via the network 211, and also connects the communication paths between the students.
  • In step S253, the client garage system 213 executes a driving simulation according to the teacher's ideal driving pattern.
  • In step S254, the client garage system 213 displays the simulation video generated in the driving simulation executed in step S253 on the garage display 41, and outputs the audio generated together with the simulation video.
  • In step S255, the client garage system 213 controls the corresponding drive mechanisms according to the control data generated in the driving simulation executed in step S253.
  • In step S256, if there is a deviation from the teacher's ideal driving pattern, the teacher performs auxiliary operations via the network 211 as a measure addressing the tendency of the entire group, and the client garage system 213 controls the corresponding drive mechanisms according to each piece of control data based on the auxiliary operations.
  • In step S257, the client garage system 213 displays on the garage display 41, for the entire group, the simulation video generated in the driving simulation following the teacher's auxiliary operations in step S256, and outputs the audio generated together with the simulation video.
  • In step S258, if there is a deviation from the teacher's ideal driving pattern, the teacher performs auxiliary operations via the network 211 as a countermeasure addressing the tendencies of individuals and groups, and the client garage system 213 controls the corresponding drive mechanisms according to each piece of control data based on the auxiliary operations.
  • In step S259, the client garage system 213 displays on the garage display 41, individually and for each group, the simulation video generated in the driving simulation following the teacher's auxiliary operations in step S258, and outputs the audio generated together with the simulation video.
  • In step S260, the client garage system 213 acquires the user's driving operations on the steering wheel 21, the accelerator pedal 22, and the brake pedal 23.
  • In step S261, the client garage system 213 determines whether the training is complete. If it determines that the training is not complete, the process returns to step S253, and the same processing is repeated thereafter. On the other hand, if the client garage system 213 determines in step S261 that the training is complete, the process proceeds to step S262.
  • In step S262, the client garage system 213 records the evaluation and scoring of each individual operation log, linked to the user ID.
  • In step S263, if there are any concerns, the client garage system 213 adjusts each piece of software in response to the teacher's operation.
  • At this time, the teacher may prepare an adjustment request form for the dealer if necessary.
  • Concerns here may include, for example, the software settings being inappropriate for the user's driving skill, or the software being unable to adjust the settings to suit the user's driving skill.
  • In step S264, the client garage system 213 conducts communication with the teacher and among the students, and the process then ends.
  • In this way, the training system 201 can be used for one-to-many training between a human teacher and students. That is, the operation systems and communication paths of the teacher and many students are connected via the network 211 (communication paths between students may also be connected). Then, while the students are driving, the teacher can remotely provide assistance and advice to the entire group via the network 211 on driving-related actions such as steering, braking, and accelerator operation, based on tendencies and countermeasures. In addition, the teacher can select individuals or groups that deviate greatly from the ideal driving pattern and remotely provide assistance and advice to them individually. This allows users to train individually, or together with a community, in the driver's seat of their own vehicle 13.
  • FIG. 28 is a flowchart explaining training in which the teacher is an AI and the teacher and students are one-to-one or one-to-many.
  • In step S271, the client garage system 213 selects a training session according to the operational input of the user (student).
  • In step S272, the client garage system 213 connects the operation systems and communication paths of the teacher AI and the students via the network 211, and, if there are many students, also connects the communication paths between the students.
  • In step S273, the client garage system 213 executes a driving simulation according to the teacher's ideal driving pattern.
  • In step S274, the client garage system 213 displays the simulation video generated in the driving simulation executed in step S273 on the garage display 41, and outputs the audio generated together with the simulation video.
  • In step S275, the client garage system 213 controls the corresponding drive mechanisms according to the control data generated in the driving simulation executed in step S273.
  • In step S276, if there is a deviation from the teacher's ideal driving pattern, the teacher AI individually performs auxiliary operations via the network 211 as a measure addressing the tendency, and the client garage system 213 controls the corresponding drive mechanisms according to each piece of control data based on the auxiliary operations.
  • In step S277, the client garage system 213 displays the simulation video generated in the driving simulation according to the individual auxiliary operations of the teacher AI in step S276 on the garage display 41, and outputs the audio generated together with the simulation video.
  • In step S278, the client garage system 213 generates a driving simulation with the support of the teacher AI and controls the corresponding drive mechanisms according to each piece of control data.
  • In step S279, the client garage system 213 displays on the garage display 41, for each individual and group, the simulation video generated in the driving simulation following the auxiliary operations of the teacher AI in step S278, and outputs the audio generated together with the simulation video.
  • In step S280, the client garage system 213 acquires the user's driving operations on the steering wheel 21, the accelerator pedal 22, and the brake pedal 23.
  • In step S281, the client garage system 213 determines whether the training is complete. If it determines that the training is not complete, the process returns to step S273, and the same processing is repeated thereafter. On the other hand, if the client garage system 213 determines in step S281 that the training is complete, the process proceeds to step S282.
  • In step S282, the client garage system 213 records the evaluation and scoring of each individual operation log, linked to the user ID.
  • In step S283, if there are any concerns, the client garage system 213 adjusts each piece of software according to the output of the teacher AI. At this time, the teacher AI may prepare an adjustment request form for the dealer if necessary.
  • In step S284, the client garage system 213 conducts communication among the students, and the process then ends.
  • In this way, the teacher AI and students can train one-to-one or one-to-many. That is, the operation systems and communication paths of the teacher AI and many students are connected via the network 211 (communication paths between students may also be connected).
  • The teacher AI can then remotely provide auxiliary operations and advice in response to deviations from the ideal driving pattern. This allows the user to train individually, or together with a community, in the driver's seat of their own vehicle 13.
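  • The deviation check and auxiliary operation performed by the teacher AI (step S276 and its counterparts in FIGS. 26 and 27) can be sketched as follows. The deviation threshold and blend gain are illustrative assumptions, not values defined by the training system 201.

```python
# Sketch of an auxiliary operation: if the student's control value
# deviates from the ideal driving pattern by more than a threshold,
# blend it partway back toward the ideal. Threshold and gain are
# hypothetical tuning values.

def assist(student_value, ideal_value, threshold=0.1, gain=0.5):
    """Return the (possibly corrected) control value for one axis."""
    deviation = student_value - ideal_value
    if abs(deviation) <= threshold:
        return student_value                 # within tolerance: no assistance
    return student_value - gain * deviation  # pull partway back to the ideal

corrected = assist(0.8, 0.4)   # large deviation: pulled toward the ideal
untouched = assist(0.45, 0.4)  # small deviation: left unchanged
```

  • The corrected value would then drive the reaction-force mechanisms, so the student physically feels the teacher's correction through the steering wheel 21 or pedals.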
  • Furthermore, the training system 201 can improve the efficiency of management and of the training itself by combining a human teacher and a teacher AI, for example by entrusting repetitive training to the teacher AI depending on the training situation.
  • The teacher AI may be executed on either the cloud system 212 or the client garage system 213.
  • The ideal driving pattern differs depending on the purpose, such as safe driving or racing. In the training system 201, it is derived by setting goals to be achieved through training, tests, game elements, and the like, and through the accompanying professional driving, teacher examples, and various AI technologies.
  • In the training system 201, the achievement of the set goals and all trained actions are recorded as metadata in the metaverse space, making fair evaluation and scoring possible. Scoring can also cover actions related to driving operations, such as the detection of actions such as collision checks through camera monitoring, and the detection of shift, turn signal, hazard lamp, fog lamp, and defroster operations.
  • In this way, the training system 201 can not only reduce accidents by improving the user's driving skills, but can also contribute to the granting and renewal of licenses by enabling fair evaluation and scoring.
  • Usually, a driver improves his or her driving skills to match the performance of his or her vehicle 13.
  • However, adjusting the response, weight, stiffness, and the like of the steering, brakes, accelerator, suspension, and so on to suit differences in each individual's abilities may be hindered, or may be impossible in the first place.
  • Any items that cannot be adjusted using software can be passed on to the dealer as adjustment items and adjusted there.
  • In step S291, the client garage system 213 reads the parameters linked to the personal ID of the user in a vehicle 13 that supports the personal optimization function, and acquires each driving operation of the user.
  • In step S292, the client garage system 213 performs a matching process on the cloud against the performance of the vehicle 13 that the user will be riding in, and then downloads the parameters to the vehicle 13 supporting the personal optimization function.
  • In step S293, the client garage system 213 performs parameter adjustment, and the process ends.
  • In this way, the vehicle adjustment parameters linked to one's ID are stored on the cloud. When one drives a partner's vehicle 13, a shared car, or a rental vehicle 13, if the vehicle 13 being driven supports the personal optimization function, the parameters are read from the cloud and a matching process is performed against the performance of that vehicle 13, allowing its settings to be automatically adjusted to match those of one's own vehicle 13.
  • Also, when replacing the vehicle 13, it is possible to start training from scratch at the time of purchase; however, if the replacement vehicle 13 supports the personal optimization function, the vehicle adjustment parameters linked to one's ID can be read from the cloud and a matching process performed against the performance of the replacement vehicle 13, bringing its settings closer to those of one's previous vehicle 13. For example, if it is known in advance that the replacement vehicle 13 supports the personal optimization function, the personally optimized settings after replacement can be virtually reproduced in the current vehicle 13, and learning and training related to the replacement vehicle 13 can be carried out in advance.
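  • The matching process of step S292 can be sketched as clamping the user's stored adjustment parameters into the ranges the target vehicle can actually reproduce. The parameter names and ranges below are illustrative assumptions, not the actual parameter set of the personal optimization function.

```python
# Sketch of the step S292 matching: clamp each stored parameter to the
# supported range of the vehicle 13 being boarded. Names and ranges are
# hypothetical.

def match_parameters(user_params: dict, supported_ranges: dict) -> dict:
    """Clamp each stored parameter into the target vehicle's range."""
    matched = {}
    for name, value in user_params.items():
        lo, hi = supported_ranges.get(name, (value, value))
        matched[name] = min(max(value, lo), hi)
    return matched

matched = match_parameters(
    {"suspension_height_mm": 140, "steering_reaction_nm": 3.0},
    {"suspension_height_mm": (100, 130), "steering_reaction_nm": (1.0, 5.0)},
)
# The suspension setting exceeds the target vehicle's range and is clamped;
# the steering reaction setting is carried over unchanged.
```

  • Parameters that cannot be matched in software in this way could then be flagged as dealer adjustment items, as described above.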
  • The driving simulation system 11 may be used as a system that provides a simulated driving experience.
  • The driving simulation system 11 may also be used as a system that provides an experience of new safety features of the vehicle 13, or of safety features not yet installed in the vehicle 13 owned by the user.
  • FIG. 30 is a block diagram showing an example of the configuration of one embodiment of a computer on which a program that executes the series of processes described above is installed.
  • The program can be pre-recorded on the hard disk 305 or the ROM 303 serving as a recording medium built into the computer.
  • Alternatively, the program can be stored (recorded) on a removable recording medium 311 driven by the drive 309.
  • Such a removable recording medium 311 can be provided as so-called packaged software.
  • Examples of the removable recording medium 311 include a flexible disk, a CD-ROM (Compact Disc Read Only Memory), an MO (Magneto Optical) disk, a DVD (Digital Versatile Disc), a magnetic disk, and a semiconductor memory.
  • The program can also be downloaded to the computer via a communication network or broadcasting network and installed on the built-in hard disk 305. That is, the program can be transferred to the computer wirelessly from a download site via an artificial satellite for digital satellite broadcasting, or transferred to the computer by wire via a network such as a LAN (Local Area Network) or the Internet.
  • The computer has a built-in CPU (Central Processing Unit) 302, to which an input/output interface 310 is connected via a bus 301.
  • When a command is input via the input/output interface 310, the CPU 302 executes the program stored in the ROM (Read Only Memory) 303 accordingly.
  • Alternatively, the CPU 302 loads a program stored on the hard disk 305 into the RAM (Random Access Memory) 304 and executes it.
  • The CPU 302 thereby performs the processing according to the flowcharts described above, or the processing performed by the configurations of the block diagrams described above. Then, as necessary, the CPU 302 outputs the processing results from the output unit 306 via the input/output interface 310, transmits them from the communication unit 308, or records them on the hard disk 305.
  • The input unit 307 is composed of a keyboard, a mouse, a microphone, and the like.
  • The output unit 306 is composed of an LCD (Liquid Crystal Display), a speaker, and the like.
  • The processing performed by the computer according to the program does not necessarily have to be performed chronologically in the order described in the flowcharts.
  • That is, the processing performed by the computer according to the program also includes processing that is executed in parallel or individually (for example, parallel processing or object-based processing).
  • The program may be processed by a single computer (processor), or may be processed in a distributed manner by multiple computers. Furthermore, the program may be transferred to a remote computer and executed there.
  • A system refers to a collection of multiple components (devices, modules (parts), and the like), regardless of whether all the components are in the same housing. Therefore, multiple devices housed in separate housings and connected via a network, and a single device in which multiple modules are housed in one housing, are both systems.
  • For example, the configuration described above as one device (or processing unit) may be divided and configured as multiple devices (or processing units).
  • Conversely, the configurations described above as multiple devices (or processing units) may be combined and configured as one device (or processing unit).
  • Furthermore, part of the configuration of one device (or processing unit) may be included in the configuration of another device (or processing unit).
  • this technology can be configured as cloud computing, in which a single function is shared and processed collaboratively by multiple devices via a network.
  • the above-mentioned program can be executed by any device, as long as that device has the necessary functions (functional blocks, etc.) and is capable of obtaining the necessary information.
  • each step described in the above flowchart can be executed by one device, or can be shared and executed by multiple devices.
  • when one step includes multiple processes, the multiple processes included in that one step can be executed by one device, or can be shared and executed by multiple devices.
  • the multiple processes included in one step can also be divided and executed as the processes of multiple steps.
  • processes described as multiple steps can be executed collectively as one step.
  • processing of the steps that describe a program executed by a computer may be executed chronologically in the order described in this specification, or may be executed in parallel, or individually at the required timing, such as when a call is made. In other words, as long as no contradictions arise, the processing of each step may be executed in an order different from the order described above. Furthermore, the processing of the steps that describe this program may be executed in parallel with the processing of other programs, or may be executed in combination with the processing of other programs.
  • a communication unit that communicates with other information processing devices; an operation data acquisition unit that acquires driving operation data indicating amounts of driving operations for a plurality of types of operation systems for operating a vehicle; a vehicle operation control unit that supplies control signals to a plurality of types of drive systems that drive the vehicle to control the operation of the vehicle; a driving mode control unit that supplies control data to the vehicle operation control unit to control the driving of the vehicle in accordance with the driving operation data when the vehicle is in a driving mode in which the vehicle is driven; a simulation mode control unit that, in a simulation mode in which a driving simulation is performed using the vehicle, supplies to the vehicle operation control unit control data for controlling a behavior of the vehicle so as to reproduce a behavior of the vehicle in a virtual space determined according to the driving operation data, based on vehicle setting parameters acquired from the other information processing device via the communication unit; and a stopped state determination unit that determines whether to transition to the simulation mode according to a stopped state of the vehicle when an operation instructing switching from the driving mode to the simulation mode is performed.
  • At least one of the plurality of types of operation systems and at least one of the plurality of types of drive systems corresponding to the operation systems are connected by a by-wire system,
  • the information processing device described in (1) above is configured so that, in the simulation mode, even if a driving operation is performed on one of the operating systems, the drive system connected to that one of the operating systems by a by-wire system will not operate.
  • a vehicle interior environment recognition unit that recognizes the positions and attitudes of occupants of the vehicle based on sensor data and image data supplied from sensors and cameras provided in the vehicle, and acquires occupant position and attitude data including at least the viewpoint positions of the occupants,
  • the information processing device according to any one of (1) to (3) above, wherein the communication unit transmits the occupant position and attitude data to the other information processing device.
  • the vehicle can perform a driving simulation while being stored in a garage device having a garage display that displays an image surrounding the vehicle,
  • the garage display performs a driving simulation in which the vehicle is virtually driven within the virtual space in accordance with the driving operation data, and displays a simulation image generated by photographing the virtual space in all directions centered on the vehicle with a virtual camera positioned within the virtual space to correspond to a predetermined position of the vehicle, the simulation image being geometrically transformed according to the occupant's viewpoint based on vehicle position and attitude data indicating the position and attitude of the vehicle and the occupant position and attitude data.
  • the information processing device described in (4) above.
  • the vehicle is equipped with a side mirror display that displays a side mirror image captured toward the rear of the vehicle by a side mirror camera installed on a side of the vehicle,
  • the information processing device described in (5) above in which, in the simulation mode, a virtual side mirror image generated by photographing the virtual space toward the rear of the vehicle with a virtual camera arranged on the side of the vehicle in the virtual space is displayed on the side mirror display.
  • the vehicle is equipped with a rearview mirror display that displays a rearview mirror image captured toward the rear of the vehicle by a rearview mirror camera installed on a rear surface of the vehicle,
  • the information processing device described in (5) or (6) above in which, in the simulation mode, a virtual rearview mirror image generated by photographing the virtual space toward the rear of the vehicle with a virtual camera arranged on the rear of the vehicle in the virtual space is displayed on the rearview mirror display.
  • the vehicle includes a rear seat display provided in a rear seat of the vehicle, and in the simulation mode, a rear seat viewpoint image generated by capturing an image of the virtual space as seen through the rear seat display using a virtual camera positioned in the virtual space to correspond to the viewpoint of the rear seat occupant in accordance with the occupant position and attitude data is displayed on the rear seat display.
  • An information processing device as described in any of (5) to (7) above.
  • An information processing method in which an information processing device: communicates with other information processing devices; acquires driving operation data indicating amounts of driving operations for a plurality of types of operation systems for operating a vehicle; supplies control signals to a plurality of types of drive systems that drive the vehicle to control the operation of the vehicle; when the vehicle is in a driving mode, controls the operation of the vehicle with control data for controlling the driving of the vehicle in accordance with the driving operation data; when the vehicle is in a simulation mode, controls the operation of the vehicle with control data that controls the behavior of the vehicle so as to reproduce the behavior of the vehicle in the virtual space determined according to the driving operation data, based on vehicle setting parameters acquired by communication from the other information processing device; and, when an operation is performed to instruct switching from the driving mode to the simulation mode, determines whether to transition to the simulation mode according to a stopped state of the vehicle.
  • A program for causing the computer of an information processing device to execute processing including: communicating with other information processing devices; acquiring driving operation data indicating amounts of driving operations for a plurality of types of operation systems for operating a vehicle; supplying control signals to a plurality of types of drive systems that drive the vehicle to control the operation of the vehicle; when the vehicle is in a driving mode, controlling the operation of the vehicle with control data for controlling the driving of the vehicle in accordance with the driving operation data; when the vehicle is in a simulation mode, controlling the operation of the vehicle with control data that controls the behavior of the vehicle so as to reproduce the behavior of the vehicle in the virtual space determined according to the driving operation data, based on vehicle setting parameters acquired by communication from the other information processing device; and, when an operation is performed to instruct switching from the driving mode to the simulation mode, determining whether to transition to the simulation mode according to a stopped state of the vehicle.
  • a communication unit that communicates with other information processing devices mounted in the vehicle; a virtual space generation unit that performs a driving simulation in which the vehicle virtually travels in a virtual space in accordance with driving operation data acquired from the vehicle via the communication unit, and that generates a simulation image by capturing images of the virtual space in all directions centered on the vehicle using a virtual camera that is arranged in the virtual space so as to correspond to a predetermined position of the vehicle; an image conversion processing unit that performs image conversion processing on the simulation image based on vehicle position and attitude data indicating a position and attitude of the vehicle and occupant position and attitude data including at least a viewpoint position of an occupant of the vehicle.
  • the information processing device described in (11) above wherein the virtual space generation unit determines the behavior of the vehicle within the virtual space, generates vehicle setting parameters for controlling the behavior of the vehicle so as to reproduce that behavior, and transmits the vehicle setting parameters to the vehicle via the communication unit.
  • the vehicle can perform a driving simulation while being stored in a garage device having a garage display that displays an image surrounding the vehicle, The information processing device according to (11) or (12) above, wherein the simulation image output from the image conversion processing unit is displayed on the garage display.
  • the image conversion processing unit includes: a viewpoint detection unit that identifies a viewpoint position of the occupant relative to the garage display by setting a rectangular parallelepiped space in which the vehicle is inscribed at a position of the vehicle relative to the garage display based on the vehicle position and attitude data, and setting a viewpoint position of the occupant relative to the rectangular parallelepiped space based on the occupant position and attitude data;
  • the information processing device further comprising: a geometric transformation unit that performs a geometric transformation such that the simulation image generated in the virtual space generation unit is projected onto the garage display as a projection surface, based on the viewpoint position of the occupant identified by the viewpoint detection unit.
  • the virtual space generation unit generates a rear seat viewpoint image by capturing the virtual space as seen through a rear seat display provided in the rear seat of the vehicle using a virtual camera positioned in the virtual space to correspond to the viewpoint of an occupant in the rear seat of the vehicle in accordance with the occupant position and attitude data, transmits the image to the vehicle via the communication unit, and displays it on the rear seat display.
  • An information processing method in which an information processing device: communicates with another information processing device mounted in a vehicle; performs a driving simulation in which the vehicle virtually travels in a virtual space in accordance with driving operation data acquired from the vehicle through the communication, and generates a simulation image by capturing images of the virtual space in all directions centered on the vehicle with a virtual camera disposed in the virtual space so as to correspond to a predetermined position of the vehicle; and performs an image conversion process on the simulation image based on vehicle position and attitude data indicating a position and attitude of the vehicle and occupant position and attitude data including at least a viewpoint position of an occupant of the vehicle.
  • A program for causing the computer of an information processing device to execute processing including: communicating with another information processing device mounted in a vehicle; performing a driving simulation in which the vehicle virtually travels in a virtual space in accordance with driving operation data acquired from the vehicle through the communication, and generating a simulation image by capturing images of the virtual space in all directions centered on the vehicle with a virtual camera disposed in the virtual space so as to correspond to a predetermined position of the vehicle; and performing image conversion processing on the simulation image based on vehicle position and attitude data indicating the position and attitude of the vehicle and occupant position and attitude data including at least the viewpoint positions of occupants of the vehicle.
  • 11 driving simulation system, 12 garage system, 13 vehicle, 14 garage device, 15 charging device, 16 garage system control device, 17 home server, 18 in-vehicle control device, 19 power source, 21 steering wheel, 22 accelerator pedal, 23 brake pedal, 24 axle steering section, 25 throttle motor, 26 brake, 27 steering wheel rotation sensor, 28 accelerator position sensor, 29 brake position sensor, 30 axle drive mechanism, 31 throttle drive mechanism, 32 brake drive mechanism, 33 steering wheel reaction drive mechanism, 34 accelerator reaction drive mechanism, 35 brake reaction drive mechanism, 36 suspension drive mechanism, 37 seat drive mechanism, 41 garage display, 42 garage camera, 43 garage sensor, 51 navigation display, 52 side mirror display, 53 instrument panel display, 54 rearview mirror display, 55 rear seat display, 56 in-car camera, 57 in-car sensor, 58 speaker
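As a rough illustration of the viewpoint detection and geometric transformation described in (13) and (14) above, the following Python sketch places the occupant's viewpoint in garage coordinates from the vehicle position and attitude data, then projects a virtual-space point onto a wall of the garage display along the occupant's line of sight. The planar wall at a fixed x coordinate, the yaw-only vehicle attitude, and the function names are simplifying assumptions for illustration, not details taken from the disclosure.

```python
import math

def occupant_viewpoint_in_garage(vehicle_pos, vehicle_yaw, occupant_offset):
    # vehicle_pos: (x, y, z) of the vehicle reference point in garage coordinates
    # vehicle_yaw: rotation about the vertical axis, in radians
    # occupant_offset: occupant eye position in vehicle-local coordinates
    c, s = math.cos(vehicle_yaw), math.sin(vehicle_yaw)
    ox, oy, oz = occupant_offset
    x, y, z = vehicle_pos
    # Rotate the local offset by the vehicle yaw, then translate by its position.
    return (x + c * ox - s * oy, y + s * ox + c * oy, z + oz)

def project_to_wall(viewpoint, world_point, wall_x):
    # Intersect the ray from the occupant's viewpoint through a virtual-space
    # point with a vertical wall display located at x = wall_x.
    vx, vy, vz = viewpoint
    px, py, pz = world_point
    t = (wall_x - vx) / (px - vx)  # ray parameter at which the wall is reached
    return (wall_x, vy + t * (py - vy), vz + t * (pz - vz))
```

Each virtual-space point drawn at its projected wall position appears, from the occupant's viewpoint, where it would in the real scene, which is the effect the geometric transformation unit produces for the whole simulation image.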

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Aviation & Aerospace Engineering (AREA)
  • Business, Economics & Management (AREA)
  • Physics & Mathematics (AREA)
  • Educational Administration (AREA)
  • Educational Technology (AREA)
  • General Physics & Mathematics (AREA)
  • Processing Or Creating Images (AREA)

Abstract

The present disclosure pertains to an information processing device, an information processing method, and a program that make it possible to realize a driving simulation using the vehicle that is used during normal driving. The operation of the vehicle is controlled by acquiring driving operation data, which indicates the amounts of driving operations for multiple types of operation systems for operating the vehicle, and supplying control signals to multiple types of drive systems that drive the vehicle. In a driving mode, the operation of the vehicle is controlled using control data for controlling the driving of the vehicle according to the driving operation data; in a simulation mode, the operation of the vehicle is controlled using control data for controlling the behavior of the vehicle so as to reproduce the behavior of the vehicle in a virtual space determined according to the driving operation data, on the basis of vehicle setting parameters obtained via communication from another information processing device. When an operation instructing a switch from the driving mode to the simulation mode is performed, whether or not to transition to the simulation mode is determined according to the stopped state of the vehicle.

Description

Information processing device, information processing method, and program
 The present disclosure relates to an information processing device, an information processing method, and a program, and in particular to an information processing device, an information processing method, and a program capable of realizing a driving simulation using a vehicle that is used in normal driving.
 Conventionally, vehicle driving simulations have not used the vehicle that is used in normal driving; instead, they have been performed with a simulation operation system (a steering wheel, accelerator pedal, brake pedal, and so on).
 For example, Patent Document 1 discloses a racing game device in which a racing game is played using an operating unit provided with a steering wheel, an accelerator, and a brake.
JP 2000-229177 A
 However, there has been a need to perform driving simulations using the vehicle that is used in normal driving, rather than a simulation operation system such as that described above.
 The present disclosure has been made in view of such circumstances, and makes it possible to realize a driving simulation using a vehicle that is used in normal driving.
 The information processing device according to the first aspect of the present disclosure includes: a communication unit that communicates with another information processing device; an operation data acquisition unit that acquires driving operation data indicating the amounts of driving operations for a plurality of types of operation systems for operating a vehicle; a vehicle operation control unit that supplies control signals to a plurality of types of drive systems that drive the vehicle to control the operation of the vehicle; a driving mode control unit that, in a driving mode in which the vehicle is driven, supplies to the vehicle operation control unit control data for controlling the driving of the vehicle according to the driving operation data; a simulation mode control unit that, in a simulation mode in which a driving simulation is performed using the vehicle, supplies to the vehicle operation control unit control data for controlling the behavior of the vehicle so as to reproduce the behavior of the vehicle in a virtual space determined according to the driving operation data, based on vehicle setting parameters acquired from the other information processing device via the communication unit; and a stopped state determination unit that, when an operation instructing switching from the driving mode to the simulation mode is performed, determines whether to transition to the simulation mode according to the stopped state of the vehicle.
 The information processing method or program according to the first aspect of the present disclosure includes: communicating with another information processing device; acquiring driving operation data indicating the amounts of driving operations for a plurality of types of operation systems for operating a vehicle; supplying control signals to a plurality of types of drive systems that drive the vehicle to control the operation of the vehicle; in a driving mode in which the vehicle is driven, controlling the operation of the vehicle with control data for controlling the driving of the vehicle according to the driving operation data; in a simulation mode in which a driving simulation is performed using the vehicle, controlling the operation of the vehicle with control data for controlling the behavior of the vehicle so as to reproduce the behavior of the vehicle in a virtual space determined according to the driving operation data, based on vehicle setting parameters acquired by communication from the other information processing device; and, when an operation instructing switching from the driving mode to the simulation mode is performed, determining whether to transition to the simulation mode according to the stopped state of the vehicle.
 In the first aspect of the present disclosure, communication is performed with another information processing device; driving operation data indicating the amounts of driving operations for a plurality of types of operation systems for operating a vehicle is acquired; control signals are supplied to a plurality of types of drive systems that drive the vehicle to control the operation of the vehicle; in a driving mode in which the vehicle is driven, the operation of the vehicle is controlled with control data for controlling the driving of the vehicle according to the driving operation data; in a simulation mode in which a driving simulation is performed using the vehicle, the operation of the vehicle is controlled with control data for controlling the behavior of the vehicle so as to reproduce the behavior of the vehicle in a virtual space determined according to the driving operation data, based on vehicle setting parameters acquired by communication from the other information processing device; and, when an operation instructing switching from the driving mode to the simulation mode is performed, whether to transition to the simulation mode is determined according to the stopped state of the vehicle.
 The information processing device according to the second aspect of the present disclosure includes: a communication unit that communicates with another information processing device mounted in a vehicle; a virtual space generation unit that performs a driving simulation in which the vehicle virtually travels in a virtual space according to driving operation data acquired from the vehicle via the communication unit, and generates a simulation image by photographing the virtual space in all directions centered on the vehicle with a virtual camera arranged in the virtual space so as to correspond to a predetermined position of the vehicle; and an image conversion processing unit that performs image conversion processing on the simulation image based on vehicle position and attitude data indicating the position and attitude of the vehicle and occupant position and attitude data including at least the viewpoint positions of the occupants of the vehicle.
 The information processing method or program according to the second aspect of the present disclosure includes: communicating with another information processing device mounted in a vehicle; performing a driving simulation in which the vehicle virtually travels in a virtual space according to driving operation data acquired from the vehicle through the communication, and generating a simulation image by photographing the virtual space in all directions centered on the vehicle with a virtual camera arranged in the virtual space so as to correspond to a predetermined position of the vehicle; and performing image conversion processing on the simulation image based on vehicle position and attitude data indicating the position and attitude of the vehicle and occupant position and attitude data including at least the viewpoint positions of the occupants of the vehicle.
 In the second aspect of the present disclosure, communication is performed with another information processing device mounted in a vehicle; a driving simulation in which the vehicle virtually travels in a virtual space is performed according to driving operation data acquired from the vehicle through the communication; a simulation image is generated by photographing the virtual space in all directions centered on the vehicle with a virtual camera arranged in the virtual space so as to correspond to a predetermined position of the vehicle; and image conversion processing based on vehicle position and attitude data indicating the position and attitude of the vehicle and occupant position and attitude data including at least the viewpoint positions of the occupants of the vehicle is performed on the simulation image.
 FIG. 1 is a block diagram showing a configuration example of an embodiment of a driving simulation system to which the present technology is applied.
 FIG. 2 is a diagram illustrating a configuration example of a vehicle that employs a by-wire system.
 FIG. 3 is a diagram illustrating a configuration example of a garage device.
 FIG. 4 is a diagram illustrating an example of the internal configuration of a vehicle.
 FIG. 5 is a diagram illustrating an example of a method for identifying the viewpoint of an occupant.
 FIG. 6 is a block diagram showing a configuration example of a garage system control device.
 FIG. 7 is a diagram illustrating an example in which a virtual plane viewed from a viewpoint position is displayed on a garage display.
 FIG. 8 is a block diagram showing a configuration example of the operation system and drive system of an in-vehicle control device.
 FIG. 9 is a block diagram showing a configuration example of the video system and audio system of an in-vehicle control device.
 FIG. 10 is a diagram showing an example of the data structure of vehicle parameters and vehicle setting parameters.
 FIG. 11 is a diagram showing the behavior of a vehicle in which a spin has occurred.
 FIG. 12 is a diagram showing an example of vehicle parameters and vehicle setting parameters.
 FIG. 13 is a diagram illustrating suspension setting data.
 FIG. 14 is a diagram illustrating seat setting data.
 FIG. 15 is a flowchart illustrating the initial setup process on the garage system side.
 FIG. 16 is a flowchart illustrating the initial setup process on the vehicle side.
 FIG. 17 is a flowchart illustrating the driving simulation operation process on the garage system side.
 FIG. 18 is a flowchart illustrating the driving simulation operation process on the vehicle side.
 FIG. 19 is a flowchart illustrating the vehicle position and attitude recognition process.
 FIG. 20 is a flowchart illustrating the occupant position and attitude recognition process.
 FIG. 21 is a flowchart illustrating the reflection control process.
 FIG. 22 is a block diagram showing a configuration example of a training system.
 FIG. 23 is a flowchart illustrating unsupervised training.
 FIG. 24 is a flowchart illustrating supervised training.
 FIG. 25 is a flowchart illustrating learning.
 FIG. 26 is a flowchart illustrating one-on-one training in which the teacher is a human.
 FIG. 27 is a flowchart illustrating one-to-many training in which the teacher is a human.
 FIG. 28 is a flowchart illustrating one-to-one or one-to-many training in which the teacher is an AI.
 FIG. 29 is a flowchart illustrating personal optimization of a vehicle.
 FIG. 30 is a block diagram showing a configuration example of an embodiment of a computer to which the present technology is applied.
 Below, specific embodiments to which the present technology is applied will be described in detail with reference to the drawings.
 <Garage system configuration example>
 FIG. 1 is a block diagram showing an example of the configuration of an embodiment of a driving simulation system to which the present technology is applied.
 The driving simulation system 11 shown in FIG. 1 is composed of a garage system 12 and a vehicle 13, and can realize a driving simulation using the vehicle 13 that the user uses during normal driving.
 The garage system 12 includes a garage device 14, a charging device 15, a garage system control device 16, and a home server 17.
 In addition to functioning as a general garage for storing the vehicle 13, the garage device 14 has a function of displaying images used in a driving simulation (hereinafter referred to as simulation images) so as to surround the vehicle 13, as described later with reference to FIG. 3. In this embodiment, the garage device 14 is described as being installed inside a garage; alternatively, the garage device 14 may be installed in a parking lot, a parking space, or the like.
 The charging device 15 charges the vehicle 13 by connecting a charging and communication cable to the vehicle 13, and establishes a state in which high-speed IP (Internet Protocol) communication can be performed between the garage system 12 and the vehicle 13.
 The garage system control device 16 controls the garage system 12 when a driving simulation is executed, using data acquired via an external network, data stored in the home server 17, and the like. The detailed configuration of the garage system control device 16 will be described later with reference to FIG. 6.
 The home server 17 acquires, via the external network, the data and the like necessary for the garage system control device 16 to control the garage system 12, and provides them to the garage system control device 16.
 As described later with reference to FIG. 2, the vehicle 13 employs a by-wire system: instead of mechanically transmitting operations on the operating system to the drive system, driving operations on the operating system are detected by sensors, and signals corresponding to those operations are transmitted electrically to the drive system via signal lines. The vehicle 13 also includes an in-vehicle control device 18 and a power source 19.

 The in-vehicle control device 18 communicates with the garage system control device 16 and controls the vehicle 13 when a driving simulation is executed. A detailed configuration of the in-vehicle control device 18 will be described later with reference to FIGS. 8 and 9.

 For the power source 19, when the vehicle 13 is an EV (Electric Vehicle), an electric motor powered by electricity is used; when the vehicle 13 is a gasoline vehicle, an internal combustion engine fueled by gasoline is used; and when the vehicle 13 is a hybrid vehicle, an internal combustion engine and an electric motor are used in combination.

 With the driving simulation system 11 configured in this way, while the vehicle 13 is stored in the garage device 14, the system can switch from a driving mode, in which the vehicle 13 is used for normal driving, to a simulation mode, in which the vehicle 13 is used for a driving simulation, and then execute the driving simulation.
 An example configuration of the vehicle 13 employing the by-wire system will be described with reference to FIG. 2.

 As shown in FIG. 2, the vehicle 13 includes a steering wheel 21, an accelerator pedal 22, a brake pedal 23, an axle-direction steering unit 24, a throttle motor 25, a brake 26, a steering wheel rotation sensor 27, an accelerator position sensor 28, a brake position sensor 29, an axle-direction drive mechanism 30, a throttle drive mechanism 31, and a brake drive mechanism 32. In the vehicle 13, the operating system (the steering wheel 21, the accelerator pedal 22, the brake pedal 23, and so on) and the drive system (the axle-direction steering unit 24, the throttle motor 25, the brake 26, and so on) are electrically connected to the in-vehicle control device 18 via signal lines and are physically separated from each other.
 The steering wheel rotation sensor 27 detects rotation of the steering wheel 21 in response to a rotating operation by the driver of the vehicle 13, and outputs a steering wheel rotation signal indicating the amount of rotation of the steering wheel 21 to the in-vehicle control device 18.

 The accelerator position sensor 28 detects the position of the accelerator pedal 22 in response to a depressing operation by the driver of the vehicle 13, and outputs an accelerator position signal indicating the amount of depression of the accelerator pedal 22 to the in-vehicle control device 18.

 The brake position sensor 29 detects the position of the brake pedal 23 in response to a depressing operation by the driver of the vehicle 13, and outputs a brake position signal indicating the amount of depression of the brake pedal 23 to the in-vehicle control device 18.
 The axle-direction drive mechanism 30 drives the axle-direction steering unit 24 in accordance with an axle-direction control signal supplied from the in-vehicle control device 18 based on the steering wheel rotation signal output from the steering wheel rotation sensor 27. The axle-direction steering unit 24 can thereby steer the tires of the vehicle 13 to an angle corresponding to the driver's rotating operation of the steering wheel 21.

 The throttle drive mechanism 31 drives the throttle motor 25 in accordance with a throttle control signal supplied from the in-vehicle control device 18 based on the accelerator position signal output from the accelerator position sensor 28. The throttle motor 25 can thereby cause the vehicle 13 to accelerate in response to the driver's depression of the accelerator pedal 22.

 The brake drive mechanism 32 drives the brake 26 in accordance with a brake control signal supplied from the in-vehicle control device 18 based on the brake position signal output from the brake position sensor 29. The brake 26 can thereby cause the vehicle 13 to decelerate in response to the driver's depression of the brake pedal 23.
 When in the driving mode, the vehicle 13 configured in this way travels by operating its drive system (the axle-direction steering unit 24, the throttle motor 25, the brake 26, and so on) under the control of the in-vehicle control device 18, in response to the driver's operations on the operating system (the steering wheel 21, the accelerator pedal 22, the brake pedal 23, and so on).

 In the simulation mode, on the other hand, the vehicle 13 executes a driving simulation by supplying driving operation data, which indicates the driver's operations on the operating system such as the steering wheel 21, the accelerator pedal 22, and the brake pedal 23, to the garage system control device 16. At this time, the drive system, such as the axle-direction steering unit 24, the throttle motor 25, and the brake 26, does not operate and remains stopped. In other words, in the vehicle 13 the operating system and the drive system are connected by a by-wire scheme, and the vehicle is configured so that, in the simulation mode, the drive system does not operate even when driving operations are performed on the operating system. It is sufficient that at least one corresponding pair of operating-system and drive-system elements is connected by the by-wire scheme. For example, if the steering wheel 21 and the axle-direction steering unit 24 are connected by wire, tire wear caused by steering the tires while the vehicle 13 remains stationary can be avoided.
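 As an illustrative sketch only (not part of the disclosed embodiment), the mode-dependent routing described above, in which sensed operating-system values either drive the actuators or are forwarded as driving operation data, can be modeled as follows. All names and the fixed steering gain are assumptions introduced for illustration:

```python
from dataclasses import dataclass
from enum import Enum, auto

class Mode(Enum):
    DRIVING = auto()      # operating inputs drive the actuators
    SIMULATION = auto()   # operating inputs only feed the simulator

@dataclass
class DriveCommand:
    steering_angle: float   # axle steering angle derived from wheel rotation
    throttle: float         # throttle opening from accelerator position
    brake: float            # brake force from pedal position

def route_operations(mode, wheel_rotation, accel_pos, brake_pos):
    """Convert sensed operating-system values into either drive-system
    commands (driving mode) or simulator input data (simulation mode)."""
    if mode is Mode.DRIVING:
        # In driving mode the control device emits control signals to the
        # axle-direction steering unit, throttle motor, and brake.
        # The 0.1 steering gain is an arbitrary illustrative value.
        return ("drive", DriveCommand(wheel_rotation * 0.1, accel_pos, brake_pos))
    # In simulation mode the drive system stays stopped; the raw
    # operation data is forwarded to the garage system controller.
    return ("simulate", {"steering": wheel_rotation,
                         "accelerator": accel_pos,
                         "brake": brake_pos})
```

 The same sensor readings thus feed both modes; only the destination of the resulting signal changes.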
 The vehicle 13 further includes a steering wheel reaction force drive mechanism 33, an accelerator reaction force drive mechanism 34, a brake reaction force drive mechanism 35, a suspension drive mechanism 36, and a seat drive mechanism 37.

 The steering wheel reaction force drive mechanism 33 generates, in accordance with a steering wheel reaction force control signal supplied from the in-vehicle control device 18, a steering wheel reaction force in the steering wheel 21 that resists the arm force of the driver rotating the steering wheel 21.

 The accelerator reaction force drive mechanism 34 generates, in accordance with an accelerator reaction force control signal supplied from the in-vehicle control device 18, an accelerator reaction force in the accelerator pedal 22 that resists the leg force of the driver depressing the accelerator pedal 22.

 The brake reaction force drive mechanism 35 generates, in accordance with a brake reaction force control signal supplied from the in-vehicle control device 18, a brake reaction force in the brake pedal 23 that resists the leg force of the driver depressing the brake pedal 23.
 The suspension drive mechanism 36 drives a hydropneumatic suspension (not shown), which can adjust the vehicle height of the vehicle 13 by increasing or decreasing air pressure, hydraulic pressure, or the like, and thereby actively and independently controls the height at each of the four wheels of the vehicle 13.

 The seat drive mechanism 37 drives actuators (not shown) built into the seat back and seat surface of each seat of the vehicle 13, and performs various adjustment operations on the seat surfaces and seat backs at a certain speed along the time axis.
 An example configuration of the garage device 14 will be described with reference to FIG. 3.

 A of FIG. 3 shows a schematic configuration example of the garage device 14 with the vehicle 13 stored, as viewed from the side, and B of FIG. 3 shows a schematic configuration example of the garage device 14 with the vehicle 13 stored, as viewed from above. A driving simulation can be executed in this state, with the vehicle 13 stored in the garage device 14.

 As shown in FIG. 3, the garage device 14 includes a garage display 41-1 for the ceiling surface, a garage display 41-2 for the front surface, a garage display 41-3 for the right side surface, a garage display 41-4 for the left side surface, a garage display 41-5 for the rear surface, a garage display 41-6 for the floor surface, a garage camera 42, and garage sensors 43-1 and 43-2.
 The garage display 41-1 for the ceiling surface is installed so as to cover the entire ceiling surface of the garage device 14, and displays the simulation image for the ceiling surface supplied from the garage system control device 16 when a driving simulation is executed.

 The garage display 41-2 for the front surface is installed so as to cover the entire front wall surface of the garage device 14, and displays the simulation image for the front surface supplied from the garage system control device 16 when a driving simulation is executed.

 The garage display 41-3 for the right side surface is installed so as to cover the entire right side wall surface of the garage device 14, and displays the simulation image for the right side surface supplied from the garage system control device 16 when a driving simulation is executed.

 The garage display 41-4 for the left side surface is installed so as to cover the entire left side wall surface of the garage device 14, and displays the simulation image for the left side surface supplied from the garage system control device 16 when a driving simulation is executed.

 The garage display 41-5 for the rear surface is installed so as to cover the entire rear wall surface of the garage device 14, and displays the simulation image for the rear surface supplied from the garage system control device 16 when a driving simulation is executed.

 The garage display 41-6 for the floor surface is installed so as to cover the entire floor surface of the garage device 14, and displays the simulation image for the floor surface supplied from the garage system control device 16 when a driving simulation is executed.
 When there is no need to distinguish among the garage display 41-1 for the ceiling surface, the garage display 41-2 for the front surface, the garage display 41-3 for the right side surface, the garage display 41-4 for the left side surface, the garage display 41-5 for the rear surface, and the garage display 41-6 for the floor surface, they are hereinafter referred to simply as the garage display 41. For example, display units employing a light-emitting method such as LED (Light Emitting Diode) or OLED (Organic Light Emitting Diode) are used for the garage display 41, and a plurality of such display units can be used tiled together.

 The garage camera 42 captures images of the vehicle 13 stored in the garage device 14 in order to recognize the position and attitude of the vehicle 13, and supplies the resulting image data to the garage system control device 16.

 In order to recognize the position and attitude of the vehicle 13, the garage sensors 43-1 and 43-2 supply, to the garage system control device 16, sensor data obtained by detecting the positions of a plurality of markers (for example, retroreflective materials) attached to the body of the vehicle 13. When there is no need to distinguish between the garage sensors 43-1 and 43-2, they are hereinafter referred to simply as the garage sensor 43. Although two garage sensors 43-1 and 43-2 are illustrated in the example of FIG. 3, the position and attitude of the vehicle 13 may be detected by one garage sensor 43 or by three or more garage sensors 43. Alternatively, the relative position and attitude between the vehicle 13 and the garage device 14 may be detected by an exterior sensor (not shown) provided on the vehicle 13.

 The driving simulation system 11 may instead recognize the position and attitude of the vehicle 13 by using positioning sensors (for example, piezoelectric elements) embedded in the floor of the garage device 14 to detect the four points where the tires of the vehicle 13 contact the floor. Alternatively, the driving simulation system 11 may recognize the position and attitude of the vehicle 13 by acquiring shape data of the vehicle 13, such as a point cloud or mesh, with a measuring device combining an RGB camera, a depth sensor, and the like.
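 As an illustrative sketch of the marker-based approach described above, the following assumes (hypothetically) that two body-mounted markers, one at the front and one at the rear of the vehicle, are detected in a planar garage floor coordinate system. The function name and the two-marker layout are assumptions for illustration, not the disclosed method:

```python
import math

def estimate_pose_2d(front_marker, rear_marker):
    """Estimate the vehicle's planar position and heading from two
    body-mounted markers (front and rear), as a garage sensor might
    report them in the garage floor coordinate system."""
    fx, fy = front_marker
    rx, ry = rear_marker
    # Position: midpoint between the two markers.
    cx, cy = (fx + rx) / 2.0, (fy + ry) / 2.0
    # Heading: direction from the rear marker toward the front marker.
    yaw = math.atan2(fy - ry, fx - rx)
    return (cx, cy), yaw
```

 With three or more markers, the same idea generalizes to a least-squares fit that also recovers roll and pitch.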
 When a driving simulation is executed, the garage device 14 configured in this way can display a simulation image as seen from the viewpoint position P of an occupant inside the vehicle 13 on the garage displays 41 arranged so as to surround the vehicle 13.
 An example of the interior configuration of the vehicle 13 will be described with reference to FIG. 4.

 A of FIG. 4 shows an example of the interior configuration of the vehicle 13 as viewed forward from the driver's seat and front passenger seat, and B of FIG. 4 shows an example of the interior configuration of the vehicle 13 as viewed forward from the rear seats.

 As shown in FIG. 4, a navigation display 51, side mirror displays 52L and 52R, an instrument panel display 53, a rearview mirror display 54, rear seat displays 55L and 55R, an in-vehicle camera 56, in-vehicle sensors 57-1 and 57-2, and speakers 58-1 and 58-2 are arranged in the interior of the vehicle 13.
 The navigation display 51 displays, based on a navigation video stream supplied from the in-vehicle control device 18, a navigation image in which, for example, a vehicle mark indicating the traveling position of the vehicle 13 is placed on a map image and moves across the map image. In the driving mode, the navigation display 51 displays an actual navigation image according to the position information of the vehicle 13; in the simulation mode, it displays a virtual navigation image according to the traveling position in the virtual space based on the driving simulation.

 The side mirror displays 52L and 52R display, based on side mirror video streams supplied from the in-vehicle control device 18, side mirror images captured toward the rear of the vehicle 13 by, for example, side mirror cameras installed on the left and right sides of the vehicle 13. In the driving mode, the side mirror displays 52L and 52R display actual side mirror images captured by the side mirror cameras provided on the vehicle 13; in the simulation mode, they display virtual side mirror images captured by side mirror cameras provided in the virtual space based on the driving simulation.

 The instrument panel display 53 displays, based on an instrument panel video stream supplied from the in-vehicle control device 18, an instrument panel image representing various instrument data such as, for example, the traveling speed and engine speed of the vehicle 13. In the driving mode, the instrument panel display 53 displays an actual instrument panel image based on the driving of the vehicle 13; in the simulation mode, it displays a virtual instrument panel image based on the driving simulation.

 The rearview mirror display 54 displays, based on a rearview mirror video stream supplied from the in-vehicle control device 18, a rearview mirror image captured toward the rear of the vehicle 13 by, for example, a rearview mirror camera installed at the rear of the vehicle 13. In the driving mode, the rearview mirror display 54 displays an actual rearview mirror image captured by the rearview mirror camera provided on the vehicle 13; in the simulation mode, it displays a virtual rearview mirror image captured by a rearview mirror camera provided in the virtual space based on the driving simulation.

 The rear seat displays 55L and 55R display, based on rear seat video streams supplied from the in-vehicle control device 18, rear seat viewpoint images, which are images captured by, for example, external cameras provided on the outside of the vehicle 13 and assumed to be what would be seen through the rear seat displays 55L and 55R from the viewpoint of an occupant in a rear seat. In the driving mode, the rear seat displays 55L and 55R display actual rear seat viewpoint images captured by the external cameras provided on the vehicle 13; in the simulation mode, they display virtual rear seat viewpoint images captured by external cameras provided in the virtual space based on the driving simulation.
 The in-vehicle camera 56 captures images of the occupants of the vehicle 13 in order to recognize their positions and postures, and supplies the resulting image data to the in-vehicle control device 18.

 In order to recognize the positions and postures of the occupants of the vehicle 13, the in-vehicle sensors 57-1 and 57-2 detect the distance (depth) to the occupants and supply the resulting sensor data to the in-vehicle control device 18. When there is no need to distinguish between the in-vehicle sensors 57-1 and 57-2, they are hereinafter referred to simply as the in-vehicle sensor 57. Although two in-vehicle sensors 57-1 and 57-2 are illustrated in the example of FIG. 4, the positions and postures of the occupants may be detected by one in-vehicle sensor 57 or by three or more in-vehicle sensors 57.

 The speakers 58-1 and 58-2 output audio based on audio signals supplied from the in-vehicle control device 18. For example, in the driving mode, the speakers 58-1 and 58-2 output audio played by the audio system installed in the vehicle 13, and in the simulation mode, they output object audio emitted from objects in the virtual space based on the driving simulation.
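 The switching of each in-vehicle display between real and virtual sources in the driving and simulation modes can be sketched as follows. This is an illustrative sketch only; the display names and source identifiers are assumptions, not part of the disclosure:

```python
# Which real source feeds each in-vehicle display in the driving mode
# (illustrative names only).
REAL_SOURCES = {
    "navigation": "gps_position",
    "side_mirror_left": "left_mirror_camera",
    "side_mirror_right": "right_mirror_camera",
    "instrument_panel": "vehicle_instruments",
    "rear_view_mirror": "rear_mirror_camera",
}

def select_sources(simulation_mode: bool) -> dict:
    """Return a display-to-source mapping: real cameras and instruments
    in the driving mode, renderer outputs for the virtual counterparts
    of those sources in the simulation mode."""
    if not simulation_mode:
        return dict(REAL_SOURCES)
    # In the simulation mode every display is fed by the virtual-space
    # renderer instead of the corresponding real device.
    return {display: "virtual_" + source
            for display, source in REAL_SOURCES.items()}
```

 The set of displays stays the same in both modes; only the origin of each video stream changes.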
 An example of a method for identifying the occupant's viewpoint position P will be described with reference to FIG. 5.

 For example, the garage system control device 16 recognizes the position and attitude of the vehicle 13 relative to each garage display 41 of the garage device 14, based on the image data supplied from the garage camera 42 and the sensor data supplied from the garage sensor 43. The garage system control device 16 can thereby acquire position and attitude data of the vehicle 13 represented by a rectangular parallelepiped space C in which the vehicle 13 is inscribed, relative to the garage display 41-2 for the front surface, as shown in A of FIG. 5.

 The in-vehicle control device 18 recognizes the position and posture of the occupant relative to the vehicle 13, as well as the occupant's viewpoint position P (viewpoint position and line-of-sight direction), based on the image data supplied from the in-vehicle camera 56 and the sensor data supplied from the in-vehicle sensor 57. The in-vehicle control device 18 then acquires occupant position and posture data that includes, in addition to information indicating the occupant's position and posture, information indicating the occupant's viewpoint position and line-of-sight direction, and supplies the data to the garage system control device 16. The garage system control device 16 can thereby recognize the occupant's position, posture, viewpoint position, and line-of-sight direction relative to the rectangular parallelepiped space C in which the vehicle 13 is inscribed, as shown in B of FIG. 5.

 Accordingly, the garage system control device 16 can identify the occupant's viewpoint position P relative to, for example, the garage display 41-2 for the front surface, based on the rectangular parallelepiped space C relative to the garage display 41-2 and the occupant's viewpoint position P relative to the space C.

 The position and attitude data of the vehicle 13 and of the occupant may be determined by methods other than those described here. For example, the position and attitude data of the vehicle 13 may be expressed in vehicle coordinates centered on a part of the vehicle 13 (such as the point where the front right tire contacts the ground). In this case, the position and attitude data of the vehicle 13 includes position information of the part of the vehicle 13 corresponding to the origin of the vehicle coordinate system.
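 The chained relationship described above (the space C relative to a garage display, and the viewpoint P relative to the space C) amounts to composing relative poses. A minimal planar sketch, with all numeric values assumed purely for illustration:

```python
import math

def compose(parent_pose, child_pose):
    """Compose two planar poses (x, y, yaw): given the parent expressed
    in some frame and the child expressed in the parent's frame, return
    the child expressed in that outer frame."""
    px, py, pyaw = parent_pose
    cx, cy, cyaw = child_pose
    # Rotate the child's offset by the parent's yaw, then translate.
    wx = px + cx * math.cos(pyaw) - cy * math.sin(pyaw)
    wy = py + cx * math.sin(pyaw) + cy * math.cos(pyaw)
    return (wx, wy, pyaw + cyaw)

# Space C relative to the front display, then the occupant's viewpoint P
# relative to space C, gives P relative to the display (values assumed).
box_in_display = (4.0, 1.0, 0.0)
viewpoint_in_box = (1.2, 0.5, 0.1)
viewpoint_in_display = compose(box_in_display, viewpoint_in_box)
```

 The full system would use 3D poses (position plus orientation), but the composition principle is the same.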
 <Example Configuration of the Garage System Control Device>

 An example configuration of the garage system control device 16 will be described with reference to FIGS. 6 and 7.
 FIG. 6 is a block diagram showing an example configuration of the garage system control device 16.

 As shown in FIG. 6, the garage system control device 16 includes a sensor data acquisition unit 61, a video data acquisition unit 62, a vehicle position and attitude recognition unit 63, a communication unit 64, a virtual space storage unit 65, a 3DCG storage unit 66, a 3DCG generation unit 67, an object audio storage unit 68, an object audio generation unit 69, a user setting acquisition unit 70, a virtual space generation unit 71, a signal multiplexing unit 72, a video conversion processing unit 73, a ceiling surface video transmission unit 74, a front surface video transmission unit 75, a right side surface video transmission unit 76, a left side surface video transmission unit 77, a rear surface video transmission unit 78, and a floor surface video transmission unit 79.
 The sensor data acquisition unit 61 acquires the sensor data supplied from the garage sensor 43 and supplies it to the vehicle position and attitude recognition unit 63.

 The video data acquisition unit 62 acquires the video data supplied from the garage camera 42 and supplies it to the vehicle position and attitude recognition unit 63.

 The vehicle position and attitude recognition unit 63 recognizes the position and attitude of the vehicle 13 stored in the garage device 14, based on the sensor data supplied from the sensor data acquisition unit 61 and the video data supplied from the video data acquisition unit 62, and acquires position and attitude data of the vehicle 13. For example, the vehicle position and attitude recognition unit 63 can acquire position and attitude data of the vehicle 13 represented by the rectangular parallelepiped space C in which the vehicle 13 is inscribed, as described above with reference to FIG. 5. The vehicle position and attitude recognition unit 63 then supplies the position and attitude data of the vehicle 13 to the video conversion processing unit 73.

 The communication unit 64 communicates with the in-vehicle control device 18 and with an external network. For example, the communication unit 64 receives vehicle parameters transmitted from the in-vehicle control device 18, supplies the occupant position and posture data contained in the vehicle parameters to the video conversion processing unit 73, and supplies the driving operation data contained in the vehicle parameters to the virtual space generation unit 71. The communication unit 64 also transmits, to the vehicle 13, the vehicle setting parameters supplied from the virtual space generation unit 71 and the video and audio stream for the vehicle 13 supplied from the signal multiplexing unit 72.
 仮想空間記憶部65は、運転シミュレーションにおいて車両13が仮想的に走行する仮想空間を構成する道路や建物などの形状およびテクスチャからなる仮想空間データを記憶する。 The virtual space storage unit 65 stores virtual space data consisting of the shapes and textures of roads, buildings, etc. that constitute the virtual space in which the vehicle 13 virtually travels during the driving simulation.
 3DCG記憶部66は、運転シミュレーションにおいて車両13が仮想的に走行する仮想空間内に配置される各種の立体的なオブジェクト(例えば、他の車両や、歩行者、信号機など)を表す形状およびテクスチャからなる3DCG(3-Dimensional Computer Graphics)データを記憶する。 The 3DCG memory unit 66 stores 3DCG (3-Dimensional Computer Graphics) data consisting of shapes and textures representing various three-dimensional objects (e.g., other vehicles, pedestrians, traffic lights, etc.) that are placed in the virtual space in which the vehicle 13 virtually travels during the driving simulation.
 The 3DCG generation unit 67 reads 3DCG data of objects to be placed near the vehicle 13 during the driving simulation from the 3DCG storage unit 66, generates the objects, and supplies them to the virtual space generation unit 71.
 The object audio storage unit 68 stores audio data representing audio (e.g., the sounds of other vehicles traveling, pedestrian footsteps, traffic light melodies, etc.) emitted from various three-dimensional objects placed in the virtual space in which the vehicle 13 travels during the driving simulation.
 The object audio generation unit 69 reads audio data corresponding to objects placed near the vehicle 13 in the driving simulation from the object audio storage unit 68, generates the audio, and supplies it to the virtual space generation unit 71.
 When a user using the driving simulation system 11 performs various operational inputs to the garage system control device 16, the user setting acquisition unit 70 acquires and stores user setting values corresponding to the user's operational inputs, and supplies the user setting values to the virtual space generation unit 71. For example, when an operational input instructing the start of a driving simulation is performed, the user setting acquisition unit 70 acquires and stores the user setting value instructing the start of the driving simulation, and supplies that user setting value to the virtual space generation unit 71.
 The virtual space generation unit 71 reads the virtual space data from the virtual space storage unit 65 to generate a virtual space, places the objects supplied from the 3DCG generation unit 67 within the virtual space, and sets the sound sources of the audio supplied from the object audio generation unit 69 in association with the position of each object.
 Then, the virtual space generation unit 71 performs a driving simulation in which the vehicle 13 virtually travels in the virtual space according to the driving operation data supplied from the communication unit 64. As a result, the virtual space generation unit 71 determines the behavior of the vehicle 13 in the virtual space, generates vehicle setting parameters (suspension setting data, seat setting data, and reaction force setting data) for controlling the behavior of the vehicle 13 so as to reproduce that behavior, and supplies these to the communication unit 64.
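The flow above — driving operation data in, behavior-reproducing vehicle setting parameters out — can be sketched as follows. This is a minimal illustrative model only; the class names, formulas, and constants are assumptions for exposition, not the actual model used by the virtual space generation unit 71.

```python
# Illustrative sketch: mapping simulated vehicle behavior to setting parameters.
# All names, formulas, and constants here are hypothetical assumptions.
from dataclasses import dataclass

@dataclass
class DrivingOperation:
    steering: float      # steering-wheel rotation, -1.0 (full left) to 1.0 (full right)
    accelerator: float   # accelerator pedal depression, 0.0 to 1.0
    brake: float         # brake pedal depression, 0.0 to 1.0

@dataclass
class VehicleSettingParameters:
    suspension_mm: dict      # per-wheel suspension height offset in millimetres
    seat_pitch_deg: float    # seat tilt reproducing longitudinal pitch
    steering_reaction: float # reaction force set-point for the steering wheel

def simulate_step(op: DrivingOperation) -> VehicleSettingParameters:
    # Longitudinal pitch: braking dives the nose, accelerating lifts it.
    pitch = 5.0 * (op.brake - op.accelerator)
    # Lateral roll: steering shifts the left/right suspension in opposite directions.
    roll = 4.0 * op.steering
    suspension = {
        "front_left":  -pitch - roll,
        "front_right": -pitch + roll,
        "rear_left":    pitch - roll,
        "rear_right":   pitch + roll,
    }
    return VehicleSettingParameters(
        suspension_mm=suspension,
        seat_pitch_deg=0.5 * pitch,
        steering_reaction=2.0 * abs(op.steering),
    )
```

For example, full braking with no steering would lower both front suspensions and raise both rear suspensions, reproducing nose-dive in the stationary vehicle.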
 Furthermore, the virtual space generation unit 71 generates a video stream for the vehicle 13, an audio stream for the vehicle 13, and a video stream for the garage in accordance with the driving position of the vehicle 13 as determined by the driving operation data.
 For example, the virtual space generation unit 71 generates the video stream for the vehicle 13 corresponding to the navigation image displayed on the navigation display 51 by placing a vehicle mark indicating the driving position of the vehicle 13 on a map image of the virtual space and moving the vehicle mark on the map image in accordance with the driving operation data. The virtual space generation unit 71 also generates the video streams for the vehicle 13 corresponding to the side mirror images displayed on the side mirror displays 52L and 52R by placing virtual cameras in the virtual space corresponding to the side mirror cameras installed on the left and right sides of the vehicle 13 and capturing the virtual space with each virtual camera. The virtual space generation unit 71 also generates the video stream for the vehicle 13 corresponding to the instrument panel image displayed on the instrument panel display 53 in accordance with various instrument data, such as the driving speed and engine speed, derived from the driving operation data.
 The virtual space generation unit 71 also generates the video stream for the vehicle 13 that serves as the rearview mirror image displayed on the rearview mirror display 54 by placing a virtual camera in the virtual space corresponding to the rearview mirror camera installed on the rear of the vehicle 13 and capturing the virtual space with that virtual camera. The virtual space generation unit 71 also generates the video streams for the vehicle 13 that serve as the rear-seat viewpoint images displayed on the rear-seat displays 55L and 55R by placing a virtual camera in the virtual space corresponding to the viewpoint of the rear-seat occupants and capturing, with that virtual camera, the virtual space as seen through the rear-seat displays 55L and 55R. The virtual space generation unit 71 then supplies the video streams for the vehicle 13 generated in this manner to the signal multiplexing unit 72.
 The virtual space generation unit 71 also generates the audio stream for the vehicle 13 such that object audio is emitted from each of the objects placed near the vehicle 13 traveling in the virtual space in accordance with the driving operation data, using the position of each object as its sound source position, and supplies the audio stream to the video conversion processing unit 73.
 The virtual space generation unit 71 also places a virtual camera in the virtual space so as to correspond to a predetermined position of the vehicle 13 (for example, the center position of the vehicle 13), and uses that virtual camera to capture the virtual space in all directions centered on the vehicle 13, thereby generating the video stream for the garage, which it supplies to the video conversion processing unit 73.
 The signal multiplexing unit 72 multiplexes the video stream for the vehicle 13 and the audio stream for the vehicle 13 supplied from the virtual space generation unit 71 to generate the video/audio stream for the vehicle 13, and supplies it to the communication unit 64.
 The video conversion processing unit 73 performs video conversion processing on the video stream for the garage supplied from the virtual space generation unit 71, based on the position and posture data of the vehicle 13 supplied from the vehicle position and posture recognition unit 63 and the position and posture data of the occupants in the vehicle 13 supplied from the communication unit 64. For example, the video conversion processing unit 73 has a viewpoint detection unit 81 and a geometric transformation unit 82.
 As described above with reference to FIG. 5, the viewpoint detection unit 81 sets, at the position of the vehicle 13 relative to the garage displays 41, a rectangular parallelepiped space C in which the vehicle 13 is inscribed, based on the position and posture data of the vehicle 13. The viewpoint detection unit 81 then sets, within the rectangular parallelepiped space C, the position, posture, viewpoint, and line of sight of the occupant based on the position and posture data of the occupant inside the vehicle 13, and can thereby determine the position, posture, viewpoint, and line of sight of the occupant relative to the garage displays 41. This allows the viewpoint detection unit 81 to supply information on the occupant's viewpoint and line of sight relative to the garage displays 41 to the geometric transformation unit 82.
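The relative-pose computation described above can be sketched with homogeneous transforms: the occupant's viewpoint measured in vehicle coordinates is composed with the pose of the vehicle (the space C) relative to the garage displays 41. The function name, matrices, and coordinate values below are illustrative assumptions.

```python
import numpy as np

# Illustrative sketch: composing the occupant's in-cabin pose with the vehicle's
# pose relative to the garage displays, using 4x4 homogeneous transforms.
def make_transform(rotation: np.ndarray, translation: np.ndarray) -> np.ndarray:
    """Build a 4x4 homogeneous transform from a 3x3 rotation and a translation."""
    T = np.eye(4)
    T[:3, :3] = rotation
    T[:3, 3] = translation
    return T

# Pose of the rectangular parallelepiped space C (the vehicle) in display coordinates.
garage_T_vehicle = make_transform(np.eye(3), np.array([1.0, 0.0, 2.0]))
# Occupant eye pose in vehicle coordinates, as measured by the in-cabin sensors.
vehicle_T_eye = make_transform(np.eye(3), np.array([0.4, 1.1, -0.3]))

# Composing the two poses yields the viewpoint relative to the garage displays.
garage_T_eye = garage_T_vehicle @ vehicle_T_eye
eye_in_garage = garage_T_eye[:3, 3]  # viewpoint position in display coordinates
```

The same composition applies to the line-of-sight direction by rotating a gaze vector through the rotational part of `garage_T_eye`.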
 The geometric transformation unit 82 performs a geometric transformation on the garage video stream supplied from the virtual space generation unit 71 based on the position of each garage display 41 and the occupant's viewpoint and line of sight relative to the garage displays 41. That is, the geometric transformation unit 82 performs a geometric transformation on the garage video stream so as to project it using each garage display 41 as a projection surface, based on the occupant's viewpoint and line of sight identified by the viewpoint detection unit 81, thereby generating a video stream for each garage display 41. In this way, the spherical garage video stream centered on the vehicle 13 is projected onto, for example, the ceiling garage display 41-1, thereby generating a planar ceiling video stream. Similarly, planar video streams are generated for each of the other garage displays 41.
 For example, as shown in FIG. 7, to represent on the garage display 41 a virtual plane viewed from a viewpoint position P, a straight line corresponding to the light ray from the viewpoint position P through each point on the virtual plane to be represented is drawn, and the intersection of each such line with the garage display 41 becomes the perspective projection point, so the corresponding pixel of the garage display 41 can be identified. Since the viewpoint position P and the position of the garage display 41 are obtained by calculation, the position and posture of each must be strictly quantified, for example as coordinate values in a common coordinate system. By utilizing mechanisms already implemented in game engines, these processes can easily generate the video to be displayed on each garage display 41 for a given position and posture of the capturing camera.
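A minimal sketch of this ray-plane construction follows, assuming the garage display 41 is modeled as a plane given by a point and a normal in the common coordinate system; the function name and all coordinates are illustrative assumptions.

```python
import numpy as np

def project_to_display(viewpoint, world_point, plane_point, plane_normal):
    """Intersect the ray from `viewpoint` through `world_point` with the display plane.

    Returns the intersection (the perspective projection point), or None when the
    ray is parallel to the plane or the intersection lies behind the viewpoint.
    """
    viewpoint = np.asarray(viewpoint, dtype=float)
    d = np.asarray(world_point, dtype=float) - viewpoint  # ray direction
    n = np.asarray(plane_normal, dtype=float)
    denom = n @ d
    if abs(denom) < 1e-9:
        return None  # ray runs parallel to the display plane
    t = n @ (np.asarray(plane_point, dtype=float) - viewpoint) / denom
    if t <= 0:
        return None  # the plane is behind the viewpoint
    return viewpoint + t * d

# Viewer at the origin, display plane at z = 2, virtual point at z = 4.
hit = project_to_display([0.0, 0.0, 0.0], [1.0, 0.5, 4.0],
                         [0.0, 0.0, 2.0], [0.0, 0.0, 1.0])
```

Mapping the returned intersection to a pixel index would then only require the display's origin, orientation, and pixel pitch in the same coordinate system.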
 The ceiling surface video transmission unit 74 transmits the ceiling surface video stream supplied from the video conversion processing unit 73 to the ceiling surface garage display 41-1, causing the ceiling surface garage display 41-1 to display the ceiling surface simulation video.
 The front video transmission unit 75 transmits the front video stream supplied from the video conversion processing unit 73 to the front garage display 41-2, causing the front garage display 41-2 to display the front simulation video.
 The right side video transmission unit 76 transmits the right side video stream supplied from the video conversion processing unit 73 to the right side garage display 41-3, causing the right side garage display 41-3 to display the right side simulation video.
 The left side video transmission unit 77 transmits the left side video stream supplied from the video conversion processing unit 73 to the left side garage display 41-4, causing the left side garage display 41-4 to display the left side simulation video.
 The rear video transmission unit 78 transmits the rear video stream supplied from the video conversion processing unit 73 to the rear garage display 41-5, causing the rear garage display 41-5 to display the rear simulation video.
 The floor surface video transmission unit 79 transmits the floor surface video stream supplied from the video conversion processing unit 73 to the floor surface garage display 41-6, causing the floor surface garage display 41-6 to display the floor surface simulation video.
 <Configuration example of in-vehicle control device>
 An example of the configuration of the in-vehicle control device 18 will be described with reference to FIGS. 8 and 9.
 FIG. 8 is a block diagram showing an example of the configuration of the operation system and drive system of the in-vehicle control device 18.
 As shown in FIG. 8, the in-vehicle control device 18 is configured to include a sensor data acquisition unit 91, a video data acquisition unit 92, a sensor data acquisition unit 93, a video data acquisition unit 94, a vehicle interior environment recognition unit 95, a vehicle exterior environment recognition unit 96, a communication unit 97, a user setting acquisition unit 98, a stopped state determination unit 99, a brake position data acquisition unit 100, a steering wheel rotation data acquisition unit 101, an accelerator position data acquisition unit 102, a driving operation detection unit 103, a driving mode control unit 104, a simulation mode control unit 105, an axle direction control unit 106, a throttle control unit 107, a brake control unit 108, a steering wheel reaction force control unit 109, a suspension control unit 110, a brake reaction force control unit 111, an accelerator reaction force control unit 112, and a seat control unit 113.
 The sensor data acquisition unit 91 acquires sensor data supplied from the in-vehicle sensor 57 and supplies it to the vehicle interior environment recognition unit 95.
 The video data acquisition unit 92 acquires video data supplied from the in-vehicle camera 56 and supplies it to the vehicle interior environment recognition unit 95.
 The sensor data acquisition unit 93 acquires sensor data supplied from an exterior sensor (not shown) installed outside the vehicle 13 and supplies it to the vehicle exterior environment recognition unit 96.
 The video data acquisition unit 94 acquires video data supplied from an exterior camera (not shown) installed outside the vehicle 13 and supplies it to the vehicle exterior environment recognition unit 96.
 The vehicle interior environment recognition unit 95 generates position and posture data of the occupants in the vehicle 13 based on the sensor data supplied from the sensor data acquisition unit 91 and the video data supplied from the video data acquisition unit 92, and supplies the data to the communication unit 97. For example, the position and posture data of the occupants in the vehicle 13 includes information indicating the viewpoint position and line-of-sight direction of the occupants in the vehicle 13 relative to the rectangular parallelepiped space C in which the vehicle 13 is inscribed, as described above with reference to FIG. 5.
 The vehicle exterior environment recognition unit 96 generates vehicle exterior environment data based on the sensor data supplied from the sensor data acquisition unit 93 and the video data supplied from the video data acquisition unit 94, and supplies the data to the stopped state determination unit 99 and the driving mode control unit 104. For example, the vehicle exterior environment data includes information indicating the distance to objects around the vehicle 13 and, when the vehicle 13 is stored in the garage device 14, includes information indicating the distance to the garage displays 41.
 The communication unit 97 communicates with the garage system control device 16 and an external network. For example, the communication unit 97 transmits a stopped state notification supplied from the stopped state determination unit 99 to the garage system control device 16, and receives a garage state notification transmitted from the garage system control device 16 and supplies it to the stopped state determination unit 99. The communication unit 97 also transmits to the garage system control device 16 vehicle parameters including the position and posture data of the occupants in the vehicle 13 supplied from the vehicle interior environment recognition unit 95 and the driving operation data supplied from the driving operation detection unit 103. The communication unit 97 then receives the vehicle setting parameters transmitted from the garage system control device 16 and supplies them to the simulation mode control unit 105.
 When a user using the driving simulation system 11 performs various operational inputs to the in-vehicle control device 18, the user setting acquisition unit 98 acquires and stores a user setting value corresponding to the operational input, and supplies the user setting value to the stopped state determination unit 99. For example, when an operational input instructing switching from the driving mode to the simulation mode is performed, the user setting acquisition unit 98 acquires and stores a user setting value instructing switching from the driving mode to the simulation mode, and supplies that user setting value to the stopped state determination unit 99.
 When the user setting value instructing switching from the driving mode to the simulation mode is supplied from the user setting acquisition unit 98, the stopped state determination unit 99 determines whether or not to transition to the simulation mode according to the vehicle exterior environment data supplied from the vehicle exterior environment recognition unit 96. The stopped state determination unit 99 also transmits a stopped state notification to the garage system control device 16 via the communication unit 97, and receives a garage state notification transmitted from the garage system control device 16 via the communication unit 97. The stopped state determination unit 99 also supplies a stopped state signal to the driving mode control unit 104 and the simulation mode control unit 105.
 For example, when the vehicle 13 is stopped and the user performs an operation instructing switching from the driving mode to the simulation mode, the stopped state determination unit 99 determines that the vehicle is to transition to the simulation mode. The stopped state determination unit 99 may also determine that the vehicle is to transition to the simulation mode when the vehicle 13 is stopped and the gear selector of the vehicle 13 is in the parking range.
 Furthermore, the stopped state determination unit 99 may make the determination based on the position information of the vehicle 13 obtained using, for example, GPS (Global Positioning System). In this case, the stopped state determination unit 99 holds in advance the position information of locations where a driving simulation may be performed (for example, the installation location of the garage device 14), and can determine to transition to the simulation mode when, based on the position information of the vehicle 13, the vehicle 13 is stopped at a location where a driving simulation may be performed.
 The stopped state determination unit 99 may also make the determination using, for example, the distance or relative position from the garage displays 41; in this case, it is assumed that the garage state notification includes the distance or relative position of the vehicle 13 with respect to the garage displays 41. The stopped state determination unit 99 can then determine to transition to the simulation mode when the distance or relative position of the vehicle 13 with respect to the garage displays 41 is equal to or less than a predetermined threshold, that is, when the vehicle 13 is stopped at a predetermined position within the garage device 14.
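Combining the conditions described above (vehicle stopped with the gear selector in the parking range, an approved GPS location, or proximity to the garage display 41), the decision made by the stopped state determination unit 99 might be sketched as follows; the function name, thresholds, and coordinates are illustrative assumptions, not values from this disclosure.

```python
# Illustrative sketch of the simulation-mode transition decision.
# Approved locations where the simulation may run (e.g. the garage installation site).
APPROVED_LOCATIONS = [(35.6586, 139.7454)]
LOCATION_TOLERANCE_DEG = 0.0005        # GPS match tolerance (hypothetical)
DISPLAY_DISTANCE_THRESHOLD_M = 0.5     # max distance to the garage display (hypothetical)

def may_enter_simulation_mode(speed_kmh, gear, position=None, display_distance_m=None):
    """Decide whether switching from driving mode to simulation mode is allowed."""
    # The vehicle must be fully stopped with the gear selector in the parking range.
    if speed_kmh > 0.0 or gear != "P":
        return False
    # Proximity to the garage display, when known, is the most direct signal.
    if display_distance_m is not None:
        return display_distance_m <= DISPLAY_DISTANCE_THRESHOLD_M
    # Otherwise fall back to a GPS match against pre-registered locations.
    if position is not None:
        lat, lon = position
        return any(abs(lat - a) < LOCATION_TOLERANCE_DEG and
                   abs(lon - b) < LOCATION_TOLERANCE_DEG
                   for a, b in APPROVED_LOCATIONS)
    return False
```

Guarding the transition this way is what prevents an accidental mode-start operation during a temporary stop from disabling the driving controls, as discussed next.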
 In this way, by having the stopped state determination unit 99 determine whether or not to transition to the simulation mode, it is possible to avoid, for example, a situation in which driving operations on the vehicle 13 become unavailable in an unintended location. For example, even if an operation to start the simulation mode is performed by mistake while the vehicle is temporarily stopped during a drive, a situation in which the vehicle 13 suddenly becomes inoperable can be avoided.
 In addition, when the transition to the simulation mode is not permitted, a message stating that the simulation mode cannot be entered may be displayed on a display inside the vehicle 13 based on the determination result of the stopped state determination unit 99. At that time, the reason may also be displayed. For example, the reason may be that the vehicle 13 is not stopped, that the gear selector of the vehicle 13 is not in the parking range, or that the position of the vehicle 13 is not an appropriate location (such as a garage, a parking lot, a parking space, or a predetermined position relative to the driving simulation system 11).
 The brake position data acquisition unit 100 acquires brake position data indicating the amount of depression of the brake pedal 23 by the driver of the vehicle 13 based on the brake position signal supplied from the brake position sensor 29 in FIG. 2, and supplies the data to the driving operation detection unit 103.
 The steering wheel rotation data acquisition unit 101 acquires steering wheel rotation data indicating the amount of rotation of the steering wheel 21 by the driver of the vehicle 13 based on the steering wheel rotation signal supplied from the steering wheel rotation sensor 27 in FIG. 2, and supplies the data to the driving operation detection unit 103.
 The accelerator position data acquisition unit 102 acquires accelerator position data indicating the amount of depression of the accelerator pedal 22 by the driver of the vehicle 13 based on the accelerator position signal supplied from the accelerator position sensor 28 in FIG. 2, and supplies the data to the driving operation detection unit 103.
 The driving operation detection unit 103 detects the driver's driving operation of the vehicle 13 based on the brake position data supplied from the brake position data acquisition unit 100, the steering wheel rotation data supplied from the steering wheel rotation data acquisition unit 101, and the accelerator position data supplied from the accelerator position data acquisition unit 102. The driving operation detection unit 103 then supplies driving operation data indicating the content of the driving operation (brake position data, steering wheel rotation data, and accelerator position data) to the driving mode control unit 104 and the communication unit 97.
 In the driving mode, the driving mode control unit 104 controls the driving of the vehicle 13 according to the driving operation data supplied from the driving operation detection unit 103. For example, the driving mode control unit 104 generates axle direction control data for controlling the axle direction of the tires of the vehicle 13 according to the steering wheel rotation data, and supplies the data to the axle direction control unit 106. The driving mode control unit 104 also generates throttle control data for controlling the acceleration of the vehicle 13 according to the accelerator position data, and supplies the data to the throttle control unit 107. The driving mode control unit 104 also generates brake control data for controlling the deceleration of the vehicle 13 according to the brake position data, and supplies the data to the brake control unit 108.
 Furthermore, the driving mode control unit 104 generates steering wheel reaction force control data, suspension control data, brake reaction force control data, accelerator reaction force control data, and seat control data according to the driving operation data supplied from the driving operation detection unit 103 and the vehicle exterior environment data supplied from the vehicle exterior environment recognition unit 96. The driving mode control unit 104 then supplies the respective control data to the steering wheel reaction force control unit 109, the suspension control unit 110, the brake reaction force control unit 111, the accelerator reaction force control unit 112, and the seat control unit 113.
 In the simulation mode, the simulation mode control unit 105 controls the behavior of the vehicle 13 so as to reproduce the behavior of the vehicle 13 in the virtual space, according to the vehicle setting parameters supplied from the communication unit 97.
 For example, the vehicle setting parameters include suspension setting data, seat setting data, and reaction force setting data (data values for the steering wheel reaction force, brake reaction force, and accelerator reaction force). Accordingly, the simulation mode control unit 105 generates suspension control data for controlling the height of each of the suspensions of the four wheels of the vehicle 13 according to the suspension setting data, and supplies it to the suspension control unit 110. The simulation mode control unit 105 also generates seat control data for controlling the inclination of the seat cushion and backrest according to the seat setting data, and supplies it to the seat control unit 113.
 The simulation mode control unit 105 also generates steering wheel reaction force control data for controlling the steering wheel reaction force generated in the steering wheel 21 according to the steering wheel reaction force data value in the reaction force setting data, and supplies it to the steering wheel reaction force control unit 109. The simulation mode control unit 105 also generates brake reaction force control data for controlling the brake reaction force generated in the brake pedal 23 according to the brake reaction force data value in the reaction force setting data, and supplies it to the brake reaction force control unit 111. The simulation mode control unit 105 also generates accelerator reaction force control data for controlling the accelerator reaction force generated in the accelerator pedal 22 according to the accelerator reaction force data value in the reaction force setting data, and supplies it to the accelerator reaction force control unit 112.
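The fan-out of the reaction force setting data to the individual reaction force control units can be sketched as a simple normalization step; the function name, actuator names, and maximum-force constants below are illustrative assumptions, not values from this disclosure.

```python
# Illustrative sketch: converting reaction force set-points into per-actuator
# control data. All names and constants are hypothetical.
def dispatch_reaction_forces(reaction_setting: dict) -> dict:
    """Convert reaction force set-points (newtons) into drive levels in 0..1."""
    MAX_FORCE_N = {"steering": 10.0, "brake": 50.0, "accelerator": 30.0}
    # Normalize each set-point by its actuator's maximum and clamp to full drive.
    return {actuator: min(force / MAX_FORCE_N[actuator], 1.0)
            for actuator, force in reaction_setting.items()}

controls = dispatch_reaction_forces({"steering": 5.0, "brake": 25.0, "accelerator": 45.0})
```

Each resulting drive level would then be handed to the corresponding reaction force control unit (109, 111, or 112) to generate its control signal.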
 車軸方向制御部106は、走行モードコントロール部104から供給される車軸方向制御データに従って車軸方向制御信号を生成し、図3の車軸方向駆動機構30に供給する。 The axle direction control unit 106 generates an axle direction control signal according to the axle direction control data supplied from the driving mode control unit 104, and supplies it to the axle direction drive mechanism 30 in FIG. 3.
 スロットル制御部107は、走行モードコントロール部104から供給されるスロットル制御データに従ってスロットル制御信号を生成し、図3のスロットル駆動機構31に供給する。 The throttle control unit 107 generates a throttle control signal according to the throttle control data supplied from the driving mode control unit 104, and supplies it to the throttle drive mechanism 31 in FIG. 3.
 ブレーキ制御部108は、走行モードコントロール部104から供給されるブレーキ制御データに従ってブレーキ制御信号を生成し、図3のブレーキ駆動機構32に供給する。 The brake control unit 108 generates a brake control signal according to the brake control data supplied from the driving mode control unit 104, and supplies it to the brake drive mechanism 32 in FIG. 3.
 ハンドル反力制御部109は、走行モードコントロール部104またはシミュレーションモードコントロール部105から供給されるハンドル反力制御データに従ってハンドル反力制御信号を生成し、図3のハンドル反力駆動機構33に供給する。 The steering reaction force control unit 109 generates a steering reaction force control signal according to steering reaction force control data supplied from the driving mode control unit 104 or the simulation mode control unit 105, and supplies it to the steering reaction force drive mechanism 33 in FIG. 3.
 サスペンション制御部110は、走行モードコントロール部104またはシミュレーションモードコントロール部105から供給されるサスペンション制御データに従ってサスペンション制御信号を生成し、図3のサスペンション駆動機構36に供給する。 The suspension control unit 110 generates a suspension control signal according to the suspension control data supplied from the driving mode control unit 104 or the simulation mode control unit 105, and supplies it to the suspension drive mechanism 36 in FIG. 3.
 ブレーキ反力制御部111は、走行モードコントロール部104またはシミュレーションモードコントロール部105から供給されるブレーキ反力制御データに従ってブレーキ反力制御信号を生成し、図3のブレーキ反力駆動機構35に供給する。 The brake reaction force control unit 111 generates a brake reaction force control signal according to the brake reaction force control data supplied from the driving mode control unit 104 or the simulation mode control unit 105, and supplies it to the brake reaction force drive mechanism 35 in FIG. 3.
 アクセル反力制御部112は、走行モードコントロール部104またはシミュレーションモードコントロール部105から供給されるアクセル反力制御データに従ってアクセル反力制御信号を生成し、図3のアクセル反力駆動機構34に供給する。 The accelerator reaction force control unit 112 generates an accelerator reaction force control signal according to the accelerator reaction force control data supplied from the driving mode control unit 104 or the simulation mode control unit 105, and supplies it to the accelerator reaction force drive mechanism 34 in FIG. 3.
 シート制御部113は、走行モードコントロール部104またはシミュレーションモードコントロール部105から供給されるシート制御データに従ってシート制御信号を生成し、図3のシート駆動機構37に供給する。 The seat control unit 113 generates a seat control signal according to the seat control data supplied from the driving mode control unit 104 or the simulation mode control unit 105, and supplies it to the seat drive mechanism 37 in FIG. 3.
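Each of the control units 106 to 113 follows the same pattern: receive control data from a mode control unit, generate a corresponding control signal, and supply it to a drive mechanism. A minimal sketch of that shared pattern, with hypothetical class and method names:

```python
class ControlUnit:
    """Generic control unit: converts control data into a control signal
    and forwards it to a drive mechanism (cf. units 106-113 feeding the
    drive mechanisms 30-37)."""
    def __init__(self, name, drive_mechanism):
        self.name = name
        self.drive_mechanism = drive_mechanism  # callable standing in for 30-37

    def on_control_data(self, data):
        signal = self.to_signal(data)   # generate the control signal
        self.drive_mechanism(signal)    # supply it to the drive mechanism
        return signal

    def to_signal(self, data):
        # Placeholder conversion; a real unit would emit an actuator command.
        return {"unit": self.name, "value": data}

received = []                                       # records what the mechanism got
throttle_unit = ControlUnit("throttle_107", received.append)
sig = throttle_unit.on_control_data(0.4)            # throttle control data -> signal
```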
 図9には、車両内制御装置18の映像系およびオーディオ系の構成例を示すブロック図が示されている。 FIG. 9 shows a block diagram illustrating an example of the configuration of the video and audio systems of the in-vehicle control device 18.
 図9に示すように、車両内制御装置18は、ユーザ設定取得部121、通信部122、映像取得部123、オーディオ取得部124、信号分離部125、映像入力切り替え部126、オーディオ入力切り替え部127、映像信号処理部128、ナビゲーション映像送信部129、サイドミラー映像送信部130、インストルメントパネル映像送信部131、バックミラー映像送信部132、後部座席用映像送信部133、オーディオ信号処理部134、およびアナログ増幅部135を備えて構成される。 As shown in FIG. 9, the in-vehicle control device 18 is configured with a user setting acquisition unit 121, a communication unit 122, a video acquisition unit 123, an audio acquisition unit 124, a signal separation unit 125, a video input switching unit 126, an audio input switching unit 127, a video signal processing unit 128, a navigation video transmission unit 129, a side mirror video transmission unit 130, an instrument panel video transmission unit 131, a rearview mirror video transmission unit 132, a rear seat video transmission unit 133, an audio signal processing unit 134, and an analog amplification unit 135.
 ユーザ設定取得部121は、運転シミュレーションシステム11を利用するユーザが車両内制御装置18に対する各種の操作入力を行うと、ユーザの操作入力に応じたユーザ設定値を取得して記憶するとともに、そのユーザ設定値を、通信部122、映像入力切り替え部126、およびオーディオ入力切り替え部127に供給する。例えば、ユーザ設定取得部121は、走行モードからシミュレーションモードへの切り替えを指示する操作入力が行われると、走行モードからシミュレーションモードへの切り替えを指示するユーザ設定値を取得して記憶するとともに、そのユーザ設定値を、通信部122、映像入力切り替え部126、およびオーディオ入力切り替え部127に供給する。 When a user using the driving simulation system 11 performs various operational inputs to the in-vehicle control device 18, the user setting acquisition unit 121 acquires and stores a user setting value corresponding to the user's operational input, and supplies the user setting value to the communication unit 122, the video input switching unit 126, and the audio input switching unit 127. For example, when an operational input is performed to instruct switching from the driving mode to the simulation mode, the user setting acquisition unit 121 acquires and stores a user setting value instructing switching from the driving mode to the simulation mode, and supplies the user setting value to the communication unit 122, the video input switching unit 126, and the audio input switching unit 127.
 通信部122は、ガレージシステム制御装置16および外部ネットワークと通信を行う。例えば、通信部122は、ユーザ設定取得部121から供給される、走行モードからシミュレーションモードへの切り替えを指示するユーザ設定値をガレージシステム制御装置16へ送信する。また、通信部122は、ガレージシステム制御装置16から送信されてくる車両13用の映像オーディオストリームを受信して、信号分離部125に供給する。 The communication unit 122 communicates with the garage system control device 16 and an external network. For example, the communication unit 122 transmits a user setting value supplied from the user setting acquisition unit 121 to the garage system control device 16, the user setting value instructing switching from the driving mode to the simulation mode. The communication unit 122 also receives a video and audio stream for the vehicle 13 transmitted from the garage system control device 16, and supplies the video and audio stream to the signal separation unit 125.
 映像取得部123は、走行モード時に、ナビゲーションディスプレイ51、サイドミラーディスプレイ52Lおよび52R、インストルメントパネルディスプレイ53、バックミラーディスプレイ54、並びに、後部座席用ディスプレイ55Lおよび55Rに表示させる映像を取得し、それらの映像ストリームを映像入力切り替え部126に供給する。例えば、映像取得部123は、車両13の位置情報に従ったナビゲーション映像や、車両13の左側および右側に設置されているサイドミラーカメラで撮影されたサイドミラー映像、車両13の走行速度やエンジン回転数などに基づくインストルメントパネル映像、車両13のバックミラーカメラで撮影されたバックミラー映像、車両13の外部カメラで撮影した映像に基づく後部座席視点映像などを取得する。 In driving mode, the image acquisition unit 123 acquires images to be displayed on the navigation display 51, the side mirror displays 52L and 52R, the instrument panel display 53, the rearview mirror display 54, and the rear seat displays 55L and 55R, and supplies these image streams to the image input switching unit 126. For example, the image acquisition unit 123 acquires navigation images according to the position information of the vehicle 13, side mirror images captured by side mirror cameras installed on the left and right sides of the vehicle 13, instrument panel images based on the vehicle 13's traveling speed and engine RPM, rearview mirror images captured by the rearview mirror camera of the vehicle 13, rear seat viewpoint images based on images captured by an external camera of the vehicle 13, etc.
 オーディオ取得部124は、走行モード時に、車両13に搭載されているオーディオシステムで再生されたオーディオを取得し、そのオーディオストリームをオーディオ入力切り替え部127に供給する。 The audio acquisition unit 124 acquires audio played on the audio system installed in the vehicle 13 during driving mode, and supplies the audio stream to the audio input switching unit 127.
 信号分離部125は、通信部122から供給される車両13用の映像オーディオストリームを、車両13用の映像ストリームと車両13用のオーディオストリームとに分離する。そして、信号分離部125は、車両13用の映像ストリームを映像入力切り替え部126に供給し、車両13用のオーディオストリームをオーディオ入力切り替え部127に供給する。 The signal separation unit 125 separates the video and audio stream for vehicle 13 supplied from the communication unit 122 into a video stream for vehicle 13 and an audio stream for vehicle 13. The signal separation unit 125 then supplies the video stream for vehicle 13 to the video input switching unit 126, and supplies the audio stream for vehicle 13 to the audio input switching unit 127.
 映像入力切り替え部126は、ユーザ設定取得部121から供給されるユーザ設定値に従って、映像信号処理部128に供給する映像ストリームの切り替えを行う。例えば、映像入力切り替え部126は、走行モードからシミュレーションモードへの切り替えを指示するユーザ設定値が供給された場合、信号分離部125から供給される車両13用の映像ストリームを映像信号処理部128に供給するように切り替えを行う。一方、映像入力切り替え部126は、シミュレーションモードから走行モードへの切り替えを指示するユーザ設定値が供給された場合、映像取得部123から供給される映像ストリームを映像信号処理部128に供給するように切り替えを行う。 The video input switching unit 126 switches the video stream supplied to the video signal processing unit 128 in accordance with the user setting value supplied from the user setting acquisition unit 121. For example, when a user setting value instructing switching from the driving mode to the simulation mode is supplied to the video input switching unit 126, the video input switching unit 126 switches so that the video stream for the vehicle 13 supplied from the signal separation unit 125 is supplied to the video signal processing unit 128. On the other hand, when a user setting value instructing switching from the simulation mode to the driving mode is supplied to the video input switching unit 126, the video input switching unit 126 switches so that the video stream supplied from the video acquisition unit 123 is supplied to the video signal processing unit 128.
 オーディオ入力切り替え部127は、ユーザ設定取得部121から供給されるユーザ設定値に従って、オーディオ信号処理部134に供給するオーディオストリームの切り替えを行う。例えば、オーディオ入力切り替え部127は、走行モードからシミュレーションモードへの切り替えを指示するユーザ設定値が供給された場合、信号分離部125から供給される車両13用のオーディオストリームをオーディオ信号処理部134に供給するように切り替えを行う。一方、オーディオ入力切り替え部127は、シミュレーションモードから走行モードへの切り替えを指示するユーザ設定値が供給された場合、オーディオ取得部124から供給されるオーディオストリームをオーディオ信号処理部134に供給するように切り替えを行う。 The audio input switching unit 127 switches the audio stream supplied to the audio signal processing unit 134 in accordance with the user setting value supplied from the user setting acquisition unit 121. For example, when a user setting value instructing switching from the driving mode to the simulation mode is supplied to the audio input switching unit 127, the audio input switching unit 127 switches so that the audio stream for the vehicle 13 supplied from the signal separation unit 125 is supplied to the audio signal processing unit 134. On the other hand, when a user setting value instructing switching from the simulation mode to the driving mode is supplied to the audio input switching unit 127, the audio input switching unit 127 switches so that the audio stream supplied from the audio acquisition unit 124 is supplied to the audio signal processing unit 134.
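The video input switching unit 126 and the audio input switching unit 127 apply the same selection rule: simulation mode selects the stream separated by the signal separation unit 125, and driving mode selects the stream from the acquisition unit (123 or 124). A sketch of that rule (the mode strings and function name are assumptions):

```python
def select_stream(mode, live_stream, simulation_stream):
    """Mirror the switching rule of units 126/127: simulation mode -> the
    stream separated from the garage feed; driving mode -> the live stream
    from the acquisition unit."""
    if mode == "simulation":
        return simulation_stream
    if mode == "driving":
        return live_stream
    raise ValueError(f"unknown mode: {mode}")

# Video switched to the garage feed, audio left on the in-vehicle source.
video = select_stream("simulation", live_stream="camera", simulation_stream="garage")
audio = select_stream("driving", live_stream="car_audio", simulation_stream="garage_audio")
```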
 映像信号処理部128は、映像入力切り替え部126から供給される映像ストリームに対する信号処理を行って、ナビゲーション用の映像ストリーム、サイドミラー用の映像ストリーム、インストルメントパネル用の映像ストリーム、バックミラー用の映像ストリーム、および後部座席用の映像ストリームを取得する。そして、映像信号処理部128は、ナビゲーション用の映像ストリームをナビゲーション映像送信部129に供給し、サイドミラー用の映像ストリームをサイドミラー映像送信部130に供給し、インストルメントパネル用の映像ストリームをインストルメントパネル映像送信部131に供給し、バックミラー用の映像ストリームをバックミラー映像送信部132に供給し、後部座席用の映像ストリームを後部座席用映像送信部133に供給する。 The video signal processing unit 128 performs signal processing on the video streams supplied from the video input switching unit 126 to obtain a video stream for navigation, a video stream for side mirrors, a video stream for the instrument panel, a video stream for the rearview mirror, and a video stream for the rear seats. The video signal processing unit 128 then supplies the video stream for navigation to the navigation video transmission unit 129, the video stream for the side mirrors to the side mirror video transmission unit 130, the video stream for the instrument panel to the instrument panel video transmission unit 131, the video stream for the rearview mirror to the rearview mirror video transmission unit 132, and the video stream for the rear seats to the rear seat video transmission unit 133.
 ナビゲーション映像送信部129は、映像信号処理部128から供給されるナビゲーション用の映像ストリームをナビゲーションディスプレイ51に送信し、ナビゲーションディスプレイ51にナビゲーション映像を表示させる。 The navigation video transmission unit 129 transmits the video stream for navigation provided by the video signal processing unit 128 to the navigation display 51, causing the navigation display 51 to display the navigation video.
 サイドミラー映像送信部130は、映像信号処理部128から供給されるサイドミラー用の映像ストリームをサイドミラーディスプレイ52Lおよび52Rに送信し、サイドミラーディスプレイ52Lおよび52Rにサイドミラー映像を表示させる。 The side mirror image transmission unit 130 transmits the side mirror image stream supplied from the image signal processing unit 128 to the side mirror displays 52L and 52R, and displays the side mirror image on the side mirror displays 52L and 52R.
 インストルメントパネル映像送信部131は、映像信号処理部128から供給されるインストルメントパネル用の映像ストリームをインストルメントパネルディスプレイ53に送信し、インストルメントパネルディスプレイ53にインストルメントパネル映像を表示させる。 The instrument panel image transmission unit 131 transmits the image stream for the instrument panel supplied from the image signal processing unit 128 to the instrument panel display 53, causing the instrument panel display 53 to display the instrument panel image.
 バックミラー映像送信部132は、映像信号処理部128から供給されるバックミラー用の映像ストリームをバックミラーディスプレイ54に送信し、バックミラーディスプレイ54にバックミラー映像を表示させる。 The rearview mirror image transmission unit 132 transmits the rearview mirror image stream supplied from the image signal processing unit 128 to the rearview mirror display 54, causing the rearview mirror display 54 to display the rearview mirror image.
 後部座席用映像送信部133は、映像信号処理部128から供給される後部座席用の映像ストリームを後部座席用ディスプレイ55Lおよび55Rに送信し、後部座席用ディスプレイ55Lおよび55Rに後部座席視点映像を表示させる。 The rear seat video transmission unit 133 transmits the rear seat video stream supplied from the video signal processing unit 128 to the rear seat displays 55L and 55R, and displays the rear seat viewpoint video on the rear seat displays 55L and 55R.
 オーディオ信号処理部134は、オーディオ入力切り替え部127から供給されるオーディオストリームに対する信号処理を行って、オーディオ信号を取得してアナログ増幅部135に供給する。 The audio signal processing unit 134 performs signal processing on the audio stream supplied from the audio input switching unit 127 to obtain an audio signal and supply it to the analog amplification unit 135.
 アナログ増幅部135は、オーディオ信号処理部134から供給されるオーディオ信号を増幅してスピーカ58-1および58-2に供給し、スピーカ58-1および58-2からオーディオを出力させる。 The analog amplifier 135 amplifies the audio signal supplied from the audio signal processor 134 and supplies it to the speakers 58-1 and 58-2, causing the speakers 58-1 and 58-2 to output audio.
 <車両パラメータおよび車両設定パラメータの一例>
 図10乃至図14を参照して、車両パラメータおよび車両設定パラメータについて説明する。
<Examples of vehicle parameters and vehicle setting parameters>
The vehicle parameters and the vehicle setting parameters will be described with reference to FIGS. 10 to 14.
 図10のAには、車両内制御装置18からガレージシステム制御装置16に送信される車両パラメータのデータ構造の一例が示されている。図10のBには、ガレージシステム制御装置16から車両内制御装置18に送信される車両設定パラメータのデータ構造の一例が示されている。 A in FIG. 10 shows an example of the data structure of vehicle parameters transmitted from the in-vehicle control device 18 to the garage system control device 16. B in FIG. 10 shows an example of the data structure of vehicle setting parameters transmitted from the garage system control device 16 to the in-vehicle control device 18.
 図10のAに示すように、車両パラメータは、車両パラメータの全体の共通データが格納されるヘッダ部に続いて、ハンドル回転データ、ブレーキ位置データ、アクセル位置データ、および、その他のデータが順に配置されて構成される。また、ハンドル回転データ、ブレーキ位置データ、およびアクセル位置データには、それぞれヘッダ部およびデータ格納部が設けられている。そして、それぞれのデータ格納部には、各時刻に対応付けて1個または複数個のデータ値が格納され、図10のAに示す例では、時刻aから時刻yまでの各時刻に対応付けてx個のデータ値(例えば、時刻aではデータ値1a~データ値xa)が格納される。 As shown in A of FIG. 10, the vehicle parameters are configured by sequentially arranging steering wheel rotation data, brake position data, accelerator position data, and other data following a header section in which all common data for the vehicle parameters is stored. Furthermore, each of the steering wheel rotation data, brake position data, and accelerator position data has a header section and a data storage section. Each data storage section stores one or more data values associated with each time, and in the example shown in A of FIG. 10, x data values (for example, data value 1a to data value xa at time a) are stored associated with each time from time a to time y.
 図10のBに示すように、車両設定パラメータは、車両設定パラメータの全体の共通データが格納されるヘッダ部に続いて、サスペンション設定データ、シート設定データ、反力設定データ、および、その他のデータが順に配置されて構成される。また、サスペンション設定データ、シート設定データ、および反力設定データには、それぞれヘッダ部およびデータ格納部が設けられている。そして、それぞれのデータ格納部には、各時刻に対応付けて1個または複数個のデータ値が格納され、図10のBに示す例では、時刻aから時刻yまでの各時刻に対応付けてx個のデータ値(例えば、時刻aではデータ値1a~データ値xa)が格納される。 As shown in FIG. 10B, the vehicle setting parameters are configured by sequentially arranging suspension setting data, seat setting data, reaction force setting data, and other data following a header section in which all common data for the vehicle setting parameters is stored. Furthermore, the suspension setting data, seat setting data, and reaction force setting data each have a header section and a data storage section. Each data storage section stores one or more data values associated with each time. In the example shown in FIG. 10B, x data values (for example, data value 1a to data value xa at time a) are stored in association with each time from time a to time y.
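The layout shown in FIG. 10 — a common header followed by data sections, each with its own header and a data store holding one or more data values per time — could be modeled as follows (the field names are illustrative, not from the specification):

```python
def build_parameter_block(common_header, sections):
    """Build a structure mirroring FIG. 10: an overall header, then one
    section per data type, each with its own header and a data store of
    per-time value lists. `sections` maps name -> {time: [value1..valuex]}."""
    block = {"header": common_header, "sections": []}
    for name, store in sections.items():   # insertion order = layout order
        block["sections"].append({
            "header": {"name": name, "times": sorted(store)},
            "data": store,
        })
    return block

# Vehicle-parameter example: one value per time for each of the three
# data types named in the text (FIG. 12 uses the same shape).
vehicle_params = build_parameter_block(
    {"type": "vehicle_parameters"},
    {
        "steering_rotation":    {"a": [10.0], "b": [12.5]},
        "brake_position":       {"a": [0.0],  "b": [0.3]},
        "accelerator_position": {"a": [0.5],  "b": [0.1]},
    },
)
```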
 ここでは、図11に示すように、ハンドルを切り過ぎたためにスピンするような挙動が発生した車両13の時刻T1乃至T4の各時刻において生成される車両パラメータおよび車両設定パラメータの一例が、図12に示されている。 In this example, FIG. 12 shows an example of vehicle parameters and vehicle setting parameters that are generated at times T1 to T4 for a vehicle 13 that has experienced a spinning behavior due to excessive steering as shown in FIG. 11.
 図12に示すように、ガレージシステム制御装置16に入力される車両パラメータでは、時刻Tごとに、アクセル位置データ、ブレーキ位置データ、およびハンドル回転データに対して、それぞれ1つのデータ値が格納される。 As shown in FIG. 12, the vehicle parameters input to the garage system control device 16 store one data value each for the accelerator position data, brake position data, and steering wheel rotation data for each time T.
 また、ガレージシステム制御装置16が車両13に対する制御を行うための車両設定パラメータでは、時刻Tごとに、サスペンション設定データに対して4個のデータ値(Front RIGHT , Front LEFT , Rear RIGHT , Rear LEFT)が格納される。同様に、シート設定データに対して3個のデータ値(Seat Control 1 , Seat Control 2 , Seat Control 3)が格納され、反力設定データに対して2個のデータ値(ハンドル反力、ブレーキ反力)が格納される。 Furthermore, in the vehicle setting parameters used by the garage system control device 16 to control the vehicle 13, four data values (Front RIGHT, Front LEFT, Rear RIGHT, Rear LEFT) are stored for the suspension setting data for each time T. Similarly, three data values (Seat Control 1, Seat Control 2, Seat Control 3) are stored for the seat setting data, and two data values (steering reaction force, brake reaction force) are stored for the reaction force setting data.
 図13のAに示すように、データ値Front RIGHTは、右前輪のサスペンションの高さを制御するためのサスペンション設定データであり、データ値Front LEFTは、左前輪のサスペンションの高さを制御するためのサスペンション設定データであり、データ値Rear RIGHTは、右後輪のサスペンションの高さを制御するためのサスペンション設定データであり、データ値Rear LEFTは、左後輪のサスペンションの高さを制御するためのサスペンション設定データである。そして、図13のBに示すように、左コーナリング時には、右側のサスペンションが下がるとともに、左側のサスペンションが上がるようにサスペンション設定データが設定される。また、図13のCに示すように、ブレーキング時には、前側のサスペンションが下がるとともに、後ろ側のサスペンションが上がるようにサスペンション設定データが設定される。また、図13のDに示すように、加速時には、後ろ側のサスペンションが下がるとともに、前側のサスペンションが上がるようにサスペンション設定データが設定される。 As shown in A of FIG. 13, the data value Front RIGHT is suspension setting data for controlling the suspension height of the right front wheel, the data value Front LEFT is suspension setting data for controlling the suspension height of the left front wheel, the data value Rear RIGHT is suspension setting data for controlling the suspension height of the right rear wheel, and the data value Rear LEFT is suspension setting data for controlling the suspension height of the left rear wheel. Then, as shown in B of FIG. 13, the suspension setting data is set so that the right suspension is lowered and the left suspension is raised during left cornering. Also, as shown in C of FIG. 13, the suspension setting data is set so that the front suspension is lowered and the rear suspension is raised during braking. Also, as shown in D of FIG. 13, the suspension setting data is set so that the rear suspension is lowered and the front suspension is raised during acceleration.
 図14のAに示すように、データ値Seat Control 1は、シート座面の前端の高さを制御するためのシート設定データであり、データ値Seat Control 2は、シート座面の後端の高さを制御するためのシート設定データであり、データ値Seat Control 3は、シート背面の前方または後方への傾きを制御するためのシート設定データである。そして、図14のBに示すように、定速走行時には、シート座面およびシート背面に対する制御が行われないように、シート設定データが設定される。また、図14のCに示すように、ブレーキング時には、シート座面の前端の高さが下がるようにシート設定データが設定され、シート座面の後端の高さが上がるようにシート設定データが設定され、シート背面が前方に傾くようにシート設定データが設定される。 As shown in A of FIG. 14, the data value Seat Control 1 is seat setting data for controlling the height of the front end of the seat cushion, the data value Seat Control 2 is seat setting data for controlling the height of the rear end of the seat cushion, and the data value Seat Control 3 is seat setting data for controlling the forward or rearward tilt of the seat back. Then, as shown in B of FIG. 14, the seat setting data is set so that no control is performed on the seat cushion and the seat back during constant speed driving. Also, as shown in C of FIG. 14, the seat setting data is set so that the height of the front end of the seat cushion is lowered, the height of the rear end of the seat cushion is raised, and the seat back is tilted forward during braking.
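The suspension and seat rules of FIGS. 13 and 14 can be sketched as simple state-to-setting mappings. The step size `delta` and the sign conventions (negative = lower / tilt forward) are assumptions for illustration:

```python
def suspension_heights(state, delta=1.0, base=0.0):
    """Per-wheel height offsets following FIG. 13: left cornering lowers
    the right side and raises the left; braking lowers the front and
    raises the rear; acceleration does the opposite of braking."""
    h = {"FR": base, "FL": base, "RR": base, "RL": base}
    if state == "left_cornering":
        h["FR"] -= delta; h["RR"] -= delta
        h["FL"] += delta; h["RL"] += delta
    elif state == "braking":
        h["FR"] -= delta; h["FL"] -= delta
        h["RR"] += delta; h["RL"] += delta
    elif state == "accelerating":
        h["RR"] -= delta; h["RL"] -= delta
        h["FR"] += delta; h["FL"] += delta
    return h  # constant-speed driving leaves the heights unchanged

def seat_settings(state, delta=1.0):
    """Seat Control 1-3 following FIG. 14: braking lowers the cushion's
    front edge, raises its rear edge, and tilts the backrest forward;
    constant-speed driving applies no control."""
    if state == "braking":
        return {"seat_control_1": -delta,   # cushion front edge down
                "seat_control_2": +delta,   # cushion rear edge up
                "seat_control_3": -delta}   # backrest forward (sign assumed)
    return {"seat_control_1": 0.0, "seat_control_2": 0.0, "seat_control_3": 0.0}

brake_h = suspension_heights("braking")
brake_seat = seat_settings("braking")
```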
 <運転シミュレーションの利用例>
 ここで、運転シミュレーションの第1および第2の利用例について説明する。
<Examples of using driving simulation>
Here, a first and a second application example of the driving simulation will be described.
 運転シミュレーションの第1の利用例として、運転シミュレーションシステム11では、実際の車両13に乗車した状態を実際の車両13で作り出せるので、特定条件下における正しい運転技法などを学ぶことに利用することができる。 As a first example of the use of driving simulation, the driving simulation system 11 can recreate the riding conditions of an actual vehicle 13 using the actual vehicle 13, and can therefore be used to learn correct driving techniques under specific conditions.
 例えば、豪雨の状況で路面が極めて滑りやすい状況を、乗員に運転シミュレーションをさせる場合、ガレージ装置14内のガレージディスプレイ41では車両13外の風景を模倣する映像を提示する。具体的には、40km/hで走っている場合の風景の一定方向の移動状態の映像を、天井面用のガレージディスプレイ41-1、正面用のガレージディスプレイ41-2、右側面用のガレージディスプレイ41-3、左側面用のガレージディスプレイ41-4、および背面用のガレージディスプレイ41-5に表示することにより、車両13の乗員が視覚的に、あたかもその場面を走っているかのような状態を作り出すことができる。 For example, when the occupants are made to simulate driving in a heavy rain situation where the road surface is extremely slippery, the garage display 41 in the garage device 14 presents an image that mimics the scenery outside the vehicle 13. Specifically, an image of the scenery moving in a certain direction when driving at 40 km/h is displayed on the garage display 41-1 for the ceiling surface, the garage display 41-2 for the front, the garage display 41-3 for the right side, the garage display 41-4 for the left side, and the garage display 41-5 for the rear, thereby creating a visual state in which the occupants of the vehicle 13 visually feel as if they are driving in that scene.
　その際、車両13に搭載されているバックミラーやサイドミラーに相当するサイドミラーディスプレイ52Lおよび52Rやバックミラーディスプレイ54に対しても、ガレージディスプレイ41と同様に走行しているような状態の映像を表示する。そして、車両13の乗員は、運転シミュレーションの指示に従って、与えられた場面で、例えば、自分の所有する実際の車両13で、ハンドル操作、アクセル操作、ブレーキ操作を実際に行うことができる。この際、車両13自体は、バイワイヤシステムを採用しているので、ハンドル操作をしても実際の方向舵は稼働することはなく、ブレーキ操作もブレーキパッドのピストンの動作も起きることはなく、それぞれの操作量を示す運転操作データがガレージシステム制御装置16に送信されることとなる。 At that time, the side mirror displays 52L and 52R and the rearview mirror display 54, which correspond to the side mirrors and rearview mirror mounted on the vehicle 13, also display images of the vehicle as if it were being driven, in the same way as the garage display 41. The occupants of the vehicle 13 can then follow the instructions of the driving simulation and, in a given scene, actually operate the steering wheel, accelerator, and brakes in, for example, their own actual vehicle 13. Because the vehicle 13 itself employs a by-wire system, turning the steering wheel does not move the actual steering gear, and pressing the brake pedal does not actuate the brake pad pistons; instead, driving operation data indicating the amount of each operation is transmitted to the garage system control device 16.
　ガレージシステム制御装置16は、運転操作データから得られた操作量に従って、ガレージディスプレイ41のみならず、車両側に搭載されたサイドミラーディスプレイ52Lおよび52Rやバックミラーディスプレイ54に対して、車両13の乗員が走行されているが如くの映像を生成して表示する。この際、ガレージシステム制御装置16は、運転を行っている場面を仮想空間内に構成し、そこに乗員の視線に相当する位置に仮想空間内で仮想カメラを置き、そこで撮影された映像を、ガレージディスプレイ41や、サイドミラーディスプレイ52Lおよび52R、バックミラーディスプレイ54に対して適切に表示することができる。この際、適切な表示映像を生成するために、ガレージディスプレイ41の位置に対して、車両13の位置姿勢データ、および、車両13に乗車している乗員の位置姿勢データが必要となる。 In accordance with the operation amounts obtained from the driving operation data, the garage system control device 16 generates and displays, not only on the garage display 41 but also on the vehicle-mounted side mirror displays 52L and 52R and the rearview mirror display 54, images that make the occupants of the vehicle 13 feel as if the vehicle were actually being driven. To do so, the garage system control device 16 constructs the driving scene in a virtual space, places a virtual camera in that space at a position corresponding to the occupant's line of sight, and can appropriately display the images captured there on the garage display 41, the side mirror displays 52L and 52R, and the rearview mirror display 54. To generate appropriate display images, position and orientation data of the vehicle 13 relative to the position of the garage display 41, as well as position and orientation data of the occupants riding in the vehicle 13, are required.
 また、車両13は、4輪で独立してハイトが変えられるサスペンション機能を備えることにより、運転シミュレーションの情報に従って4輪の高さを変えることができる。従って、カーブを曲がっている際のローリングや、急速なブレーキングを行った場合のダイビング動作などを再現することができ、例えば、雨の日にスリップした状態などを表現することができるなど、乗員に対してよりリアルな状況を提供すること可能となる。加えて、乗員が座っているシートに各種調整機構を備えることにより、それを場面に応じて適切に動かし、サスペンションのハイト機能のコントロールと協調動作することにより、乗員に加速度を感じさせることが可能になる。 Furthermore, the vehicle 13 is equipped with a suspension function that allows the height of each of the four wheels to be changed independently, so that the height of each of the four wheels can be changed according to information from the driving simulation. This makes it possible to reproduce rolling when going around a curve, or diving when braking rapidly, and to provide the occupants with a more realistic situation, for example, by expressing a state of slipping on a rainy day. In addition, by equipping the seats in which the occupants sit with various adjustment mechanisms, it is possible to move them appropriately according to the situation, and by working in coordination with the control of the suspension height function, it is possible to make the occupants feel the acceleration.
 この際、乗員による各種操作については、ガレージシステム制御装置16で操作ログとして残すことが可能となるので、これらをサーバなどで分析すれば、その乗員の運転技術のどこに問題があるのかを詳細に分析することができる。これは、実走行時に取得する事も可能であるが、その場合、その走行場面がどういう状態であるのかという場面の状態認識を正確に行う必要があるため、正確に運転技術を判断するにはハードルが高い場合があると想定される。 In this case, the garage system control device 16 can record various operations performed by the occupant as an operation log, and by analyzing these on a server or the like, it is possible to perform a detailed analysis of the problem with the occupant's driving technique. This can also be obtained during actual driving, but in that case, it is necessary to accurately recognize the state of the driving scene, and it is expected that there may be high hurdles to accurately judge the driving technique.
 これに対し、運転シミュレーションシステム11においては、あらかじめ、運転シミュレーションで提示する場面を想定することができ、その場面や特定状態において、乗員がどのように運転操作を行うのかというデータを取得すること可能となるので、より正確な判断を行う事が可能である。勿論、ここでも、別の装置で行う一般的な運転シミュレータと違って自分で所有する車両13を用いることによって、より正しいデータ収集を行うことができる。 In contrast, the driving simulation system 11 can envision in advance the scenes to be presented in the driving simulation, and can obtain data on how the occupants will drive in those scenes or in specific conditions, making it possible to make more accurate judgments. Of course, here too, by using the vehicle 13 that the driver owns, rather than a general driving simulator that uses a separate device, more accurate data can be collected.
 運転シミュレーションの第2の利用例として、危険回避など特定の場面において、理想的な運転技術を学習してもらうことに利用することができる。 The second use case for driving simulation is to help people learn ideal driving techniques for specific situations, such as avoiding danger.
 例えば、車両13は、ハンドル21や、アクセルペダル22、ブレーキペダル23などは、バイワイヤであることと同時にそれぞれが、各操作に対してアクチュエーターなどを利用して反力を生じることができるので、それを効果的に運転技術の学習に活用することができる。 For example, the steering wheel 21, accelerator pedal 22, brake pedal 23, etc. of the vehicle 13 are by-wire, and at the same time, each of them can generate a reaction force for each operation using an actuator, etc., which can be effectively used to learn driving techniques.
 具体的には、ある場面において、ハンドル21や、アクセルペダル22、ブレーキペダル23の操作を行う場合、正しい運転となる教師データとかけ離れた操作を乗員が行おうとした場合、例えば、ハンドル21に大きな反力を付けることにより、正しいハンドル操作を乗員に体感してもらうようにする。同様に、ブレーキ操作やアクセル操作に対しても反力を付けることにより、乗員に正しい運転操作を体感してもらい、激しい雨の場合の正しい運転操作を身に着けてもらう事が可能となる。 Specifically, in a certain situation, when an occupant operates the steering wheel 21, accelerator pedal 22, or brake pedal 23 in a manner that is far removed from the teaching data that indicates correct driving, a large reaction force can be applied to the steering wheel 21, for example, to allow the occupant to experience correct steering operation. Similarly, by applying a reaction force to braking and accelerator operation, the occupant can experience correct driving operation, and can learn correct driving operation in heavy rain.
 ここでも、一般的な運転シミュレーションと違って、自分の所有している車両13でその体験を行うことができるので、極めて高効率な運転技術の学習が期待される。また、この際、乗員の運転操作技術の技量のレベルによって、この教師データも変更可能とする。 Again, unlike general driving simulations, the experience can be carried out in one's own vehicle 13, so it is expected that extremely efficient learning of driving techniques can be achieved. In addition, the teacher data can be changed depending on the skill level of the occupant's driving operation technique.
 即ち、雨の日など路面が滑りやすい場面で、危険回避を行う技術を身に着ける場合、乗員の技量が極めて高いと、上述した第1の利用例のように運転シミュレーション時に判断が付いている場合、危険回避時にカウンターを当てるようなハンドリング操作やブレーキング、アクセル操作を、それぞれの反力により体感して学習してもらうことが期待される。一方で、技量が比較的高くない乗員だと判断した場合は、最小限のハンドル操作、ブレーキング、アクセル操作を学習させるように、それぞれの反力により体感して学習してもらうことが期待される。 In other words, when learning how to avoid danger on slippery roads, such as on rainy days, if the driver's skill is extremely high and this is determined during the driving simulation as in the first use case described above, it is expected that the driver will learn how to counteract handling, braking, and accelerating operations when avoiding danger by experiencing the reaction forces of each. On the other hand, if it is determined that the driver's skill is relatively low, it is expected that the driver will learn how to perform minimal steering, braking, and accelerating operations by experiencing the reaction forces of each.
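The teaching scheme above — a reaction force that grows with the occupant's deviation from the teacher data, where the teacher data itself depends on the occupant's skill level — might be sketched as follows. The gain, the clamp, and the steering angle values are illustrative assumptions, not values from the embodiment:

```python
def reaction_force(operation, teacher_value, gain=1.0, max_force=10.0):
    """Reaction force proportional to how far the occupant's operation
    deviates from the teacher data, clamped to an assumed actuator limit."""
    force = gain * abs(operation - teacher_value)
    return min(force, max_force)

def teacher_steering(skill_level, counter_steer=30.0, minimal=10.0):
    """Skill-dependent teacher data: a highly skilled occupant is taught a
    counter-steer style maneuver, a less skilled one minimal steering."""
    return counter_steer if skill_level == "high" else minimal

# Same occupant input (18 degrees of steering), two different teachers:
force_novice = reaction_force(operation=18.0, teacher_value=teacher_steering("low"))
force_expert = reaction_force(operation=18.0, teacher_value=teacher_steering("high"))
```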
 <運転シミュレーションの処理例>
 図15乃至図21を参照して、運転シミュレーションシステム11において実行される運転シミュレーションの処理例について説明する。
<Example of driving simulation processing>
15 to 21, a process example of the driving simulation executed in the driving simulation system 11 will be described.
 図15は、ガレージシステム12側の初期セットアップ処理を説明するフローチャートである。 Figure 15 is a flowchart explaining the initial setup process on the garage system 12 side.
 例えば、ユーザが、ガレージシステム制御装置16のユーザ設定取得部70に対して、運転シミュレーションの開始を指示する操作入力を行うとガレージシステム12側の初期セットアップ処理が開始される。 For example, when a user performs an operation input to the user setting acquisition unit 70 of the garage system control device 16 to instruct the start of a driving simulation, the initial setup process on the garage system 12 side is started.
 ステップS11において、ガレージシステム制御装置16の通信部64は、車両内制御装置18の通信部97および通信部122との間の通信が可能であるか否かを判定し、車両内制御装置18の通信部97および通信部122との間の通信が可能であると判定するまで処理を待機する。そして、通信部64が、車両内制御装置18の通信部97および通信部122との間の通信が可能であると判定した場合、処理はステップS12に進む。 In step S11, the communication unit 64 of the garage system control device 16 determines whether communication is possible between the communication unit 97 and the communication unit 122 of the in-vehicle control device 18, and waits until it determines that communication is possible between the communication unit 97 and the communication unit 122 of the in-vehicle control device 18. Then, if the communication unit 64 determines that communication is possible between the communication unit 97 and the communication unit 122 of the in-vehicle control device 18, the process proceeds to step S12.
 ステップS12において、ガレージシステム制御装置16の通信部64は、車両内制御装置18の通信部97および通信部122との間の通信を開始する。 In step S12, the communication unit 64 of the garage system control device 16 starts communication between the communication unit 97 and the communication unit 122 of the in-vehicle control device 18.
 ステップS13において、ガレージシステム制御装置16の車両位置姿勢認識部63は、車両13の位置および姿勢を認識する車両位置姿勢認識処理(図19参照)を実行して、車両13の位置姿勢データを取得する。 In step S13, the vehicle position and attitude recognition unit 63 of the garage system control device 16 executes a vehicle position and attitude recognition process (see FIG. 19) to recognize the position and attitude of the vehicle 13, and obtains position and attitude data of the vehicle 13.
 ステップS14において、ガレージシステム制御装置16の通信部64は、車両内制御装置18の通信部97から送信されてくる車両データ(運転シミュレーションを実行するのに必要となる車両13に関する各種のデータ)を受信して、映像変換処理部73に供給する。 In step S14, the communication unit 64 of the garage system control device 16 receives the vehicle data (various data related to the vehicle 13 required to perform the driving simulation) sent from the communication unit 97 of the in-vehicle control device 18, and supplies it to the video conversion processing unit 73.
 ステップS15において、ガレージシステム制御装置16の通信部64は、車両内制御装置18の通信部97から送信されてくる車両13内の乗員の位置姿勢データを受信して、映像変換処理部73に供給する。 In step S15, the communication unit 64 of the garage system control device 16 receives the position and posture data of the occupants in the vehicle 13 transmitted from the communication unit 97 of the in-vehicle control device 18, and supplies it to the image conversion processing unit 73.
 ステップS16において、ガレージシステム制御装置16の通信部64は、車両13のディスプレイシステムやオーディオシステムなどをセットアップするためのセットアップパラメータを車両内制御装置18へ送信する。 In step S16, the communication unit 64 of the garage system control device 16 transmits setup parameters for setting up the display system, audio system, etc. of the vehicle 13 to the in-vehicle control device 18.
 ステップS17において、ガレージシステム制御装置16の通信部64は、車両13のセットアップが完了したか否かを判定し、車両13のセットアップが完了したと判定するまで処理を待機する。例えば、通信部64は、後述する図16のステップS32で車両内制御装置18の通信部122から送信されてくる車両セットアップ完了通知を受信すると、車両13のセットアップが完了したと判定し、ガレージシステム12側の初期セットアップ処理は終了される。 In step S17, the communication unit 64 of the garage system control device 16 determines whether the setup of the vehicle 13 is complete, and waits until it is determined that the setup of the vehicle 13 is complete. For example, when the communication unit 64 receives a vehicle setup completion notification sent from the communication unit 122 of the in-vehicle control device 18 in step S32 of FIG. 16 described below, it determines that the setup of the vehicle 13 is complete, and the initial setup process on the garage system 12 side is terminated.
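The garage-side setup sequence of steps S11 to S17 can be sketched as a simple polling loop. The following Python is an illustrative sketch only: the `link` object, its method names, and the timeout values are assumptions and do not appear in the disclosure.

```python
import time

def garage_initial_setup(link, max_wait_s=30.0, poll_s=0.1):
    """Hypothetical sketch of steps S11-S17 on the garage system side."""
    deadline = time.monotonic() + max_wait_s
    # S11: wait until communication with the in-vehicle control device is possible
    while not link.is_available():
        if time.monotonic() > deadline:
            raise TimeoutError("in-vehicle control device unreachable")
        time.sleep(poll_s)
    link.open()                                    # S12: start communication
    pose = link.recognize_vehicle_pose()           # S13: vehicle position/attitude
    vehicle_data = link.receive("vehicle_data")    # S14: data needed for the simulation
    occupant_pose = link.receive("occupant_pose")  # S15: occupant position/attitude
    link.send("setup_params", {"display": "on", "audio": "on"})  # S16 (illustrative values)
    # S17: block until the vehicle reports that its setup is complete
    while not link.setup_complete():
        time.sleep(poll_s)
    return pose, vehicle_data, occupant_pose
```

The blocking waits in S11 and S17 are modeled here as polling loops with a timeout; the disclosure itself does not specify how the waiting is implemented.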
 図16は、車両13側の初期セットアップ処理を説明するフローチャートである。 FIG. 16 is a flowchart explaining the initial setup process on the vehicle 13 side.
 例えば、車両内制御装置18の起動に伴って車両13側の初期セットアップ処理が開始され、その開始時点では、車両13は走行モードに設定されている。 For example, when the in-vehicle control device 18 is started, the initial setup process on the vehicle 13 side is started, and at the start of the process, the vehicle 13 is set to a driving mode.
 ステップS21において、停車状態判断部99は、走行モードからシミュレーションモードへの切り替えを行うか否かを判定する。例えば、停車状態判断部99は、走行モードからシミュレーションモードへの切り替えを指示するユーザ設定値がユーザ設定取得部98から供給されると、走行モードからシミュレーションモードへの切り替えを行うと判定することができる。 In step S21, the vehicle stop state determination unit 99 determines whether or not to switch from the driving mode to the simulation mode. For example, when a user setting value instructing the switching from the driving mode to the simulation mode is supplied from the user setting acquisition unit 98, the vehicle stop state determination unit 99 can determine that the switching from the driving mode to the simulation mode will be performed.
 ステップS21において、停車状態判断部99が、走行モードからシミュレーションモードへの切り替えを行わないと判定した場合、処理はステップS22に進み、車両13の走行モードが継続される。その後、処理はステップS21に戻り、以下、同様の処理が繰り返して行われる。 If the vehicle stop state determination unit 99 determines in step S21 that the driving mode should not be switched to the simulation mode, the process proceeds to step S22, and the vehicle 13 continues in the driving mode. After that, the process returns to step S21, and the same process is repeated.
 一方、ステップS21において、停車状態判断部99が、走行モードからシミュレーションモードへの切り替えを行うと判定した場合、処理はステップS23に進む。 On the other hand, if the stopped state determination unit 99 determines in step S21 that the mode should be switched from the driving mode to the simulation mode, the process proceeds to step S23.
 ステップS23において、停車状態判断部99は、車両外部環境認識部96から供給される車両外部環境データに基づいて車両13が停車状態であるか否かを判定し、車両13が停車状態であると判定するまで処理を待機する。そして、停車状態判断部99が、車両13が停車状態であると判定した場合、処理はステップS24に進む。 In step S23, the vehicle stop state determination unit 99 determines whether the vehicle 13 is stopped based on the vehicle external environment data supplied from the vehicle external environment recognition unit 96, and waits until it determines that the vehicle 13 is stopped. Then, if the vehicle stop state determination unit 99 determines that the vehicle 13 is stopped, the process proceeds to step S24.
 ステップS24において、車両内制御装置18の通信部97および通信部122は、ガレージシステム制御装置16の通信部64との間の通信が可能であるか否かを判定し、ガレージシステム制御装置16の通信部64との間の通信が可能であると判定するまで処理を待機する。そして、通信部97および通信部122が、ガレージシステム制御装置16の通信部64との間の通信が可能であると判定した場合、処理はステップS25に進む。 In step S24, the communication unit 97 and the communication unit 122 of the in-vehicle control device 18 determine whether communication with the communication unit 64 of the garage system control device 16 is possible, and waits until it is determined that communication with the communication unit 64 of the garage system control device 16 is possible. Then, if the communication unit 97 and the communication unit 122 determine that communication with the communication unit 64 of the garage system control device 16 is possible, the process proceeds to step S25.
 ステップS25において、車両内制御装置18の通信部97および通信部122は、ガレージシステム制御装置16の通信部64との間の通信を開始する。 In step S25, the communication unit 97 and the communication unit 122 of the in-vehicle control device 18 start communication with the communication unit 64 of the garage system control device 16.
 ステップS26において、車両内制御装置18の通信部97は、車両データをガレージシステム制御装置16の通信部64に送信する。 In step S26, the communication unit 97 of the in-vehicle control device 18 transmits the vehicle data to the communication unit 64 of the garage system control device 16.
 ステップS27において、車両内部環境認識部95は、車両13内の乗員の位置および姿勢を認識する乗員位置姿勢認識処理(図20参照)を実行して、車両13内の乗員の位置姿勢データを取得する。 In step S27, the vehicle interior environment recognition unit 95 executes an occupant position and posture recognition process (see FIG. 20) to recognize the positions and postures of occupants inside the vehicle 13, and obtains position and posture data of the occupants inside the vehicle 13.
 ステップS28において、車両内部環境認識部95は、ステップS27の乗員位置姿勢認識処理で取得した車両13内の乗員の位置姿勢データを通信部97に供給し、通信部97は、車両13内の乗員の位置姿勢データをガレージシステム制御装置16へ送信する。 In step S28, the vehicle interior environment recognition unit 95 supplies the position and attitude data of the occupant inside the vehicle 13 acquired in the occupant position and attitude recognition process in step S27 to the communication unit 97, and the communication unit 97 transmits the position and attitude data of the occupant inside the vehicle 13 to the garage system control device 16.
 ステップS29において、車両内制御装置18の通信部122は、図15のステップS16でガレージシステム制御装置16から送信されてくるセットアップパラメータを受信する。 In step S29, the communication unit 122 of the in-vehicle control device 18 receives the setup parameters sent from the garage system control device 16 in step S16 of FIG. 15.
 ステップS30において、車両内制御装置18の通信部122は、ステップS29で受信したセットアップパラメータを車両13のディスプレイシステムやオーディオシステムなどに供給し、それらのセットアップを実行させる。 In step S30, the communication unit 122 of the in-vehicle control device 18 supplies the setup parameters received in step S29 to the display system, audio system, etc. of the vehicle 13, and executes their setup.
 ステップS31において、バイワイヤシステムへの切り替えが行われ、走行モードコントロール部104による制御から、シミュレーションモードコントロール部105による制御へ移行する。 In step S31, switching to the by-wire system is performed, and control is shifted from the driving mode control unit 104 to the simulation mode control unit 105.
 ステップS32において、車両内制御装置18の通信部97および通信部122は、車両13のセットアップが完了したことを通知する車両セットアップ完了通知をガレージシステム制御装置16へ送信し、車両13側の初期セットアップ処理は終了される。 In step S32, the communication unit 97 and the communication unit 122 of the in-vehicle control device 18 send a vehicle setup completion notification to the garage system control device 16 to notify that the setup of the vehicle 13 has been completed, and the initial setup process on the vehicle 13 side is terminated.
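The mode-switch guard of steps S21 and S23, in which the transition from the driving mode to the simulation mode is made only once the vehicle is judged to be stopped, can be sketched as follows. The function name, the speed threshold, and the use of a speed value to judge the stopped state are illustrative assumptions; the disclosure judges the stopped state from the vehicle external environment data.

```python
def decide_mode(current_mode, user_requested_simulation, vehicle_speed_kmh, eps=0.1):
    """Hypothetical guard for steps S21-S23: switch from the driving mode to the
    simulation mode only when the user has requested it AND the vehicle is stopped."""
    if current_mode == "driving" and user_requested_simulation:
        if abs(vehicle_speed_kmh) < eps:  # S23: stopped-state check (illustrative)
            return "simulation"
        return "driving"  # keep waiting until the vehicle actually stops
    return current_mode   # S22: driving mode continues
```

Keeping the switch conditional on the stopped state ensures the by-wire controls are never handed to the simulation mode control unit 105 while the vehicle is moving.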
 図17は、ガレージシステム12側の運転シミュレーション動作処理を説明するフローチャートである。 FIG. 17 is a flowchart explaining the driving simulation operation processing on the garage system 12 side.
 例えば、図15のガレージシステム12側の初期セットアップ処理が終了すると、ガレージシステム12側の運転シミュレーション動作処理が開始される。 For example, when the initial setup process on the garage system 12 side in FIG. 15 is completed, the driving simulation operation process on the garage system 12 side is started.
 ステップS41において、ガレージシステム制御装置16の通信部64は、車両パラメータを受信したか否かを判定し、車両パラメータを受信したと判定するまで処理を待機する。例えば、通信部64は、後述する図18のステップS69で車両内制御装置18の通信部97から送信されてくる車両パラメータを受信すると、車両パラメータに含まれる運転操作データを仮想空間生成部71に供給し、車両パラメータに含まれる車両13内の乗員の位置姿勢データを映像変換処理部73に供給する。そして、通信部64は、車両パラメータを受信したと判定し、処理はステップS42に進む。 In step S41, the communication unit 64 of the garage system control device 16 determines whether or not the vehicle parameters have been received, and waits to process until it is determined that the vehicle parameters have been received. For example, when the communication unit 64 receives the vehicle parameters transmitted from the communication unit 97 of the in-vehicle control device 18 in step S69 of FIG. 18 described below, it supplies the driving operation data contained in the vehicle parameters to the virtual space generation unit 71, and supplies the position and posture data of the occupants in the vehicle 13 contained in the vehicle parameters to the image conversion processing unit 73. The communication unit 64 then determines that the vehicle parameters have been received, and the process proceeds to step S42.
 ステップS42において、仮想空間生成部71は、ステップS41で供給された運転操作データに従った運転シミュレーションで求められる車両13の挙動(コーナリングや、加速、減速など)に基づいて、上述の図13を参照して説明したように、サスペンション設定データを生成する。 In step S42, the virtual space generation unit 71 generates suspension setting data as described above with reference to FIG. 13, based on the behavior of the vehicle 13 (cornering, acceleration, deceleration, etc.) determined by the driving simulation according to the driving operation data supplied in step S41.
 ステップS43において、仮想空間生成部71は、ステップS41で供給された運転操作データに従った運転シミュレーションで求められる車両13の挙動(定速走行時や、ブレーキング時など)に基づいて、上述の図14を参照して説明したように、シート設定データを生成する。 In step S43, the virtual space generation unit 71 generates seat setting data as described above with reference to FIG. 14, based on the behavior of the vehicle 13 (when driving at a constant speed, when braking, etc.) determined by a driving simulation according to the driving operation data supplied in step S41.
 ステップS44において、仮想空間生成部71は、ステップS41で供給された運転操作データに従った運転シミュレーションで求められる車両13の挙動に基づいて、ハンドル21に発生するハンドル反力、アクセルペダル22に発生するアクセル反力、および、ブレーキペダル23に発生するブレーキ反力を求める。そして、仮想空間生成部71は、ハンドル反力、アクセル反力、およびブレーキ反力が格納される反力設定データを生成する。 In step S44, the virtual space generation unit 71 calculates the steering reaction force generated in the steering wheel 21, the accelerator reaction force generated in the accelerator pedal 22, and the brake reaction force generated in the brake pedal 23 based on the behavior of the vehicle 13 determined by a driving simulation in accordance with the driving operation data supplied in step S41. The virtual space generation unit 71 then generates reaction force setting data in which the steering reaction force, accelerator reaction force, and brake reaction force are stored.
 ステップS45において、仮想空間生成部71は、ステップS42で生成したサスペンション設定データ、ステップS43で生成したシート設定データ、および、ステップS44で生成した反力設定データにより構成される車両設定パラメータを通信部64に供給する。そして、通信部64は、その車両設定パラメータを車両内制御装置18へ送信する。 In step S45, the virtual space generation unit 71 supplies the vehicle setting parameters, which are composed of the suspension setting data generated in step S42, the seat setting data generated in step S43, and the reaction force setting data generated in step S44, to the communication unit 64. The communication unit 64 then transmits the vehicle setting parameters to the in-vehicle control device 18.
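Steps S42 to S45 derive three kinds of setting data from the simulated vehicle behavior and bundle them into the vehicle setting parameters. The sketch below illustrates one possible mapping; every gain, clamp, and field name is an assumption for illustration and is not taken from the disclosure.

```python
def make_vehicle_setting_parameters(sim_state):
    """Hypothetical sketch of steps S42-S45: derive suspension, seat, and
    reaction-force setting data from the simulated behavior of the vehicle."""
    ax = sim_state["longitudinal_accel"]    # m/s^2, positive = accelerating
    ay = sim_state["lateral_accel"]         # m/s^2, positive = leftward
    steering = sim_state["steering_angle"]  # rad

    # S42: tilt the body by adjusting each suspension height (arbitrary gains)
    suspension = {
        "front": -2.0 * ax,  # dive under braking (ax < 0 raises the front)
        "rear":   2.0 * ax,  # squat under acceleration
        "left":  -1.5 * ay,
        "right":  1.5 * ay,
    }
    # S43: seat tilt conveying acceleration/braking cues, clamped to a safe range
    seat = {"pitch_deg": max(-10.0, min(10.0, 1.2 * ax))}
    # S44: reaction forces resisting the driver's steering and pedal inputs
    reaction = {
        "steering_nm": 3.0 * steering + 0.5 * ay,
        "accel_pedal_n": 20.0,
        "brake_pedal_n": 30.0 + 5.0 * max(0.0, -ax),
    }
    # S45: bundle into the vehicle setting parameters sent to the vehicle side
    return {"suspension": suspension, "seat": seat, "reaction": reaction}
```

A braking state (negative longitudinal acceleration), for example, raises the front suspension, pitches the seat forward, and stiffens the brake-pedal reaction force together.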
 ステップS46において、仮想空間生成部71は、ステップS41で通信部64から供給された運転操作データに含まれるハンドル回転データに基づくレンダリングパラメータを生成する。 In step S46, the virtual space generation unit 71 generates rendering parameters based on the steering wheel rotation data included in the driving operation data supplied from the communication unit 64 in step S41.
 ステップS47において、仮想空間生成部71は、ステップS41で通信部64から供給された運転操作データに含まれるブレーキ位置データに基づくレンダリングパラメータを生成する。 In step S47, the virtual space generation unit 71 generates rendering parameters based on the brake position data included in the driving operation data supplied from the communication unit 64 in step S41.
 ステップS48において、仮想空間生成部71は、ステップS41で通信部64から供給された運転操作データに含まれるアクセル位置データに基づくレンダリングパラメータを生成する。 In step S48, the virtual space generation unit 71 generates rendering parameters based on the accelerator position data included in the driving operation data supplied from the communication unit 64 in step S41.
 ステップS49において、仮想空間生成部71は、仮想空間記憶部65から仮想空間データを読み出して生成した仮想空間内に、3DCG生成部67から供給されるオブジェクトを配置する。そして、仮想空間生成部71は、ステップS46乃至S48で生成した各レンダリングパラメータに基づいて、その仮想空間をレンダリングし、車両13用の映像ストリームおよびガレージ用の映像ストリームを生成する。 In step S49, the virtual space generation unit 71 places the objects supplied from the 3DCG generation unit 67 in the virtual space generated by reading the virtual space data from the virtual space storage unit 65. The virtual space generation unit 71 then renders that virtual space based on the rendering parameters generated in steps S46 to S48, and generates a video stream for the vehicle 13 and a video stream for the garage.
 ステップS50において、仮想空間生成部71は、ステップS49で生成した車両13用の映像ストリームを信号多重化部72へ出力する。そして、車両13用の映像ストリームは、信号多重化部72および通信部64を介して、車両13へ送信される。 In step S50, the virtual space generation unit 71 outputs the video stream for the vehicle 13 generated in step S49 to the signal multiplexing unit 72. The video stream for the vehicle 13 is then transmitted to the vehicle 13 via the signal multiplexing unit 72 and the communication unit 64.
 ステップS51において、仮想空間生成部71は、ステップS49で生成したガレージ用の映像ストリームを映像変換処理部73へ出力する。そして、映像変換処理部73は、車両位置姿勢認識部63から供給される車両13の位置姿勢データ、および、ステップS41で通信部64から供給された車両13内の乗員の位置姿勢データに基づいて、映像変換処理を行う。 In step S51, the virtual space generation unit 71 outputs the video stream for the garage generated in step S49 to the video conversion processing unit 73. The video conversion processing unit 73 then performs video conversion processing based on the position and orientation data of the vehicle 13 supplied from the vehicle position and orientation recognition unit 63 and the position and orientation data of the occupants in the vehicle 13 supplied from the communication unit 64 in step S41.
 ステップS52において、映像変換処理部73は、ステップS51で映像変換処理を施したガレージ用の映像ストリームを出力する。即ち、映像変換処理部73は、天井面用の映像ストリームを天井面用映像送信部74に供給して天井面用のガレージディスプレイ41-1に天井面用のシミュレーション映像を表示させる。以下、同様に、映像変換処理部73は、正面用の映像ストリームを正面用のガレージディスプレイ41-2に表示させ、右側面用のシミュレーション映像を右側面用のガレージディスプレイ41-3に表示させ、左側面用のシミュレーション映像を左側面用のガレージディスプレイ41-4に表示させ、背面用のシミュレーション映像を背面用のガレージディスプレイ41-5に表示させ、床面用のシミュレーション映像を床面用のガレージディスプレイ41-6に表示させる。その後、処理はステップS41に戻り、以下、同様の処理が繰り返して行われる。 In step S52, the video conversion processing unit 73 outputs the video stream for the garage that has been subjected to the video conversion processing in step S51. That is, the video conversion processing unit 73 supplies the video stream for the ceiling surface to the video transmission unit 74 for the ceiling surface, and causes the simulation video for the ceiling surface to be displayed on the garage display 41-1 for the ceiling surface. Similarly, the video conversion processing unit 73 causes the video stream for the front surface to be displayed on the garage display 41-2 for the front surface, the simulation video for the right side surface to be displayed on the garage display 41-3 for the right side surface, the simulation video for the left side surface to be displayed on the garage display 41-4 for the left side surface, the simulation video for the rear surface to be displayed on the garage display 41-5 for the rear surface, and the simulation video for the floor surface to be displayed on the garage display 41-6 for the floor surface. After that, the process returns to step S41, and the same process is repeated.
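One iteration of the garage-side loop of steps S41 to S52 can be summarized as receive, simulate, send back, render, and reproject. The sketch below is illustrative only: the `link`, `sim`, and `renderer` objects and their method names are assumptions introduced for this example.

```python
def simulation_frame(link, sim, renderer):
    """Hypothetical single iteration of the garage-side loop (steps S41-S52)."""
    params = link.receive_vehicle_parameters()        # S41 (blocks until data arrives)
    behavior = sim.step(params["driving_operation"])  # advance the driving simulation
    link.send_vehicle_settings(behavior["settings"])  # S42-S45: suspension/seat/reaction data
    # S46-S49: render the shared virtual space once for each destination
    vehicle_stream, garage_stream = renderer.render(behavior)
    link.send_vehicle_stream(vehicle_stream)          # S50: stream shown inside the vehicle
    # S51-S52: convert the garage stream for the occupant's viewpoint and display it
    return renderer.reproject(garage_stream, params["occupant_pose"])
```

The point of the structure is that a single simulation step drives both the physical feedback (S42 to S45) and the two rendered streams (S49 to S52) from the same driving operation data.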
 図18は、車両13側の運転シミュレーション動作処理を説明するフローチャートである。 FIG. 18 is a flowchart explaining the driving simulation operation process on the vehicle 13 side.
 例えば、図16の車両13側の初期セットアップ処理が終了すると、車両13側の運転シミュレーション動作処理が開始される。 For example, when the initial setup process on the vehicle 13 side in FIG. 16 is completed, the driving simulation operation process on the vehicle 13 side is started.
 ステップS61において、ハンドル回転データ取得部101は、車両13の運転者によるハンドル21に対する回転操作が行われたか否かを判定し、車両13の運転者によるハンドル21に対する回転操作が行われたと判定するまで処理を待機する。そして、ステップS61において、ハンドル回転データ取得部101が、車両13の運転者によるハンドル21に対する回転操作が行われたと判定すると、処理はステップS62に進む。 In step S61, the steering wheel rotation data acquisition unit 101 determines whether or not the driver of the vehicle 13 has rotated the steering wheel 21, and waits until it determines that the driver of the vehicle 13 has rotated the steering wheel 21. Then, in step S61, if the steering wheel rotation data acquisition unit 101 determines that the driver of the vehicle 13 has rotated the steering wheel 21, the process proceeds to step S62.
 ステップS62において、ハンドル回転データ取得部101は、車両13の運転者によるハンドル21に対する回転操作に従ったハンドル回転データを取得し、運転操作検出部103に供給する。 In step S62, the steering wheel rotation data acquisition unit 101 acquires steering wheel rotation data in accordance with the rotation operation of the steering wheel 21 by the driver of the vehicle 13, and supplies the data to the driving operation detection unit 103.
 ステップS63において、ブレーキ位置データ取得部100は、車両13の運転者によるブレーキペダル23に対する踏み込み操作が行われたか否かを判定し、車両13の運転者によるブレーキペダル23に対する踏み込み操作が行われたと判定するまで処理を待機する。そして、ステップS63において、ブレーキ位置データ取得部100が、車両13の運転者によるブレーキペダル23に対する踏み込み操作が行われたと判定すると、処理はステップS64に進む。 In step S63, the brake position data acquisition unit 100 determines whether or not the driver of the vehicle 13 has depressed the brake pedal 23, and waits until it determines that the driver of the vehicle 13 has depressed the brake pedal 23. Then, in step S63, if the brake position data acquisition unit 100 determines that the driver of the vehicle 13 has depressed the brake pedal 23, the process proceeds to step S64.
 ステップS64において、ブレーキ位置データ取得部100は、車両13の運転者によるブレーキペダル23に対する踏み込み操作に従ったブレーキ位置データを取得し、運転操作検出部103に供給する。 In step S64, the brake position data acquisition unit 100 acquires brake position data according to the brake pedal 23 depression operation by the driver of the vehicle 13, and supplies the data to the driving operation detection unit 103.
 ステップS65において、アクセル位置データ取得部102は、車両13の運転者によるアクセルペダル22に対する踏み込み操作が行われたか否かを判定し、車両13の運転者によるアクセルペダル22に対する踏み込み操作が行われたと判定するまで処理を待機する。そして、ステップS65において、アクセル位置データ取得部102が、車両13の運転者によるアクセルペダル22に対する踏み込み操作が行われたと判定すると、処理はステップS66に進む。 In step S65, the accelerator position data acquisition unit 102 determines whether or not the driver of the vehicle 13 has depressed the accelerator pedal 22, and waits until it determines that the driver of the vehicle 13 has depressed the accelerator pedal 22. Then, in step S65, if the accelerator position data acquisition unit 102 determines that the driver of the vehicle 13 has depressed the accelerator pedal 22, the process proceeds to step S66.
 ステップS66において、アクセル位置データ取得部102は、車両13の運転者によるアクセルペダル22に対する踏み込み操作に従ったアクセル位置データを取得し、運転操作検出部103に供給する。 In step S66, the accelerator position data acquisition unit 102 acquires accelerator position data according to the depression of the accelerator pedal 22 by the driver of the vehicle 13, and supplies the data to the driving operation detection unit 103.
 ステップS67において、車両内部環境認識部95は、車両13内の乗員の位置および姿勢を認識したか否かを判定し、車両13内の乗員の位置および姿勢を認識したと判定するまで処理を待機する。そして、ステップS67において、車両内部環境認識部95が、車両13内の乗員の位置および姿勢を認識したと判定すると、処理はステップS68に進む。 In step S67, the vehicle interior environment recognition unit 95 determines whether or not it has recognized the position and posture of the occupant inside the vehicle 13, and waits until it determines that it has recognized the position and posture of the occupant inside the vehicle 13. Then, in step S67, if the vehicle interior environment recognition unit 95 determines that it has recognized the position and posture of the occupant inside the vehicle 13, the process proceeds to step S68.
 ステップS68において、車両内部環境認識部95は、車両13内の乗員の位置姿勢データを生成する。 In step S68, the vehicle interior environment recognition unit 95 generates position and posture data of the occupants inside the vehicle 13.
 ステップS69において、運転操作検出部103は、車両13の運転者の運転操作の内容を示す運転操作データ(ブレーキ位置データ、ハンドル回転データ、およびアクセル位置データ)を通信部97に供給する。また、車両内部環境認識部95は、車両13内の乗員の位置姿勢データを通信部97に供給する。そして、通信部97は、運転操作データと車両13内の乗員の位置姿勢データとを含む車両パラメータを、ガレージシステム制御装置16に送信する。 In step S69, the driving operation detection unit 103 supplies driving operation data (brake position data, steering wheel rotation data, and accelerator position data) indicating the content of the driving operation of the driver of the vehicle 13 to the communication unit 97. In addition, the vehicle internal environment recognition unit 95 supplies position and posture data of the occupants in the vehicle 13 to the communication unit 97. The communication unit 97 then transmits vehicle parameters including the driving operation data and the position and posture data of the occupants in the vehicle 13 to the garage system control device 16.
 ステップS70において、通信部97は、車両設定パラメータを受信したか否かを判定し、車両設定パラメータを受信したと判定するまで処理を待機する。そして、上述した図17のステップS45でガレージシステム制御装置16の通信部64が車両設定パラメータを送信すると、通信部97は、その車両設定パラメータをシミュレーションモードコントロール部105に供給し、車両設定パラメータを受信したと判定して、処理はステップS71に進む。 In step S70, the communication unit 97 determines whether the vehicle setting parameters have been received, and waits until it determines that the vehicle setting parameters have been received. Then, when the communication unit 64 of the garage system control device 16 transmits the vehicle setting parameters in step S45 of FIG. 17 described above, the communication unit 97 supplies the vehicle setting parameters to the simulation mode control unit 105, determines that the vehicle setting parameters have been received, and the process proceeds to step S71.
 ステップS71において、シミュレーションモードコントロール部105は、通信部97から供給された車両設定パラメータに含まれているサスペンション設定データに従って、サスペンション制御データを生成してサスペンション制御部110に供給する。これにより、サスペンション制御部110は、図13を参照して説明したようにサスペンションの高さを制御し、仮想空間内での車両13の挙動を再現することができる。 In step S71, the simulation mode control unit 105 generates suspension control data in accordance with the suspension setting data included in the vehicle setting parameters supplied from the communication unit 97, and supplies the data to the suspension control unit 110. This enables the suspension control unit 110 to control the suspension height as described with reference to FIG. 13, and reproduce the behavior of the vehicle 13 in the virtual space.
 ステップS72において、シミュレーションモードコントロール部105は、通信部97から供給された車両設定パラメータに含まれているシート設定データに従って、シート制御データを生成してシート制御部113に供給する。これにより、シート制御部113は、図14を参照して説明したようにシート座面およびシート座面の傾きを制御し、仮想空間内での車両13の挙動を再現することができる。 In step S72, the simulation mode control unit 105 generates seat control data according to the seat setting data included in the vehicle setting parameters supplied from the communication unit 97, and supplies the generated data to the seat control unit 113. This enables the seat control unit 113 to control the seat cushion and the inclination of the seat cushion as described with reference to FIG. 14, and reproduce the behavior of the vehicle 13 in the virtual space.
 ステップS73において、シミュレーションモードコントロール部105は、通信部97から供給された車両設定パラメータに含まれているハンドル反力のデータ値に従って、ハンドル反力制御データを生成してハンドル反力制御部109に供給する。これにより、ハンドル反力制御部109は、ハンドル21に対して回転操作を行う運転者の腕力に抵抗するようなハンドル反力をハンドル21に発生させ、仮想空間内での車両13の挙動を再現することができる。 In step S73, the simulation mode control unit 105 generates steering reaction force control data in accordance with the steering reaction force data value included in the vehicle setting parameters supplied from the communication unit 97, and supplies the generated steering reaction force control data to the steering reaction force control unit 109. This enables the steering reaction force control unit 109 to generate a steering reaction force in the steering wheel 21 that resists the arm force of the driver who is turning the steering wheel 21, thereby reproducing the behavior of the vehicle 13 in the virtual space.
 ステップS74において、シミュレーションモードコントロール部105は、通信部97から供給された車両設定パラメータに含まれているブレーキ反力のデータ値に従って、ブレーキ反力制御データを生成してブレーキ反力制御部111に供給する。これにより、ブレーキ反力制御部111は、ブレーキペダル23に対して踏み込み操作を行う運転者の脚力に抵抗するようなブレーキ反力をブレーキペダル23に発生させ、仮想空間内での車両13の挙動を再現することができる。 In step S74, the simulation mode control unit 105 generates brake reaction force control data in accordance with the brake reaction force data value included in the vehicle setting parameters supplied from the communication unit 97, and supplies the generated data to the brake reaction force control unit 111. This enables the brake reaction force control unit 111 to generate a brake reaction force in the brake pedal 23 that resists the leg force of the driver who depresses the brake pedal 23, thereby reproducing the behavior of the vehicle 13 in the virtual space.
 ステップS75において、映像信号処理部128は、車両13用の映像ストリームが供給されたか否かを判定し、車両13用の映像ストリームが供給されたと判定するまで処理を待機する。そして、上述した図17のステップS50でガレージシステム制御装置16から車両13用の映像ストリームが送信されると、通信部122、信号分離部125、および映像入力切り替え部126を介して、車両13用の映像ストリームが映像信号処理部128に供給される。これにより、映像信号処理部128は、車両13用の映像ストリームが供給されたと判定して、処理はステップS76に進む。 In step S75, the video signal processing unit 128 determines whether a video stream for the vehicle 13 has been supplied, and waits until it determines that a video stream for the vehicle 13 has been supplied. Then, when a video stream for the vehicle 13 is transmitted from the garage system control device 16 in step S50 of FIG. 17 described above, the video stream for the vehicle 13 is supplied to the video signal processing unit 128 via the communication unit 122, the signal separation unit 125, and the video input switching unit 126. As a result, the video signal processing unit 128 determines that a video stream for the vehicle 13 has been supplied, and the process proceeds to step S76.
 ステップS76において、映像信号処理部128は、車両13用の映像ストリームに対する信号処理を行って、ナビゲーション映像送信部129、サイドミラー映像送信部130、インストルメントパネル映像送信部131、バックミラー映像送信部132、および後部座席用映像送信部133に、それぞれ対応する映像ストリームを供給する。これにより、ナビゲーションディスプレイ51、サイドミラーディスプレイ52Lおよび52R、インストルメントパネルディスプレイ53、バックミラーディスプレイ54、並びに、後部座席用ディスプレイ55Lおよび55Rに、それぞれ対応する映像が表示される。その後、処理はステップS61に戻って、以下、同様の処理が繰り返して行われる。 In step S76, the video signal processing unit 128 performs signal processing on the video stream for the vehicle 13, and supplies the corresponding video streams to the navigation video transmission unit 129, the side mirror video transmission unit 130, the instrument panel video transmission unit 131, the rearview mirror video transmission unit 132, and the rear seat video transmission unit 133. As a result, the corresponding images are displayed on the navigation display 51, the side mirror displays 52L and 52R, the instrument panel display 53, the rearview mirror display 54, and the rear seat displays 55L and 55R. Then, the process returns to step S61, and the same process is repeated.
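The driving-operation side of this loop (steps S61 to S69) amounts to bundling the three detected operations and the occupant pose into the vehicle parameters. The field names in the sketch below are illustrative assumptions, not identifiers from the disclosure.

```python
def collect_vehicle_parameters(steering_angle, brake_pos, accel_pos, occupant_pose):
    """Hypothetical sketch of steps S61-S69: bundle the detected driving
    operations and the occupant position/posture data into the vehicle
    parameters transmitted to the garage system control device 16."""
    driving_operation = {
        "steering_rotation": steering_angle,  # S62: steering wheel rotation data
        "brake_position": brake_pos,          # S64: brake position data
        "accelerator_position": accel_pos,    # S66: accelerator position data
    }
    # S69: the vehicle parameters combine operation data and occupant pose
    return {"driving_operation": driving_operation,
            "occupant_pose": occupant_pose}
```

On the garage side, the `driving_operation` part would feed the simulation (step S41) while the `occupant_pose` part would feed the video conversion processing.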
 図19は、図15のステップS13で行われる車両位置姿勢認識処理を説明するフローチャートである。 FIG. 19 is a flowchart explaining the vehicle position and attitude recognition process performed in step S13 of FIG. 15.
 ステップS81において、ガレージセンサ43およびガレージカメラ42をセットアップする。 In step S81, the garage sensor 43 and garage camera 42 are set up.
 ステップS82において、ガレージセンサ43およびガレージカメラ42は、車両13の位置および姿勢のセンシングを開始して、センサデータおよび映像データが出力される。 In step S82, the garage sensor 43 and garage camera 42 start sensing the position and attitude of the vehicle 13, and sensor data and video data are output.
 ステップS83において、センサデータ取得部61は、ガレージセンサ43から出力されるセンサデータを取得して車両位置姿勢認識部63に供給し、映像データ取得部62は、ガレージカメラ42から出力される映像データを取得して車両位置姿勢認識部63に供給する。 In step S83, the sensor data acquisition unit 61 acquires the sensor data output from the garage sensor 43 and supplies it to the vehicle position and attitude recognition unit 63, and the video data acquisition unit 62 acquires the video data output from the garage camera 42 and supplies it to the vehicle position and attitude recognition unit 63.
 ステップS84において、車両位置姿勢認識部63は、ステップS83で供給されるセンサデータおよび映像データに基づいて、車両13の位置および姿勢を算出する。これにより、車両位置姿勢認識部63は、車両13の位置姿勢データを取得して、車両位置姿勢認識処理は終了される。 In step S84, the vehicle position and attitude recognition unit 63 calculates the position and attitude of the vehicle 13 based on the sensor data and video data supplied in step S83. As a result, the vehicle position and attitude recognition unit 63 acquires the position and attitude data of the vehicle 13, and the vehicle position and attitude recognition process ends.
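Step S84 combines sensor data and video data into a single pose estimate. The disclosure does not specify the estimator, so the sketch below stands in with a simple confidence-weighted average; the weighting scheme and field names are assumptions for illustration.

```python
def fuse_pose(sensor_pose, video_pose, sensor_weight=0.7):
    """Hypothetical sketch of step S84: fuse the pose estimated from the
    garage sensor 43 with the pose estimated from the garage camera 42
    using a fixed confidence weight (an illustrative stand-in)."""
    w = sensor_weight
    return {
        "x":   w * sensor_pose["x"]   + (1 - w) * video_pose["x"],
        "y":   w * sensor_pose["y"]   + (1 - w) * video_pose["y"],
        "yaw": w * sensor_pose["yaw"] + (1 - w) * video_pose["yaw"],
    }
```

The same fusion shape would apply to the occupant position and posture recognition of FIG. 20, with the in-vehicle sensor 57 and in-vehicle camera 56 as the two sources.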
 図20は、図16のステップS27で行われる乗員位置姿勢認識処理を説明するフローチャートである。 FIG. 20 is a flowchart explaining the occupant position and attitude recognition process performed in step S27 of FIG. 16.
 ステップS91において、車内センサ57および車内カメラ56をセットアップする。 In step S91, the in-vehicle sensor 57 and the in-vehicle camera 56 are set up.
 ステップS92において、車内センサ57および車内カメラ56は、車両13内の乗員の位置および姿勢のセンシングを開始して、センサデータおよび映像データが出力される。 In step S92, the in-vehicle sensor 57 and the in-vehicle camera 56 start sensing the positions and postures of the occupants in the vehicle 13, and the sensor data and video data are output.
 ステップS93において、センサデータ取得部91は、車内センサ57から出力されるセンサデータを取得して車両内部環境認識部95に供給し、映像データ取得部92は、車内カメラ56から出力される映像データを取得して車両内部環境認識部95に供給する。 In step S93, the sensor data acquisition unit 91 acquires sensor data output from the in-vehicle sensor 57 and supplies it to the vehicle interior environment recognition unit 95, and the video data acquisition unit 92 acquires video data output from the in-vehicle camera 56 and supplies it to the vehicle interior environment recognition unit 95.
 ステップS94において、車両内部環境認識部95は、ステップS93で供給されるセンサデータおよび映像データに基づいて、車両13内の乗員の位置および姿勢を算出する。これにより、車両内部環境認識部95は、車両13内の乗員の位置姿勢データを取得して、乗員位置姿勢認識処理は終了される。 In step S94, the vehicle interior environment recognition unit 95 calculates the positions and postures of the occupants in the vehicle 13 based on the sensor data and video data supplied in step S93. As a result, the vehicle interior environment recognition unit 95 acquires position and posture data of the occupants in the vehicle 13, and the occupant position and posture recognition process is terminated.
 図21は、車両13のボディの映り込みを制御する映り込み制御処理を説明するフローチャートである。 Figure 21 is a flowchart explaining the reflection control process that controls the reflection of the body of the vehicle 13.
 ステップS101において、仮想空間生成部71は、車両位置姿勢認識部63から供給される車両13の位置姿勢データを取得する。 In step S101, the virtual space generation unit 71 acquires the position and orientation data of the vehicle 13 supplied from the vehicle position and orientation recognition unit 63.
 ステップS102において、仮想空間生成部71は、仮想空間内の仮想光源を配置する。 In step S102, the virtual space generation unit 71 places a virtual light source in the virtual space.
 ステップS103において、仮想空間生成部71は、ステップS101で取得した位置姿勢データに基づいて、車両13に対応する3DCGのオブジェクトを、仮想空間内に配置する。 In step S103, the virtual space generation unit 71 places a 3DCG object corresponding to the vehicle 13 in the virtual space based on the position and orientation data acquired in step S101.
 ステップS104において、仮想空間生成部71は、仮想空間内で車両13に対応する3DCGのオブジェクトのボディ表面の反射率を設定する。 In step S104, the virtual space generation unit 71 sets the reflectance of the body surface of the 3DCG object corresponding to the vehicle 13 in the virtual space.
 ステップS105において、仮想空間生成部71は、ステップS102で配置した仮想光源から、ステップS103で配置した車両13に対応する3DCGのオブジェクトに光が照射され、その光がボディ表面において、ステップS104で設定された反射率で反射するようなレイトレーシングを実施する。 In step S105, the virtual space generation unit 71 performs ray tracing such that light is irradiated from the virtual light source placed in step S102 to the 3DCG object corresponding to the vehicle 13 placed in step S103, and the light is reflected on the body surface with the reflectance set in step S104.
 ステップS106において、仮想空間生成部71は、通信部64から供給される車両13内の乗員の位置姿勢データを取得する。 In step S106, the virtual space generation unit 71 acquires the position and orientation data of the occupants in the vehicle 13 supplied from the communication unit 64.
 ステップS107において、仮想空間生成部71は、仮想空間内における車両13内の乗員の視点位置Pに対応する位置に仮想カメラを配置する。 In step S107, the virtual space generation unit 71 places a virtual camera at a position in the virtual space that corresponds to the viewpoint position P of the occupant in the vehicle 13.
 ステップS108において、仮想空間生成部71は、ステップS107で配置した仮想カメラを用いたレイトレーシング結果から、仮想空間内における車両13内の乗員が観察可能な車両13のボディ上の光線を特定する。 In step S108, the virtual space generation unit 71 identifies light rays on the body of the vehicle 13 that can be observed by the occupants in the vehicle 13 in the virtual space from the ray tracing results using the virtual camera placed in step S107.
 ステップS109において、仮想空間生成部71は、ステップS108で特定した光線が反射する仮想空間内の車両13のボディと、そのボディ上の座標を特定する。 In step S109, the virtual space generation unit 71 identifies the body of the vehicle 13 in the virtual space where the light ray identified in step S108 is reflected, and the coordinates on that body.
 ステップS110において、車両位置姿勢認識部63は、センサデータ取得部61から供給されるセンサデータ、および、映像データ取得部62から供給される映像データに基づいて、ステップS109で特定した仮想空間内の車両13のボディと、そのボディ上の座標を、現実空間の車両13上で特定する。 In step S110, the vehicle position and attitude recognition unit 63 identifies the body of the vehicle 13 in the virtual space identified in step S109 and the coordinates on that body on the vehicle 13 in the real space based on the sensor data supplied from the sensor data acquisition unit 61 and the video data supplied from the video data acquisition unit 62.
 ステップS111において、車両位置姿勢認識部63は、ステップS110で特定した座標に対して鉛直方向にあるガレージディスプレイ41の位置を特定する。 In step S111, the vehicle position and attitude recognition unit 63 determines the position of the garage display 41 in the vertical direction relative to the coordinates determined in step S110.
 ステップS112において、仮想空間生成部71は、ステップS111で特定した現実空間のガレージディスプレイ41の位置姿勢を、仮想空間内で同定する。 In step S112, the virtual space generation unit 71 identifies within the virtual space the position and orientation of the garage display 41 in the real space identified in step S111.
 ステップS113において、仮想空間生成部71は、ステップS112において仮想空間内で同定されたガレージディスプレイ41に対してレンダリング内容を同定する。 In step S113, the virtual space generation unit 71 identifies rendering content for the garage display 41 identified in the virtual space in step S112.
 ステップS114において、仮想空間生成部71は、ステップS113で同定されたレンダリング内容を、ステップS112で同定されたガレージディスプレイ41に表示させ、映り込み制御処理は終了される。 In step S114, the virtual space generation unit 71 displays the rendering content identified in step S113 on the garage display 41 identified in step S112, and the reflection control process is terminated.
 以上のような映り込み制御処理によって、車両13のボディの表面の輝きを再現することができる。 The above-described reflection control process makes it possible to reproduce the shine of the surface of the body of the vehicle 13.
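The geometry in steps S105 to S111 can be illustrated with a minimal sketch. This is not the unit's actual ray tracer; `reflect`, `observed_intensity`, and `display_cell_below` are hypothetical helpers that show only the specular-reflection formula R = D - 2(D.N)N, the reflectance scaling of step S104, and the vertical projection onto the garage display of step S111.

```python
# Hypothetical sketch of the reflection geometry in steps S105-S111.
# A ray from a virtual light source hits a point on the vehicle body,
# reflects about the surface normal, and the floor-display cell to light
# is taken vertically below the body point.

def reflect(d, n):
    # Reflect incoming direction d about unit surface normal n:
    # R = D - 2(D.N)N
    dot = sum(di * ni for di, ni in zip(d, n))
    return tuple(di - 2 * dot * ni for di, ni in zip(d, n))

def observed_intensity(light_intensity, reflectance):
    # Steps S104/S105: scale the incoming light by the body reflectance.
    return light_intensity * reflectance

def display_cell_below(body_point, cell_size=0.5):
    # Step S111: the garage display position vertically beneath the point.
    x, y, _z = body_point
    return (int(x // cell_size), int(y // cell_size))

ray = reflect((0.0, 0.0, -1.0), (0.0, 0.0, 1.0))  # straight-down ray off a flat roof
intensity = observed_intensity(100.0, 0.8)
cell = display_cell_below((1.2, 0.6, 1.4))
```

The cell size and the flat-roof example are assumptions made only to keep the sketch self-contained.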
 <トレーニングシステム> <Training System>
 図22乃至図29を参照して、図1の運転シミュレーションシステム11を利用したトレーニングシステムについて説明する。 A training system using the driving simulation system 11 of FIG. 1 will be described with reference to FIGS. 22 to 29.
 図22は、トレーニングシステムの構成例を示すブロック図である。 FIG. 22 is a block diagram showing an example configuration of a training system.
 図22に示すように、トレーニングシステム201は、ネットワーク211上に構築されるクラウドシステム212と、ネットワーク211に接続されている複数のクライアントガレージシステム213とにより構成される。図22に示す例では、N個のクライアントガレージシステム213-1乃至213-Nがネットワーク211に接続されている。なお、クライアントガレージシステム213-1乃至213-Nを区別する必要がない場合、以下、クライアントガレージシステム213と称する。 As shown in FIG. 22, the training system 201 is composed of a cloud system 212 constructed on a network 211, and multiple client garage systems 213 connected to the network 211. In the example shown in FIG. 22, N client garage systems 213-1 to 213-N are connected to the network 211. Note that, when there is no need to distinguish between the client garage systems 213-1 to 213-N, they will hereinafter be referred to as client garage systems 213.
 クラウドシステム212は、データベース214およびサーバデバイス215により構成され、データベース214に登録されている各種のデータを使用して、サーバデバイス215が複数のクライアントガレージシステム213に対して提供されるトレーニングを実施することができる。 The cloud system 212 is composed of a database 214 and a server device 215, and the server device 215 can use various data registered in the database 214 to provide training to multiple client garage systems 213.
 図示するように、サーバデバイス215は、プロセッサ221、入出力インタフェース222、オペレーティングシステム223、およびメモリ224を備えて構成されており、メモリ224にアプリケーションソフトウェア225が記憶されている。そして、プロセッサ221が、メモリ224からアプリケーションソフトウェア225を読み出して実行することで、トレーニングが実施される。 As shown in the figure, the server device 215 is configured with a processor 221, an input/output interface 222, an operating system 223, and a memory 224, and application software 225 is stored in the memory 224. The processor 221 then reads out the application software 225 from the memory 224 and executes it to perform training.
 同様に、クライアントガレージシステム213は、プロセッサ231、入出力インタフェース232、オペレーティングシステム233、およびメモリ234を備えて構成されており、メモリ234にアプリケーションソフトウェア235が記憶されている。そして、プロセッサ231が、メモリ234からアプリケーションソフトウェア235を読み出して実行することで、トレーニングが実施される。なお、クライアントガレージシステム213は、図1の運転シミュレーションシステム11に対応しており、運転シミュレーションシステム11が備える各ブロックを備えている。 Similarly, the client garage system 213 is configured with a processor 231, an input/output interface 232, an operating system 233, and a memory 234, and application software 235 is stored in the memory 234. The processor 231 reads and executes the application software 235 from the memory 234 to perform training. The client garage system 213 corresponds to the driving simulation system 11 in FIG. 1, and includes each of the blocks included in the driving simulation system 11.
 例えば、トレーニングシステム201で実施されるトレーニングの構成は、教師なしトレーニングと教師ありトレーニングとに分別される。 For example, the training configurations implemented by the training system 201 are divided into unsupervised training and supervised training.
 図23は、教師なしトレーニングについて説明するフローチャートである。 Figure 23 is a flowchart explaining unsupervised training.
 ステップS201において、クライアントガレージシステム213は、ハンドル21、アクセルペダル22、およびブレーキペダル23に対するユーザの各運転操作(ハンドル回転データ、アクセル位置データ、およびブレーキ位置データ)を取得する。 In step S201, the client garage system 213 acquires the user's driving operations (steering wheel rotation data, accelerator position data, and brake position data) for the steering wheel 21, accelerator pedal 22, and brake pedal 23.
 ステップS202において、クライアントガレージシステム213は、ステップS201で取得したユーザの各運転操作に応じて運転シミュレーション(上述した図17および図18の運転シミュレーション動作処理)を実行する。 In step S202, the client garage system 213 executes a driving simulation (the driving simulation operation process in FIG. 17 and FIG. 18 described above) according to each driving operation of the user acquired in step S201.
 ステップS203において、クライアントガレージシステム213は、ステップS202で実行した運転シミュレーションで生成されたシミュレーション映像をガレージディスプレイ41に表示する。 In step S203, the client garage system 213 displays the simulation image generated in the driving simulation executed in step S202 on the garage display 41.
 ステップS204において、クライアントガレージシステム213は、ステップS202で実行した運転シミュレーションで生成された各制御データ(ハンドル反力制御データ、サスペンション制御データ、ブレーキ反力制御データ、アクセル反力制御データ、およびシート制御データ)に従って、それぞれ対応する駆動機構に対する制御を行う。 In step S204, the client garage system 213 controls the corresponding drive mechanisms according to each control data (steering reaction force control data, suspension control data, brake reaction force control data, accelerator reaction force control data, and seat control data) generated in the driving simulation executed in step S202.
 ステップS205において、クライアントガレージシステム213は、例えば、ユーザの操作入力に従って、運転シミュレーションを終了するか否かを判定する。 In step S205, the client garage system 213 determines whether or not to end the driving simulation, for example, according to the user's operational input.
 ステップS205において、クライアントガレージシステム213が、運転シミュレーションを終了しないと判定した場合、処理はステップS201に戻って、以下、同様の処理が繰り返して行われる。この場合、ユーザは、前回の運転シミュレーションの結果を意識しながら、各運転操作を行うことができる。 If in step S205 the client garage system 213 determines not to end the driving simulation, the process returns to step S201, and the same process is repeated thereafter. In this case, the user can perform each driving operation while being aware of the results of the previous driving simulation.
 一方、クライアントガレージシステム213が、運転シミュレーションを終了すると判定した場合、教師なしトレーニングは終了される。 On the other hand, if the client garage system 213 determines that the driving simulation should be ended, the unsupervised training is terminated.
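The loop of steps S201 to S205 can be sketched as follows. This is illustrative Python only; `simulate` and `apply_controls` are hypothetical stand-ins for the actual driving simulation and drive-mechanism control of the client garage system 213.

```python
# Hypothetical sketch of the unsupervised training loop (steps S201-S205).

def run_unsupervised(operations):
    """Consume a sequence of (steering, accel, brake) inputs until exhausted."""
    frames = []
    for op in operations:                 # S201: acquire the user's operations
        sim = simulate(op)                # S202: run the driving simulation
        frames.append(sim["video"])       # S203: show the simulation video
        apply_controls(sim["controls"])   # S204: drive the actuators
    return frames                         # S205: end when no more input

def simulate(op):
    # Stand-in for the driving simulation: produce a video frame and
    # one reaction-force control value from the steering input.
    steering, accel, brake = op
    return {"video": f"frame(steer={steering})",
            "controls": {"steer_reaction": -steering}}

applied = []
def apply_controls(controls):
    # Stand-in for actuator control: record what would be applied.
    applied.append(controls)

frames = run_unsupervised([(0.1, 0.5, 0.0), (-0.2, 0.3, 0.1)])
```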
 図24は、教師ありトレーニングについて説明するフローチャートである。 Figure 24 is a flowchart explaining supervised training.
 ステップS211において、クライアントガレージシステム213は、ハンドル21、アクセルペダル22、およびブレーキペダル23に対するユーザの各運転操作を取得する。 In step S211, the client garage system 213 acquires the user's driving operations for the steering wheel 21, the accelerator pedal 22, and the brake pedal 23.
 ステップS212において、クライアントガレージシステム213は、ステップS211で取得したユーザの各運転操作に応じて運転シミュレーションを実行する。そして、クライアントガレージシステム213は、ユーザの操作と教師データとの差異を生成する。 In step S212, the client garage system 213 executes a driving simulation according to each of the user's driving operations acquired in step S211. Then, the client garage system 213 generates a difference between the user's operations and the training data.
 ステップS213において、クライアントガレージシステム213は、ステップS212で実行した運転シミュレーションで生成されたシミュレーション映像をガレージディスプレイ41に表示する。このとき、クライアントガレージシステム213は、ステップS212で生成したユーザの操作と教師データとの差異を出力する。 In step S213, the client garage system 213 displays the simulation image generated in the driving simulation executed in step S212 on the garage display 41. At this time, the client garage system 213 outputs the difference between the user's operation generated in step S212 and the teacher data.
 ステップS214において、クライアントガレージシステム213は、ステップS212で実行した運転シミュレーションで生成された各制御データに従って、それぞれ対応する駆動機構に対する制御を行う。このとき、クライアントガレージシステム213は、ステップS212で生成したユーザの操作と教師データとの差異を通知する。 In step S214, the client garage system 213 controls the corresponding drive mechanisms according to the control data generated in the driving simulation executed in step S212. At this time, the client garage system 213 notifies the difference between the user's operation generated in step S212 and the teacher data.
 ステップS215において、クライアントガレージシステム213は、例えば、ユーザの操作入力に従って、運転シミュレーションを終了するか否かを判定する。 In step S215, the client garage system 213 determines whether or not to end the driving simulation, for example, according to the user's operational input.
 ステップS215において、クライアントガレージシステム213が、運転シミュレーションを終了しないと判定した場合、処理はステップS211に戻って、以下、同様の処理が繰り返して行われる。この場合、ユーザは、前回の運転シミュレーションの結果を意識しながら、各運転操作を行うことができる。 If in step S215 the client garage system 213 determines not to end the driving simulation, the process returns to step S211, and the same process is repeated thereafter. In this case, the user can perform each driving operation while being aware of the results of the previous driving simulation.
 一方、クライアントガレージシステム213が、運転シミュレーションを終了すると判定した場合、教師ありトレーニングは終了される。 On the other hand, if the client garage system 213 determines that the driving simulation should be terminated, the supervised training is terminated.
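The difference against the teacher data generated in step S212 can be sketched as a per-channel signed comparison. The channel names below are hypothetical; the disclosure does not limit the representation to these.

```python
# Hypothetical sketch of the per-operation difference against teacher
# data (step S212), reported to the user in steps S213/S214.

def operation_diff(user_op, teacher_op):
    """Signed difference for each control channel (user minus teacher)."""
    keys = ("steering", "accel", "brake")
    return {k: user_op[k] - teacher_op[k] for k in keys}

diff = operation_diff(
    {"steering": 0.30, "accel": 0.50, "brake": 0.00},
    {"steering": 0.25, "accel": 0.60, "brake": 0.00},
)
```

A positive value would indicate over-input relative to the teacher, a negative value under-input.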
 図25は、学習について説明するフローチャートである。 Figure 25 is a flowchart explaining learning.
 ステップS221において、クライアントガレージシステム213は、ユーザの操作入力に従って学習を選択する。 In step S221, the client garage system 213 selects learning according to the user's operational input.
 ステップS222において、クライアントガレージシステム213は、学習を開始し、理想とする教師運転パターンに従った運転シミュレーションを実行する。 In step S222, the client garage system 213 starts learning and performs a driving simulation according to the ideal teacher driving pattern.
 ステップS223において、クライアントガレージシステム213は、ステップS222で実行した運転シミュレーションで生成されたシミュレーション映像をガレージディスプレイ41に表示し、シミュレーション映像とともに生成されたオーディオを出力する。 In step S223, the client garage system 213 displays the simulation video generated in the driving simulation executed in step S222 on the garage display 41, and outputs the audio generated together with the simulation video.
 ステップS224において、クライアントガレージシステム213は、ステップS222で実行した運転シミュレーションで生成された各制御データに従って、それぞれ対応する駆動機構に対する制御を行った後、学習は終了される。 In step S224, the client garage system 213 controls the corresponding drive mechanisms according to the control data generated in the driving simulation executed in step S222, and then the learning process ends.
 このような学習では、生徒は、運転操作を行わずに、AIを含む教師の手本を体感することができる。例えば、生徒は、ステアリング操作やブレーキング、アクセル操作など、理想とする運転パターンを、生徒自身の車両13の運転席で、与えられた反力(教師の操作)で体感しながら学習することができる。 In this type of learning, students can experience the teacher's example, including AI, without actually driving. For example, students can learn ideal driving patterns, such as steering, braking, and accelerating, while sitting in the driver's seat of their own vehicle 13 and experiencing the reaction forces (operations by the teacher) given to them.
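The playback-style learning of steps S221 to S224, in which the student performs no operations, can be sketched as replaying a recorded teacher pattern into the display, audio, and actuator channels. All names are hypothetical.

```python
# Hypothetical sketch of learning playback (steps S221-S224): a recorded
# teacher driving pattern is replayed frame by frame into the video,
# audio, and reaction-force channels while the student only observes.

teacher_pattern = [
    {"video": "v0", "audio": "a0", "controls": {"steer_reaction": 0.1}},
    {"video": "v1", "audio": "a1", "controls": {"steer_reaction": 0.0}},
]

def replay(pattern, sinks):
    for frame in pattern:
        sinks["display"].append(frame["video"])       # S223: garage display
        sinks["speaker"].append(frame["audio"])       # S223: audio output
        sinks["actuators"].append(frame["controls"])  # S224: drive mechanisms

sinks = {"display": [], "speaker": [], "actuators": []}
replay(teacher_pattern, sinks)
```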
 次に、トレーニング段階では、生徒が操作を行い、教師からの操作補助やアドバイスを受けてトレーニングする。例えば、生徒の操作が理想とする運転パターンから乖離する場合、コースや路面、スピードの状況に応じた最適な復帰の方法や、乖離しないための対策をトレーニングする方法は、以下の図26乃至図28を参照して説明するように異なるものとなる。 Next, in the training stage, the student performs the operations and trains while receiving operational assistance and advice from the teacher. For example, when the student's operation deviates from the ideal driving pattern, the optimal way to return to that pattern given the course, road surface, and speed conditions, and the training measures for avoiding such deviation in the first place, differ as described below with reference to FIGS. 26 to 28.
 図26は、教師が人間であって、教師と生徒とが1対1で行われるトレーニングについて説明するフローチャートである。 Figure 26 is a flowchart that explains training conducted one-on-one between a teacher and a student, where the teacher is a human.
 ステップS231において、クライアントガレージシステム213は、ユーザ(生徒)の操作入力に従ってトレーニングを選択する。 In step S231, the client garage system 213 selects a training session according to the user's (student's) operational input.
 ステップS232において、クライアントガレージシステム213は、ネットワーク211を通じて教師と生徒との操作系とコミュニケーションパスを接続する。 In step S232, the client garage system 213 connects the operation systems and communication paths of the teacher and students via the network 211.
 ステップS233において、クライアントガレージシステム213は、理想とする教師運転パターンに従って、運転シミュレーションを実行する。 In step S233, the client garage system 213 executes a driving simulation according to the ideal teacher driving pattern.
 ステップS234において、クライアントガレージシステム213は、ステップS233で実行した運転シミュレーションで生成されたシミュレーション映像をガレージディスプレイ41に表示し、シミュレーション映像とともに生成されたオーディオを出力する。 In step S234, the client garage system 213 displays the simulation video generated in the driving simulation executed in step S233 on the garage display 41, and outputs the audio generated together with the simulation video.
 ステップS235において、クライアントガレージシステム213は、ステップS233で実行した運転シミュレーションで生成された各制御データに従って、それぞれ対応する駆動機構に対する制御を行う。 In step S235, the client garage system 213 controls the corresponding drive mechanisms according to the control data generated in the driving simulation executed in step S233.
 ステップS236において、理想とする教師運転パターンとの乖離がある場合、教師は、ネットワーク211を通じた補助操作を行い、クライアントガレージシステム213は、その補助操作に基づいた各制御データに従って、それぞれ対応する駆動機構に対する制御を行う。 In step S236, if there is a deviation from the ideal teacher driving pattern, the teacher performs auxiliary operations via the network 211, and the client garage system 213 controls the corresponding drive mechanisms according to each control data based on the auxiliary operations.
 ステップS237において、クライアントガレージシステム213は、ステップS236の教師の補助操作に従った運転シミュレーションで生成されたシミュレーション映像をガレージディスプレイ41に表示し、シミュレーション映像とともに生成されたオーディオを出力する。 In step S237, the client garage system 213 displays the simulation video generated in the driving simulation according to the teacher's assisted operations in step S236 on the garage display 41, and outputs the audio generated together with the simulation video.
 ステップS238において、クライアントガレージシステム213は、ハンドル21、アクセルペダル22、およびブレーキペダル23に対するユーザの各運転操作を取得する。 In step S238, the client garage system 213 acquires the user's driving operations for the steering wheel 21, accelerator pedal 22, and brake pedal 23.
 ステップS239において、クライアントガレージシステム213は、トレーニングが完了したか否かを判定し、トレーニングが完了していないと判定した場合、処理はステップS233に戻って、以下、同様の処理が繰り返して行われる。一方、ステップS239において、クライアントガレージシステム213が、トレーニングが完了したと判定した場合、処理はステップS240に進む。 In step S239, the client garage system 213 determines whether the training is complete. If it determines that the training is not complete, the process returns to step S233, and the same process is repeated. On the other hand, if the client garage system 213 determines in step S239 that the training is complete, the process proceeds to step S240.
 ステップS240において、クライアントガレージシステム213は、ユーザIDに紐づけて、操作ログの評価および採点を記録する。 In step S240, the client garage system 213 records the evaluation and scoring of the operation log, linking it to the user ID.
 ステップS241において、クライアントガレージシステム213は、懸念がある場合、教師の操作に応じて、ソフトウェアの調整を実施する。このとき、教師は、必要であればディーラーへの調整依頼書を作成してもよい。 In step S241, if there are any concerns, the client garage system 213 adjusts the software in response to the teacher's operation. At this time, the teacher may prepare a request for adjustments to be sent to the dealer if necessary.
 ステップS242において、クライアントガレージシステム213は、教師とのコミュニケーションを実施し、その後、処理は終了される。 In step S242, the client garage system 213 communicates with the teacher, and then the process ends.
 このように、トレーニングシステム201を利用して、人間である教師と生徒とが1対1でトレーニングを行うことができる。即ち、ネットワーク211を通じて教師と生徒との操作系とコミュニケーションパスが接続される。そして、生徒が運転している状態で、教師がネットワーク211を通じてハンドル操作や、ブレーキング、アクセル操作などの運転に関わる動作の補助操作およびアドバイスをリモートで行うことができる。これにより、ユーザは、自身の車両13の運転席で、個人で、トレーニングを行うことができる。 In this way, training system 201 allows a human teacher and student to train one-on-one. That is, the operation systems and communication paths of the teacher and student are connected via network 211. Then, while the student is driving, the teacher can remotely provide assistance and advice on steering, braking, accelerator operation, and other driving-related actions via network 211. This allows the user to train individually in the driver's seat of their own vehicle 13.
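One plausible form of the remote assist in step S236 is to blend the student's input with the teacher's correction once the deviation exceeds a tolerance. The threshold and blend ratio below are illustrative assumptions, not values from this disclosure.

```python
# Hypothetical sketch of the one-to-one assist (step S236): when the
# student's input deviates from the teacher's beyond a threshold, the
# remote teacher's value is blended into the control actually applied.

def assisted_control(student, teacher, threshold=0.1, blend=0.5):
    """Return the control value applied to the drive mechanisms."""
    if abs(student - teacher) <= threshold:
        return student                                  # within tolerance: no assist
    return (1 - blend) * student + blend * teacher      # teacher assist kicks in

ok = assisted_control(0.32, 0.30)        # small deviation -> unchanged
corrected = assisted_control(0.60, 0.30) # large deviation -> blended toward teacher
```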
 図27は、教師が人間であって、教師と生徒とが1対多で行われるトレーニングについて説明するフローチャートである。 Figure 27 is a flowchart that explains training in which the teacher is a human and the teacher and students are one-to-many.
 ステップS251において、クライアントガレージシステム213は、ユーザ(生徒)の操作入力に従ってトレーニングを選択する。 In step S251, the client garage system 213 selects a training session according to the user's (student's) operational input.
 ステップS252において、クライアントガレージシステム213は、ネットワーク211を通じて教師と生徒との操作系とコミュニケーションパスを接続し、生徒どうしのコミュニケーションパスを接続する。 In step S252, the client garage system 213 connects the operation systems and communication paths of the teacher and student via the network 211, and connects the communication paths between the students.
 ステップS253において、クライアントガレージシステム213は、理想とする教師運転パターンに従って、運転シミュレーションを実行する。 In step S253, the client garage system 213 executes a driving simulation according to the ideal teacher driving pattern.
 ステップS254において、クライアントガレージシステム213は、ステップS253で実行した運転シミュレーションで生成されたシミュレーション映像をガレージディスプレイ41に表示し、シミュレーション映像とともに生成されたオーディオを出力する。 In step S254, the client garage system 213 displays the simulation video generated in the driving simulation executed in step S253 on the garage display 41, and outputs the audio generated together with the simulation video.
 ステップS255において、クライアントガレージシステム213は、ステップS253で実行した運転シミュレーションで生成された各制御データに従って、それぞれ対応する駆動機構に対する制御を行う。 In step S255, the client garage system 213 controls the corresponding drive mechanisms according to the control data generated in the driving simulation executed in step S253.
 ステップS256において、理想とする教師運転パターンとの乖離がある場合、教師は、ネットワーク211を通じて集団全体に対する傾向と対策として補助操作を行い、クライアントガレージシステム213は、その補助操作に基づいた各制御データに従って、それぞれ対応する駆動機構に対する制御を行う。 In step S256, if there is a deviation from the ideal teacher driving pattern, the teacher performs auxiliary operations as a measure to address the tendency of the entire group via the network 211, and the client garage system 213 controls the corresponding drive mechanisms according to each control data based on the auxiliary operations.
 ステップS257において、クライアントガレージシステム213は、集団全体に対して、ステップS256の教師の補助操作に従った運転シミュレーションで生成されたシミュレーション映像をガレージディスプレイ41に表示し、シミュレーション映像とともに生成されたオーディオを出力する。 In step S257, the client garage system 213 displays the simulation video generated in the driving simulation following the teacher's assisted operations in step S256 for the entire group on the garage display 41, and outputs the audio generated together with the simulation video.
 ステップS258において、理想とする教師運転パターンとの乖離がある場合、教師は、ネットワーク211を通じて個別およびグループに対する傾向と対策として補助操作を行い、クライアントガレージシステム213は、その補助操作に基づいた各制御データに従って、それぞれ対応する駆動機構に対する制御を行う。 In step S258, if there is a deviation from the ideal teacher driving pattern, the teacher performs auxiliary operations as a countermeasure and tendency for individuals and groups via the network 211, and the client garage system 213 controls the corresponding drive mechanisms according to each control data based on the auxiliary operations.
 ステップS259において、クライアントガレージシステム213は、個別およびグループに対して、ステップS258の教師の補助操作に従った運転シミュレーションで生成されたシミュレーション映像をガレージディスプレイ41に表示し、シミュレーション映像とともに生成されたオーディオを出力する。 In step S259, the client garage system 213 displays, individually and as a group, on the garage display 41 the simulation video generated in the driving simulation following the teacher's assisted operations in step S258, and outputs the audio generated together with the simulation video.
 ステップS260において、クライアントガレージシステム213は、ハンドル21、アクセルペダル22、およびブレーキペダル23に対するユーザの各運転操作を取得する。 In step S260, the client garage system 213 acquires the user's driving operations for the steering wheel 21, accelerator pedal 22, and brake pedal 23.
 ステップS261において、クライアントガレージシステム213は、トレーニングが完了したか否かを判定し、トレーニングが完了していないと判定した場合、処理はステップS253に戻って、以下、同様の処理が繰り返して行われる。一方、ステップS261において、クライアントガレージシステム213が、トレーニングが完了したと判定した場合、処理はステップS262に進む。 In step S261, the client garage system 213 determines whether the training has been completed. If it is determined that the training has not been completed, the process returns to step S253, and the same process is repeated thereafter. On the other hand, if the client garage system 213 determines in step S261 that the training has been completed, the process proceeds to step S262.
 ステップS262において、クライアントガレージシステム213は、ユーザIDに紐づけて、各個の操作ログの評価および採点を記録する。 In step S262, the client garage system 213 records the evaluation and scoring of each individual operation log, linking it to the user ID.
 ステップS263において、クライアントガレージシステム213は、懸念がある場合、教師の操作に応じて、各個のソフトウェアの調整を実施する。このとき、教師は、必要であればディーラーへの調整依頼書を作成してもよい。例えば、ここでの懸念は、ユーザの運転技量に対して、ソフトウェアの設定が適切でないことや、ユーザの運転技量に合うような設定にソフトウェアでは調整できないことなどが含まれる。 In step S263, if there are any concerns, the client garage system 213 adjusts each piece of software in response to the teacher's operation. At this time, the teacher may prepare an adjustment request form for the dealer if necessary. For example, concerns here may include the software settings being inappropriate for the user's driving skill, or the software being unable to adjust the settings to suit the user's driving skill, etc.
 ステップS264において、クライアントガレージシステム213は、教師とのコミュニケーションや生徒どうしのコミュニケーションを実施し、その後、処理は終了される。 In step S264, the client garage system 213 communicates with the teacher and among the students, and then the process ends.
 このように、トレーニングシステム201を利用して、人間である教師と生徒とが1対多でトレーニングを行うことができる。即ち、ネットワーク211を通じて教師と多数の生徒との操作系とコミュニケーションパスが接続される(生徒どうしのコミュニケーションパスが接続されてもよい)。そして、生徒が運転している状態で、教師がネットワーク211を通じてハンドル操作や、ブレーキング、アクセル操作などの運転に関わる動作の補助操作およびアドバイスを集団全体に対して傾向と対策としてリモートで行うことができる。また、理想とする運転パターンから乖離の大きい個やグループを教師が選択して個別で、補助操作およびアドバイスをリモートで行うことができる。これにより、ユーザは、自身の車両13の運転席で、個人またはコミュニティと共に、トレーニングを行うことができる。 In this way, training system 201 can be used to allow one-to-many training between a human teacher and students. That is, the operation systems and communication paths of the teacher and many students are connected via network 211 (communication paths between students may also be connected). Then, while the students are driving, the teacher can remotely provide assistance and advice to the entire group via network 211 for actions related to driving, such as steering, braking, and accelerator operation, based on trends and countermeasures. In addition, the teacher can select individuals or groups that deviate greatly from the ideal driving pattern and provide assistance and advice to them individually, remotely. This allows users to train individually or together with a community, in the driver's seat of their own vehicle 13.
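The selection of individuals or groups with large deviation from the ideal pattern (step S258) can be sketched as a simple ranking over per-student deviation scores. The student IDs and scores are hypothetical data.

```python
# Hypothetical sketch of selecting the students who deviate most from
# the ideal driving pattern, for individual assist (step S258).

def most_deviating(scores, top_n=2):
    """scores: {student_id: deviation}; return the top_n worst ids."""
    ranked = sorted(scores.items(), key=lambda kv: kv[1], reverse=True)
    return [sid for sid, _dev in ranked[:top_n]]

worst = most_deviating({"s1": 0.02, "s2": 0.31, "s3": 0.18, "s4": 0.05})
```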
 図28は、教師がAIであって、教師と生徒とが1対1または1対多で行われるトレーニングについて説明するフローチャートである。 Figure 28 is a flowchart explaining training in which the teacher is an AI and the teacher and students are trained one-to-one or one-to-many.
 ステップS271において、クライアントガレージシステム213は、ユーザ(生徒)の操作入力に従ってトレーニングを選択する。 In step S271, the client garage system 213 selects a training session according to the user's (student's) operational input.
 ステップS272において、クライアントガレージシステム213は、ネットワーク211を通じて教師AIと生徒との操作系とコミュニケーションパスを接続し、生徒が多数である場合は、生徒どうしのコミュニケーションパスを接続する。 In step S272, the client garage system 213 connects the operation systems and communication paths of the teacher AI and the students via the network 211, and if there are a large number of students, connects the communication paths between the students.
 ステップS273において、クライアントガレージシステム213は、理想とする教師運転パターンに従って、運転シミュレーションを実行する。 In step S273, the client garage system 213 executes a driving simulation according to the ideal teacher driving pattern.
 ステップS274において、クライアントガレージシステム213は、ステップS273で実行した運転シミュレーションで生成されたシミュレーション映像をガレージディスプレイ41に表示し、シミュレーション映像とともに生成されたオーディオを出力する。 In step S274, the client garage system 213 displays the simulation video generated in the driving simulation executed in step S273 on the garage display 41, and outputs the audio generated together with the simulation video.
 ステップS275において、クライアントガレージシステム213は、ステップS273で実行した運転シミュレーションで生成された各制御データに従って、それぞれ対応する駆動機構に対する制御を行う。 In step S275, the client garage system 213 controls the corresponding drive mechanisms according to the control data generated in the driving simulation executed in step S273.
 ステップS276において、理想とする教師運転パターンとの乖離がある場合、教師AIは、ネットワーク211を通じて個別に傾向と対策として補助操作を行い、クライアントガレージシステム213は、その補助操作に基づいた各制御データに従って、それぞれ対応する駆動機構に対する制御を行う。 In step S276, if there is a deviation from the ideal teacher driving pattern, the teacher AI performs auxiliary operations individually via the network 211 as a measure to address the tendency, and the client garage system 213 performs control of the corresponding drive mechanism according to each control data based on the auxiliary operations.
 ステップS277において、クライアントガレージシステム213は、ステップS276の教師AIの個別の補助操作に従った運転シミュレーションで生成されたシミュレーション映像をガレージディスプレイ41に表示し、シミュレーション映像とともに生成されたオーディオを出力する。 In step S277, the client garage system 213 displays the simulation video generated in the driving simulation according to the individual auxiliary operations of the teacher AI in step S276 on the garage display 41, and outputs the audio generated together with the simulation video.
 ステップS278において、クライアントガレージシステム213は、運転シミュレーションを生成し、教師AIのサポートに応じて各制御データに従って、それぞれ対応する駆動機構に対する制御を行う。 In step S278, the client garage system 213 generates a driving simulation and controls the corresponding drive mechanisms according to each control data with the support of the teacher AI.
 ステップS279において、クライアントガレージシステム213は、個別およびグループに対して、ステップS278の教師AIの補助操作に従った運転シミュレーションで生成されたシミュレーション映像をガレージディスプレイ41に表示し、シミュレーション映像とともに生成されたオーディオを出力する。 In step S279, the client garage system 213 displays, for each individual and group, on the garage display 41 the simulation video generated by the driving simulation following the assisted operations of the teacher AI in step S278, and outputs the audio generated together with the simulation video.
 ステップS280において、クライアントガレージシステム213は、ハンドル21、アクセルペダル22、およびブレーキペダル23に対するユーザの各運転操作を取得する。 In step S280, the client garage system 213 acquires the user's driving operations for the steering wheel 21, accelerator pedal 22, and brake pedal 23.
 ステップS281において、クライアントガレージシステム213は、トレーニングが完了したか否かを判定し、トレーニングが完了していないと判定した場合、処理はステップS273に戻って、以下、同様の処理が繰り返して行われる。一方、ステップS281において、クライアントガレージシステム213が、トレーニングが完了したと判定した場合、処理はステップS282に進む。 In step S281, the client garage system 213 determines whether the training has been completed. If it is determined that the training has not been completed, the process returns to step S273, and the same process is repeated thereafter. On the other hand, if the client garage system 213 determines in step S281 that the training has been completed, the process proceeds to step S282.
 ステップS282において、クライアントガレージシステム213は、ユーザIDに紐づけて、各個の操作ログの評価および採点を記録する。 In step S282, the client garage system 213 records the evaluation and scoring of each individual operation log, linking it to the user ID.
 ステップS283において、クライアントガレージシステム213は、懸念がある場合、教師AIの出力に応じて、各個のソフトウェアの調整を実施する。このとき、教師AIは、必要であればディーラーへの調整依頼書を作成してもよい。 In step S283, if there are any concerns, the client garage system 213 adjusts each piece of software according to the output of the teacher AI. At this time, the teacher AI may prepare an adjustment request form for the dealer if necessary.
 ステップS284において、クライアントガレージシステム213は、生徒どうしのコミュニケーションを実施し、その後、処理は終了される。 In step S284, the client garage system 213 communicates between the students, and then the process ends.
 このように、教師AIと生徒とが1対1および1対多でトレーニングを行うことができる。即ち、ネットワーク211を通じて教師AIと多数の生徒との操作系とコミュニケーションパスが接続される(生徒どうしのコミュニケーションパスが接続されてもよい)。そして、理想とする運転パターンから乖離に対する補助操作およびアドバイスを、教師AIがリモートで行うことができる。これにより、ユーザは、自身の車両13の運転席で、個人またはコミュニティと共に、トレーニングを行うことができる。 In this way, the teacher AI and students can train one-to-one and one-to-many. That is, the operation systems and communication paths of the teacher AI and many students are connected via the network 211 (communication paths between students may also be connected). The teacher AI can then remotely provide auxiliary operations and advice in response to deviations from the ideal driving pattern. This allows the user to train individually or together with a community in the driver's seat of their own vehicle 13.
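A teacher AI's corrective assist (step S276) could, for instance, take the form of a proportional controller that nudges each control channel toward the ideal pattern. The gain and channel names are illustrative assumptions.

```python
# Hypothetical sketch of an AI-teacher corrective assist (step S276):
# each control channel is moved a fixed fraction of the way toward the
# ideal teacher pattern (a simple proportional correction).

def ai_assist(student_ops, ideal_ops, gain=0.4):
    return {k: student_ops[k] + gain * (ideal_ops[k] - student_ops[k])
            for k in ideal_ops}

out = ai_assist({"steering": 0.5, "brake": 0.0},
                {"steering": 0.2, "brake": 0.1})
```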
 以上のように、トレーニングシステム201では、トレーニングの状況に応じて教師AIに反復トレーニングを任せる様な、人間の教師と教師AIとを併用することで運営やトレーニング自体の効率化を可能とすることができる。また、教師AIの実行形態はクラウドシステム212およびクライアントガレージシステム213のどちら側で実行してもよい。 As described above, the training system 201 can improve the efficiency of management and training itself by using a combination of a human teacher and a teacher AI, such as entrusting repetitive training to the teacher AI depending on the training situation. Furthermore, the teacher AI may be executed on either the cloud system 212 or the client garage system 213.
 例えば、理想とする運転パターンは安全運転やレースなどの目的により異なるが、トレーニングシステム201では、トレーニングや、試験、ゲーム要素などの達成目標の設定と、その設定に伴うプロフェッショナルの運転、教師の手本、および、様々なAI技術により導出されるものとする。そして、トレーニングシステム201では、設定した目標の達成、および、トレーニングを積んだ行動の全てがメタバース空間上のメタデータとして記録されるので、公正な評価や採点などが可能になる。また、採点においては、カメラモニタリングによる巻き込み確認などの動作検出や、シフト、方向指示器、ハザードランプ、フォグランプ、デフロスター操作の検出など運転操作に関わる動作も含む事ができる。 For example, the ideal driving pattern differs depending on the purpose, such as safe driving or racing; in the training system 201, it is derived from the achievement goals set for training, tests, game elements, and the like, together with the accompanying professional driving, teacher examples, and various AI technologies. In the training system 201, the achievement of the set goals and all trained actions are recorded as metadata in the metaverse space, making fair evaluation and scoring possible. Scoring can also cover actions related to driving operations, such as camera-based detection of visual safety checks when turning, and detection of shift, turn signal, hazard lamp, fog lamp, and defroster operations.
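The metadata recording and scoring described above can be sketched as logging each detected action and deducting points for required checks that were not performed. The action names and point values are hypothetical.

```python
# Hypothetical sketch of metadata recording and scoring: every detected
# action is logged with a timestamp, and points are deducted for each
# required safety check that was never performed in the session.

def score_session(events, required=("turn_signal", "mirror_check")):
    log = [{"t": t, "action": a} for t, a in events]   # metadata record
    performed = {a for _t, a in events}
    missed = sum(1 for r in required if r not in performed)
    score = 100 - 20 * missed
    return log, score

log, score = score_session([(0.0, "turn_signal"), (1.2, "brake")])
```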
 このように、トレーニングシステム201を利用することにより、ユーザ自身の車両13での実体験をベースとした学習およびトレーニングによって、ユーザは、車両13の機能および性能を把握することや、それに伴う車両感覚や操作スキル向上などによる運転スキルを向上することなどが可能となる。例えば、トレーニングシステム201は、ユーザの運転スキルの向上に伴う事故の削減はもとより、公正な評価および採点が可能になることによるライセンス付与や更新などに貢献することができる。 In this way, by using the training system 201, the user can understand the functions and performance of the vehicle 13 through learning and training based on the user's own actual experience with the vehicle 13, and can improve their driving skills by improving their vehicle sense and operation skills. For example, the training system 201 can not only reduce accidents by improving the user's driving skills, but also contribute to the granting and renewal of licenses by enabling fair evaluation and scoring.
 次に、図29を参照して、トレーニングシステム201を利用した車両13の個人最適化について説明する。 Next, referring to FIG. 29, we will explain personal optimization of the vehicle 13 using the training system 201.
 例えば、車両13のEV化によって各OEM(Original Equipment Manufacturing(Manufacturer))のシャーシプラットフォームが1つ、または、2つに集約されていく中で、今後の車両13の乗り心地や操作感の設定の一部はソフトウェアによる制御が行われるようになると考えられる。ここでは、そのトレンドを鑑みた上での車両13の個人最適化に関する方法について説明する。 For example, as the trend towards electric vehicles for vehicles 13 leads to the consolidation of chassis platforms from each OEM (Original Equipment Manufacturing (Manufacturer)) into one or two, it is expected that some of the settings for the ride comfort and handling of vehicles 13 in the future will be controlled by software. Here, we will explain a method for personal optimization of vehicles 13 in light of this trend.
 例えば、トレーニングにより、運転者は、運転スキルを自分の車両13の性能に合わせて向上させて行くことになる。このとき、例えば、ステアリングや、ブレーキ、アクセル、サスペンションなどの反応や、重さ、固さなどが各個人の能力の差によってトレーニング自体の妨げになる場合や、そもそも無理があるような場合があると想定される。その場合、トレーニングの反応を見ながらデータと照らし合わせ、教師と対話し、一部ソフトウェア制御においてできる部分はソフトウェアで調整する事が可能である。 For example, through training, the driver improves his or her driving skills to match the performance of his or her own vehicle 13. At this point, it is expected that the response, weight, or stiffness of the steering, brakes, accelerator, suspension, and so on may hinder the training itself due to differences in individual ability, or may simply be unmanageable in the first place. In such cases, it is possible to observe the training response, compare it against the data, consult with the teacher, and adjust by software those parts that can be handled under software control.
 一方、ソフトウェアで調整しきれない部分は、ディーラーに調整項目として申し送りして、調整する事が可能である。 On the other hand, any parts that cannot be adjusted using software can be passed on to the dealer as adjustment items and can be adjusted.
 ステップS291において、クライアントガレージシステム213は、ユーザが個人最適化の機能に対応の車両13にて、個人IDに紐づくパラメータを読み込み、ユーザの各運転操作を取得する。 In step S291, the client garage system 213 reads parameters linked to the personal ID of the user in the vehicle 13 that supports the personal optimization function, and acquires each driving operation of the user.
 ステップS292において、クライアントガレージシステム213は、クラウド上で、ユーザが搭乗する車両13の性能とマッチング処理をした上で、パラメータを個人最適化の機能に対応の車両13にダウンロードする。 In step S292, the client garage system 213 performs a matching process on the cloud with the performance of the vehicle 13 in which the user will be riding, and then downloads the parameters to a vehicle 13 that supports the personal optimization function.
 ステップS293において、クライアントガレージシステム213は、パラメータの調整を実行し、処理は終了される。 In step S293, the client garage system 213 performs parameter adjustment and the process ends.
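The flow of steps S291 to S293 can be sketched as follows. The function and parameter names, and the clamp-to-capability matching rule, are illustrative assumptions for the sake of the sketch, not part of the disclosure.

```python
# Hypothetical sketch of steps S291-S293: reading ID-linked parameters,
# matching them to the boarded vehicle's performance, and applying them.
# All names and the matching rule are illustrative assumptions.

def match_parameters(personal_params, vehicle_capabilities):
    """Clamp each personally optimized setting into the range that the
    boarded vehicle 13 actually supports (the 'matching process')."""
    matched = {}
    for name, value in personal_params.items():
        # If the vehicle does not report a range for this setting,
        # keep the personal value as-is.
        low, high = vehicle_capabilities.get(name, (value, value))
        matched[name] = min(max(value, low), high)
    return matched

def personal_optimization(cloud, vehicle, personal_id):
    # Step S291: read the parameters linked to the personal ID.
    params = cloud.load_parameters(personal_id)
    # Step S292: match against the vehicle's performance, then download.
    matched = match_parameters(params, vehicle.capabilities)
    vehicle.download(matched)
    # Step S293: the vehicle applies the adjustment.
    vehicle.apply()
```

In this reading, the "matching process" simply prevents a setting tuned on one vehicle 13 from exceeding what another vehicle 13 can physically realize.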
 このように個人最適化されたパラメータは、以下のように活用することができる。 These personalized parameters can be used in the following ways:
 例えば、トレーニングにて、運転者は、自分の運転スキル向上と、車側の調整で、人と車のお互いの歩み寄りによる最適化が図れると、運転者にとって自分の車両13が唯一無二の存在になる。その一方で、他の車両13に乗り難くなるという課題が存在する。 For example, if optimization is achieved in training through a mutual compromise between driver and vehicle, with the driver improving his or her driving skills and the vehicle being adjusted on its side, the driver's own vehicle 13 becomes a one-of-a-kind presence for the driver. On the other hand, this raises the issue that it becomes difficult for the driver to drive other vehicles 13.
 そこで、自分のIDに紐づく車側の調整のパラメータを、クラウド上に置き、自分のパートナーの車両13を運転する際や、シェアカーやレンタカーなどの車両13を運転する際に、搭乗する車両13が個人最適化の機能に対応している事で、そのパラメータをクラウドから読み込み、搭乗する車両13の性能とマッチング処理を行う事によって、自動で、自分の車両13の設定に近づける事ができる。 Therefore, the vehicle-side adjustment parameters linked to one's ID are stored on the cloud. When driving a partner's vehicle 13, or a shared or rental vehicle 13, if the boarded vehicle 13 supports the personal optimization function, the parameters can be read from the cloud and matched against the performance of the boarded vehicle 13, automatically bringing its settings close to those of one's own vehicle 13.
 また、車両13を買い替える場合、車両13を購入した時点でゼロからトレーニングをやり直すのも良いが、買い替える車両13が個人最適化の機能に対応している事で、自分のIDに紐づく車側の調整のパラメータをクラウド上から読み込み、買い替える車両13の性能とのマッチング処理を行う事によって、自分の車両13の設定に近づける事ができる。例えば、もし買い替える車両13が個人最適化の機能に対応している事が事前に把握できる場合、現状の車両13にて、買い替え後に個人最適化された設定をバーチャルに再現し、買い替える車両13に関する学習やトレーニングを事前にしておく事が可能となる。 In addition, when replacing the vehicle 13, one could start training over from scratch at the time of purchase; however, if the replacement vehicle 13 supports the personal optimization function, the vehicle-side adjustment parameters linked to one's ID can be read from the cloud and matched against the performance of the replacement vehicle 13, bringing it close to the settings of one's own vehicle 13. For example, if it is known in advance that the replacement vehicle 13 supports the personal optimization function, the personally optimized settings after replacement can be virtually reproduced in the current vehicle 13, making it possible to learn about and train for the replacement vehicle 13 in advance.
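One hedged way to read the matching process described above, that is, carrying settings tuned on one's own vehicle 13 over to a partner's, shared, rental, or replacement vehicle 13, is as a range-to-range mapping that preserves the setting's relative position within each vehicle's adjustable range. The function below is an illustrative assumption, not the disclosed implementation.

```python
def transfer_setting(value, src_range, dst_range):
    """Map a setting optimized on one's own vehicle 13 into the
    corresponding adjustable range of another vehicle 13, preserving
    its relative position within the range (illustrative assumption)."""
    s_lo, s_hi = src_range
    d_lo, d_hi = dst_range
    # Normalize within the source vehicle's range, guarding against a
    # degenerate (fixed) source range.
    ratio = (value - s_lo) / (s_hi - s_lo) if s_hi != s_lo else 0.0
    return d_lo + ratio * (d_hi - d_lo)
```

For instance, a steering weight set at the midpoint of one vehicle's range would land at the midpoint of the other vehicle's range, which is one plausible meaning of "bringing the settings close" across vehicles of differing performance.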
 さらに、買い替える車両13の運転とその学習やトレーニングが事前に個人最適化された状態でバーチャルに体験できる事によって、より大勢の人に最適な体験を提供でき、物理的試運転を一部削減できるので、環境への配慮や事故削減と共に、車の効果的なプロモーションができる可能性がある。 Furthermore, since driving the replacement vehicle 13, together with the associated learning and training, can be experienced virtually in an individually pre-optimized state, an optimal experience can be provided to more people and some physical test drives can be eliminated, which may enable effective promotion of the car while also reducing environmental impact and accidents.
 以上のように、図1の運転シミュレーションシステム11を利用したシステムの例として、トレーニングシステムについて説明した。なお、運転シミュレーションシステム11は、疑似的なドライブ体験の提供システムとして利用されてもよい。また、運転シミュレーションシステム11は、車両13の新しい安全機能や、ユーザが所有する車両13に未搭載の安全機能などに関する体験を提供するシステムとして利用されてもよい。 As described above, a training system has been described as an example of a system that uses the driving simulation system 11 of FIG. 1. The driving simulation system 11 may be used as a system that provides a simulated driving experience. The driving simulation system 11 may also be used as a system that provides an experience related to new safety features of the vehicle 13, or safety features not yet installed in the vehicle 13 owned by the user.
 <コンピュータの構成例>
 次に、上述した一連の処理(情報処理方法)は、ハードウェアにより行うこともできるし、ソフトウェアにより行うこともできる。一連の処理をソフトウェアによって行う場合には、そのソフトウェアを構成するプログラムが、汎用のコンピュータ等にインストールされる。
<Example of computer configuration>
Next, the above-mentioned series of processes (information processing method) can be performed by hardware or software. When the series of processes is performed by software, a program constituting the software is installed in a general-purpose computer or the like.
 図30は、上述した一連の処理を実行するプログラムがインストールされるコンピュータの一実施の形態の構成例を示すブロック図である。 FIG. 30 is a block diagram showing an example of the configuration of one embodiment of a computer on which a program that executes the series of processes described above is installed.
 プログラムは、コンピュータに内蔵されている記録媒体としてのハードディスク305やROM303に予め記録しておくことができる。 The program can be pre-recorded on the hard disk 305 or ROM 303 as a recording medium built into the computer.
 あるいはまた、プログラムは、ドライブ309によって駆動されるリムーバブル記録媒体311に格納(記録)しておくことができる。このようなリムーバブル記録媒体311は、いわゆるパッケージソフトウェアとして提供することができる。ここで、リムーバブル記録媒体311としては、例えば、フレキシブルディスク、CD-ROM(Compact Disc Read Only Memory),MO(Magneto Optical)ディスク,DVD(Digital Versatile Disc)、磁気ディスク、半導体メモリ等がある。 Alternatively, the program can be stored (recorded) on a removable recording medium 311 driven by the drive 309. Such a removable recording medium 311 can be provided as so-called packaged software. Here, examples of the removable recording medium 311 include a flexible disk, a CD-ROM (Compact Disc Read Only Memory), an MO (Magneto Optical) disk, a DVD (Digital Versatile Disc), a magnetic disk, a semiconductor memory, etc.
 なお、プログラムは、上述したようなリムーバブル記録媒体311からコンピュータにインストールする他、通信網や放送網を介して、コンピュータにダウンロードし、内蔵するハードディスク305にインストールすることができる。すなわち、プログラムは、例えば、ダウンロードサイトから、ディジタル衛星放送用の人工衛星を介して、コンピュータに無線で転送したり、LAN(Local Area Network)、インターネットといったネットワークを介して、コンピュータに有線で転送することができる。 In addition to being installed on a computer from the removable recording medium 311 as described above, the program can also be downloaded to the computer via a communication network or broadcasting network and installed on the built-in hard disk 305. That is, the program can be transferred to the computer wirelessly from a download site via an artificial satellite for digital satellite broadcasting, or transferred to the computer via a wired connection via a network such as a LAN (Local Area Network) or the Internet.
 コンピュータは、CPU(Central Processing Unit)302を内蔵しており、CPU302には、バス301を介して、入出力インタフェース310が接続されている。 The computer has a built-in CPU (Central Processing Unit) 302, to which an input/output interface 310 is connected via a bus 301.
 CPU302は、入出力インタフェース310を介して、ユーザによって、入力部307が操作等されることにより指令が入力されると、それに従って、ROM(Read Only Memory)303に格納されているプログラムを実行する。あるいは、CPU302は、ハードディスク305に格納されたプログラムを、RAM(Random Access Memory)304にロードして実行する。 When a command is input by the user via the input/output interface 310 by operating the input unit 307, the CPU 302 executes a program stored in the ROM (Read Only Memory) 303 accordingly. Alternatively, the CPU 302 loads a program stored on the hard disk 305 into the RAM (Random Access Memory) 304 and executes it.
 これにより、CPU302は、上述したフローチャートにしたがった処理、あるいは上述したブロック図の構成により行われる処理を行う。そして、CPU302は、その処理結果を、必要に応じて、例えば、入出力インタフェース310を介して、出力部306から出力、あるいは、通信部308から送信、さらには、ハードディスク305に記録等させる。 As a result, the CPU 302 performs processing according to the above-mentioned flowchart, or processing performed by the configuration of the above-mentioned block diagram. Then, the CPU 302 outputs the processing results from the output unit 306 via the input/output interface 310, or transmits them from the communication unit 308, or even records them on the hard disk 305, as necessary.
 なお、入力部307は、キーボードや、マウス、マイク等で構成される。また、出力部306は、LCD(Liquid Crystal Display)やスピーカ等で構成される。 The input unit 307 is composed of a keyboard, mouse, microphone, etc. The output unit 306 is composed of an LCD (Liquid Crystal Display), speaker, etc.
 ここで、本明細書において、コンピュータがプログラムに従って行う処理は、必ずしもフローチャートとして記載された順序に沿って時系列に行われる必要はない。すなわち、コンピュータがプログラムに従って行う処理は、並列的あるいは個別に実行される処理(例えば、並列処理あるいはオブジェクトによる処理)も含む。 In this specification, the processing performed by a computer according to a program does not necessarily have to be performed in chronological order according to the order described in the flowchart. In other words, the processing performed by a computer according to a program also includes processing that is executed in parallel or individually (for example, parallel processing or processing by objects).
 また、プログラムは、1のコンピュータ(プロセッサ)により処理されるものであっても良いし、複数のコンピュータによって分散処理されるものであっても良い。さらに、プログラムは、遠方のコンピュータに転送されて実行されるものであっても良い。 The program may be processed by one computer (processor), or may be distributed among multiple computers. Furthermore, the program may be transferred to a remote computer for execution.
 さらに、本明細書において、システムとは、複数の構成要素(装置、モジュール(部品)等)の集合を意味し、すべての構成要素が同一筐体中にあるか否かは問わない。したがって、別個の筐体に収納され、ネットワークを介して接続されている複数の装置、及び、1つの筐体の中に複数のモジュールが収納されている1つの装置は、いずれも、システムである。 Furthermore, in this specification, a system refers to a collection of multiple components (devices, modules (parts), etc.), regardless of whether all the components are in the same housing. Therefore, multiple devices housed in separate housings and connected via a network, and a single device in which multiple modules are housed in a single housing, are both systems.
 また、例えば、1つの装置(または処理部)として説明した構成を分割し、複数の装置(または処理部)として構成するようにしてもよい。逆に、以上において複数の装置(または処理部)として説明した構成をまとめて1つの装置(または処理部)として構成されるようにしてもよい。また、各装置(または各処理部)の構成に上述した以外の構成を付加するようにしてももちろんよい。さらに、システム全体としての構成や動作が実質的に同じであれば、ある装置(または処理部)の構成の一部を他の装置(または他の処理部)の構成に含めるようにしてもよい。 Also, for example, the configuration described above as one device (or processing unit) may be divided and configured as multiple devices (or processing units). Conversely, the configurations described above as multiple devices (or processing units) may be combined and configured as one device (or processing unit). Also, it is of course possible to add configuration other than that described above to the configuration of each device (or each processing unit). Furthermore, as long as the configuration and operation of the system as a whole are substantially the same, part of the configuration of one device (or processing unit) may be included in the configuration of another device (or other processing unit).
 また、例えば、本技術は、1つの機能を、ネットワークを介して複数の装置で分担、共同して処理するクラウドコンピューティングの構成をとることができる。 Also, for example, this technology can be configured as cloud computing, in which a single function is shared and processed collaboratively by multiple devices via a network.
 また、例えば、上述したプログラムは、任意の装置において実行することができる。その場合、その装置が、必要な機能(機能ブロック等)を有し、必要な情報を得ることができるようにすればよい。 Furthermore, for example, the above-mentioned program can be executed in any device. In that case, it is sufficient that the device has the necessary functions (functional blocks, etc.) and is capable of obtaining the necessary information.
 また、例えば、上述のフローチャートで説明した各ステップは、1つの装置で実行する他、複数の装置で分担して実行することができる。さらに、1つのステップに複数の処理が含まれる場合には、その1つのステップに含まれる複数の処理は、1つの装置で実行する他、複数の装置で分担して実行することができる。換言するに、1つのステップに含まれる複数の処理を、複数のステップの処理として実行することもできる。逆に、複数のステップとして説明した処理を1つのステップとしてまとめて実行することもできる。 Furthermore, for example, each step described in the above flowchart can be executed by one device, or can be shared and executed by multiple devices. Furthermore, if one step includes multiple processes, the multiple processes included in that one step can be executed by one device, or can be shared and executed by multiple devices. In other words, multiple processes included in one step can be executed as multiple step processes. Conversely, processes described as multiple steps can be executed collectively as one step.
 なお、コンピュータが実行するプログラムは、プログラムを記述するステップの処理が、本明細書で説明する順序に沿って時系列に実行されるようにしても良いし、並列に、あるいは呼び出しが行われたとき等の必要なタイミングで個別に実行されるようにしても良い。つまり、矛盾が生じない限り、各ステップの処理が上述した順序と異なる順序で実行されるようにしてもよい。さらに、このプログラムを記述するステップの処理が、他のプログラムの処理と並列に実行されるようにしても良いし、他のプログラムの処理と組み合わせて実行されるようにしても良い。 In addition, the processing of the steps that describe a program executed by a computer may be executed chronologically in the order described in this specification, or may be executed in parallel, or individually at the required timing, such as when a call is made. In other words, as long as no contradictions arise, the processing of each step may be executed in an order different from the order described above. Furthermore, the processing of the steps that describe this program may be executed in parallel with the processing of other programs, or may be executed in combination with the processing of other programs.
 なお、本明細書において複数説明した本技術は、矛盾が生じない限り、それぞれ独立に単体で実施することができる。もちろん、任意の複数の本技術を併用して実施することもできる。例えば、いずれかの実施の形態において説明した本技術の一部または全部を、他の実施の形態において説明した本技術の一部または全部と組み合わせて実施することもできる。また、上述した任意の本技術の一部または全部を、上述していない他の技術と併用して実施することもできる。 Note that the multiple present technologies described in this specification can be implemented independently and individually, provided no contradictions arise. Of course, any multiple present technologies can also be implemented in combination. For example, some or all of the present technologies described in any embodiment can be implemented in combination with some or all of the present technologies described in other embodiments. Also, some or all of any of the present technologies described above can be implemented in combination with other technologies not described above.
 <構成の組み合わせ例>
 なお、本技術は以下のような構成も取ることができる。
(1)
 他の情報処理装置と通信を行う通信部と、
 車両を操作する複数種類の操作系に対する運転操作の操作量を示す運転操作データを取得する操作データ取得部と、
 前記車両を駆動させる複数種類の駆動系に対して制御信号を供給して、前記車両の動作を制御する車両動作制御部と、
 前記車両を使用して走行する走行モードのとき、前記運転操作データに従って前記車両の走行を制御する制御データを前記車両動作制御部に供給する走行モードコントロール部と、
 前記車両を使用して運転シミュレーションを行うシミュレーションモードのとき、前記他の情報処理装置から前記通信部を介して取得される車両設定パラメータに基づいて、前記運転操作データに従って求められる仮想空間内での前記車両の挙動を再現するように前記車両の挙動を制御する制御データを前記車両動作制御部に供給するシミュレーションモードコントロール部と、
 前記走行モードから前記シミュレーションモードへの切り替えを指示する操作が行われたときに、前記車両の停車状態に従って、前記シミュレーションモードに移行するか否かを判断する停車状態判断部と
 を備える情報処理装置。
(2)
 複数種類の前記操作系のうちの少なくとも1つと、その1つに対応する複数種類の前記駆動系のうちの少なくとも1つとが、バイワイヤ方式で繋がっており、
 前記シミュレーションモードのとき、その1つの前記操作系に対する運転操作が行われても、その1つの前記操作系にバイワイヤ方式で繋がっている前記駆動系は動作することがない構成となっている
 上記(1)に記載の情報処理装置。
(3)
 前記停車状態判断部は、前記車両が停車しているときに、前記走行モードから前記シミュレーションモードへの切り替えを指示する操作が行われると、前記シミュレーションモードに移行すると判断する
 上記(1)または(2)に記載の情報処理装置。
(4)
 前記車両に設けられているセンサおよびカメラから供給されるセンサデータおよび映像データに基づいて前記車両の乗員の位置および姿勢を認識し、前記乗員の視点位置を少なくとも含む乗員位置姿勢データを取得する車両内部環境認識部
 をさらに備え、
 前記通信部は、前記乗員位置姿勢データを前記他の情報処理装置へ送信する
 上記(1)から(3)までのいずれかに記載の情報処理装置。
(5)
 前記車両は、前記車両の周囲を囲うように映像を表示するガレージディスプレイを備えたガレージ装置に格納された状態で運転シミュレーションを実行することができ、
 前記ガレージディスプレイには、前記運転操作データに従って、前記仮想空間内で前記車両を仮想的に走行させる運転シミュレーションが行われ、前記車両の所定位置に対応するように前記仮想空間内に配置された仮想カメラで前記車両を中心とした全方位を対象として前記仮想空間を撮影して生成されたシミュレーション映像であって、前記車両の位置および姿勢を示す車両位置姿勢データと前記乗員位置姿勢データとに基づいて、前記乗員の視点に応じて幾何変換が施されたシミュレーション映像が表示される
 上記(4)に記載の情報処理装置。
(6)
 前記車両は、前記車両の側面に設置されているサイドミラーカメラで前記車両の後方に向かって撮影したサイドミラー映像を表示するサイドミラーディスプレイを備えており、
 前記シミュレーションモードのとき、前記仮想空間内における前記車両の側面に配置された仮想カメラで前記車両の後方に向かって前記仮想空間を撮影して生成された仮想的なサイドミラー映像が前記サイドミラーディスプレイに表示される
 上記(5)に記載の情報処理装置。
(7)
 前記車両は、前記車両の背面に設置されているバックミラーカメラで前記車両の後方に向かって撮影したバックミラー映像を表示するバックミラーディスプレイを備えており、
 前記シミュレーションモードのとき、前記仮想空間内における前記車両の背面に配置された仮想カメラで前記車両の後方に向かって前記仮想空間を撮影して生成された仮想的なバックミラー映像が前記バックミラーディスプレイに表示される
 上記(5)または(6)に記載の情報処理装置。
(8)
 前記車両は、前記車両の後部座席に設けられる後部座席用ディスプレイを備えており、
 前記シミュレーションモードのとき、前記乗員位置姿勢データに従って、前記後部座席の乗員の視点に対応するように前記仮想空間内に配置された仮想カメラで、前記後部座席用ディスプレイ越しに見える前記仮想空間を撮影して生成された後部座席視点映像が前記後部座席用ディスプレイに表示される
 上記(5)から(7)までのいずれかに記載の情報処理装置。
(9)
 情報処理装置が、
 他の情報処理装置と通信を行うことと、
 車両を操作する複数種類の操作系に対する運転操作の操作量を示す運転操作データを取得することと、
 前記車両を駆動させる複数種類の駆動系に対して制御信号を供給して、前記車両の動作を制御することと、
 前記車両を使用して走行する走行モードのとき、前記運転操作データに従って前記車両の走行を制御する制御データで前記車両の動作を制御させることと、
 前記車両を使用して運転シミュレーションを行うシミュレーションモードのとき、前記他の情報処理装置から通信により取得される車両設定パラメータに基づいて、前記運転操作データに従って求められる前記仮想空間内での前記車両の挙動を再現するように前記車両の挙動を制御する制御データで前記車両の動作を制御させることと、
 前記走行モードから前記シミュレーションモードへの切り替えを指示する操作が行われたときに、前記車両の停車状態に従って、前記シミュレーションモードに移行するか否かを判断することと
 を含む情報処理方法。
(10)
 情報処理装置のコンピュータに、
 他の情報処理装置と通信を行うことと、
 車両を操作する複数種類の操作系に対する運転操作の操作量を示す運転操作データを取得することと、
 前記車両を駆動させる複数種類の駆動系に対して制御信号を供給して、前記車両の動作を制御することと、
 前記車両を使用して走行する走行モードのとき、前記運転操作データに従って前記車両の走行を制御する制御データで前記車両の動作を制御させることと、
 前記車両を使用して運転シミュレーションを行うシミュレーションモードのとき、前記他の情報処理装置から通信により取得される車両設定パラメータに基づいて、前記運転操作データに従って求められる前記仮想空間内での前記車両の挙動を再現するように前記車両の挙動を制御する制御データで前記車両の動作を制御させることと、
 前記走行モードから前記シミュレーションモードへの切り替えを指示する操作が行われたときに、前記車両の停車状態に従って、前記シミュレーションモードに移行するか否かを判断することと
 を含む情報処理を実行させるためのプログラム。
(11)
 車両に搭載された他の情報処理装置と通信を行う通信部と、
 前記通信部を介して前記車両から取得される運転操作データに従って、仮想空間内で前記車両を仮想的に走行させる運転シミュレーションを行い、前記車両の所定位置に対応するように前記仮想空間内に配置された仮想カメラで、前記車両を中心とした全方位を対象として前記仮想空間を撮影してシミュレーション映像を生成する仮想空間生成部と、
 前記シミュレーション映像に対して、前記車両の位置および姿勢を示す車両位置姿勢データと前記車両の乗員の視点位置を少なくとも含む乗員位置姿勢データとに基づいた映像変換処理を施す映像変換処理部と
 を備える情報処理装置。
(12)
 前記仮想空間生成部は、前記仮想空間内での前記車両の挙動を求めて、その挙動を再現するように前記車両の挙動を制御するための車両設定パラメータを生成し、前記通信部を介して前記車両に前記車両設定パラメータを送信させる
 上記(11)に記載の情報処理装置。
(13)
 前記車両は、前記車両の周囲を囲うように映像を表示するガレージディスプレイを備えたガレージ装置に格納された状態で運転シミュレーションを実行することができ、
 前記映像変換処理部から出力されるシミュレーション映像が、前記ガレージディスプレイに表示される
 上記(11)または(12)に記載の情報処理装置。
(14)
 前記映像変換処理部は、
  前記車両位置姿勢データに基づいて、前記ガレージディスプレイに対して相対的に前記車両の位置に前記車両が内接する直方体空間を設定し、前記乗員位置姿勢データに基づいて、前記直方体空間に対して相対的に前記乗員の視点位置を設定することで、前記ガレージディスプレイに対して相対的な前記乗員の視点位置を特定する視点検出部と、
  前記視点検出部によって特定された前記乗員の視点位置を基準として、前記仮想空間生成部において生成されたシミュレーション映像を、前記ガレージディスプレイを投影面として投影するような幾何変換を施す幾何変換部と
 を有する
 上記(13)に記載の情報処理装置。
(15)
 前記仮想空間生成部は、前記仮想空間内における前記車両の側面に配置された仮想カメラで前記車両の後方に向かって前記仮想空間を撮影して生成された仮想的なサイドミラー映像を生成し、前記通信部を介して前記車両に送信させて、前記車両のサイドミラーディスプレイに表示させる
 上記(14)に記載の情報処理装置。
(16)
 前記仮想空間生成部は、前記仮想空間内における前記車両の背面に配置された仮想カメラで前記車両の後方に向かって前記仮想空間を撮影して生成された仮想的なバックミラー映像を生成し、前記通信部を介して前記車両に送信させて、前記車両のバックミラーディスプレイに表示させる
 上記(14)または(15)に記載の情報処理装置。
(17)
 前記仮想空間生成部は、前記乗員位置姿勢データに従って、前記車両の後部座席の乗員の視点に対応するように前記仮想空間内に配置された仮想カメラで、前記車両の後部座席に設けられる後部座席用ディスプレイ越しに見える前記仮想空間を撮影して生成された後部座席視点映像を生成し、前記通信部を介して前記車両に送信させて、前記後部座席用ディスプレイに表示させる
 上記(14)から(16)までのいずれかに記載の情報処理装置。
(18)
 情報処理装置が、
 車両に搭載された他の情報処理装置と通信を行うことと、
 前記通信により前記車両から取得される運転操作データに従って、仮想空間内で前記車両を仮想的に走行させる運転シミュレーションを行い、前記車両の所定位置に対応するように前記仮想空間内に配置された仮想カメラで、前記車両を中心とした全方位を対象として前記仮想空間を撮影してシミュレーション映像を生成することと、
 前記シミュレーション映像に対して、前記車両の位置および姿勢を示す車両位置姿勢データと前記車両の乗員の視点位置を少なくとも含む乗員位置姿勢データとに基づいた映像変換処理を施すことと
 を含む情報処理方法。
(19)
 情報処理装置のコンピュータに、
 車両に搭載された他の情報処理装置と通信を行うことと、
 前記通信により前記車両から取得される運転操作データに従って、仮想空間内で前記車両を仮想的に走行させる運転シミュレーションを行い、前記車両の所定位置に対応するように前記仮想空間内に配置された仮想カメラで、前記車両を中心とした全方位を対象として前記仮想空間を撮影してシミュレーション映像を生成することと、
 前記シミュレーション映像に対して、前記車両の位置および姿勢を示す車両位置姿勢データと前記車両の乗員の視点位置を少なくとも含む乗員位置姿勢データとに基づいた映像変換処理を施すことと
 を含む情報処理を実行させるためのプログラム。
<Examples of configuration combinations>
The present technology can also be configured as follows.
(1)
A communication unit that communicates with other information processing devices;
an operation data acquisition unit that acquires driving operation data indicating amounts of driving operations for a plurality of types of operation systems for operating a vehicle;
a vehicle operation control unit that supplies control signals to a plurality of types of drive systems that drive the vehicle to control the operation of the vehicle;
a driving mode control unit that supplies control data to the vehicle operation control unit to control the driving of the vehicle in accordance with the driving operation data when the vehicle is in a driving mode in which the vehicle is driven;
a simulation mode control unit that, in a simulation mode in which a driving simulation is performed using the vehicle, supplies to the vehicle operation control unit control data for controlling a behavior of the vehicle so as to reproduce a behavior of the vehicle in a virtual space determined according to the driving operation data, based on vehicle setting parameters acquired from the other information processing device via the communication unit;
and a stopped state determination unit that determines whether to transition to the simulation mode according to a stopped state of the vehicle when an operation is performed to instruct switching from the driving mode to the simulation mode.
(2)
At least one of the plurality of types of operation systems and at least one of the plurality of types of drive systems corresponding to the operation systems are connected by a by-wire system,
The information processing device described in (1) above is configured so that, in the simulation mode, even if a driving operation is performed on one of the operating systems, the drive system connected to that one of the operating systems by a by-wire system will not operate.
(3)
The information processing device described in (1) or (2) above, wherein the stopped state determination unit determines that the vehicle will transition to the simulation mode when an operation is performed to instruct switching from the driving mode to the simulation mode while the vehicle is stopped.
(4)
a vehicle interior environment recognition unit that recognizes the positions and postures of occupants of the vehicle based on sensor data and image data supplied from sensors and cameras provided in the vehicle, and acquires occupant position and posture data including at least the viewpoint positions of the occupants,
The information processing device according to any one of (1) to (3) above, wherein the communication unit transmits the occupant position and attitude data to the other information processing device.
(5)
The vehicle can perform a driving simulation while being stored in a garage device having a garage display that displays an image surrounding the vehicle,
The garage display performs a driving simulation in which the vehicle is virtually driven within the virtual space in accordance with the driving operation data, and displays a simulation image generated by photographing the virtual space in all directions centered on the vehicle with a virtual camera positioned within the virtual space to correspond to a predetermined position of the vehicle, the simulation image being geometrically transformed according to the occupant's viewpoint based on vehicle position and attitude data indicating the position and attitude of the vehicle and the occupant position and attitude data.The information processing device described in (4) above.
(6)
The vehicle is equipped with a side mirror display that displays a side mirror image captured toward the rear of the vehicle by a side mirror camera installed on a side of the vehicle,
The information processing device described in (5) above, in which, in the simulation mode, a virtual side mirror image generated by photographing the virtual space toward the rear of the vehicle with a virtual camera arranged on the side of the vehicle in the virtual space is displayed on the side mirror display.
(7)
The vehicle is equipped with a rearview mirror display that displays a rearview mirror image captured toward the rear of the vehicle by a rearview mirror camera installed on a rear surface of the vehicle,
The information processing device described in (5) or (6) above, in which, in the simulation mode, a virtual rearview mirror image generated by photographing the virtual space toward the rear of the vehicle with a virtual camera arranged on the rear of the vehicle in the virtual space is displayed on the rearview mirror display.
(8)
The vehicle includes a rear seat display provided in a rear seat of the vehicle,
In the simulation mode, a rear seat viewpoint image generated by capturing an image of the virtual space as seen through the rear seat display using a virtual camera positioned in the virtual space to correspond to the viewpoint of the rear seat occupant in accordance with the occupant position and attitude data is displayed on the rear seat display. An information processing device as described in any of (5) to (7) above.
(9)
An information processing device,
Communicating with other information processing devices;
Acquiring driving operation data indicating amounts of driving operations for a plurality of types of operation systems for operating a vehicle;
supplying control signals to a plurality of types of drive systems that drive the vehicle to control operation of the vehicle;
When the vehicle is in a driving mode, the vehicle is controlled by control data for controlling the driving of the vehicle in accordance with the driving operation data.
In a simulation mode in which a driving simulation is performed using the vehicle, control of the operation of the vehicle is performed using control data that controls the behavior of the vehicle so as to reproduce the behavior of the vehicle in the virtual space determined according to the driving operation data, based on vehicle setting parameters acquired by communication from the other information processing device.
and when an operation is performed to instruct switching from the driving mode to the simulation mode, determining whether to transition to the simulation mode according to a stopped state of the vehicle.
(10)
The computer of the information processing device
Communicating with other information processing devices;
Acquiring driving operation data indicating amounts of driving operations for a plurality of types of operation systems for operating a vehicle;
supplying control signals to a plurality of types of drive systems that drive the vehicle to control operation of the vehicle;
When the vehicle is in a driving mode, the vehicle is controlled by control data for controlling the driving of the vehicle in accordance with the driving operation data.
In a simulation mode in which a driving simulation is performed using the vehicle, control of the operation of the vehicle is performed using control data that controls the behavior of the vehicle so as to reproduce the behavior of the vehicle in the virtual space determined according to the driving operation data, based on vehicle setting parameters acquired by communication from the other information processing device.
and when an operation is performed to instruct switching from the driving mode to the simulation mode, determining whether to transition to the simulation mode according to a stopped state of the vehicle.
(11)
A communication unit that communicates with other information processing devices mounted in the vehicle;
a virtual space generation unit that performs a driving simulation in which the vehicle virtually travels in a virtual space in accordance with driving operation data acquired from the vehicle via the communication unit, and that generates a simulation image by capturing images of the virtual space in all directions centered on the vehicle using a virtual camera that is arranged in the virtual space so as to correspond to a predetermined position of the vehicle;
an image conversion processing unit that performs image conversion processing on the simulation image based on vehicle position and attitude data indicating a position and attitude of the vehicle and occupant position and attitude data including at least a viewpoint position of an occupant of the vehicle.
(12)
The information processing device described in (11) above, wherein the virtual space generation unit determines the behavior of the vehicle within the virtual space, generates vehicle setting parameters for controlling the behavior of the vehicle so as to reproduce that behavior, and transmits the vehicle setting parameters to the vehicle via the communication unit.
(13)
The vehicle can perform a driving simulation while being stored in a garage device having a garage display that displays an image surrounding the vehicle,
The information processing device according to (11) or (12) above, wherein the simulation image output from the image conversion processing unit is displayed on the garage display.
(14)
The video conversion processing unit includes:
a viewpoint detection unit that identifies a viewpoint position of the occupant relative to the garage display by setting a rectangular parallelepiped space in which the vehicle is inscribed at a position of the vehicle relative to the garage display based on the vehicle position and orientation data, and setting a viewpoint position of the occupant relative to the rectangular parallelepiped space based on the occupant position and orientation data;
The information processing device according to claim 13, further comprising: a geometric transformation unit that performs a geometric transformation such that the simulation image generated in the virtual space generation unit is projected onto the garage display as a projection surface, based on the viewpoint position of the occupant identified by the viewpoint detection unit.
(15)
The information processing device described in (14) above, wherein the virtual space generation unit generates a virtual side mirror image by photographing the virtual space toward the rear of the vehicle with a virtual camera arranged on the side of the vehicle within the virtual space, transmits the virtual side mirror image to the vehicle via the communication unit, and displays the image on a side mirror display of the vehicle.
(16)
The information processing device described in (14) or (15) above, wherein the virtual space generation unit generates a virtual rearview mirror image by photographing the virtual space toward the rear of the vehicle with a virtual camera arranged on the back of the vehicle within the virtual space, transmits the image to the vehicle via the communication unit, and displays the image on a rearview mirror display of the vehicle.
(17)
The virtual space generation unit generates a rear seat viewpoint image by capturing the virtual space as seen through a rear seat display provided in the rear seat of the vehicle using a virtual camera positioned in the virtual space to correspond to the viewpoint of an occupant in the rear seat of the vehicle in accordance with the occupant position and attitude data, transmits the image to the vehicle via the communication unit, and displays it on the rear seat display. An information processing device described in any of (14) to (16) above.
(18)
An information processing device,
Communicating with other information processing devices mounted in the vehicle;
performing a driving simulation in which the vehicle virtually travels in a virtual space in accordance with driving operation data acquired from the vehicle through the communication; and capturing images of the virtual space in all directions from the vehicle center with a virtual camera disposed in the virtual space so as to correspond to a predetermined position of the vehicle, thereby generating a simulation image;
and performing an image conversion process on the simulation image based on vehicle position and attitude data indicating a position and attitude of the vehicle and occupant position and attitude data including at least a viewpoint position of an occupant of the vehicle.
(19)
The computer of the information processing device
Communicating with other information processing devices mounted in the vehicle;
performing a driving simulation in which the vehicle virtually travels in a virtual space in accordance with driving operation data acquired from the vehicle through the communication; and capturing images of the virtual space in all directions from the vehicle center with a virtual camera disposed in the virtual space so as to correspond to a predetermined position of the vehicle, thereby generating a simulation image;
and performing image conversion processing on the simulation image based on vehicle position and attitude data indicating the position and attitude of the vehicle and occupant position and attitude data including at least the viewpoint positions of occupants of the vehicle.
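As a rough illustration of the geometric transformation in configuration (14), projecting the simulation image onto the garage display as seen from the occupant's viewpoint, the sketch below computes where a virtual-space point should appear on a planar display given the occupant's eye position. It assumes a single planar display on the z = 0 plane; this simplification and all names are assumptions for illustration, not the disclosed implementation.

```python
def project_to_display(eye, point):
    """Off-axis projection sketch: find where the line from the
    occupant's eye through a virtual-space point crosses the display
    plane z = 0. Coordinates are (x, y, z) tuples with the garage
    display lying on the z = 0 plane. Returns the 2D display
    coordinate, or None if the line never crosses the plane."""
    ex, ey, ez = eye
    px, py, pz = point
    if pz == ez:
        return None  # line parallel to the display plane
    # Parameter t where z(t) = ez + t * (pz - ez) equals 0.
    t = ez / (ez - pz)
    return (ex + t * (px - ex), ey + t * (py - ey))
```

Recomputing this intersection as the occupant's tracked eye position moves is one way the displayed image can stay geometrically consistent with the occupant's viewpoint, as the viewpoint detection unit and geometric transformation unit are described as doing.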
 なお、本実施の形態は、上述した実施の形態に限定されるものではなく、本開示の要旨を逸脱しない範囲において種々の変更が可能である。また、本明細書に記載された効果はあくまで例示であって限定されるものではなく、他の効果があってもよい。 Note that this embodiment is not limited to the above-described embodiment, and various modifications are possible without departing from the gist of this disclosure. Furthermore, the effects described in this specification are merely examples and are not limiting, and other effects may also be present.
 11 運転シミュレーションシステム, 12 ガレージシステム, 13 車両, 14 ガレージ装置, 15 充電装置, 16 ガレージシステム制御装置, 17 ホームサーバ, 18 車両内制御装置, 19 動力源, 21 ハンドル, 22 アクセルペダル, 23 ブレーキペダル, 24 車軸方向操舵部, 25 スロットルモータ, 26 ブレーキ, 27 ハンドル回転センサ, 28 アクセル位置センサ, 29 ブレーキ位置センサ, 30 車軸方向駆動機構, 31 スロットル駆動機構, 32 ブレーキ駆動機構, 33 ハンドル反力駆動機構, 34 アクセル反力駆動機構, 35 ブレーキ反力駆動機構, 36 サスペンション駆動機構, 37 シート駆動機構, 41 ガレージディスプレイ, 42 ガレージカメラ, 43 ガレージセンサ, 51 ナビゲーションディスプレイ, 52 サイドミラーディスプレイ, 53 インストルメントパネルディスプレイ, 54 バックミラーディスプレイ, 55 後部座席用ディスプレイ, 56 車内カメラ, 57 車内センサ, 58 スピーカ 11 driving simulation system, 12 garage system, 13 vehicle, 14 garage device, 15 charging device, 16 garage system control device, 17 home server, 18 in-vehicle control device, 19 power source, 21 steering wheel, 22 accelerator pedal, 23 brake pedal, 24 axle steering section, 25 throttle motor, 26 brake, 27 steering wheel rotation sensor, 28 accelerator position sensor, 29 brake position sensor, 30 axle drive mechanism, 31 throttle drive mechanism, 32 Brake drive mechanism, 33 Steering wheel reaction drive mechanism, 34 Accelerator reaction drive mechanism, 35 Brake reaction drive mechanism, 36 Suspension drive mechanism, 37 Seat drive mechanism, 41 Garage display, 42 Garage camera, 43 Garage sensor, 51 Navigation display, 52 Side mirror display, 53 Instrument panel display, 54 Rearview mirror display, 55 Rear seat display, 56 In-car camera, 57 In-car sensor, 58 Speaker

Claims (19)

  1.  An information processing device comprising:
     a communication unit that communicates with another information processing device;
     an operation data acquisition unit that acquires driving operation data indicating operation amounts of driving operations performed on a plurality of types of operation systems for operating a vehicle;
     a vehicle operation control unit that supplies control signals to a plurality of types of drive systems that drive the vehicle, to control operation of the vehicle;
     a driving mode control unit that, in a driving mode in which the vehicle travels, supplies to the vehicle operation control unit control data for controlling travel of the vehicle in accordance with the driving operation data;
     a simulation mode control unit that, in a simulation mode in which a driving simulation is performed using the vehicle, supplies to the vehicle operation control unit control data for controlling behavior of the vehicle so as to reproduce behavior of the vehicle in a virtual space determined in accordance with the driving operation data, based on vehicle setting parameters acquired from the other information processing device via the communication unit; and
     a stopped state determination unit that, when an operation instructing switching from the driving mode to the simulation mode is performed, determines whether to transition to the simulation mode in accordance with a stopped state of the vehicle.
  2.  The information processing device according to claim 1, wherein
     at least one of the plurality of types of operation systems and at least one corresponding drive system among the plurality of types of drive systems are connected by a by-wire system, and
     in the simulation mode, even when a driving operation is performed on the one operation system, the drive system connected to the one operation system by the by-wire system does not operate.
  3.  The information processing device according to claim 1, wherein the stopped state determination unit determines to transition to the simulation mode when the operation instructing switching from the driving mode to the simulation mode is performed while the vehicle is stopped.
  4.  The information processing device according to claim 1, further comprising
     a vehicle interior environment recognition unit that recognizes positions and attitudes of occupants of the vehicle based on sensor data and image data supplied from a sensor and a camera provided in the vehicle, and acquires occupant position and attitude data including at least viewpoint positions of the occupants,
     wherein the communication unit transmits the occupant position and attitude data to the other information processing device.
  5.  The information processing device according to claim 4, wherein
     the vehicle is capable of executing a driving simulation while stored in a garage device including a garage display that displays images so as to surround the vehicle, and
     the garage display displays a simulation image that is generated, in a driving simulation in which the vehicle virtually travels in the virtual space in accordance with the driving operation data, by capturing the virtual space in all directions centered on the vehicle with a virtual camera arranged in the virtual space so as to correspond to a predetermined position of the vehicle, and that has been geometrically transformed according to the viewpoints of the occupants based on vehicle position and attitude data indicating a position and attitude of the vehicle and the occupant position and attitude data.
  6.  The information processing device according to claim 5, wherein
     the vehicle includes a side mirror display that displays a side mirror image captured toward the rear of the vehicle by a side mirror camera installed on a side surface of the vehicle, and
     in the simulation mode, a virtual side mirror image generated by capturing the virtual space toward the rear of the vehicle with a virtual camera arranged on the side surface of the vehicle in the virtual space is displayed on the side mirror display.
  7.  The information processing device according to claim 5, wherein
     the vehicle includes a rearview mirror display that displays a rearview mirror image captured toward the rear of the vehicle by a rearview mirror camera installed on a rear surface of the vehicle, and
     in the simulation mode, a virtual rearview mirror image generated by capturing the virtual space toward the rear of the vehicle with a virtual camera arranged on the rear surface of the vehicle in the virtual space is displayed on the rearview mirror display.
  8.  The information processing device according to claim 5, wherein
     the vehicle includes a rear seat display provided at a rear seat of the vehicle, and
     in the simulation mode, a rear seat viewpoint image, generated by capturing the virtual space as seen through the rear seat display with a virtual camera arranged in the virtual space so as to correspond to a viewpoint of an occupant in the rear seat in accordance with the occupant position and attitude data, is displayed on the rear seat display.
  9.  An information processing method comprising, by an information processing device:
     communicating with another information processing device;
     acquiring driving operation data indicating operation amounts of driving operations performed on a plurality of types of operation systems for operating a vehicle;
     supplying control signals to a plurality of types of drive systems that drive the vehicle, to control operation of the vehicle;
     in a driving mode in which the vehicle travels, controlling the operation of the vehicle with control data for controlling travel of the vehicle in accordance with the driving operation data;
     in a simulation mode in which a driving simulation is performed using the vehicle, controlling the operation of the vehicle with control data for controlling behavior of the vehicle so as to reproduce behavior of the vehicle in a virtual space determined in accordance with the driving operation data, based on vehicle setting parameters acquired by communication from the other information processing device; and
     when an operation instructing switching from the driving mode to the simulation mode is performed, determining whether to transition to the simulation mode in accordance with a stopped state of the vehicle.
  10.  A program for causing a computer of an information processing device to execute information processing comprising:
     communicating with another information processing device;
     acquiring driving operation data indicating operation amounts of driving operations performed on a plurality of types of operation systems for operating a vehicle;
     supplying control signals to a plurality of types of drive systems that drive the vehicle, to control operation of the vehicle;
     in a driving mode in which the vehicle travels, controlling the operation of the vehicle with control data for controlling travel of the vehicle in accordance with the driving operation data;
     in a simulation mode in which a driving simulation is performed using the vehicle, controlling the operation of the vehicle with control data for controlling behavior of the vehicle so as to reproduce behavior of the vehicle in a virtual space determined in accordance with the driving operation data, based on vehicle setting parameters acquired by communication from the other information processing device; and
     when an operation instructing switching from the driving mode to the simulation mode is performed, determining whether to transition to the simulation mode in accordance with a stopped state of the vehicle.
  11.  An information processing device comprising:
     a communication unit that communicates with another information processing device mounted in a vehicle;
     a virtual space generation unit that performs a driving simulation in which the vehicle virtually travels in a virtual space in accordance with driving operation data acquired from the vehicle via the communication unit, and generates a simulation image by capturing the virtual space in all directions centered on the vehicle with a virtual camera arranged in the virtual space so as to correspond to a predetermined position of the vehicle; and
     an image conversion processing unit that performs image conversion processing on the simulation image based on vehicle position and attitude data indicating a position and attitude of the vehicle and occupant position and attitude data including at least viewpoint positions of occupants of the vehicle.
  12.  The information processing device according to claim 11, wherein the virtual space generation unit determines behavior of the vehicle in the virtual space, generates vehicle setting parameters for controlling the behavior of the vehicle so as to reproduce that behavior, and causes the communication unit to transmit the vehicle setting parameters to the vehicle.
  13.  The information processing device according to claim 11, wherein
     the vehicle is capable of executing a driving simulation while stored in a garage device including a garage display that displays images so as to surround the vehicle, and
     the simulation image output from the image conversion processing unit is displayed on the garage display.
  14.  The information processing device according to claim 13, wherein the image conversion processing unit includes:
     a viewpoint detection unit that identifies a viewpoint position of an occupant relative to the garage display by setting, based on the vehicle position and attitude data, a rectangular parallelepiped space in which the vehicle is inscribed at the position of the vehicle relative to the garage display, and setting, based on the occupant position and attitude data, the viewpoint position of the occupant relative to the rectangular parallelepiped space; and
     a geometric transformation unit that applies, with the viewpoint position of the occupant identified by the viewpoint detection unit as a reference, a geometric transformation that projects the simulation image generated by the virtual space generation unit onto the garage display as a projection surface.
  15.  The information processing device according to claim 14, wherein the virtual space generation unit generates a virtual side mirror image by capturing the virtual space toward the rear of the vehicle with a virtual camera arranged on a side surface of the vehicle in the virtual space, causes the communication unit to transmit the virtual side mirror image to the vehicle, and causes the virtual side mirror image to be displayed on a side mirror display of the vehicle.
  16.  The information processing device according to claim 14, wherein the virtual space generation unit generates a virtual rearview mirror image by capturing the virtual space toward the rear of the vehicle with a virtual camera arranged on the rear surface of the vehicle in the virtual space, causes the communication unit to transmit the virtual rearview mirror image to the vehicle, and causes the virtual rearview mirror image to be displayed on a rearview mirror display of the vehicle.
  17.  The information processing device according to claim 14, wherein the virtual space generation unit generates a rear seat viewpoint image by capturing, with a virtual camera arranged in the virtual space so as to correspond to a viewpoint of an occupant in a rear seat of the vehicle in accordance with the occupant position and attitude data, the virtual space as seen through a rear seat display provided at the rear seat of the vehicle, causes the communication unit to transmit the rear seat viewpoint image to the vehicle, and causes the rear seat viewpoint image to be displayed on the rear seat display.
  18.  An information processing method comprising, by an information processing device:
     communicating with another information processing device mounted in a vehicle;
     performing a driving simulation in which the vehicle virtually travels in a virtual space in accordance with driving operation data acquired from the vehicle through the communication, and generating a simulation image by capturing the virtual space in all directions centered on the vehicle with a virtual camera arranged in the virtual space so as to correspond to a predetermined position of the vehicle; and
     performing image conversion processing on the simulation image based on vehicle position and attitude data indicating a position and attitude of the vehicle and occupant position and attitude data including at least viewpoint positions of occupants of the vehicle.
  19.  A program for causing a computer of an information processing device to execute information processing comprising:
     communicating with another information processing device mounted in a vehicle;
     performing a driving simulation in which the vehicle virtually travels in a virtual space in accordance with driving operation data acquired from the vehicle through the communication, and generating a simulation image by capturing the virtual space in all directions centered on the vehicle with a virtual camera arranged in the virtual space so as to correspond to a predetermined position of the vehicle; and
     performing image conversion processing on the simulation image based on vehicle position and attitude data indicating a position and attitude of the vehicle and occupant position and attitude data including at least viewpoint positions of occupants of the vehicle.
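The claims above describe a geometric transformation that re-projects the simulation image onto the garage display so that it looks correct from the occupant's viewpoint. As a rough illustration only (not the claimed implementation), the core operation can be sketched as an off-axis perspective re-projection: for each point of the virtual scene, find where the line of sight from the occupant's eye to that point intersects the display plane. The function name, coordinate conventions, and use of NumPy are my own assumptions for this sketch.

```python
import numpy as np

def project_to_display(eye, points, plane_point, plane_normal):
    """Re-project virtual-space points onto a flat display panel.

    For each point, compute where the ray from the occupant's eye
    through that point intersects the display plane, so the drawn
    pixel lies on the same line of sight as the virtual object.

    eye          -- occupant viewpoint position, shape (3,)
    points       -- virtual-space points, shape (N, 3)
    plane_point  -- any point on the display plane, shape (3,)
    plane_normal -- unit normal of the display plane, shape (3,)
    """
    eye = np.asarray(eye, dtype=float)
    points = np.asarray(points, dtype=float)
    n = np.asarray(plane_normal, dtype=float)
    p0 = np.asarray(plane_point, dtype=float)

    d = points - eye              # ray directions, shape (N, 3)
    denom = d @ n                 # zero for rays parallel to the plane
    t = ((p0 - eye) @ n) / denom  # ray parameter at the intersection
    return eye + t[:, None] * d   # intersection points on the plane
```

For example, with the eye at (0, 1.2, 0) and a display wall at z = 3, a virtual point at (1, 1.2, 6) lands at (0.5, 1.2, 3) on the wall: halfway along the sightline, which is exactly where it must be drawn to appear in the right direction. Moving the eye (as the viewpoint detection unit tracks it) shifts all intersection points accordingly, which is the viewpoint-dependent effect the geometric transformation unit provides.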
PCT/JP2023/032950 2022-09-27 2023-09-11 Information processing device, information processing method, and program WO2024070608A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
JP2022-153672 2022-09-27
JP2022153672 2022-09-27

Publications (1)

Publication Number Publication Date
WO2024070608A1 WO2024070608A1 (en)

Family

ID=90477489

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/JP2023/032950 WO2024070608A1 (en) 2022-09-27 2023-09-11 Information processing device, information processing method, and program

Country Status (1)

Country Link
WO (1) WO2024070608A1 (en)

Citations (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2001092343A (en) * 1999-09-27 2001-04-06 Toyota Motor Corp Vehicle driving simulator device
JP2002323848A (en) * 2001-04-24 2002-11-08 Mitsubishi Heavy Ind Ltd Device and method for vehicle travel simulator
JP2003154900A (en) * 2001-11-22 2003-05-27 Pioneer Electronic Corp Rear entertainment system and method of controlling the same
JP2005087580A (en) * 2003-09-19 2005-04-07 Toyota Motor Corp Automobile doubling as game
JP2006048386A (en) * 2004-08-05 2006-02-16 Nippon Telegr & Teleph Corp <Ntt> Force display device, method for calculating virtual object and virtual object calculation program
JP2010183170A (en) * 2009-02-03 2010-08-19 Denso Corp Display apparatus for vehicle
JP2011133695A (en) * 2009-12-24 2011-07-07 Okayama Prefecture Industrial Promotion Foundation Driving simulating device
JP2014119657A (en) * 2012-12-18 2014-06-30 Auto Network Gijutsu Kenkyusho:Kk Driving-operation device and virtual driving system
JP2019148677A (en) * 2018-02-27 2019-09-05 三菱自動車工業株式会社 Virtual reality training system and vehicle comprising the same


Similar Documents

Publication Publication Date Title
US9902403B2 (en) Sensory stimulation for an autonomous vehicle
US11192420B2 (en) Methods and systems for controlling vehicle body motion and occupant experience
CN104851330B (en) A kind of parking simulated training method and system
JP2021509646A (en) Software verification of autonomous vehicles
CN108319249B (en) Unmanned driving algorithm comprehensive evaluation system and method based on driving simulator
Bruck et al. A review of driving simulation technology and applications
US20180278920A1 (en) Entertainment apparatus for a self-driving motor vehicle
CN110097799A (en) Virtual driving system based on real scene modeling
JP3400969B2 (en) 4-wheel driving simulator
CN112221117A (en) Driving simulation platform and method
WO2024070608A1 (en) Information processing device, information processing method, and program
JP2003150038A (en) Apparatus and method for driving simulation
US20110060557A1 (en) Method and system for testing a vehicle design
JP2001092343A (en) Vehicle driving simulator device
WO2005066918A1 (en) Simulation device and data transmission/reception method for simulation device
Weir et al. An overview of the DRI driving simulator
US11836874B2 (en) Augmented in-vehicle experiences
Yoshimoto et al. The history of research and development of driving simulators in Japan
CN111798717B (en) Electric vehicle control system and method supporting VR driving training
JP7082031B2 (en) Driving training equipment and driving training method
US20240095418A1 (en) System and method for an augmented-virtual reality driving simulator using a vehicle
JP3878426B2 (en) Riding simulation equipment
Ge et al. Methodologies for evaluating and optimizing multimodal human-machine-interface of autonomous vehicles
JP2003114607A (en) Virtual driving system
US20210141972A1 (en) Method for generating an image data set for a computer-implemented simulation