CN113327479A - Motor vehicle driving intelligent training system based on MR technology - Google Patents
- Publication number
- CN113327479A (application CN202110739595.2A)
- Authority
- CN
- China
- Prior art keywords
- driving
- motor vehicle
- driver
- motion
- platform
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Images
Classifications
-
- G—PHYSICS
- G09—EDUCATION; CRYPTOGRAPHY; DISPLAY; ADVERTISING; SEALS
- G09B—EDUCATIONAL OR DEMONSTRATION APPLIANCES; APPLIANCES FOR TEACHING, OR COMMUNICATING WITH, THE BLIND, DEAF OR MUTE; MODELS; PLANETARIA; GLOBES; MAPS; DIAGRAMS
- G09B9/00—Simulators for teaching or training purposes
- G09B9/02—Simulators for teaching or training purposes for teaching control of vehicles or other craft
- G09B9/04—Simulators for teaching or training purposes for teaching control of vehicles or other craft for teaching control of land vehicles
- G09B9/052—Simulators for teaching or training purposes for teaching control of vehicles or other craft for teaching control of land vehicles characterised by provision for recording or measuring trainee's performance
-
- G—PHYSICS
- G09—EDUCATION; CRYPTOGRAPHY; DISPLAY; ADVERTISING; SEALS
- G09B—EDUCATIONAL OR DEMONSTRATION APPLIANCES; APPLIANCES FOR TEACHING, OR COMMUNICATING WITH, THE BLIND, DEAF OR MUTE; MODELS; PLANETARIA; GLOBES; MAPS; DIAGRAMS
- G09B9/00—Simulators for teaching or training purposes
- G09B9/02—Simulators for teaching or training purposes for teaching control of vehicles or other craft
- G09B9/04—Simulators for teaching or training purposes for teaching control of vehicles or other craft for teaching control of land vehicles
Abstract
The invention discloses an intelligent training system for motor vehicle driving based on MR (mixed reality) technology, which comprises a motor vehicle simulation cockpit, a motion platform, a first data processor, and a CAVE immersive MR audio-visual system, wherein the first data processor is respectively connected with the motion platform, the motor vehicle simulation cockpit, and the CAVE immersive MR audio-visual system. The motor vehicle simulation cockpit is fixedly mounted on the motion platform and provides the driving operation environment for the driver. The motion platform simulates the vibration of the vehicle during driving and feeds back the simulated road conditions and vehicle motion in real time. By means of MR technology, the invention effectively solves the problem that a driver cannot accurately interact with a motor vehicle simulation system while in a virtual environment.
Description
Technical Field
The invention relates to the technical field of intelligent motor vehicle driving training, and in particular to an intelligent training system for motor vehicle driving based on MR (mixed reality) technology.
Background
The motor vehicle has become one of the essential tools for daily travel. According to statistics, the number of motor vehicle drivers in China has reached 396 million and continues to grow rapidly. Most motor vehicle driving training requires real-vehicle operation in a real environment, which demands substantial manpower and material resources, is easily affected by real-world factors (such as the training site and weather), and carries a certain risk. This is especially true for drivers new to motor vehicles: fear and unskilled operation during real-vehicle practice create greater safety hazards for real-vehicle driving training. How to achieve more effective, safe, and cost-effective motor vehicle driving training has therefore become a problem to be addressed in the field.
With the development of virtual reality technology, some driving training on the market is now carried out with head-mounted fully immersive virtual reality equipment combined with a motor vehicle simulation cockpit. However, the inventor has found a significant disadvantage of this approach: during simulated driving, the driver's line of sight is completely blocked by the head-mounted virtual reality device, so the driver cannot accurately interact with the motor vehicle simulation cockpit, for example when shifting gears or pressing the accelerator pedal device or service brake device, which inevitably reduces the training effect.
MR (mixed reality) technology, by contrast, has been under development for more than 20 years; the related technologies have made remarkable progress and show strong development prospects. Human-computer interaction is an important supporting technology of MR and has been a research hotspot at home and abroad in recent years. A large number of innovative MR applications, including physical interaction, 3D interaction, and multi-channel and hybrid interaction, have shown great vitality and greatly promoted the development of interaction technology.
Disclosure of Invention
To overcome the defects and shortcomings of the prior art, the invention provides an intelligent training system for motor vehicle driving based on MR technology, which uses MR technology to provide the driver with a multi-sensory virtual audio-visual environment during training and enables accurate and effective interaction with the motor vehicle simulation cockpit.
In order to achieve the purpose, the invention adopts the following technical scheme:
an MR technology-based intelligent training system for motor vehicle driving comprises a motor vehicle simulation cockpit, a motion platform, a first data processor and a CAVE immersive MR audio-visual system, wherein the first data processor is respectively connected with the motion platform, the motor vehicle simulation cockpit and the CAVE immersive MR audio-visual system;
the motor vehicle simulation cockpit is fixedly arranged on the motion platform and used for providing a driving operation environment for a driver;
the motion platform is used for simulating the vibration condition of the vehicle in the driving process and feeding back the simulated driving road condition and the vehicle driving in real time;
the first data processor is used for receiving a driver operation instruction and driving data acquired by each component in the motor vehicle simulation cockpit and simulating a motion and audio-visual scene in the motor vehicle driving process, and the first data processor performs cooperative operation on the driving data and then feeds the driving data back to the motion platform and the CAVE immersive MR audio-visual system in real time so as to perform simulation output on the motion and audio-visual scene in the motor vehicle driving process;
the driver operation instructions comprise steering wheel angle control, accelerator control, brake control, clutch control, and gear control; the driving data comprise the steering wheel angle, the opening amplitude of each pedal, and the gear value; and the cooperative operation comprises kinematic analysis of the motion platform, namely the forward kinematic solution and the inverse kinematic solution;
the CAVE immersive MR audio-visual system comprises a CAVE main body support, a projection screen, a second data processor, a plurality of groups of motion tracking cameras, a motion tracking module, 3D glasses and a surrounding stereo system, wherein the first data processor is respectively connected with the plurality of groups of motion tracking cameras, the projection screen and the second data processor;
the projection screen and the surrounding stereo system are respectively and fixedly arranged on the CAVE main body support, so that a surrounding CAVE type stereo audio-visual space is formed, and the projection screen is used for displaying an interactive interface of a virtual driving scene;
the plurality of groups of motion tracking cameras are respectively fixedly arranged on the CAVE main body bracket, the plurality of groups of motion tracking cameras are used for capturing the motion condition of a driver in the process of simulating driving, and the motion tracking module is used for capturing the sight line position information of the driver and the position information of a motor vehicle simulation cockpit;
when the driver performs virtual driving activities, the 3D glasses work with the projection screen to convert the dual-channel 2D video frames, output at different frequencies, into a 3D image, and the motion tracking situation is obtained from the motion tracking cameras and the motion tracking module; the second data processor transmits the driver operation instructions and driving data to the first data processor, which then adjusts the motion platform; the second data processor analyzes the motion tracking situation and outputs the motion tracking video and motion tracking audio signals in real time to the OLED high-definition display screens and the surround stereo system, thereby providing the driver with a simulated motor vehicle driving environment.
As a preferred technical solution, the second data processor analyzes the motion tracking situation and outputs the motion tracking video and the motion tracking audio signals to the OLED high-definition display screen and the surround stereo system in real time, and specifically includes the following steps:
a moving object capturing step: detecting a moving target based on a foreground detection algorithm, wherein the foreground detection algorithm adopts any one of a background subtraction method, an interframe difference method and an optical flow method;
an identification step: identifying and classifying the moving target based on the classification characteristic parameters, screening out the moving target as an identification target of a driver and taking the moving target as a tracking target;
a tracking step: tracking a tracking target by a point tracking method based on the gravity center position;
the identification step adopts a template-based feature matching method or a machine learning method to identify a moving target;
the point tracking method based on the gravity center position is used for tracking a tracking target, and specifically comprises the following steps:
a target matching step: calculating the barycentric coordinates of all targets in the first frame of the video and assigning a target ID to each, i.e., numbering the targets from 1 in the scan order of the image, with the largest assigned ID recorded as ID_max;
a search area acquisition step: calculating the barycentric coordinates of the N-th target in the n-th frame of the video and simultaneously obtaining that target's search area from the (n-1)-th frame, where n = 2, 3, …, num_n and N = 1, 2, 3, …, num_N, with num_n and num_N denoting the total number of frames of the video and the maximum number of targets in the video, respectively;
a target searching step: searching for targets within the search area and tracking the candidate with the minimum similarity deviation value.
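The capture, matching, and search steps above can be sketched as a minimal centroid tracker. This is an illustrative numpy sketch, not the patent's implementation: greedy nearest-centroid matching with a fixed search radius stands in for the search-area and similarity-deviation criteria, and the function names and radius value are assumptions.

```python
import numpy as np

def centroids(mask, labels):
    """Barycentre (centre of gravity) of each labelled target in a label mask."""
    out = {}
    for lab in labels:
        ys, xs = np.nonzero(mask == lab)
        out[lab] = (xs.mean(), ys.mean())
    return out

def match_targets(prev, curr, search_radius=20.0):
    """Assign each current centroid the ID of the nearest previous centroid
    inside the search area; unmatched centroids get fresh IDs above ID_max.
    Greedy matching: collisions between equally close targets are not handled."""
    next_id = max(prev) + 1 if prev else 1
    assigned = {}
    for c in curr:
        best_id, best_d = None, search_radius
        for tid, p in prev.items():
            d = np.hypot(c[0] - p[0], c[1] - p[1])
            if d < best_d:
                best_id, best_d = tid, d
        if best_id is None:  # target entered the scene: new ID
            best_id, next_id = next_id, next_id + 1
        assigned[best_id] = c
    return assigned
```

A real system would run this per frame on the foreground mask produced by the chosen detection algorithm (background subtraction, inter-frame difference, or optical flow).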
As a preferred technical solution, the projection screen is provided with at least 3 OLED high-definition display screens, which are connected in sequence to form an annular structure.
As a preferred technical solution, the projection screen adopts a flexible curved screen connected end to end to form an annular structure arranged around the outer periphery of the motor vehicle simulation cockpit.
As a preferred technical solution, the projection screen is provided with 3 OLED high-definition display screens connected in sequence to form a U-shaped structure, arranged respectively at the front, the left side, and the right side of the motor vehicle simulation cockpit.
As a preferred technical solution, the projection screen adopts a flexible curved screen bent into a U shape, surrounding the front, the left side, and the right side of the motor vehicle simulation cockpit.
Preferably, the motion tracking module comprises a first motion tracking component fixedly arranged on the 3D glasses and a second motion tracking component arranged at the front end of the motor vehicle simulation cockpit. The multiple groups of motion tracking cameras collect driving images of the driver; when the driver's position and posture in the motor vehicle simulation cockpit change, the second data processor analyzes the driver's limb operation information from these images, converts it in real time into a view-angle change image for the driver's current viewpoint, and transmits it to the projection screen for display. The motion tracking situation comprises the image data acquired by the motion tracking cameras and the positioning information of the motion tracking module, and is obtained by preprocessing the image data; the preprocessing adopts one or more of digitization, normalization, geometric transformation, smoothing, restoration, and enhancement.
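A minimal sketch of two of the preprocessing steps named above, normalization and smoothing, assuming grayscale frames as 2D arrays; the box-filter kernel size and edge padding are illustrative choices, not specified by the patent:

```python
import numpy as np

def preprocess(frame, kernel=3):
    """Normalize pixel intensities to [0, 1], then smooth with a box filter."""
    f = frame.astype(np.float64)
    lo, hi = f.min(), f.max()
    norm = (f - lo) / (hi - lo) if hi > lo else np.zeros_like(f)
    # simple box smoothing via a sliding-window mean (edges kept by padding)
    pad = kernel // 2
    padded = np.pad(norm, pad, mode="edge")
    out = np.zeros_like(norm)
    h, w = norm.shape
    for i in range(h):
        for j in range(w):
            out[i, j] = padded[i:i + kernel, j:j + kernel].mean()
    return out
```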
As a preferred technical scheme, the motor vehicle simulation cockpit is provided with an adjustable seat, a steering wheel device, a driving light control device, a motor vehicle starting switch, an accelerator pedal device, a driving brake device, a parking brake device, a clutch device, a gear shifting device, a collecting and detecting device, a driving guidance processor and a display;
the driving guidance processor is respectively connected with the adjustable seat, the steering wheel device, the running light control device, the motor vehicle starting switch, the accelerator pedal device, the running brake device, the parking brake device, the clutch device, the gear shifting device, the acquisition and detection device and the display;
the components are matched for use to simulate the real-time operation process of a driver, the driver operates the components to generate a driver operation instruction, the motor vehicle simulation cockpit acquires the driver operation instruction and transmits the driver operation instruction to the first data processor for processing, and corresponding road condition information and vehicle pose states are resolved by the first data processor and then fed back to the motion platform and the CAVE immersive MR audio-visual system;
the acquisition and detection device comprises one or more of a steering angle sensor, a pressure sensor, and a displacement sensor, wherein the steering angle sensor is used for detecting the rotation angle of the steering wheel, the pressure sensor is used for detecting the pressure borne by the brake pedal and the accelerator when they are depressed, and the displacement sensor is used for detecting the displacement of the brake pedal and the accelerator when they are depressed;
the driving guidance processor is embedded in the display, the driving guidance processor is used for providing real-time driving guidance information for a driver, and the display is used for displaying the real-time driving guidance information;
the driving guidance processor adopts a coach decision model based on model prediction control, and the coach decision model comprises a prediction module, a rolling optimization module and a feedback correction module;
the prediction module is used for predicting, at the current sampling moment, the trainee's driving information at the next sampling moment, this predicted driving information being taken as the expected trajectory;
the rolling optimization module is used for forming an error value from the difference between the trainee's driving trajectory and the expected trajectory, and issuing a driving instruction according to that error value;
the feedback correction module is used for cyclically executing the prediction module and the rolling optimization module to perform feedback correction, so that the trainee completes the driving task under the guidance of a series of driving guidance instructions;
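The three modules can be illustrated with a toy one-dimensional model of the trainee's lateral position. This is a hedged sketch only: the patent's coach decision model is model predictive control over the full driving state, whereas here the prediction is a naive persistence model, and the guidance rule, gain, and threshold are invented for illustration.

```python
def coach_step(predicted, desired, threshold=0.5):
    """Rolling optimization: form the error between the predicted and the
    expected trajectory point; issue an instruction if it is too large."""
    error = predicted - desired
    if abs(error) <= threshold:
        return error, None
    return error, "steer left" if error > 0 else "steer right"

def coach_loop(start, desired, gain=0.5, threshold=0.5):
    """Feedback correction: cyclically run prediction and rolling
    optimization so the trainee converges to the expected trajectory."""
    state = start
    instructions = []
    for target in desired:
        predicted = state          # prediction module: naive persistence model
        error, instr = coach_step(predicted, target, threshold)
        instructions.append(instr)
        if instr is not None:      # trainee follows the guidance instruction
            state = state - gain * error
    return state, instructions
```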
the coach decision model performs cluster analysis of trainee driving characteristics based on the FCM (fuzzy C-means) algorithm, and the analysis of trainee characteristics forms a training closed loop consisting of the computer, the trainee, the vehicle, and the road;
the cluster analysis of the trainee driving characteristics based on the FCM algorithm specifically comprises the following steps:
step 1: setting the number of trainee classes c, the maximum number of iterations, the algorithm convergence precision ε, and the fuzzy weighting index;
step 2: setting the iterative convergence condition and initializing each trainee characteristic clustering center; computing the membership function from the current clustering centers using the membership function formula of the i-th sample with respect to the j-th class, and setting the trainee characteristic clustering objective function so as to minimize the membership-weighted sum of distances from the trainee characteristic sample points to each clustering center;
step 3: modifying each clustering center using the current membership function formula, which is specifically:

u_j(x_i) = 1 / Σ_{l=1}^{C} ( ||x_i − c_j|| / ||x_i − c_l|| )^(2/(k−1))

where u_j(x_i) is the membership function of the i-th sample with respect to the j-th class, k is the fuzzy weighting index of the FCM algorithm, c_j is the j-th trainee characteristic clustering center, j = 1, 2, …, C, and C is the total number of trainee characteristic clustering centers;
step 4: repeating step 2 and step 3; when the membership u_t at iteration step t and the membership u_(t−1) at iteration step t−1 satisfy ||u_t − u_(t−1)|| ≤ ε, the iteration terminates, finally yielding the clustering center of each trainee class and the membership values of each trainee class, where ε is the membership error coefficient.
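Steps 1 to 4 correspond to the standard fuzzy C-means iteration, which can be sketched as follows. The trainee characteristic samples are the rows of X; c, k, and the convergence precision eps are the parameters from step 1, and the random membership initialization is an assumption (the patent does not specify one).

```python
import numpy as np

def fcm(X, c=2, k=2.0, max_iter=100, eps=1e-4, seed=0):
    """Fuzzy C-means: alternate the membership update u_j(x_i) and the
    weighted center update until ||u_t - u_(t-1)|| <= eps."""
    rng = np.random.default_rng(seed)
    n = len(X)
    U = rng.random((c, n))
    U /= U.sum(axis=0)                      # memberships sum to 1 per sample
    for _ in range(max_iter):
        W = U ** k                          # fuzzified memberships
        centers = (W @ X) / W.sum(axis=1, keepdims=True)
        d = np.linalg.norm(X[None, :, :] - centers[:, None, :], axis=2)
        d = np.maximum(d, 1e-12)            # avoid division by zero
        U_new = 1.0 / (d ** (2.0 / (k - 1.0)))
        U_new /= U_new.sum(axis=0)          # equals the u_j(x_i) formula above
        done = np.linalg.norm(U_new - U) <= eps
        U = U_new
        if done:
            break
    return centers, U
```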
As a preferred technical scheme, the motion platform is provided with a six-degree-of-freedom platform, a servo driver and a microcontroller, the servo driver is respectively connected with the six-degree-of-freedom platform and the microcontroller, the servo driver is used for driving the six-degree-of-freedom platform, and the microcontroller is used for providing driving parameters of the servo driver;
the six-degree-of-freedom platform comprises an upper platform, a lower platform fixedly arranged on the ground and 6 hydraulic electric cylinders, wherein the 6 hydraulic electric cylinders are respectively connected with the upper platform and the lower platform;
the 6 hydraulic electric cylinders are driven by a servo driver to realize the motion of the six-degree-of-freedom platform;
the motion platform is also provided with six-dimensional force sensors, which are respectively arranged at the connection nodes between the upper platform and the hydraulic electric cylinders, i.e., at the six U points of the six-degree-of-freedom platform, the six subscripts corresponding to the six connection points;
when the six-degree-of-freedom platform starts to work, the six-dimensional force sensor senses the pressure values at the six connecting nodes in real time and feeds the pressure values back to the first data processor to provide data support for calculation of the length adjusting value of the hydraulic electric cylinder;
the first data processor receives the driver operation instructions and driving data collected by the motor vehicle simulation cockpit, computes a length adjustment value for each hydraulic electric cylinder by solving for the pose of the motion platform and the cylinder lengths, and sends these values to the microcontroller to adjust the cylinder lengths, thereby realizing motion of the upper platform in six degrees of freedom, namely three translations in a Cartesian coordinate system and rotations about the three coordinate axes.
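The inverse kinematic solution mentioned above (cylinder lengths from a desired upper-platform pose) can be sketched for a generic six-degree-of-freedom platform. The anchor-point layout and the roll-pitch-yaw convention are illustrative assumptions; a real system would use the measured joint coordinates of the specific platform.

```python
import numpy as np

def rot_zyx(roll, pitch, yaw):
    """Rotation matrix from roll (about x), pitch (about y), yaw (about z)."""
    cr, sr = np.cos(roll), np.sin(roll)
    cp, sp = np.cos(pitch), np.sin(pitch)
    cy, sy = np.cos(yaw), np.sin(yaw)
    Rz = np.array([[cy, -sy, 0], [sy, cy, 0], [0, 0, 1]])
    Ry = np.array([[cp, 0, sp], [0, 1, 0], [-sp, 0, cp]])
    Rx = np.array([[1, 0, 0], [0, cr, -sr], [0, sr, cr]])
    return Rz @ Ry @ Rx

def leg_lengths(pose, upper_pts, lower_pts):
    """Inverse kinematic solution: the required length of each of the six
    cylinders, given the upper-platform pose (x, y, z, roll, pitch, yaw)."""
    x, y, z, roll, pitch, yaw = pose
    R = rot_zyx(roll, pitch, yaw)
    t = np.array([x, y, z])
    # each leg runs from a lower anchor to the transformed upper anchor
    return np.array([np.linalg.norm(t + R @ u - l)
                     for u, l in zip(upper_pts, lower_pts)])
```

For example, with identical hexagonal anchor rings and a pure vertical displacement, all six lengths equal that displacement, a quick sanity check before feeding adjustment values to the microcontroller.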
As a preferred technical solution, the motion platform is further provided with a handheld terminal wirelessly connected to the microcontroller, which is used to emergency-stop the servo driver and bring the six-degree-of-freedom platform to a halt.
Compared with the prior art, the invention has the following advantages and beneficial effects:
(1) During motor vehicle driving training, the driver is in a fixed indoor environment and is not affected by factors such as the training field, weather, vehicles, and pedestrians, which effectively reduces the psychological fear of a driver learning to drive and gives the training process higher safety: real-vehicle operation is no longer subject to real environmental factors, and even scratches or collisions in the virtual environment cause no actual casualties, so the safety factor of driving training is improved. Meanwhile, only renewable energy such as electricity is consumed during training, with no fuel consumption or real-vehicle wear, making the training mode greener and more environmentally friendly. The motor vehicle driving training process is therefore both safer and more environmentally friendly.
(2) Compared with the traditional head-mounted virtual reality technology, the invention converts the video content of the projection screen on the OLED high-definition display screens into a 3D picture through the 3D glasses, and at the same time uses the light-transmitting (see-through) property of the 3D glasses so that the driver can accurately locate each component of the motor vehicle simulation system, such as the steering wheel device, accelerator pedal device, service brake device, parking brake device, and clutch device, thereby effectively solving, by means of MR technology, the problem that a driver cannot accurately interact with the motor vehicle simulation system in a virtual environment.
(3) By collecting and analyzing relevant data during driving training (such as vehicle speed, the safe distance to the vehicle ahead, whether the turn signal is used when turning, gear shifting timing, and accelerator and brake usage), the invention provides the driver with a more accurate driving training report and driving attention points after each session, so that bad driving habits are corrected in time and vehicle driving operation is effectively standardized, making motor vehicle driving training more efficient.
Drawings
Fig. 1 is a schematic structural diagram of an intelligent training system for motor vehicle driving based on MR technology in embodiment 1 of the present invention;
FIG. 2 is a schematic diagram of a training closed loop in embodiment 1 of the present invention;
fig. 3 is a schematic diagram of a coordinate system of the motion platform in embodiment 1 of the present invention;
FIG. 4 is a schematic view of a projection screen of embodiment 1 of the present invention with a quadrilateral ring structure;
FIG. 5 is a top view of a coordinate system of a motion platform according to embodiment 1 of the present invention;
fig. 6 is a schematic diagram of a motion platform performing rotational coordinate transformation according to embodiment 1 of the present invention;
fig. 7 is a schematic diagram illustrating a solution to kinematic analysis of a motion platform according to embodiment 1 of the present invention;
fig. 8 is a schematic view of a projection screen in embodiment 3 of the present invention, which adopts a triangular ring structure;
fig. 9 is a schematic view of a projection screen of embodiment 3 of the present invention in a circular ring structure;
fig. 10 is a schematic view of a U-shaped projection screen in embodiment 3 of the present invention.
Reference numerals: 1, motor vehicle simulation cockpit; 2, motion platform; 3, projection screen.
Detailed Description
In order to make the objects, technical solutions and advantages of the present invention more apparent, the present invention is described in further detail below with reference to the accompanying drawings and embodiments. It should be understood that the specific embodiments described herein are merely illustrative of the invention and are not intended to limit the invention.
Examples
Example 1
As shown in fig. 1, the present embodiment provides an intelligent training system for motor vehicle driving based on MR technology, which includes a motor vehicle simulation cockpit 1, a motion platform 2, a first data processor, and a CAVE immersive MR audiovisual system, where CAVE is a CAVE Automatic Virtual Environment. The first data processor is respectively connected with the motion platform 2, the motor vehicle simulation cockpit 1 and the CAVE immersive MR audio-visual system, and the motor vehicle simulation cockpit 1 is also connected with the motion platform 2 and the CAVE immersive MR audio-visual system.
The motor vehicle simulation cockpit 1 is used for providing a driving operation environment for the driver, and the motion platform 2 is used for simulating vehicle motion during driving, such as vibration, turning and braking, feeding back the simulated road conditions and vehicle behaviour in real time to improve the driver's driving experience.
In the embodiment, a motor vehicle simulation cockpit 1 is fixedly arranged on a motion platform 2, and the motor vehicle simulation cockpit 1 is provided with an adjustable seat, a steering wheel device, a driving light control device, a motor vehicle starting switch, an accelerator pedal device, a driving brake device, a parking brake device, a clutch device, a gear shifting device, a collecting and detecting device, a driving guidance processor and a display.
The driving guidance processor is respectively connected with the adjustable seat, the steering wheel device, the driving light control device, the motor vehicle starting switch, the accelerator pedal device, the service brake device, the parking brake device, the clutch device, the gear shifting device, the acquisition and detection device and the display. The acquisition and detection device is respectively connected with the steering wheel device, the accelerator pedal device, the service brake device, the parking brake device, the clutch device and the gear shifting device. These components are used together to simulate the real-vehicle operation of a driver: the driver operates the components to generate driver operation instructions, and the motor vehicle simulation cockpit 1 acquires these instructions and transmits them to the first data processor for processing. The corresponding road condition information and vehicle pose state are resolved by the first data processor and then fed back to the motion platform 2 and the CAVE immersive MR audiovisual system, thereby driving the motor vehicle simulation cockpit 1 to simulate the effects of vehicle vibration, acceleration, deceleration, turning, impact and the like, and improving the driver's sensory experience in the MR environment. Meanwhile, the driving guidance processor is embedded in the display, the two together serving as a tablet computer: the driving guidance processor provides the driver with real-time driving guidance information, such as the vehicle speed, the safe distance to the vehicle ahead, whether the turn signal is used when turning, the gear-shifting timing, and the accelerator and brake usage, in order to judge whether there is any illegal operation during driving and to provide guidance for improvement; the display is used to display this real-time driving guidance information.
Specifically, the driving guidance processor feeds back warning information according to the traffic rules set by the virtual environment. When the driven vehicle violates a preset traffic rule in the virtual driving environment, for example a problem arises with the vehicle speed, the turn signal or the safe distance to the vehicle ahead, the internal logic code of the virtual environment detects the vehicle speed, steering logic and front-vehicle safe-distance parameters, judges whether they exceed the preset traffic thresholds, and issues a warning when a corresponding parameter does. The preset traffic thresholds comprise a vehicle speed threshold, a steering angle threshold and a front-vehicle safe distance threshold.
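The threshold check described above can be sketched as follows; the function name, field names and default threshold values are illustrative assumptions rather than the patent's implementation.

```python
# Hypothetical sketch of the driving guidance processor's rule check;
# all names and threshold defaults are illustrative, not from the patent.
def check_traffic_rules(state, speed_limit=60.0, min_gap=30.0):
    """Return a list of warnings for parameters that exceed the preset
    traffic thresholds (vehicle speed, turn signal use, front gap)."""
    warnings = []
    if state["speed_kmh"] > speed_limit:
        warnings.append("vehicle speed exceeds the speed threshold")
    if state["is_turning"] and not state["turn_signal_on"]:
        warnings.append("turn signal not used while turning")
    if state["front_gap_m"] < min_gap:
        warnings.append("safe distance to the vehicle ahead too small")
    return warnings
```

For a compliant state the function returns an empty list, so the caller only has to forward non-empty results to the display.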
In practical application, after a driver completes a vehicle driving training session, the relevant driving performance can be reviewed on the tablet computer located at one side of the steering wheel device, for example whether the turn signal was used in time when steering, whether the vehicle lights were used correctly under special weather conditions such as heavy fog, and whether the vehicle speed met the standard when driving in a deceleration lane; based on the driver's performance, the tablet computer gives corresponding guidance and advice to help the driver improve driving skills. Based on the existing driver model, a trainee model is proposed to describe the characteristics of the trainee. Then, on the basis of the obtained trainee characteristics, a coach decision model based on Model Predictive Control (MPC) is proposed; the coach decision model comprises three parts: a prediction module, a rolling optimization module and a feedback correction module.
At the current sampling instant, the prediction module predicts the trainee's driving information at the next sampling instant, i.e., the expected trajectory. The rolling optimization module forms an error value from the difference between the trainee's actual driving trajectory and the expected trajectory and issues a driving instruction according to this error value; the instruction is shown on the display, so that the trainee's error is optimized as far as possible at the next sampling instant and the trainee's driving trajectory approaches the expected value to the greatest extent. At the next sampling instant, a new error value is obtained from the actual trajectory and the new expected trajectory, and a new driving instruction is sent to the trainee through the display. The feedback correction module cyclically executes the prediction module and the rolling optimization module to perform feedback correction, so that the trainee completes the driving task repeatedly under the guidance of the driving instructions, thereby providing driving guidance for the trainee.
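The prediction / rolling-optimization / feedback-correction cycle can be sketched in a much simplified form; the one-dimensional "trajectory" and the proportional instruction rule below are simplifying assumptions for illustration, not the model claimed here.

```python
# Minimal sketch of one coach-decision iteration: form the error between
# the trainee's trajectory and the expected trajectory, then issue a
# corrective instruction. The proportional gain is an assumption.
def coach_step(actual, expected, gain=0.5):
    """One rolling-optimization step: error and driving instruction."""
    error = expected - actual
    instruction = gain * error       # shown to the trainee on the display
    return error, instruction

def training_loop(actual_track, expected_track):
    """Feedback correction: repeat prediction and rolling optimization
    at every sampling instant, collecting the issued instructions."""
    instructions = []
    for a, e in zip(actual_track, expected_track):
        _, u = coach_step(a, e)
        instructions.append(u)
    return instructions
```

In the real system the expected trajectory would come from the prediction module at each sampling instant; here it is supplied as a precomputed sequence.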
After the motion and audiovisual simulation output of the motor vehicle is completed, the specific action information of the driver is collected by the acquisition and detection device, providing data support for the driving performance analysis. In practical application, the acquisition and detection device comprises one or more of a steering angle sensor, a pressure sensor and a displacement sensor: the steering angle sensor is used for detecting the rotation angle of the steering wheel, the pressure sensor is used for detecting the pressure applied to the brake pedal and the accelerator when they are depressed, and the displacement sensor is used for detecting the displacement of the brake pedal and the accelerator when they are depressed. In practice, the steering angle sensor employs a device for determining the angular position of the steering wheel shaft, such as a steering column, which comprises a coil assembly, a coil support and a coupler element whose angular position is related to the angular position of the steering wheel shaft. The coil assembly includes a transmitter coil and at least one receiver coil, with the coupler element altering the inductive coupling between the transmitter coil and the at least one receiver coil. The signal processing circuit receives a coil signal from the coil assembly and a reference signal, the reference signal being related to the axial displacement but otherwise substantially independent of the angular position, and determines the angular position from the receiver signal and the reference signal. Left or right rotation of the steering wheel is detected by the steering angle sensor, and the corresponding steering command is issued. The rotation angle of the steering wheel provides the basis for realizing the steering amplitude of the vehicle, so that the vehicle travels according to the steering intention of the driver.
The steering angle sensor is composed of a photoelectric coupling element, a perforated slotted plate and the like. The photoelectric coupling element consists of a light-emitting diode and a phototransistor, with the perforated slotted plate arranged between them. The perforated slotted plate has a plurality of small holes; when the steering wheel rotates, the plate rotates along with it. The phototransistor switches according to the light passing through the holes in the plate and outputs a digital pulse signal, from which the steering angle, the direction of rotation and the rotational speed of the steering wheel are identified. The pressure sensor adopts an oil pressure sensor: a semiconductor strain gauge is arranged inside it, and the resistance of the strain gauge changes when it deforms; a metal diaphragm is also provided, and the change in pressure is detected by the metal diaphragm strain gauge, converted into an electrical signal and output. The displacement sensor adopts a magnetostrictive displacement sensor, which accurately detects the absolute position of a movable magnetic ring through internal non-contact measurement, thereby measuring the actual displacement of the brake pedal and the accelerator as they are depressed.
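As an illustration of how such a digital pulse signal can be turned into a steering angle and direction, the sketch below assumes two photoelectric channels in quadrature (a second channel giving the rotation direction) and a fixed angular pitch per hole; both assumptions are hypothetical, not stated in the text above.

```python
# Hypothetical decoder for the steering sensor's digital pulse signal:
# rising edges on channel A are counted, and the level of channel B at
# each edge gives the rotation direction. Pitch value is an assumption.
def decode_steering(pulses, degrees_per_hole=2.0):
    """pulses: sequence of (channel_a, channel_b) samples (0 or 1).
    Returns the accumulated steering angle in degrees."""
    angle = 0.0
    prev_a = 0
    for a, b in pulses:
        if a == 1 and prev_a == 0:   # rising edge on channel A
            angle += degrees_per_hole if b == 0 else -degrees_per_hole
        prev_a = a
    return angle
```

Dividing the pulse count by the sampling interval would likewise give the rotational speed mentioned above.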
Finally, trainees with different driving characteristics are classified based on the Fuzzy C-Means (FCM) algorithm, i.e., cluster analysis is performed on the trainees' driving characteristics based on the FCM algorithm so that the training system can teach each trainee according to his or her aptitude. Cluster analysis groups trainees with similar driving characteristics, so that corresponding coach control strategies can be designed for each class of trainees. Compared with traditional classification and identification using neural networks or fuzzy mathematics, the cluster analysis algorithm requires only a small amount of data, needs no nonlinear identifier, and maintains stable identification accuracy. In practical application, the FCM algorithm is introduced to classify the trainees, corresponding coach control strategies are designed for the different types of trainees on the basis of the classification results, and driving skill guidance is given to the trainees in a targeted manner, achieving the effect of teaching in accordance with individual aptitude.
As shown in fig. 2, the trainee's characteristics are analyzed to form a training closed loop consisting of computer, trainee, vehicle and road: the trainee individually analyzes the road information and vehicle information and, prompted by the driving instructions shown on the display, completes driving tasks by operating the vehicle's steering wheel, accelerator and brake pedal. The training closed loop reflects four driving characteristics of the trainee: perception of road and vehicle information, acceptance of driving guidance instructions, decision-making for driving tasks, and execution on the vehicle.
In this embodiment, the clustering analysis principle for the driving characteristics of the trainee based on the FCM algorithm is specifically as follows:
let the student characteristic parameter dataset X be:
X = {x_1, x_2, …, x_n}, i = 1, 2, …, n;

where n is the number of data samples in the trainee characteristic data set X, and x_i is the characteristic data sample of the i-th trainee, which is expressed as:

x_i = {x_i1, x_i2, …, x_im}

where m is the dimension of the trainee characteristic data sample x_i, namely the number of trainee characteristic parameters.

In this embodiment, the trainee characteristic data sample is a three-dimensional vector whose characteristic parameters comprise T_p, α and T_t: a feedback parameter indicating the trainee's forward-looking ability, a degree parameter indicating the trainee's acceptance of guidance, and a trainee execution lag parameter, where the execution lag parameter reflects the physiological or psychological lag in the trainee's driving execution. x_i is specifically expressed as:

x_i = {T_pi, α_i, T_ti}

where T_pi, α_i and T_ti denote the forward-looking ability feedback parameter, the guidance acceptance degree parameter and the execution lag parameter of the i-th trainee, respectively.
The number of trainee classes c is set, and the trainee characteristic clustering objective function J_FCM based on the membership function is:

J_FCM = Σ_{j=1}^{c} Σ_{i=1}^{n} [u_j(x_i)]^k ‖x_i − c_j‖^2

where u_j(x_i) is the membership function of the i-th sample with respect to the j-th class; k is the fuzzy weighting exponent of the FCM algorithm; c_j is the j-th trainee characteristic cluster center, j = 1, 2, …, c; and c is the total number of trainee characteristic cluster centers. Here k affects the accuracy of the classification result: if k is too small, the cluster centers c_j drift far from the mainstream points and appear dispersed; if k is too large, the centers c_j become too concentrated and the control over outliers too weak, so k is used to control the degree of fuzziness in the data partitioning. In addition, a person skilled in the art may set the value of k as needed to further control the fuzziness of the classification, which is not limited herein.

Setting the partial derivatives of J_FCM with respect to c_j and u_j(x_i) to zero yields the condition under which the trainee characteristic clustering objective function attains a minimum; the j-th trainee characteristic cluster center and the membership function of the i-th sample with respect to the j-th class are then respectively expressed as:

c_j = Σ_{i=1}^{n} [u_j(x_i)]^k x_i / Σ_{i=1}^{n} [u_j(x_i)]^k

u_j(x_i) = 1 / Σ_{l=1}^{c} ( ‖x_i − c_j‖ / ‖x_i − c_l‖ )^{2/(k−1)}

where c_j and c_l denote the j-th and l-th trainee characteristic cluster centers, respectively.
And (4) iteratively solving the two formulas until the FCM algorithm convergence condition is met, and clustering the objective function by the characteristics of the student to obtain a local optimal solution. In practical application, the detailed steps of FCM are as follows:
Step 1: set the number of trainee classes c, the maximum number of iterations, the algorithm convergence precision ε and the fuzzy weighting exponent;

Step 2: set the iterative convergence condition and initialize each trainee characteristic cluster center; calculate the membership function from the current cluster centers according to the membership function formula of the i-th sample with respect to the j-th class, and set the optimization target, i.e., the trainee characteristic clustering objective function, as the minimum weighted sum of the distances from the trainee characteristic sample points to each cluster center and the membership values;

Step 3: continuously correct each cluster center using the current membership function formula;

Step 4: repeat step 2 and step 3; when the membership u_t at iteration step t and the membership u_{t−1} at iteration step t−1 satisfy ‖u_t − u_{t−1}‖ ≤ ε, terminate the iteration and obtain the final cluster centers of the trainee classes and the membership values of each trainee class, where ε is the membership error coefficient.
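Steps 1 to 4 above can be sketched as a compact FCM implementation; the fuzzy weighting exponent k = 2 is a conventional choice for this update rule, and the random membership initialization is an assumption (the text does not specify one).

```python
import numpy as np

# Sketch of the FCM iteration described in steps 1-4: memberships and
# cluster centers are updated alternately until ||u_t - u_{t-1}|| <= eps.
def fcm(X, c, k=2.0, eps=1e-5, max_iter=100, seed=0):
    """X: (n, m) trainee characteristic samples; c: number of classes.
    Returns (centers, U) with U of shape (c, n), columns summing to 1."""
    rng = np.random.default_rng(seed)
    n = X.shape[0]
    U = rng.random((c, n))
    U /= U.sum(axis=0)                       # valid fuzzy partition
    centers = None
    for _ in range(max_iter):
        Uk = U ** k
        centers = (Uk @ X) / Uk.sum(axis=1, keepdims=True)
        dist = np.linalg.norm(X[None, :, :] - centers[:, None, :], axis=2)
        dist = np.fmax(dist, 1e-12)          # avoid division by zero
        U_new = 1.0 / (dist ** (2.0 / (k - 1.0)))
        U_new /= U_new.sum(axis=0)           # membership update formula
        if np.linalg.norm(U_new - U) <= eps:  # convergence condition
            U = U_new
            break
        U = U_new
    return centers, U
```

Applied to the three-dimensional trainee samples {T_p, α, T_t}, the returned memberships classify each trainee, after which a coach control strategy can be chosen per class.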
The first data processor is used for receiving the driver operation instructions and driving data collected by all components in the motor vehicle simulation cockpit 1 and simulating the motion and audio-visual scenes in the motor vehicle driving process. Specifically, the first data processor performs cooperative operation on the driving data and then feeds the driving data back to the motion platform 2 and the CAVE immersive MR audio-visual system in real time, so as to perform simulation output on the motion and audio-visual scenes in the driving process of the motor vehicle. In practical application, the driver operation instruction comprises steering wheel angle control, accelerator control, brake control, clutch control, gear control and the like, the driving data comprises steering wheel angles, opening and closing amplitudes of various pedals and gear values, and the cooperative operation comprises kinematic analysis of the motion platform 2, namely kinematic forward solution and kinematic reverse solution.
As shown in fig. 3, the motion platform 2 is provided with a six-degree-of-freedom platform, a servo driver and a microcontroller, and the servo driver is respectively connected with the six-degree-of-freedom platform and the microcontroller. The servo driver is used for driving the six-degree-of-freedom platform, and the microcontroller is used for providing driving parameters of the servo driver.
In this embodiment, the six-degree-of-freedom platform comprises an upper platform, a lower platform fixedly arranged on the ground, and 6 hydraulic electric cylinders respectively connected with the upper and lower platforms; the six-degree-of-freedom platform adopts a Stewart structure and supports the upper platform by means of the hydraulic electric cylinders. Specifically, each hydraulic electric cylinder is connected to the upper and lower platforms by Hooke joints. In practical application, the 6 hydraulic electric cylinders are driven by the servo driver to realize the motion of the six-degree-of-freedom platform: the first data processor receives the driver operation instructions and driving data collected by the motor vehicle simulation cockpit 1, resolves the pose of the motion platform 2 and the cylinder lengths to obtain cylinder length adjustment values, and sends these values to the microcontroller to adjust the cylinder lengths, thereby realizing motion of the upper platform in six degrees of freedom, namely three translations in a Cartesian coordinate system and rotations about the three coordinate axes.
In this embodiment, the motion platform 2 is further provided with six-dimensional force sensors and a handheld terminal. The six-dimensional force sensors are respectively arranged at the connection nodes between the upper platform and the hydraulic electric cylinders, i.e., at the points U_i of the six-degree-of-freedom platform, the six subscripts corresponding to the six connection points. When the six-degree-of-freedom platform starts to work, the six-dimensional force sensors sense the pressure values at the six connection nodes in real time and feed them back to the first data processor, providing data support for calculating the cylinder length adjustment values. The handheld terminal is wirelessly connected with the microcontroller and is used to apply an emergency brake to the servo driver to stop the six-degree-of-freedom platform, thereby braking the motor vehicle simulation cockpit 1 and preventing accidents caused by an inability to brake if the equipment runs out of control.
In this embodiment, the motor vehicle simulation cockpit 1 is specifically fixedly arranged above the upper platform.
In this embodiment, the CAVE immersive MR audiovisual system includes a CAVE body mount, a projection screen 3, a second data processor, a plurality of sets of motion tracking cameras, a motion tracking module, and 3D glasses and a surround stereo system. The first data processor is respectively connected with the plurality of groups of motion tracking cameras, the projection screen 3 and the second data processor, the CAVE main body support is respectively connected with the plurality of groups of motion tracking cameras and the surrounding stereo system, the second data processor is respectively connected with the projection screen 3 and the surrounding stereo system, and the motion tracking module is respectively connected with the motor vehicle simulation cockpit 1 and the 3D glasses.
As shown in fig. 4, the projection screen 3 adopts a plurality of OLED high-definition display screens; specifically, 4 OLED high-definition display screens are connected in sequence to form a quadrilateral ring structure, and the projection screen 3 and the surround stereo system are respectively fixed on the CAVE main body support, forming a surrounding CAVE stereo audiovisual space. The motion tracking modules are used for capturing the driver's gaze position information and the position information of the motor vehicle simulation cockpit 1. The multiple groups of motion tracking cameras are fixedly arranged in different dimensions of the audiovisual space, suspended in the corner regions around the CAVE main body support or on its surrounding side faces, and take pictures to obtain more accurate driving action information. The motion tracking modules comprise a first motion tracking component fixedly arranged on the 3D glasses and a second motion tracking component arranged at the front end of the motor vehicle simulation cockpit 1; in practical application, each motion tracking module adopts a positioning instrument, specifically a Bluetooth locator. The multiple groups of motion tracking cameras collect driving images of the driver; when the driver's position and posture in the motor vehicle simulation cockpit 1 change, the second data processor analyzes the driver's limb operation information based on these driving images, switches the displayed picture in real time to a view-angle change image matching the driver's current view angle, and transmits it to the projection screen 3 for display.
When the driver performs a virtual driving activity, the 3D glasses are used in combination with the projection screen 3 for display, converting the dual-channel 2D video pictures output at different frequencies into a 3D image. The motion tracking condition is obtained by combining the motion tracking cameras and the motion tracking modules; the second data processor transmits the driver operation instructions and driving data to the first data processor, which then adjusts the motion platform 2. The second data processor also feeds the video pictures back onto the 4 OLED high-definition display screens, so that the output picture on the projection screen 3 is adjusted in real time with the motion of the driver's head and the vehicle, reducing adverse effects such as dizziness produced in the virtual reality environment. Specifically, the second data processor analyzes the motion tracking condition and outputs the motion tracking video and audio signals in real time to the OLED high-definition display screens and the surround stereo system, and the video content projected on the OLED high-definition display screens is converted into 3D pictures through the 3D glasses to enhance the driver's audiovisual experience in the MR environment, thereby providing the driver with a simulated motor vehicle driving environment in which to control the motor vehicle, for example accelerating, braking, shifting gears and turning.
In practical application, the motion tracking condition includes image data acquired by the motion tracking camera and positioning information of the motion tracking module, the motion tracking condition is obtained by preprocessing the image data, and the preprocessing of the image data adopts one or more of digitalization, normalization, geometric change, smoothing, restoration and enhancement steps. The motion tracking video and the motion tracking audio signals are obtained when a virtual driving environment is built through a game engine.
In this embodiment, the second data processor analyzes the motion tracking situation and outputs the motion tracking video and the motion tracking audio signal to the OLED high-definition display screen and the surround stereo system in real time, and the method specifically includes the following steps:
a moving object capturing step: and detecting the moving target based on a foreground detection algorithm. In practical application, the foreground detection algorithm may adopt any one of a background subtraction method, an interframe difference method and an optical flow method.
An identification step: identifying and classifying the moving targets based on the classification characteristic parameters, screening out the moving target identified as the driver, and taking it as the tracking target;
a tracking step: and tracking the tracking target by a point tracking method based on the position of the center of gravity.
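The moving-target capturing step, taking the inter-frame difference method as an example (one of the three foreground detection options named above), can be sketched as follows; the threshold value is an assumption.

```python
import numpy as np

# Inter-frame difference foreground detection: pixels whose grey-level
# change between consecutive frames exceeds a threshold are foreground.
def frame_difference(prev_frame, curr_frame, threshold=25):
    """prev_frame, curr_frame: 2-D grey-level arrays of equal shape.
    Returns a binary (0/1) foreground mask."""
    diff = np.abs(curr_frame.astype(np.int32) - prev_frame.astype(np.int32))
    return (diff > threshold).astype(np.uint8)
```

The resulting mask feeds the identification step, where connected foreground regions are classified using the characteristic parameters described below.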
In this embodiment, the identification step identifies the moving target using a template-based feature matching method or a machine learning method. The template feature matching method obtains the target classification result by computing the features of each target region and comparing them with a threshold. The machine learning method obtains the classification result by constructing a classifier: the moving target data are divided into training samples and test samples, the training samples are used to initialize the classifier parameters, the test samples are used to verify classification performance, and the classifier is trained until a preset classification accuracy is reached, yielding a classifier for identifying the moving target.
Common classification features include perimeter, area, aspect ratio, Hu invariant moments, texture, wavelet moments, SIFT and the like. Since the moving target is a driver, this embodiment selects the aspect ratio and Hu invariant moments as classification features. Specifically, a sufficient number of pedestrian photographs are selected, different Hu invariant moments are extracted and tested, and the discrimination between pedestrians and other objects is obtained by comparison; the Hu invariant moment with the best discrimination performance is selected as the target Hu invariant moment, and the aspect ratio and the target Hu invariant moment are each given corresponding preset weights and combined to form the classification characteristic parameter finally used for classification.
In this embodiment, tracking the target by a point tracking method based on the position of the center of gravity specifically comprises the following steps:

Target matching step: calculate the barycentric coordinates of all targets in the first frame of the video and assign target IDs, i.e., assign an ID to each target starting from number 1 in the scanning direction of the image, and record the largest ID as ID_max;

Search area acquisition step: calculate the barycentric coordinates of the N-th target in the n-th frame of the video and obtain the search area of this target in the (n−1)-th frame, where n = 2, 3, …, num_n and N = 1, 2, 3, …, num_N; num_n and num_N respectively denote the total number of frames of the video and the maximum number of targets in the video;

Target searching step: search for targets within the search area and track based on the target with the minimum similarity deviation value. In practical application, if targets exist in the search area of the (n−1)-th frame, the barycentric coordinates of all such targets are calculated, and similarity deviation values are obtained from the relative distances between those barycentric coordinates and the barycentric coordinate of the current target, i.e., the similarity deviation between each target in the search area of the (n−1)-th frame and the N-th target of the n-th frame; the ID of the target with the minimum similarity deviation value is taken as the ID of the N-th target of the n-th frame. If no target exists in the search area of the (n−1)-th frame, the current target is treated as a new target and assigned a new ID, namely ID_max + 1, and the value of ID_max is updated at the same time: the new ID_max = ID_max + 1.
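The three tracking steps can be sketched in pure Python; the circular search radius and the dictionary data layout are assumptions made for illustration.

```python
import math

# Sketch of centre-of-gravity point tracking: each centroid in the current
# frame is matched to the previous-frame target with the minimum similarity
# deviation value inside the search area; otherwise it gets ID_max + 1.
def track(prev_targets, curr_centroids, radius=20.0):
    """prev_targets: {id: (x, y)} from the previous frame;
    curr_centroids: list of (x, y); returns {id: (x, y)}."""
    targets = {}
    id_max = max(prev_targets, default=0)
    for cx, cy in curr_centroids:
        best_id, best_dev = None, None
        for tid, (px, py) in prev_targets.items():
            dev = math.hypot(cx - px, cy - py)  # similarity deviation value
            if dev <= radius and (best_dev is None or dev < best_dev):
                best_id, best_dev = tid, dev
        if best_id is None:        # no target in the search area
            id_max += 1            # new ID = ID_max + 1
            best_id = id_max
        targets[best_id] = (cx, cy)
    return targets
```

Running this frame by frame propagates IDs along the video, matching the step description above.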
In addition, those skilled in the art can select game engines, such as UNITY, UE4, etc., according to actual situations, which is not limited herein.
In this embodiment, the projection screen 3 is used to display an interactive interface of a virtual driving scene, and the interactive interface provides options of the virtual driving environment on various operation interfaces, specifically including switching between different viewing angle interfaces such as an in-vehicle viewing angle and an out-vehicle viewing angle. In actual application, a user interaction interface system is designed in a virtual scene to provide a scene selection area with functions of getting on, getting off, walking pedestrians and the like, and different interfaces are switched by selecting a designated scene.
In the embodiment, the intelligent training system for motor vehicle driving based on the MR technology is also provided with a data storage, and the second data processor is connected with the data storage. In practical application, the second data processor specifically adopts a computer group configured with a high-performance display card, the data memory specifically adopts a database server, and the data memory stores and records data processed by the second data processor.
In this embodiment, the first data processor specifically employs a central control computer. In addition, a person skilled in the art may also implement a corresponding processing analysis function by using a server based on a cloud platform according to an actual situation, and the first data processor is not limited in this embodiment.
With reference to fig. 3 and 5, the pose of the motion platform 2 and the cylinder lengths of the hydraulic electric cylinders are resolved to obtain the cylinder length adjustment values. Specifically, a kinematic analysis is performed on the relationship between the pose, velocity and acceleration of the motion platform 2 and the extension, extension velocity and acceleration of the six hydraulic cylinders. The kinematic analysis specifically comprises:
Establishing a coordinate system: the origin of the inertial coordinate system is fixed on the lower platform at the geometric center of the hexagon enclosed by the hinge points D_i, and the origin of the moving coordinate system is fixed on the upper platform at the geometric center of the hexagon enclosed by the hinge points U_i; at the initial position the two coordinate systems are parallel, i = 1, 2, …, 6.
Kinematic forward solution: solving the pose of the upper platform from the elongations of the six hydraulic oil cylinders;
kinematic inverse solution: solving the elongation of each hydraulic oil cylinder from the pose of the upper platform;
in the present embodiment, both the forward and the inverse solutions are essential resolving processes for the motion platform 2.
The inverse kinematic solution specifically comprises: from the moving coordinate system U-X_UY_UZ_U, a derived coordinate system O-X_OY_OZ_O is generated; the two coincide completely at the start. The moving coordinate system has its origin fixed to the platform and its coordinate axes move with the platform, i.e., it is the body coordinate system; the derived coordinate system has only its origin fixed to the platform, and the directions of its coordinate axes remain unchanged as it moves with the upper platform.
Any spatial rotation performed by the upper platform is converted into the positional relation between the derived coordinate system and the moving coordinate system: in the derived coordinate system O-X_OY_OZ_O, the platform is rotated first about the Z_U axis of U-X_UY_UZ_U and then sequentially about the Y_U and X_U axes, giving the rotation angles γ, β and α about the three axes;
Here the decomposition is further illustrated by example: a spatial rotation of the upper platform is decomposed into successive rotations about the Z_U, Y_U and X_U axes by the angles γ, β and α. An upper hinge point U_i has, in the coordinate system U-X_UY_UZ_U, the coordinate vector U_i = [x_ui y_ui z_ui]^T. After rotating by the angle γ it acquires a new coordinate vector in O-X_OY_OZ_O, while a fixed coordinate system U-X_ZY_ZZ_Z that no longer rotates with the platform is left at this position; likewise, rotating by the angle β produces a coordinate vector in U-X_ZY_ZZ_Z and leaves behind the coordinate system U-X_YY_YZ_Y; finally, after rotating by the angle α, a coordinate vector in U-X_YY_YZ_Y is produced. In the final position, i.e., the position after the rotation by α, the relation between the coordinate vectors of the hinge point U_i in O-X_OY_OZ_O is established as an expression in γ, β and α.
As shown in fig. 6, a spatial rotation of the upper platform may be decomposed into successive rotations about the Z_U, Y_U and X_U axes by the angles γ, β and α, derived as follows: after rotation about the Z_U axis by the angle γ, let the distance between the points U_i and U be L, and let U_i make an angle θ with the X_U axis in the coordinate system U-X_UY_UZ_U; then

$$x_{ui}' = L\cos(\theta + \gamma) = x_{ui}\cos\gamma - y_{ui}\sin\gamma,\qquad y_{ui}' = L\sin(\theta + \gamma) = x_{ui}\sin\gamma + y_{ui}\cos\gamma,\qquad z_{ui}' = z_{ui}$$
Written in matrix form, the conversion from U-X_ZY_ZZ_Z to O-X_OY_OZ_O given by the above equations is:

$$U_i^{O} = R_z(\gamma)\,U_i^{Z},\qquad R_z(\gamma)=\begin{bmatrix}\cos\gamma & -\sin\gamma & 0\\ \sin\gamma & \cos\gamma & 0\\ 0 & 0 & 1\end{bmatrix}$$
Likewise, the conversions from U-X_YY_YZ_Y to U-X_ZY_ZZ_Z and from U-X_UY_UZ_U in its final position to U-X_YY_YZ_Y take the matrix forms:

$$U_i^{Z}=R_y(\beta)\,U_i^{Y},\qquad R_y(\beta)=\begin{bmatrix}\cos\beta & 0 & \sin\beta\\ 0 & 1 & 0\\ -\sin\beta & 0 & \cos\beta\end{bmatrix};\qquad U_i^{Y}=R_x(\alpha)\,U_i,\qquad R_x(\alpha)=\begin{bmatrix}1 & 0 & 0\\ 0 & \cos\alpha & -\sin\alpha\\ 0 & \sin\alpha & \cos\alpha\end{bmatrix}$$
The conversion from U-X_UY_UZ_U in its final position to O-X_OY_OZ_O is therefore:

$$U_i^{O} = R_z(\gamma)R_y(\beta)R_x(\alpha)\,U_i$$
Let the overall transformation matrix in the above equation be R; then:

$$R = R_z(\gamma)R_y(\beta)R_x(\alpha) = \begin{bmatrix}\cos\gamma\cos\beta & \cos\gamma\sin\beta\sin\alpha-\sin\gamma\cos\alpha & \cos\gamma\sin\beta\cos\alpha+\sin\gamma\sin\alpha\\ \sin\gamma\cos\beta & \sin\gamma\sin\beta\sin\alpha+\cos\gamma\cos\alpha & \sin\gamma\sin\beta\cos\alpha-\cos\gamma\sin\alpha\\ -\sin\beta & \cos\beta\sin\alpha & \cos\beta\cos\alpha\end{bmatrix}$$
The above derivation assumes rotation angles defined according to the right-hand rule; otherwise the signs of the angles are inverted.
Let the initial position of the upper platform undergo a displacement s = [x, y, z]^T relative to the inertial coordinate system at point C, reaching the position of point U of the moving coordinate system, where a spatial rotation occurs at point U; the hinge point positions of the hydraulic oil cylinders are as shown in fig. 6;
let c denote the vector DC and l_i the adjusted leg vector of the i-th hydraulic oil cylinder; from the vector relation of D_i and U_i in the moving coordinate system one obtains:
where U_i denotes the position vector of the upper-platform hinge point in U-X_UY_UZ_U, D_i the position vector of the corresponding lower-platform point in the O-X_OY_OZ_O coordinate system, R is the overall transformation matrix, and i denotes the serial number of the corresponding hydraulic oil cylinder;
and then obtaining:
l_i = c + s + R·U_i − D_i
the elongation of the hydraulic oil cylinder is:

$$\Delta l_i = \sqrt{l_i^{T} l_i} - l_0$$

where l_0 is the original length of the hydraulic oil cylinder, l_i is the adjusted leg vector of the i-th hydraulic oil cylinder, and l_i^T is the transpose of l_i.
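As an illustration of the inverse solution just derived, the following minimal sketch composes R = R_z(γ)R_y(β)R_x(α), evaluates l_i = c + s + R·U_i − D_i, and returns the elongation √(l_iᵀl_i) − l_0. The hinge coordinates and leg length in the usage below are invented, since the patent gives no concrete geometry; the function itself works for any number of legs.

```python
# Hypothetical sketch of the inverse kinematic solution: given the pose of
# the upper platform, compute each cylinder's elongation
# Delta_l_i = |c + s + R*U_i - D_i| - l_0.
import math

def rot_z(g):
    return [[math.cos(g), -math.sin(g), 0.0],
            [math.sin(g),  math.cos(g), 0.0],
            [0.0, 0.0, 1.0]]

def rot_y(b):
    return [[ math.cos(b), 0.0, math.sin(b)],
            [0.0, 1.0, 0.0],
            [-math.sin(b), 0.0, math.cos(b)]]

def rot_x(a):
    return [[1.0, 0.0, 0.0],
            [0.0, math.cos(a), -math.sin(a)],
            [0.0, math.sin(a),  math.cos(a)]]

def mat_mul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(3)) for j in range(3)]
            for i in range(3)]

def mat_vec(A, v):
    return [sum(A[i][k] * v[k] for k in range(3)) for i in range(3)]

def leg_elongations(s, angles, U, D, c, l0):
    """s: displacement [x, y, z]; angles: (gamma, beta, alpha);
    U, D: upper/lower hinge points; c: vector DC; l0: original leg length."""
    g, b, a = angles
    R = mat_mul(rot_z(g), mat_mul(rot_y(b), rot_x(a)))   # overall matrix R
    out = []
    for Ui, Di in zip(U, D):
        RU = mat_vec(R, Ui)
        li = [c[k] + s[k] + RU[k] - Di[k] for k in range(3)]  # l_i = c+s+R*U_i-D_i
        out.append(math.sqrt(sum(x * x for x in li)) - l0)    # sqrt(l_i^T l_i) - l_0
    return out
```

With the upper and lower hinge rings chosen to coincide and c vertical, the neutral pose gives zero elongation for every leg and a pure heave lengthens each leg by exactly the heave amount, which is a quick sanity check.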
In this embodiment, the kinematic forward solution is specifically a numerical process involving a nonlinear system of equations that contains the elongation parameters of the six hydraulic oil cylinders.
This embodiment adopts a numerical method rather than an analytical method, because a numerical method, given an appropriately chosen initial value, can satisfy the real-time requirement. Numerical methods include iterative search methods, successive approximation methods and optimization methods: iterative search divides into six-dimensional search (the Newton-Raphson method and the modified Jacobian matrix method) and three-dimensional search; the successive approximation method is a simplified form of the Newton-Raphson method; and optimization methods include genetic algorithms and neural network methods.
In practical application, among these numerical methods the Newton-Raphson method is the most efficient and is a forward-solution method that meets the real-time, accuracy and stability requirements. It suits this situation particularly well because the motion platform 2, which simulates vehicle motion, has modest precision requirements but strict real-time requirements. The Newton-Raphson method performs iterative computation based on Taylor expansion and can be described as follows: establish a nonlinear system of equations with the generalized coordinates of the motion platform 2 as variables; perform a Taylor expansion around a preset initial value and keep only its linear part; solve, construct a new initial value, expand again around the new value, and repeat until the preset precision is met. The method needs few iterations, achieves high precision with short execution time, converges reliably, and is suitable for real-time control.
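The expand-solve-re-expand loop described above can be illustrated on a deliberately tiny analogue: a planar two-leg mechanism whose pose (here a single 2-D point) is recovered from measured leg lengths by Newton-Raphson iteration. The anchor geometry, initial guess and tolerances are invented; the real platform solves a six-dimensional system with six cylinder elongations in the same way.

```python
# Hypothetical miniature of the Newton-Raphson forward solution: starting
# from an initial guess, linearize the leg-length equations, solve the
# resulting 2x2 linear system, and repeat until the residual is small.
import math

A1, A2 = (0.0, 0.0), (4.0, 0.0)  # fixed lower hinge points (assumed)

def leg_lengths(x, y):
    return (math.hypot(x - A1[0], y - A1[1]),
            math.hypot(x - A2[0], y - A2[1]))

def forward_solve(l1, l2, x0=1.0, y0=1.0, tol=1e-10, max_iter=50):
    x, y = x0, y0
    for _ in range(max_iter):
        r1, r2 = leg_lengths(x, y)
        f1, f2 = r1 - l1, r2 - l2           # residuals of the leg equations
        if abs(f1) < tol and abs(f2) < tol:
            break
        # Jacobian of the leg lengths with respect to (x, y)
        j11, j12 = (x - A1[0]) / r1, (y - A1[1]) / r1
        j21, j22 = (x - A2[0]) / r2, (y - A2[1]) / r2
        det = j11 * j22 - j12 * j21
        dx = (f1 * j22 - f2 * j12) / det    # solve J * [dx, dy] = [f1, f2]
        dy = (j11 * f2 - j21 * f1) / det
        x, y = x - dx, y - dy
    return x, y
```

Because each step keeps only the linear part of the expansion, convergence is quadratic near the solution, which is why the method needs very few iterations in a real-time loop.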
As shown in fig. 7, the kinematic forward solution is used to feed back a control signal in the motor vehicle simulation cockpit 1: driving dynamics parameters are solved from the vehicle dynamics model, the pose parameters of the motion platform 2 are determined by a washout filtering algorithm, the elongation of each hydraulic oil cylinder is calculated from the driving dynamics parameters by the kinematic inverse solution, and the servo driver drives the hydraulic cylinders accordingly; a sensor on each hydraulic cylinder detects its elongation, which then undergoes the kinematic forward solution, and the difference between the resulting pose parameters and those produced by the washout filtering algorithm is amplified and used as a feedback signal of the inverse-solution motion. In practical application, the driving dynamics parameters include axial linear displacement, axial angular displacement, suspension dynamic deflection and tire spin angular velocity.
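The washout filtering mentioned above is, in its simplest classical form, a high-pass filter on the computed vehicle accelerations: onsets are passed to the motion platform, while a sustained acceleration decays ("washes out") so the actuators can creep back toward neutral. The following sketch uses a first-order filter with an invented cutoff and sample time; real washout algorithms add scaling, tilt coordination and rotational channels.

```python
# Hypothetical first-order washout high-pass filter:
# y[n] = a * (y[n-1] + x[n] - x[n-1]), with a = tau / (tau + dt).
def washout_highpass(samples, dt=0.01, tau=1.0):
    a = tau / (tau + dt)
    y, x_prev, out = 0.0, samples[0], []
    for x in samples:
        y = a * (y + x - x_prev)  # pass changes, let constants decay
        x_prev = x
        out.append(y)
    return out
```

Feeding a step of constant longitudinal acceleration shows the onset passing through almost unchanged while the steady part decays toward zero over a few time constants.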
Embodiment 2
In this embodiment 2, on the basis of embodiment 1, the CAVE immersive MR audio-visual system is further refined to meet the various requirements of simulating real driving.
In this embodiment, the CAVE immersive MR audiovisual system is modeled using the following steps:
establishing the various models required by the driving scene with a game engine, including a number of dynamic models and static models of trees, vegetation, houses, roads, traffic lights and the like; in practical application, terrain is built in the game engine, mountains, roads and illumination are added to it, and trees and houses are constructed; some models are completed by directly importing assets from the large built-in resource packages included with the game engine.
Combining the dynamic and static models into a complete virtual driving environment, with sound and particle systems added to make the platform more realistic; an intelligent vehicle control module, an operation response module and a pedestrian module are introduced on top of the virtual driving environment, so that a real, complex intelligent traffic driving scene is restored to a greater extent, the driver's driving state is better represented during virtual driving, and the interactivity and immersion of the driving experience increase. In practical application, the intelligent vehicle control module implements the virtual vehicle operation logic according to real driving traffic rules and the vehicle states required by the test; when the driver drives in real time, changes of the vehicle and its surroundings are triggered, and the operation response module displays the changes of the real-time driving environment; the pedestrian module controls the behavior trajectories of the pedestrian models in the virtual driving environment. In practical application, the virtual driving scene is the most important component in developing the virtual driving environment: how faithfully and logically the real scenery is reproduced directly affects the driver's impression of the simulated drive and therefore, to a great extent, the realism and reliability of the simulation test data. When designing the virtual driving scene, the appearance, color, material and texture of the driving environment and physical effects such as gravity, friction and collision are therefore continuously adjusted, and the scene is designed and built comprehensively from three aspects, namely the static models, the dynamic models and the interactive interface, so that the real environment is restored more faithfully.
In this embodiment, the static models specifically include the terrain, landscape, flowers and trees, the driving road, building structures, the sky box and the like. In practical application, the building models required by the scene are structurally complex and numerous, so stuttering and frame drops occur easily at run time; therefore, when buildings are added to the scene, models that need not express detail are optimized, specifically by reducing the line and triangle counts, and after a building model is imported, its material and appearance are expressed by texture mapping, which saves system resources and reduces the response latency of the virtual driving environment.
In this embodiment, road modeling in the scene can be completed directly with a plug-in tool in the game engine, with details added as the scene requires, so that road models including crossroads, T-junctions, roundabouts and the like can be created in real time; this avoids repeated import and export between software packages, reduces the occupation of system resources, and achieves road modeling more accurately and efficiently.
In this embodiment, the dynamic models specifically include the master-controlled vehicle model. On the basis of satisfying the acceleration, deceleration, braking and steering functions required of the platform vehicle, the master-controlled vehicle model is structurally optimized to make the driven vehicle easier to control: the numbers of points, lines and faces and the triangle count of the stl-format model are reduced without changing the vehicle's main structure, the optimized three-dimensional vehicle model is imported, and its coordinate axes are adjusted to a setting that better fits the scene, avoiding the model appearing upside down.
Embodiment 3
In this embodiment 3, the structure of the projection screen 3 is extended on the basis of embodiments 1 and 2; this embodiment provides a ring structure and a U-shaped structure.
Specifically, for the ring structure, as shown in fig. 8, the projection screen 3 adopts 3 OLED high-definition display screens connected in sequence to form a triangular ring; the screens form corner regions at the joints, and the groups of motion tracking cameras are respectively suspended in the corner regions. The groups of motion tracking cameras may instead be arranged on one side of the plane of each OLED high-definition display screen; the exact mounting position is not limited in this embodiment.
In addition, a person skilled in the art may also connect 5, 6 or more OLED high-definition display screens in sequence to form the ring structure.
As shown in fig. 9, the projection screen 3 may also be a flexible curved screen which is connected end to form a circular ring structure and is disposed around the outer periphery of the simulated cockpit 1 of the vehicle.
As shown in fig. 10, in the U-shaped structure, the projection screen 3 adopts 3 OLED high-definition display screens, the 3 OLED high-definition display screens are sequentially connected to form the U-shaped structure, and the 3 OLED high-definition display screens are respectively arranged in front of, on the left of, and on the right of the simulated cockpit 1 of the motor vehicle. In addition, the projection screen 3 can also adopt a flexible curved screen which is bent to form a U shape and surrounds the front, the left side and the right side of the motor vehicle simulation cockpit 1 so as to enable a driver to obtain the visual field in the front, the left and the right directions.
The above embodiments are preferred embodiments of the present invention, but the present invention is not limited to the above embodiments, and any other changes, modifications, substitutions, combinations, and simplifications which do not depart from the spirit and principle of the present invention should be construed as equivalents thereof, and all such changes, modifications, substitutions, combinations, and simplifications are intended to be included in the scope of the present invention.
Claims (10)
1. An MR technology-based intelligent training system for motor vehicle driving is characterized by comprising a motor vehicle simulation cockpit, a motion platform, a first data processor and a CAVE immersive MR audio-visual system, wherein the first data processor is respectively connected with the motion platform, the motor vehicle simulation cockpit and the CAVE immersive MR audio-visual system;
the motor vehicle simulation cockpit is fixedly arranged on the motion platform and used for providing a driving operation environment for a driver;
the motion platform is used for simulating the vibration of the vehicle during driving and feeding back the simulated road conditions and vehicle motion in real time;
the first data processor is used for receiving a driver operation instruction and driving data acquired by each component in the motor vehicle simulation cockpit and simulating a motion and audio-visual scene in the motor vehicle driving process, and the first data processor performs cooperative operation on the driving data and then feeds the driving data back to the motion platform and the CAVE immersive MR audio-visual system in real time so as to perform simulation output on the motion and audio-visual scene in the motor vehicle driving process;
the driver operation instructions comprise steering wheel angle control, accelerator control, brake control, clutch control and gear control, the driving data comprise steering wheel angles, opening and closing amplitudes of various pedals and gear values, and the cooperative operation comprises kinematic analysis of a motion platform, namely kinematic forward solution and kinematic reverse solution;
the CAVE immersive MR audio-visual system comprises a CAVE main body support, a projection screen, a second data processor, a plurality of groups of motion tracking cameras, a motion tracking module, 3D glasses and a surrounding stereo system, wherein the first data processor is respectively connected with the plurality of groups of motion tracking cameras, the projection screen and the second data processor;
the projection screen and the surrounding stereo system are respectively and fixedly arranged on the CAVE main body support, so that a surrounding CAVE type stereo audio-visual space is formed, and the projection screen is used for displaying an interactive interface of a virtual driving scene;
the plurality of groups of motion tracking cameras are respectively fixedly arranged on the CAVE main body bracket, the plurality of groups of motion tracking cameras are used for capturing the motion condition of a driver in the process of simulating driving, and the motion tracking module is used for capturing the sight line position information of the driver and the position information of a motor vehicle simulation cockpit;
when the driver performs virtual driving activities, the 3D glasses are used in combination with the projection screen for display, converting the dual-channel 2D video frames output at different frequencies into a 3D image; the motion tracking situation is obtained by combining the motion tracking cameras and the motion tracking module; the second data processor transmits the driver operation instructions and driving data to the first data processor, which then adjusts the motion platform; and the second data processor analyzes the motion tracking situation and outputs the motion tracking video and motion tracking audio signals in real time to the OLED high-definition display screen and the surround stereo system, thereby providing the driver with a simulated motor vehicle driving environment.
2. The MR technology-based intelligent training system for motor vehicle driving according to claim 1, wherein the second data processor analyzes the motion tracking situation and outputs the motion tracking video and the motion tracking audio signals to an OLED high-definition display screen and a surround stereo system in real time, and the system comprises the following steps:
a moving object capturing step: detecting moving targets based on a foreground detection algorithm, the foreground detection algorithm adopting any one of background subtraction, inter-frame differencing and optical flow;
an identification step: identifying and classifying the moving targets based on the classification characteristic parameters, screening out the moving target identified as the driver, and taking it as the tracking target;
a tracking step: tracking a tracking target by a point tracking method based on the gravity center position;
the identification step adopts a template-based feature matching method or a machine learning method to identify a moving target;
the point tracking method based on the gravity center position is used for tracking a tracking target, and specifically comprises the following steps:
a target matching step: calculating the barycentric coordinates of all targets in the first frame of the video and assigning target IDs, i.e., assigning each target an ID starting from 1 in the scanning order of the image, the largest ID being denoted ID_max;
a search area acquisition step: calculating the barycentric coordinates of the N-th target of the n-th frame of the video and simultaneously obtaining the target's search area from the (n−1)-th frame, where n = 2, 3, …, num_n and N = 1, 2, 3, …, num_N, num_n and num_N respectively denoting the maximum frame number of the video and the maximum number of targets;
a target search step: searching for targets within the search area and tracking the target with the minimum similarity deviation value among the candidates.
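A minimal sketch of the barycenter-based matching in the steps above, assuming plain Euclidean distance between centroids as the "similarity deviation value"; the detections and IDs below are invented for illustration, and a real tracker would also restrict the search to the per-target search area.

```python
# Hypothetical barycenter point tracking: compute each target's centroid,
# then match frame-n targets to frame-(n-1) IDs by nearest centroid.
def centroid(pixels):
    """Barycenter of a target given its pixel coordinates."""
    n = len(pixels)
    return (sum(p[0] for p in pixels) / n, sum(p[1] for p in pixels) / n)

def match_targets(prev_centroids, curr_centroids):
    """Return {current target index: previous target ID} by choosing, for
    each current centroid, the previous ID with minimum squared distance."""
    assignment = {}
    for ci, c in enumerate(curr_centroids):
        best_id = min(prev_centroids,
                      key=lambda pid: (c[0] - prev_centroids[pid][0]) ** 2
                                    + (c[1] - prev_centroids[pid][1]) ** 2)
        assignment[ci] = best_id
    return assignment
```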
3. The intelligent training system for motor vehicle driving based on MR technology as claimed in claim 1, wherein the motor vehicle simulation cockpit is provided with an adjustable seat, a steering wheel device, a running light control device, a motor vehicle start switch, an accelerator pedal device, a running brake device, a parking brake device, a clutch device, a gear shifting device, a collecting and detecting device, a driving guidance processor and a display;
the driving guidance processor is respectively connected with the adjustable seat, the steering wheel device, the running light control device, the motor vehicle starting switch, the accelerator pedal device, the running brake device, the parking brake device, the clutch device, the gear shifting device, the acquisition and detection device and the display;
the components are matched for use to simulate the real-time operation process of a driver, the driver operates the components to generate a driver operation instruction, the motor vehicle simulation cockpit acquires the driver operation instruction and transmits the driver operation instruction to the first data processor for processing, and corresponding road condition information and vehicle pose states are resolved by the first data processor and then fed back to the motion platform and the CAVE immersive MR audio-visual system;
the acquisition and detection device comprises one or more of a steering angle sensor, a pressure sensor and a displacement sensor, wherein the steering angle sensor is used for detecting the rotation angle of a steering wheel, the pressure sensor is used for detecting the pressure born by a brake pedal and an accelerator in the treading process, and the displacement sensor is used for detecting the displacement of the brake pedal and the accelerator in the treading process;
the driving guidance processor is embedded in the display, the driving guidance processor is used for providing real-time driving guidance information for a driver, and the display is used for displaying the real-time driving guidance information;
the driving guidance processor adopts a coach decision model based on model prediction control, and the coach decision model comprises a prediction module, a rolling optimization module and a feedback correction module;
the prediction module is used for predicting the driving information of the student at the next sampling moment at the current sampling moment, and the driving information of the student at the next sampling moment is an expected track;
the rolling optimization module is used for forming an error value according to the difference value between the student driving track and the expected track and issuing a driving instruction according to the error value;
the feedback correction module is used for circularly and continuously executing the prediction module and the rolling optimization module to perform feedback correction so that a student completes a driving task according to the guidance of a plurality of driving guidance instructions;
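A hypothetical one-dimensional reduction of the predict / rolling-optimize / feedback-correct cycle described above: the trainee model, the gain and the expected track are all invented, and the rolling optimization collapses to a single-step proportional correction issued at every sampling instant.

```python
# Hypothetical sketch of the coach decision loop: predict the trainee's
# next position, form the error to the expected track, issue a corrective
# instruction, and repeat the cycle (feedback correction).
def coach_loop(expected_track, x0=0.0, v0=0.0, dt=0.1, gain=2.0):
    """Return the trainee's position after guidance at each sampling time."""
    x, v, history = x0, v0, []
    for target in expected_track:
        predicted = x + v * dt        # prediction module
        error = target - predicted    # deviation from the expected track
        v += gain * error             # single-step "rolling optimization"
        x += v * dt                   # trainee follows the instruction
        history.append(x)
    return history
```

Repeating the cycle drives the trainee's trajectory onto the expected track, which is the closed loop of computer, trainee, vehicle and road that the claim describes.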
the coach decision model performs cluster analysis on the driving characteristics of the trainees based on the FCM algorithm, and performs analysis on the characteristics of the trainees to form a training closed loop consisting of a computer, the trainees, a vehicle and a road;
the cluster analysis of the trainee driving characteristics based on the FCM algorithm specifically comprises the following steps:
step 1: setting the number of trainee classes c, the maximum number of iterations, the convergence precision ε of the algorithm, and the fuzzy weighting index;
step 2: setting the iterative convergence condition and initializing the trainee feature cluster centers; using the current cluster centers, the membership function is calculated according to the membership formula of the i-th sample with respect to the j-th class, and the trainee feature clustering objective function is set so as to minimize the membership-weighted sum of the distances from the trainee feature sample points to the cluster centers;
step 3: modifying each cluster center using the current membership function, the membership formula being:

$$u_j(x_i) = \frac{\left(1/\|x_i - c_j\|^2\right)^{1/(k-1)}}{\sum_{l=1}^{C}\left(1/\|x_i - c_l\|^2\right)^{1/(k-1)}}$$
where u_j(x_i) is the membership of the i-th sample to the j-th class, k is the fuzzy weighting index of the FCM algorithm, c_j (j = 1, 2, …, C) are the trainee feature cluster centers, and C is the total number of cluster centers;
step 4: repeating step 2 and step 3; when the membership u_t at iteration step t and the membership u_{t−1} at step t−1 satisfy ||u_t − u_{t−1}|| ≤ ε, the iteration terminates, finally yielding the cluster center of each trainee class and each trainee's membership value for every class, where ε is the membership error coefficient.
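The FCM steps above can be sketched as follows, assuming invented one-dimensional trainee features and, for brevity, applying the ε stopping test to the change in cluster centers rather than to the membership matrix:

```python
# Hypothetical FCM sketch: memberships follow the inverse-distance formula,
# centers are the membership-weighted means, and iteration stops when the
# center change falls below eps (a center-based stand-in for ||u_t - u_{t-1}||).
def fcm(samples, centers, k=2.0, eps=1e-6, max_iter=100):
    u = []
    for _ in range(max_iter):
        # membership u_j(x_i) of sample i in class j
        u = []
        for x in samples:
            d = [max(abs(x - c), 1e-12) ** (2.0 / (k - 1.0)) for c in centers]
            inv = [1.0 / dj for dj in d]
            s = sum(inv)
            u.append([v / s for v in inv])
        # update each cluster center with the current memberships
        new_centers = []
        for j in range(len(centers)):
            num = sum((u[i][j] ** k) * samples[i] for i in range(len(samples)))
            den = sum(u[i][j] ** k for i in range(len(samples)))
            new_centers.append(num / den)
        delta = max(abs(a - b) for a, b in zip(new_centers, centers))
        centers = new_centers
        if delta <= eps:
            break
    return centers, u
```

On two well-separated groups of features the centers settle near the group means and each row of memberships sums to one.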
4. The MR-technology-based intelligent training system for motor vehicle driving according to claim 1, wherein the projection screen is provided with at least 3 OLED high-definition display screens, and the OLED high-definition display screens are connected in sequence to form a ring structure.
5. The MR technology based intelligent training system for motor vehicle driving as claimed in claim 1, wherein the projection screen is a flexible curved screen, and the flexible curved screen is connected end to form a ring structure and is arranged around the outer periphery of the simulated cockpit of the motor vehicle.
6. The MR-technology-based intelligent training system for motor vehicle driving according to claim 1, wherein the projection screen is provided with 3 OLED high-definition display screens, the 3 OLED high-definition display screens are sequentially connected to form a U-shaped structure, and the 3 OLED high-definition display screens are respectively arranged in front of, on the left side of and on the right side of the motor vehicle simulation cockpit.
7. The intelligent training system for motor vehicle driving based on MR technology as claimed in claim 1, wherein the projection screen is a flexible curved screen, the flexible curved screen is bent to form a U shape, and the flexible curved screen surrounds the front, the left side and the right side of the simulated cockpit of the motor vehicle.
8. The MR-technology-based intelligent training system for motor vehicle driving according to claim 1, wherein the motion tracking module comprises a first motion tracking component and a second motion tracking component, the first motion tracking component being fixedly arranged on the 3D glasses and the second motion tracking component being arranged on the front end part of the motor vehicle simulation cockpit; the groups of motion tracking cameras collect driving images of the driver, and when the position and posture of the driver in the motor vehicle simulation cockpit change, the second data processor analyzes the driver's limb operation information from the driving images, converts it in real time into a view-angle change image for the driver's current viewing angle, and transmits the image to the projection screen for display; the motion tracking situation comprises the image data collected by the motion tracking cameras and the positioning information of the motion tracking module, and is obtained by preprocessing the image data, the preprocessing adopting one or more of digitization, normalization, geometric transformation, smoothing, restoration and enhancement.
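Two of the preprocessing operations listed in this claim, normalization and smoothing, can be sketched as follows; the tiny image is invented for illustration, and a real pipeline would operate on the motion tracking camera frames.

```python
# Hypothetical preprocessing sketch: min-max normalization of grayscale
# values to [0, 1], followed by a 3x3 box-filter smoothing pass.
def normalize(img):
    lo = min(min(row) for row in img)
    hi = max(max(row) for row in img)
    span = (hi - lo) or 1
    return [[(p - lo) / span for p in row] for row in img]

def box_smooth(img):
    """3x3 mean filter; border pixels average over the pixels that exist."""
    h, w = len(img), len(img[0])
    out = []
    for i in range(h):
        row = []
        for j in range(w):
            vals = [img[a][b]
                    for a in range(max(0, i - 1), min(h, i + 2))
                    for b in range(max(0, j - 1), min(w, j + 2))]
            row.append(sum(vals) / len(vals))
        out.append(row)
    return out
```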
9. The MR technology based intelligent training system for motor vehicle driving according to claim 1, wherein the motion platform is provided with a six-degree-of-freedom platform, a servo driver and a microcontroller, the servo driver is respectively connected with the six-degree-of-freedom platform and the microcontroller, the servo driver is used for driving the six-degree-of-freedom platform, and the microcontroller is used for providing driving parameters of the servo driver;
the six-degree-of-freedom platform comprises an upper platform, a lower platform fixedly arranged on the ground and 6 hydraulic electric cylinders, wherein the 6 hydraulic electric cylinders are respectively connected with the upper platform and the lower platform;
the 6 hydraulic electric cylinders are driven by a servo driver to realize the motion of the six-degree-of-freedom platform;
the motion platform is also provided with six-dimensional force sensors, which are respectively arranged at the connecting nodes between the upper platform and the hydraulic electric cylinders; the six-dimensional force sensors are mounted at the points U_i of the six-degree-of-freedom platform, the six subscripts i corresponding to the six hinge points respectively;
when the six-degree-of-freedom platform starts to work, the six-dimensional force sensor senses the pressure values at the six connecting nodes in real time and feeds the pressure values back to the first data processor to provide data support for calculation of the length adjusting value of the hydraulic electric cylinder;
the first data processor receives a driver operation instruction and driving data collected by the motor vehicle simulation cockpit, a hydraulic electric cylinder length adjusting value is obtained by resolving the position and the posture of the moving platform and the hydraulic electric cylinder length, and the hydraulic electric cylinder length adjusting value is sent to the microcontroller to adjust the hydraulic electric cylinder length, so that the movement of the upper platform in six degrees of freedom is realized, wherein the movement of the upper platform in the six degrees of freedom is specifically three translation movements in a Cartesian coordinate system and rotation around three coordinate axes.
10. The MR technology based intelligent training system for automobile driving as claimed in claim 9, wherein the motion platform is further provided with a hand-held terminal, the hand-held terminal is wirelessly connected with the microcontroller, and the hand-held terminal is used for emergency braking of the servo driver to stop the six-degree-of-freedom platform.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202110739595.2A CN113327479B (en) | 2021-06-30 | 2021-06-30 | MR technology-based intelligent training system for driving motor vehicle |
Publications (2)
Publication Number | Publication Date |
---|---|
CN113327479A true CN113327479A (en) | 2021-08-31 |
CN113327479B CN113327479B (en) | 2024-05-28 |
Family
ID=77423632
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202110739595.2A Active CN113327479B (en) | 2021-06-30 | 2021-06-30 | MR technology-based intelligent training system for driving motor vehicle |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN113327479B (en) |
Citations (13)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US6431872B1 (en) * | 1998-12-25 | 2002-08-13 | Honda Giken Kogyo Kabushiki Kaisha | Drive simulation apparatus
US20020128751A1 (en) * | 2001-01-21 | 2002-09-12 | Johan Engstrom | System and method for real-time recognition of driving patterns
US20080038708A1 (en) * | 2006-07-14 | 2008-02-14 | Slivka Benjamin W | System and method for adapting lessons to student needs |
US20100209889A1 (en) * | 2009-02-18 | 2010-08-19 | Gm Global Technology Operations, Inc. | Vehicle stability enhancement control adaptation to driving skill based on multiple types of maneuvers |
CN108454628A (en) * | 2018-04-17 | 2018-08-28 | 吉林大学 | A kind of driver turns to rolling optimization control method in people's vehicle collaboration of ring |
CN108803870A (en) * | 2017-04-28 | 2018-11-13 | 原动力科技有限公司 | For realizing the system and method for the automatic virtual environment of immersion cavernous |
CN109035960A (en) * | 2018-06-15 | 2018-12-18 | 吉林大学 | Driver's driving mode analysis system and analysis method based on simulation driving platform |
CN209044930U (en) * | 2018-07-21 | 2019-06-28 | 河南黄烨科技有限公司 | Special vehicle drive training simulator system based on mixed reality and multi-degree-of-freedom motion platform |
CN110321605A (en) * | 2019-06-19 | 2019-10-11 | 中汽研(天津)汽车工程研究院有限公司 | A kind of human-computer interaction coordination control strategy based on Multiple Velocity Model PREDICTIVE CONTROL |
CN110410282A (en) * | 2019-07-24 | 2019-11-05 | 河北工业大学 | Wind turbines health status on-line monitoring and method for diagnosing faults based on SOM-MQE and SFCM |
CN110580836A (en) * | 2019-10-15 | 2019-12-17 | 公安部交通管理科学研究所 | driving emergency treatment training device and method based on MR |
CN111986334A (en) * | 2020-09-07 | 2020-11-24 | 桂林旅游学院 | Hololens and CAVE combined virtual experience system and method |
CN215298537U (en) * | 2021-06-30 | 2021-12-24 | 暨南大学 | Motor vehicle driving intelligent training system based on MR technology |
Non-Patent Citations (3)
Title |
---|
WU QIUSHU: "Modeling of Driving School Trainees and MPC-Based Coach Decision-Making", China Master's Theses Full-Text Database, 15 February 2021 (2021-02-15) *
WANG YANLI: "Real-Time Detection and Tracking of Multiple Moving Targets in Complex Scenes", China Master's Theses Full-Text Database, 15 May 2012 (2012-05-15) *
CAI ZHONGFA, LIU DAJIAN, ZHANG ANYUAN: "Research on a Virtual-Reality-Based Automobile Driving Simulation Training System", Journal of System Simulation, no. 06, 20 June 2002 (2002-06-20) *
Cited By (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN114333489A (en) * | 2021-12-30 | 2022-04-12 | 广州小鹏汽车科技有限公司 | Remote driving simulation method, device and simulation system |
WO2024001911A1 (en) * | 2022-06-30 | 2024-01-04 | 延锋国际汽车技术有限公司 | Driving somatic sensation and sound simulation system |
CN115206155A (en) * | 2022-07-28 | 2022-10-18 | 浙江极氪智能科技有限公司 | Vehicle-mounted entertainment system and automobile |
CN118036200A (en) * | 2024-01-24 | 2024-05-14 | 德宝艺苑网络科技(北京)有限公司 | Force circulation bidirectional feedback simulation equipment |
CN118036200B (en) * | 2024-01-24 | 2024-07-12 | 德宝艺苑网络科技(北京)有限公司 | Force circulation bidirectional feedback simulation equipment |
Also Published As
Publication number | Publication date |
---|---|
CN113327479B (en) | 2024-05-28 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN113327479B (en) | MR technology-based intelligent training system for driving motor vehicle | |
WO2023207437A1 (en) | Scene flow digital twin method and system based on dynamic trajectory flow | |
Zhang et al. | Roadview: A traffic scene simulator for autonomous vehicle simulation testing | |
Cui et al. | 3D semantic map construction using improved ORB-SLAM2 for mobile robot in edge computing environment | |
CN111860269B (en) | Multi-feature fusion series RNN structure and pedestrian prediction method | |
CN111856963A (en) | Parking simulation method and device based on vehicle-mounted looking-around system | |
GB2550037A (en) | Method and system for virtual sensor data generation with depth ground truth annotation | |
CN111311009A (en) | Pedestrian trajectory prediction method based on long-term and short-term memory | |
US20230311932A1 (en) | Merging object and background radar data for autonomous driving simulations | |
CN110930811B (en) | System suitable for unmanned decision learning and training | |
Fouladinejad et al. | Modeling virtual driving environment for a driving simulator | |
Zhang et al. | Optimized segmentation with image inpainting for semantic mapping in dynamic scenes | |
DE102019102518A1 (en) | Validate gesture recognition capabilities of automated systems | |
CN112380735A (en) | Cabin engineering virtual assessment device | |
US20230311930A1 (en) | Capturing and simulating radar data for autonomous driving systems | |
CN215298537U (en) | Motor vehicle driving intelligent training system based on MR technology | |
Wang et al. | Lidar Point Cloud Object Detection and Semantic Segmentation Fusion Based on Bird's-Eye-View | |
Lu et al. | A cylindrical convolution network for dense top-view semantic segmentation with LiDAR point clouds | |
Li et al. | A Simulation System for Human-in-the-Loop Driving | |
CN112712061B (en) | Method, system and storage medium for recognizing multidirectional traffic police command gestures | |
Gupta et al. | Smart autonomous vehicle using end to end learning | |
da Costa | Detection and classification of road and objects in panoramic images on board the atlascar2 using deep learning | |
Wu et al. | Design and Simulation of an Autonomous Racecar: Perception, SLAM, Planning and Control | |
Fu et al. | Summary and Reflections on Pedestrian Trajectory Prediction in the Field of Autonomous Driving | |
Wu | Vehicle-road cooperative simulation and 3D visualization system |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||