CN117806458A - Multi-mesh fusion virtual reality interaction method and system - Google Patents

Multi-mesh fusion virtual reality interaction method and system

Info

Publication number
CN117806458A
Authority
CN
China
Prior art keywords
interaction
virtual
information
module
feature
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202311566787.3A
Other languages
Chinese (zh)
Inventor
闫军
霍建杰
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Super Vision Technology Co Ltd
Original Assignee
Super Vision Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Super Vision Technology Co Ltd
Priority to CN202311566787.3A
Publication of CN117806458A
Legal status: Pending

Abstract

The invention discloses a multi-mesh fusion virtual reality interaction method and system, relating to the technical field of virtual reality, and comprising the following steps: connecting a wearable interaction device; performing motion capture during vehicle operation to acquire user motion information; collecting touch detection information with a touch detection module; extracting historical interaction voice information; performing associated feature mining to obtain associated feature indexes; building a virtual interaction module based on the associated feature indexes, the historical interaction voice information and a virtual test environment; and, based on the user motion information and the touch detection information, setting control instructions, sending them to the virtual interaction module, dynamically adjusting and optimizing the virtual scene, and outputting a virtual optimization result. The invention solves the technical problems of poor driving experience and low safety in the prior art, and achieves the technical effect of improving driving comfort and safety.

Description

Multi-mesh fusion virtual reality interaction method and system
Technical Field
The invention relates to the technical field of virtual reality, in particular to a multi-mesh fusion virtual reality interaction method and system.
Background
With the development of technology and rising expectations for the driving experience, the traditional vehicle driving mode can no longer meet people's demands for greater intelligence, safety and comfort. Traditional driving requires the user's full attention on the road and vehicle operation, and does not allow several tasks, such as checking navigation or answering a phone call, to be handled at the same time, resulting in a poor user experience. Moreover, when an emergency arises, the traditional mode cannot respond promptly and accurately, creating a driving safety problem. The prior art therefore suffers from the technical problems of poor driving experience and low safety.
Disclosure of Invention
The multi-mesh fusion virtual reality interaction method and system provided herein effectively solve the technical problems of poor driving experience and low safety in the prior art, and achieve the technical effects of improving driving comfort and safety.
The application provides a multi-mesh fusion virtual reality interaction method and system, with the following technical solution:
in a first aspect, an embodiment of the present application provides a multi-mesh fusion virtual reality interaction method, where the method includes:
connecting a wearable interaction device, wherein the wearable interaction device comprises head-mounted display interaction equipment and limb interaction equipment, and the limb interaction equipment comprises wrist interaction equipment and ankle interaction equipment;
based on the wearable interaction equipment, performing motion capture in the running process of the vehicle to acquire user motion information, wherein the user motion information comprises facial expressions, limb motions and hand motions;
a touch detection module is arranged on an automobile steering wheel, a brake pedal and an accelerator pedal, and is used for collecting touch detection information, wherein the touch detection information comprises pressure detection information and vibration detection information;
extracting historical interaction voice information, wherein the historical interaction voice information is stored in a virtual reality interaction database and comprises a timestamp mark;
in the virtual reality interaction database, carrying out associated feature mining through the historical interaction voice information to obtain associated feature indexes, wherein the associated feature indexes comprise an in-vehicle temperature index and an in-vehicle humidity index;
building a virtual interaction module based on the associated characteristic index, the historical interaction voice information and a virtual test environment, wherein the virtual interaction module comprises an emergency control unit;
based on the user action information and the touch detection information, setting a control instruction, sending the control instruction to the virtual interaction module, dynamically adjusting and optimizing in a virtual scene, and outputting a virtual optimization result, wherein the virtual optimization result comprises speed optimization control information and direction optimization control information.
In a second aspect, embodiments of the present application provide a multi-mesh fusion virtual reality interaction system, the system comprising:
the wearable interactive device comprises a wearable interactive device connection module, a wrist interaction module and a wrist interaction module, wherein the wearable interactive device connection module is used for connecting wearable interactive devices, the wearable interactive devices comprise head-mounted display interactive devices and wrist interaction devices, and the wrist interaction devices comprise wrist interaction devices and ankle interaction devices;
the user action information acquisition module is used for capturing actions based on the wearable interaction equipment in the running process of the vehicle to acquire user action information, wherein the user action information comprises facial expressions, limb actions and hand actions;
the system comprises a touch detection information acquisition module, a touch detection information acquisition module and a control module, wherein the touch detection information acquisition module is used for configuring a touch detection module on an automobile steering wheel, a brake pedal and an accelerator pedal, and acquiring touch detection information by adopting the touch detection module, wherein the touch detection information comprises pressure detection information and vibration detection information;
the system comprises a historical interaction voice information extraction module, a virtual reality interaction database and a time stamp marking module, wherein the historical interaction voice information extraction module is used for extracting historical interaction voice information which is stored in the virtual reality interaction database;
the associated feature index acquisition module is used for carrying out associated feature mining on the historical interaction voice information in the virtual reality interaction database to acquire associated feature indexes, wherein the associated feature indexes comprise in-vehicle temperature indexes and in-vehicle humidity indexes;
the virtual interaction module building module is used for building a virtual interaction module based on the associated characteristic indexes, the historical interaction voice information and the virtual test environment, and the virtual interaction module comprises an emergency control unit;
the virtual optimization result output module is used for setting a control instruction based on the user action information and the touch detection information, sending the control instruction to the virtual interaction module, carrying out dynamic adjustment and optimization on a virtual scene, and outputting a virtual optimization result, wherein the virtual optimization result comprises speed optimization control information and direction optimization control information.
One or more technical solutions provided in the embodiments of the present application at least have the following technical effects or advantages:
the application is firstly connected with wearable interactive equipment, the wearable interactive equipment comprises head-mounted display interactive equipment and wrist interactive equipment, the wrist interactive equipment comprises wrist interactive equipment and ankle interactive equipment, then motion capture is carried out in the running process of a vehicle based on the wearable interactive equipment, user motion information is obtained, the user motion information comprises facial expressions, limb motions and hand motions, a touch detection module is configured on an automobile steering wheel, a brake pedal and an accelerator pedal, the touch detection module is adopted to collect touch detection information, and the touch detection information comprises pressure detection information and vibration detection information. And further extracting historical interaction voice information, wherein the historical interaction voice information is stored in a virtual reality interaction database, the historical interaction voice information comprises a timestamp mark, then in the virtual reality interaction database, the historical interaction voice information is used for carrying out association feature mining to obtain association feature indexes, the association feature indexes comprise in-vehicle temperature indexes and in-vehicle humidity indexes, then a virtual interaction module is built based on the association feature indexes, the historical interaction voice information and a virtual test environment, the virtual interaction module comprises an emergency control unit, finally, based on user action information and touch detection information, a control instruction is set, the control instruction is sent to the virtual interaction module, dynamic adjustment and optimization are carried out in a virtual scene, and a virtual optimization result is output, wherein the virtual optimization result comprises speed optimization control information and direction optimization control information. The technical problems of poor driving experience and low safety in the prior art are effectively solved, and the technical effects of improving driving comfort and safety are achieved.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present invention, the drawings required for the description of the embodiments are briefly introduced below. It is apparent that the drawings in the following description show only some embodiments of the present invention, and that other drawings may be obtained from them by a person skilled in the art without inventive effort.
Fig. 1 is a schematic flow chart of a method for implementing multi-mesh fusion virtual reality interaction according to an embodiment of the present application;
fig. 2 is a schematic structural diagram of a multi-mesh fusion virtual reality interaction system according to an embodiment of the present application.
Reference numerals illustrate: the system comprises a wearable interactive device connection module 1, a user action information acquisition module 2, a touch detection information acquisition module 3, a history interactive voice information extraction module 4, an associated characteristic index acquisition module 5, a virtual interactive module construction module 6 and a virtual optimization result output module 7.
Detailed Description
The application provides a multi-mesh fusion virtual reality interaction method and system for solving the technical problems of poor driving experience and low safety in the prior art.
The technical solutions in the embodiments of the present application will be clearly and completely described below with reference to the drawings in the embodiments of the present application. It will be apparent that the described embodiments are only some, but not all, of the embodiments of the present application. All other embodiments, which can be made by one of ordinary skill in the art without undue burden from the present disclosure, are within the scope of the present application based on the embodiments herein.
It should be noted that the terms "comprises" and "comprising," along with any variations thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of steps or elements is not necessarily limited to those steps or elements expressly listed, but may include other steps or modules not expressly listed or inherent to such process, method, article, or apparatus.
Example 1
As shown in fig. 1, the present invention provides a multi-mesh fusion virtual reality interaction method for improving driving comfort and safety, the method comprising:
the wearable interaction device is connected with the intelligent terminal such as a computer or mobile device in a Bluetooth connection or USB connection mode and the like, and is used for achieving man-machine interaction and information communication, and the wearable interaction device comprises a head-mounted display interaction device and a wrist interaction device. The head-mounted display interaction device (such as intelligent glasses or helmets) comprises a display, a camera, a microphone and other components, and is used for capturing visual and auditory information of a user and displaying information of a virtual scene and the like; the wrist interaction device comprises a wrist interaction device (such as a smart watch, a smart bracelet and the like), an ankle interaction device (such as a smart foot ring, a smart insole and the like). The wrist interaction device is used for capturing and sensing motion and posture information of both hands and wrists of a user, including bending and stretching degrees of wrists, opening and closing degrees of fingers, positions and postures of hands on a steering wheel and the like, and the ankle interaction device is used for capturing foot motion and posture information of the user, including force, angle, time and the like of stepping on an accelerator, a brake and a clutch, positions and postures of feet on pedals and the like.
Based on the wearable interaction device, motion capture is performed during vehicle operation to obtain user motion information, which is collected and transmitted through sensors and circuitry and comprises facial expressions, limb motions and hand motions. These signals reflect the user's emotional state and degree of fatigue: capturing facial expressions indicates whether the user feels happy, tense or tired; capturing limb motions indicates whether the user is fatigued or distracted; and capturing hand motions indicates whether the user is controlling the traveling direction of the vehicle.
The touch detection module is configured on the automobile steering wheel, brake pedal and accelerator pedal; it comprises pressure sensors and vibration sensors and is used to collect touch detection information, which comprises pressure detection information and vibration detection information. Specifically, pressure sensors on the steering wheel detect the pressure distribution and intensity of the user's hands on the wheel, from which the user's operating intention and driving state are judged; vibration sensors on the brake and accelerator pedals detect the force and frequency with which the user presses the pedals, from which the user's driving state and operating habits are judged.
Historical interaction voice information is extracted. Historical interaction voice information refers to voice information generated during past interactions, including the user's voice instructions to the system, the system's responses, and other interaction-related information. It is stored in the virtual reality interaction database as data records, each containing the interaction voice information and a corresponding timestamp mark; the timestamp mark identifies the point in time at which the record was made and helps determine the order and timing of the interaction voice information. Records in the database are ordered by their timestamp marks, so the interaction voice information for a specific historical moment can be located, and the voice interaction records for a specific period or event can be obtained by retrieving the historical interaction voice information carrying the required timestamp marks.
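As a minimal sketch of this timestamp-based retrieval — the record layout and field names below are illustrative assumptions, not taken from the patent:

```python
from dataclasses import dataclass
from datetime import datetime
from typing import List

@dataclass
class VoiceRecord:
    timestamp: datetime   # the timestamp mark attached when the record was stored
    utterance: str        # transcribed user voice instruction
    response: str         # the system's response

def records_in_window(db: List[VoiceRecord],
                      start: datetime, end: datetime) -> List[VoiceRecord]:
    """Return records whose timestamp mark falls in [start, end], time-ordered,
    i.e. the voice interaction records for a specific period or event."""
    hits = [r for r in db if start <= r.timestamp <= end]
    return sorted(hits, key=lambda r: r.timestamp)
```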
In the virtual reality interaction database, associated feature mining is performed on the historical interaction voice information to obtain associated feature indexes. Associated feature mining is a data mining technique for discovering association rules in a data set; an association rule is a relationship between different items that reflects latent patterns and associations in the data. The resulting associated feature indexes include an in-vehicle temperature index and an in-vehicle humidity index. First, the extracted voice data is preprocessed, including speech recognition, transcription and word segmentation. Features related to in-vehicle temperature and humidity are then extracted from the preprocessed data, including specific words, phrases and commands such as "too hot", "cool it down" or "adjust humidity to 50%". An association rule mining algorithm such as Apriori or FP-Growth is then applied to these features to find association rules among them; for example, a frequently occurring phrase such as "too hot" implies that the user is dissatisfied with the in-vehicle temperature and that the temperature should be adjusted. Finally, corresponding associated feature indexes are generated from the mined rules, for example by counting how often the phrase "too hot" occurs or how many times the "adjust humidity to 50%" command is executed, and the generated associated feature indexes are stored in the virtual reality interaction database.
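The patent names Apriori and FP-Growth without further detail; the following is a minimal, self-contained sketch of the first Apriori pass (support counting for keyword pairs) over keyword-segmented utterances. The tokens, the min_support value and the helper name frequent_pairs are illustrative assumptions:

```python
from itertools import combinations
from collections import Counter

def frequent_pairs(utterances, min_support=0.5):
    """One Apriori-style pass: support of each keyword pair across utterances."""
    n = len(utterances)
    counts = Counter()
    for tokens in utterances:
        for pair in combinations(sorted(set(tokens)), 2):
            counts[pair] += 1
    return {pair: c / n for pair, c in counts.items() if c / n >= min_support}

# Keyword-segmented voice records about in-vehicle temperature/humidity.
utterances = [
    {"too_hot", "lower_temperature"},
    {"too_hot", "lower_temperature", "humidity_50"},
    {"cool_down", "humidity_50"},
    {"too_hot", "lower_temperature"},
]
print(frequent_pairs(utterances))
# {('lower_temperature', 'too_hot'): 0.75} -> candidate rule:
# the user saying "too hot" is associated with lowering the temperature
```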
Based on the associated feature indexes, the historical interaction voice information and the virtual test environment, a virtual interaction module is built. The virtual interaction module is an interaction model in the virtual reality environment that simulates the interaction between the user and the vehicle; by capturing and analyzing the user's voice instructions and other interaction information, it achieves intelligent interaction with the user. The virtual interaction module includes an emergency control unit responsible for interactive operation and control in emergencies: when an emergency occurs in the vehicle or the virtual reality environment, the emergency control unit responds rapidly and takes corresponding control measures, such as automatic braking or emergency steering, to avoid an accident or mitigate the damage. The virtual interaction model and the emergency control unit are integrated into the virtual test environment, which simulates the actual running environment of the vehicle and the user's interaction in virtual reality and is used to test and verify the functions and performance of the virtual interaction module. Through testing and optimization in this environment, the virtual interaction module can accurately capture and analyze the user's voice instructions and other interaction information and perform intelligent control according to the user's needs and instructions, while the emergency control unit can respond rapidly and take correct control measures in emergencies.
Based on the user action information and the touch detection information, the user's intentions and needs, such as travel speed and steering direction, are obtained, and control instructions, such as the vehicle's speed or steering angle, are set accordingly. The control instructions are sent to the virtual interaction module, which dynamically adjusts and optimizes the vehicle state according to the instructions and the vehicle's current state; for example, if the user suddenly accelerates in the virtual scene, the virtual interaction module automatically adjusts the vehicle's speed and heading to stay matched to the user's actions and needs. After this dynamic adjustment and optimization in the virtual scene, a virtual optimization result is output, comprising speed optimization control information and direction optimization control information. The speed optimization control information consists of control instructions, such as accelerate, decelerate or hold speed, that optimally adjust the vehicle's speed to match the user's actions and needs in the virtual scene; the direction optimization control information consists of control instructions, such as turn left, turn right or adjust the steering-wheel angle, that optimally adjust the vehicle's heading according to the user's steering needs and help the user better control the vehicle in the virtual scene. The technical effect of improving driving comfort and safety is thereby achieved.
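A minimal sketch of how captured inputs might be mapped to such a control instruction; the thresholds, field names and units below are illustrative assumptions, not values from the patent:

```python
def build_control_instruction(steering_angle_deg: float,
                              accel_pedal_pressure: float) -> dict:
    """Map user motion/touch readings to a speed and direction instruction."""
    instruction = {"speed": "hold", "direction": "straight"}
    if accel_pedal_pressure > 0.8:        # hard press on the accelerator pedal
        instruction["speed"] = "accelerate"
    elif accel_pedal_pressure < 0.1:      # pedal released
        instruction["speed"] = "decelerate"
    if steering_angle_deg > 15:           # wheel turned right (wrist device)
        instruction["direction"] = "turn_right"
    elif steering_angle_deg < -15:        # wheel turned left
        instruction["direction"] = "turn_left"
    return instruction

print(build_control_instruction(steering_angle_deg=20.0,
                                accel_pedal_pressure=0.9))
# {'speed': 'accelerate', 'direction': 'turn_right'}
```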
In a preferred implementation manner provided in the embodiments of the present application, in the virtual reality interaction database, performing associated feature mining through the historical interaction voice information to obtain an associated feature index, where the method includes:
preprocessing the historical interactive voice information, including operations such as voice recognition, transcription, word segmentation and the like, extracting key words and semantic information from the historical interactive voice information, and obtaining preprocessed text information. Keyword positioning is performed based on keywords in the preprocessed text information, so that keyword labeling text information is obtained, wherein the keywords comprise important words and phrases related to vehicle control, environment and the like, and for example, if a user says "too hot", the "hot" is regarded as a keyword. After the keyword labeling text information is obtained, carrying out associated feature mining on the basis of the information in the virtual reality interaction database to obtain associated feature indexes, wherein the associated feature indexes comprise word frequency, keyword co-occurrence frequency, correlation among keywords and the like, and for example, the keyword of high occurrence frequency of heat is considered to be focused on temperature by a user. By correlating feature mining, the interaction pattern and habit of the user in the virtual reality environment, such as which instructions and words the user tends to use during driving, and the potential rules and patterns related to vehicle control and navigation, such as the temperature control strategy of the vehicle is optimized according to the rules when most users perform the operation of lowering the temperature after speaking "too hot" in the virtual reality environment. According to the preferred embodiment, the related characteristic mining is carried out through keyword positioning, so that the interference of noise and redundant information on a system is reduced, and the technical effect of improving the accuracy and efficiency of extracting the related characteristic indexes is achieved.
In another preferred implementation manner provided in the embodiments of the present application, in the virtual reality interaction database, associated feature mining is performed based on the keyword labeling text information, and the method includes:
in the virtual reality interaction database, directional association feature mining is performed based on the keyword labeling text information: feature association analysis is carried out between the keyword labeling text information and, respectively, the historical vehicle environment information and the historical user action information in the database, to determine a feature association mapping set. First, the acquired historical vehicle environment information and historical user action information are preprocessed to convert the raw data into an analyzable form, for example converting user actions into understandable instructions. Features related to the keyword labeling text information are then extracted from the preprocessed data, including vehicle environment features (such as vehicle speed, temperature and illumination), user action features (such as gestures, body posture and voice) and keyword labeling text information features (such as words, phrases and semantics). Association analysis is then performed on the extracted vehicle environment features, user action features and keyword labeling text information features to construct the feature association mapping set, which comprises a vehicle environment-keyword labeling text information association mapping subset and a user action-keyword labeling text information association mapping subset.
Feature indexing is then performed on the feature association mapping set, and a first associated feature index is determined by constructing feature vectors, calculating association degrees and optimizing the candidate indexes. The first associated feature index comprises quantitative indexes, such as frequency, probability and correlation, or qualitative indexes, such as classification labels, and reflects the user's behavior patterns and preferences in the virtual reality environment, for example the pattern of using specific words or phrases in a specific environment. Finally, the first associated feature index is marked into the associated feature index for subsequent data analysis and decision-making. In this preferred embodiment, constructing the vehicle environment-keyword labeling text information association mapping subset and the user action-keyword labeling text information association mapping subset increases and enriches the feature dimensions, achieving the technical effect of improving the accuracy of mined associated features.
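The description mentions constructing feature vectors and calculating association degrees without fixing a formula; one common quantitative choice, shown here as an illustrative sketch, is the Pearson correlation between a keyword-frequency series and a vehicle environment series:

```python
import math

def pearson(x, y):
    """Association degree between two per-trip feature series."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy) if sx and sy else 0.0

# Illustrative per-trip data: "too hot" mentions vs. mean cabin temperature.
too_hot_mentions = [0, 1, 3, 4, 6]
cabin_temp_c = [20, 22, 26, 27, 30]
print(round(pearson(too_hot_mentions, cabin_temp_c), 3))  # close to 1.0
```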
In another preferred implementation manner provided in the embodiments of the present application, non-directional association feature mining is performed in the virtual reality interaction database based on the keyword labeling text information, and the method further includes:
the two subsets in the feature association mapping set are integrated through index fusion analysis, using methods such as weighted fusion or decision-level fusion, to obtain an index fusion analysis result. For example, if certain vehicle environment features show high similarity to the keyword labeling text information, the association between them is strong, and that feature association relationship is assigned a higher weight.
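A minimal sketch of the weighted, decision-level fusion of the two subsets; the weights and score dictionaries are illustrative assumptions:

```python
def weighted_fusion(env_scores: dict, action_scores: dict,
                    w_env: float = 0.6, w_act: float = 0.4) -> dict:
    """Fuse per-keyword association scores from the vehicle-environment
    subset and the user-action subset into one index fusion result."""
    keys = set(env_scores) | set(action_scores)
    return {k: w_env * env_scores.get(k, 0.0) + w_act * action_scores.get(k, 0.0)
            for k in keys}

fused = weighted_fusion({"too_hot": 0.9, "humid": 0.4},   # env <-> keyword
                        {"too_hot": 0.5, "humid": 0.7})   # action <-> keyword
print(fused)  # e.g. {'too_hot': 0.74, 'humid': 0.52}
```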
The index fusion analysis result is then used as constraint information for a deeper feature association analysis of the data in the virtual reality interaction database, determining a feature fusion association mapping set. For example, if the index fusion analysis finds that certain vehicle environment features are strongly associated with the keyword labeling text information, using those features and that text as constraints guides the subsequent analysis toward more user action features related to those environment features, uncovering more complex association patterns. Finally, feature indexing is performed on the feature fusion association mapping set to determine a second associated feature index, which reflects the user's more complex behavior patterns and deeper preferences in the virtual reality environment, and the second associated feature index is marked into the associated feature index; this step follows the same procedure as determining the first associated feature index and is not repeated here. In this preferred embodiment, feature fusion analysis examines the user's behavior patterns and needs in the virtual reality environment from multiple angles and levels; compared with analyzing one aspect alone (such as the vehicle environment or the user actions), it yields richer and more comprehensive associated feature information, achieving the technical effect of obtaining deeper associated features.
In another preferred implementation manner provided in the embodiments of the present application, a virtual interaction module is built based on the associated feature index, the historical interaction voice information and the virtual test environment, and the method includes:
and based on the time stamp, aligning or synchronizing the associated characteristic index and the historical interactive voice information data in a time dimension, and integrating the associated characteristic index and the historical interactive voice information together for data integration after data synchronization to form a comprehensive data set containing various information, namely virtual test data.
The virtual test data is then preprocessed, including data cleaning, standardization and normalization, to remove noise, handle missing and abnormal values, and bring the data to the same scale, which facilitates subsequent model training; the preprocessing yields the virtual test preprocessed data.
Using the virtual test preprocessed data as training data, a model algorithm (such as a Transformer-based model) is selected, its parameters (including the learning rate, batch size, hidden-layer size and encoder/decoder sizes) are configured, and the virtual interaction module is configured and trained to build the virtual interaction module. In this process, the model gradually adapts to and learns the mapping from inputs to outputs through repeated iteration, and finally generates or predicts the corresponding output from the input associated feature indexes and historical interaction voice information. In this preferred embodiment, using the timestamps to obtain a time-ordered series of virtual test data gives the data synchronism, continuity and integrity, achieving the technical effect of building an efficient and reliable virtual interaction module.
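The patent names a Transformer-based model and lists the kinds of hyperparameters (learning rate, batch size, hidden size, encoder/decoder sizes) without values, so the configuration below is an illustrative PyTorch sketch with placeholder numbers:

```python
import torch
import torch.nn as nn

model = nn.Transformer(
    d_model=128,             # hidden size
    nhead=4,
    num_encoder_layers=2,    # encoder size
    num_decoder_layers=2,    # decoder size
    dim_feedforward=256,
    batch_first=True,
)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)  # learning rate

# One training-shaped forward pass on random stand-ins for the preprocessed
# associated-feature / voice-feature sequences (batch size 8).
src = torch.randn(8, 20, 128)   # input feature sequence
tgt = torch.randn(8, 10, 128)   # target interaction sequence
out = model(src, tgt)           # -> shape (8, 10, 128)
```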
In another preferred implementation manner provided in the embodiments of the present application, the virtual interaction module includes an emergency control unit, and the method further includes:
the emergency control unit is provided with a preset emergency processing program, which is a program stored in the emergency control unit for identifying and processing a preset emergency, for example, when a serious malfunction of the vehicle occurs or a dangerous situation is detected, the program is activated. When the preset emergency processing program is activated, the control authority of the vehicle is automatically taken over, namely, the control functions of the accelerator, the brake, the steering wheel and the like of the vehicle are taken over from a conventional driving system, so that the vehicle is prevented from entering a dangerous state.
If the automatic takeover of vehicle control authority is activated, a simulation test is performed in the virtual test environment: various emergencies, such as vehicle faults and road hazards, are simulated, and the vehicle's responses and performance, together with user feedback, are recorded to obtain a simulation test result.
According to the simulation test result and the user feedback information, the deficiencies of the virtual interaction module are identified, optimized and improved, and after multiple iterations an updated (more optimal) virtual interaction module is obtained. The updated module performs better when handling emergencies: through optimized algorithms and improved data processing it responds more quickly and takes appropriate measures to avoid or reduce potential danger. It is also more accurate: by improving data acquisition, processing and analysis, it identifies the user's intentions and needs more precisely and provides more personalized service. For example, suppose the vehicle's deceleration during braking is only accurate to one tenth (0.1): if the user wants the vehicle to brake harder but the vehicle cannot precisely meet that expectation, and the system judges that an emergency (insufficient braking performance, dangerous road conditions, etc.) has occurred, it may jump to the emergency control unit. If the updated virtual interaction module raises the deceleration accuracy to one thousandth (0.001), it meets the user's expectation more precisely, avoiding unnecessary jumps and providing a smoother, safer driving experience. This preferred embodiment lets the emergency control unit continuously optimize and improve the virtual interaction module while handling emergencies, achieving the technical effects of improving module performance and enhancing robustness.
Example 2
Based on the same inventive concept as the multi-mesh fusion virtual reality interaction method in the foregoing embodiment, as shown in fig. 2, the present application provides a multi-mesh fusion virtual reality interaction system; the system and method embodiments of the present application share the same inventive concept. The system comprises:
the wearable interactive device comprises a wearable interactive device connection module 1, wherein the wearable interactive device connection module 1 is used for connecting wearable interactive devices, the wearable interactive devices comprise head-mounted display interactive devices and wrist interactive devices, and the wrist interactive devices comprise wrist interactive devices and ankle interactive devices;
the user action information acquisition module 2 is used for capturing actions based on the wearable interaction equipment in the running process of the vehicle to acquire user action information, wherein the user action information comprises facial expressions, limb actions and hand actions;
the touch detection information acquisition module 3 is used for configuring a touch detection module on an automobile steering wheel, a brake pedal and an accelerator pedal, and acquiring touch detection information by adopting the touch detection module, wherein the touch detection information comprises pressure detection information and vibration detection information;
the historical interaction voice information extraction module 4 is used for extracting historical interaction voice information, the historical interaction voice information is stored in the virtual reality interaction database, and the historical interaction voice information comprises a timestamp mark;
the associated feature index obtaining module 5 is used for carrying out associated feature mining on the historical interaction voice information in the virtual reality interaction database to obtain associated feature indexes, wherein the associated feature indexes comprise an in-vehicle temperature index and an in-vehicle humidity index;
the virtual interaction module building module 6 is used for building a virtual interaction module based on the associated characteristic indexes, the historical interaction voice information and the virtual test environment, and the virtual interaction module comprises an emergency control unit;
the virtual optimization result output module 7 is used for setting a control instruction based on the user action information and the touch detection information, sending the control instruction to the virtual interaction module, dynamically adjusting and optimizing in a virtual scene, and outputting a virtual optimization result, wherein the virtual optimization result comprises speed optimization control information and direction optimization control information.
Further, the associated feature index obtaining module 5 is configured to perform the following method:
preprocessing the historical interactive voice information to obtain preprocessed text information;
performing keyword positioning based on keywords in the preprocessed text information to obtain keyword labeling text information;
and in the virtual reality interaction database, carrying out associated feature mining based on the keyword labeling text information to obtain associated feature indexes.
Further, the associated feature index obtaining module 5 is configured to perform the following method:
respectively carrying out feature association analysis on the keyword labeling text information, historical vehicle environment information and historical user action information in the virtual reality interaction database, and determining a feature association mapping set, wherein the feature association mapping set comprises a vehicle environment-keyword labeling text information association mapping subset and a user action-keyword labeling text information association mapping subset;
and carrying out feature indexing through the feature association mapping set, determining a first association feature index, and marking the first association feature index into the association feature index.
Further, the associated feature index obtaining module 5 is configured to perform the following method:
performing index fusion analysis on the feature association mapping set to obtain an index fusion analysis result;
taking the index fusion analysis result as constraint information, carrying out feature association analysis in the virtual reality interaction database, and determining a feature fusion association mapping set;
and carrying out feature indexing through the feature fusion association mapping set, determining a second association feature index, and marking the second association feature index into the association feature index.
Further, the virtual interactive module building module 6 is configured to execute the following method:
based on the time stamp, the associated characteristic index and the historical interaction voice information are subjected to data integration to obtain virtual test data;
preprocessing the virtual test data to obtain virtual test preprocessed data;
and configuring and training the virtual interaction module by using the virtual test preprocessing data as training data to build the virtual interaction module.
Further, the virtual interactive module building module 6 is configured to execute the following method:
the emergency control unit is provided with a preset emergency processing program which is used for automatically taking over the control authority of the vehicle;
if the control authority of the automatic take-over vehicle is activated, performing simulation test in the virtual test environment to obtain a simulation test result;
and carrying out optimization iteration on the virtual interaction module according to the simulation test result and the user feedback information to obtain an updated virtual interaction module.
It should be noted that the order of the embodiments of the present application is merely for description and does not reflect their relative merits. Specific embodiments of this specification have been described above; other embodiments are within the scope of the appended claims. In some cases, the actions or steps recited in the claims can be performed in a different order than in the embodiments and still achieve desirable results. In addition, the processes depicted in the accompanying figures do not necessarily require the particular or sequential order shown to achieve desirable results; in some embodiments, multitasking and parallel processing are also possible and may be advantageous.
The foregoing description covers the preferred embodiments of the present application; it is not intended to limit the invention to these particular embodiments, nor to limit the scope of the invention, which is defined by the appended claims.
The specification and drawings are merely exemplary of the application. It will be apparent to those skilled in the art that various modifications and variations can be made to the present application without departing from its scope; if such modifications, variations, combinations or equivalents fall within the scope of the present application and its equivalents, the present application is intended to cover them.

Claims (7)

1. A multi-mesh fusion virtual reality interaction method, characterized by comprising the following steps:
connecting a wearable interaction device, wherein the wearable interaction device comprises head-mounted display interaction equipment and limb interaction equipment, and the limb interaction equipment comprises wrist interaction equipment and ankle interaction equipment;
based on the wearable interaction equipment, performing motion capture in the running process of the vehicle to acquire user motion information, wherein the user motion information comprises facial expressions, limb motions and hand motions; and
a touch detection module is arranged on an automobile steering wheel, a brake pedal and an accelerator pedal, and is used for collecting touch detection information, wherein the touch detection information comprises pressure detection information and vibration detection information;
extracting historical interaction voice information, wherein the historical interaction voice information is stored in a virtual reality interaction database and comprises a timestamp mark;
in the virtual reality interaction database, carrying out associated feature mining through the historical interaction voice information to obtain associated feature indexes, wherein the associated feature indexes comprise an in-vehicle temperature index and an in-vehicle humidity index;
building a virtual interaction module based on the associated characteristic index, the historical interaction voice information and a virtual test environment, wherein the virtual interaction module comprises an emergency control unit;
based on the user action information and the touch detection information, setting a control instruction, sending the control instruction to the virtual interaction module, dynamically adjusting and optimizing in a virtual scene, and outputting a virtual optimization result, wherein the virtual optimization result comprises speed optimization control information and direction optimization control information.
2. The method for multi-mesh fusion virtual reality interaction according to claim 1, wherein in the virtual reality interaction database, associated feature mining is performed through the historical interaction voice information to obtain associated feature indexes, and the method comprises:
preprocessing the historical interactive voice information to obtain preprocessed text information;
performing keyword positioning based on keywords in the preprocessed text information to obtain keyword labeling text information;
and in the virtual reality interaction database, carrying out associated feature mining based on the keyword labeling text information to obtain associated feature indexes.
3. The method for multi-mesh fusion virtual reality interaction according to claim 2, wherein in the virtual reality interaction database, associated feature mining is performed based on the keyword labeling text information, the method comprising:
respectively carrying out feature association analysis on the keyword labeling text information, historical vehicle environment information and historical user action information in the virtual reality interaction database, and determining a feature association mapping set, wherein the feature association mapping set comprises a vehicle environment-keyword labeling text information association mapping subset and a user action-keyword labeling text information association mapping subset;
and carrying out feature indexing through the feature association mapping set, determining a first association feature index, and marking the first association feature index into the association feature index.
4. The multi-mesh fusion virtual reality interaction method of claim 3, wherein non-directional association feature mining is performed in the virtual reality interaction database based on the keyword labeling text information, the method further comprising:
performing index fusion analysis on the feature association mapping set to obtain an index fusion analysis result;
taking the index fusion analysis result as constraint information, carrying out feature association analysis in the virtual reality interaction database, and determining a feature fusion association mapping set;
and carrying out feature indexing through the feature fusion association mapping set, determining a second association feature index, and marking the second association feature index into the association feature index.
5. The multi-mesh fused virtual reality interaction method of claim 1, wherein a virtual interaction module is built based on the associated feature indicators, the historical interaction voice information and a virtual test environment, the method comprising:
based on the time stamp, the associated characteristic index and the historical interaction voice information are subjected to data integration to obtain virtual test data;
preprocessing the virtual test data to obtain virtual test preprocessed data;
and configuring and training the virtual interaction module by using the virtual test preprocessing data as training data to build the virtual interaction module.
6. The multi-mesh fusion virtual reality interaction method of claim 5, wherein the virtual interaction module includes an emergency control unit, the method further comprising:
the emergency control unit is provided with a preset emergency processing program which is used for automatically taking over the control authority of the vehicle;
if the control authority of the automatic take-over vehicle is activated, performing simulation test in the virtual test environment to obtain a simulation test result;
and carrying out optimization iteration on the virtual interaction module according to the simulation test result and the user feedback information to obtain an updated virtual interaction module.
7. A multi-mesh fusion virtual reality interaction system, the system comprising:
the wearable interactive device comprises a wearable interactive device connection module, a wrist interaction module and a wrist interaction module, wherein the wearable interactive device connection module is used for connecting wearable interactive devices, the wearable interactive devices comprise head-mounted display interactive devices and wrist interaction devices, and the wrist interaction devices comprise wrist interaction devices and ankle interaction devices;
the user action information acquisition module is used for capturing actions based on the wearable interaction equipment in the running process of the vehicle to acquire user action information, wherein the user action information comprises facial expressions, limb actions and hand actions;
the system comprises a touch detection information acquisition module, a touch detection information acquisition module and a control module, wherein the touch detection information acquisition module is used for configuring a touch detection module on an automobile steering wheel, a brake pedal and an accelerator pedal, and acquiring touch detection information by adopting the touch detection module, wherein the touch detection information comprises pressure detection information and vibration detection information;
the system comprises a historical interaction voice information extraction module, a virtual reality interaction database and a time stamp marking module, wherein the historical interaction voice information extraction module is used for extracting historical interaction voice information which is stored in the virtual reality interaction database;
the associated feature index acquisition module is used for carrying out associated feature mining on the historical interaction voice information in the virtual reality interaction database to acquire associated feature indexes, wherein the associated feature indexes comprise in-vehicle temperature indexes and in-vehicle humidity indexes;
the virtual interaction module building module is used for building a virtual interaction module based on the associated characteristic indexes, the historical interaction voice information and the virtual test environment, and the virtual interaction module comprises an emergency control unit;
the virtual optimization result output module is used for setting a control instruction based on the user action information and the touch detection information, sending the control instruction to the virtual interaction module, carrying out dynamic adjustment and optimization on a virtual scene, and outputting a virtual optimization result, wherein the virtual optimization result comprises speed optimization control information and direction optimization control information.
CN202311566787.3A 2023-11-23 2023-11-23 Multi-mesh fusion virtual reality interaction method and system Pending CN117806458A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202311566787.3A CN117806458A (en) 2023-11-23 2023-11-23 Multi-mesh fusion virtual reality interaction method and system

Publications (1)

Publication Number Publication Date
CN117806458A 2024-04-02

Family

ID=90432689

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202311566787.3A Pending CN117806458A (en) 2023-11-23 2023-11-23 Multi-mesh fusion virtual reality interaction method and system

Country Status (1)

Country Link
CN (1) CN117806458A (en)


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination