CN114265500A - Virtual reality enhancement method and system based on sensor technology - Google Patents
- Publication number: CN114265500A (application number CN202111565758.6A)
- Authority: CN (China)
- Prior art keywords: sound effect, collision, determining, scene model, reading
- Legal status: Withdrawn (the status listed is an assumption and is not a legal conclusion)
Abstract
The invention relates to the technical field of virtual reality, and in particular discloses a virtual reality enhancement method based on sensor technology. The method comprises: acquiring a scene model in real time, sequentially reading the materials of the components in the scene model, and generating a material table; extracting the movable parts from the material table according to the name items, traversing the material table based on the movable parts, sequentially determining collision sound effects, and generating a sound effect table; receiving a control signal input by a user, switching scenes according to the control signal, and monitoring collision information of each component in the scene model in real time; and determining the sound effect display time and volume according to the collision information. According to the invention, the components in the scene model are classified and matched in turn to generate the sound effect table, and when collision information is detected, an attenuated vibration signal and a sound effect are generated from it, providing a dynamic sound effect presentation process and improving the realism of the virtual reality.
Description
Technical Field
The invention relates to the technical field of virtual reality, and in particular to a virtual reality enhancement method and system based on sensor technology.
Background
Augmented reality is a new technology that seamlessly integrates real-world information with virtual-world information. Physical information (visual information, sound, taste, touch and the like) that would otherwise be difficult to experience within a certain time and space of the real world is simulated by computers and other scientific techniques and then superimposed, so that virtual information is applied to the real world and perceived by the human senses, achieving a sensory experience beyond reality. The real environment and the virtual objects are superimposed onto the same picture or space in real time and exist simultaneously.
In the prior art, the development of visual information is mature, but other information such as sound, taste, touch or smell is scarcely involved. Among these, sound is information that can be fully simulated by an electronic device, yet the existing presentation of sound is mostly static, which gives the virtual world a sense of repetition, and this sense of repetition reduces the realism of the virtual world. How to design a new sound presentation mode that enhances virtual reality from the auditory perspective is therefore the technical problem to be solved by the technical solution of the invention.
Disclosure of Invention
The invention aims to provide a virtual reality augmentation method and a virtual reality augmentation system based on sensor technology, so as to solve the problems raised in the background art.
In order to achieve the purpose, the invention provides the following technical scheme:
a method of virtual reality augmentation based on sensor technology, the method comprising:
acquiring a scene model in real time, sequentially reading the materials of components in the scene model, and generating a material table; the material table comprises a name item and a material item;
extracting movable parts in the material table according to the name items, traversing the material table based on the movable parts, sequentially determining a collision sound effect, and generating a sound effect table;
receiving a control signal input by a user, switching scenes according to the control signal, and monitoring collision information of each component in a scene model in real time; the collision information comprises a collision point, the distance between the collision point and the center point of the scene model and the material of a collision subject;
generating a vibration signal containing propagation time according to the distance between the collision point and the center point of the scene model, reading a corresponding sound effect type in the sound effect table according to the material of a collision subject in the collision information, and determining the sound effect display time and the volume according to the distance between the collision point and the center point of the scene model.
As a further scheme of the invention: the method comprises the steps of obtaining a scene model in real time, sequentially reading materials of components in the scene model, and generating a material table, wherein the steps comprise:
acquiring virtual position information of a user, and determining a user area according to the virtual position information;
reading a connection point of the user area, reading an adjacent scene area according to the connection point, and generating a scene model according to the user area and the adjacent scene area;
sequentially reading initial materials and creation time of components in the scene model, and determining dynamic materials according to the initial materials and the creation time;
and generating a material table according to the dynamic material.
As a further scheme of the invention: extracting the movable parts in the material table according to the name items, sequentially determining the collision sound effect based on the traversal of the movable parts through the material table, and generating the sound effect table, wherein the step comprises the following steps:
traversing name items in the material table, and marking movable parts according to the name items; the name item comprises an index item, and the index item of the movable component comprises a movable label;
sequentially extracting the movable parts marked in the material table to obtain a movable part table;
generating a sub sound effect table with the movable parts as indexes according to the movable part table and the material table, and connecting the sub sound effect table to obtain a sound effect table;
and when the existence time of the fixed part reaches a preset time threshold value, converting the corresponding fixed part into a movable part.
As a further scheme of the invention: the step of determining the time threshold comprises:
sequentially reading components in the scene model and calculating the stress of the components;
acquiring the material and the size of the component, and calculating the strength of the component based on the material and the size;
determining a connection position based on the components, calculating the stress of the connection position, and acquiring a connection mode and corresponding connection strength;
a time threshold for the component is determined based on the stress of the component, the strength of the component, the stress at the connection, and the strength of the connection.
As a further scheme of the invention: the steps of receiving a control signal input by a user, carrying out scene switching according to the control signal, and monitoring collision information of each component in a scene model in real time comprise:
receiving a control signal input by a user, carrying out scene switching according to the control signal, updating an active part related to a user character in real time, and adjusting the connection sequence of a sound effect table according to the active part;
recording the operation amount of a user, determining the type of the user according to the operation amount, determining the collision probability according to the type of the user, and determining a sound effect table reading channel according to the collision probability;
and monitoring collision information of all parts in the scene in real time.
As a further scheme of the invention: the method comprises the steps of generating a vibration signal containing propagation time according to the distance between the collision point and the center point of the scene model, reading a corresponding sound effect type in a sound effect table according to the material of a collision main body in the collision information, and determining the sound effect display time and the sound volume according to the distance between the collision point and the center point of the scene model, wherein the steps comprise:
reading the material of the collision body in the collision information, and determining an initial seismic source according to the material;
reading the distance between the collision point in the collision information and the center point of the scene model, and inputting the initial seismic source and the distance into a trained attenuation model to obtain a vibration signal; wherein the attenuation parameters of the attenuation model are determined by the propagation medium;
reading the sound effect type in the sound effect table after the connection sequence is adjusted according to the sound effect table reading channel, and calculating the propagation time and the attenuation amplitude of the sound effect according to the distance between the collision point and the center point of the scene model;
and determining sound effect display time according to the propagation time, and determining volume according to the attenuation amplitude.
As a further scheme of the invention: the method further comprises the following steps:
acquiring multi-frame images based on image acquisition equipment, wherein the multi-frame images comprise face information of all current users;
carrying out noise reduction processing on the multi-frame images to obtain multi-frame face images subjected to noise reduction;
and extracting, from the current frame face image, the micro-expression in response to the current feedback information, determining the satisfaction degree of the current frame face image according to the micro-expression, aggregating the satisfaction degrees of the multiple frames of face images, and performing fuzzy screening on the collision information when the satisfaction degree is lower than a threshold value.
The technical scheme of the invention also provides a virtual reality augmentation system based on the sensor technology, which comprises:
the material table generation module is used for acquiring a scene model in real time, sequentially reading the materials of components in the scene model and generating a material table; the material table comprises a name item and a material item;
the sound effect table determining module is used for extracting movable parts in the material table according to the name items, traversing the material table based on the movable parts, sequentially determining collision sound effects and generating a sound effect table;
the collision information monitoring module is used for receiving a control signal input by a user, switching scenes according to the control signal and monitoring collision information of each component in a scene model in real time; the collision information comprises a collision point, the distance between the collision point and the center point of the scene model and the material of a collision subject;
and the sound effect generation module is used for generating a vibration signal containing propagation time according to the distance between the collision point and the center point of the scene model, reading a corresponding sound effect type in the sound effect table according to the material of the collision subject in the collision information, and determining sound effect display time and volume according to the distance between the collision point and the center point of the scene model.
As a further scheme of the invention: the collision information monitoring module includes:
the scene updating unit is used for receiving a control signal input by a user, carrying out scene switching according to the control signal, updating an active part related to a user character in real time, and adjusting the connection sequence of a sound effect table according to the active part;
the channel establishing unit is used for recording the operation amount of a user, determining the type of the user according to the operation amount, determining the collision probability according to the type of the user and determining a sound effect table reading channel according to the collision probability;
and the component monitoring unit is used for monitoring the collision information of each component in the scene in real time.
As a further scheme of the invention: the sound effect generation module comprises:
the seismic source determining unit is used for reading the material of the collision body in the collision information and determining an initial seismic source according to the material;
the vibration attenuation unit is used for reading the distance between a collision point in the collision information and the center point of the scene model, and inputting the initial seismic source and the distance into a trained attenuation model to obtain a vibration signal; wherein the attenuation parameters of the attenuation model are determined by the propagation medium;
the sound effect attenuation unit is used for reading the sound effect type in the sound effect table after the connection sequence is adjusted according to the sound effect table reading channel and calculating the propagation time and the attenuation amplitude of the sound effect according to the distance between the collision point and the central point of the scene model;
and the processing execution unit is used for determining the sound effect display time according to the propagation time and determining the volume according to the attenuation amplitude.
Compared with the prior art, the invention has the following beneficial effects: the components in the scene model are classified and matched in turn to generate the sound effect table, and when collision information is detected, an attenuated vibration signal and a sound effect are generated from it, providing a dynamic sound effect presentation process and improving the realism of the virtual reality.
Drawings
In order to more clearly illustrate the technical solutions in the embodiments of the present invention, the drawings used in the description of the embodiments or the prior art will be briefly described below, and it is obvious that the drawings in the following description are only some embodiments of the present invention.
Fig. 1 shows a flow diagram of a method of virtual reality augmentation based on sensor technology.
Fig. 2 shows a first sub-flow block diagram of a method for virtual reality augmentation based on sensor technology.
Fig. 3 shows a second sub-flow block diagram of a virtual reality augmentation method based on sensor technology.
Fig. 4 shows a third sub-flow block diagram of a virtual reality augmentation method based on sensor technology.
Fig. 5 shows a fourth sub-flow block diagram of a virtual reality augmentation method based on sensor technology.
Fig. 6 shows a block diagram of a virtual reality augmentation system based on sensor technology.
Fig. 7 shows a block diagram of a collision information monitoring module in a virtual reality augmentation system based on sensor technology.
Fig. 8 is a block diagram illustrating the structure of the sound effect generation module in the virtual reality augmentation system based on sensor technology.
Detailed Description
In order to make the technical problems, technical solutions and advantageous effects to be solved by the present invention more clearly apparent, the present invention is further described in detail below with reference to the accompanying drawings and embodiments. It should be understood that the specific embodiments described herein are merely illustrative of the invention and are not intended to limit the invention.
Example 1
Fig. 1 shows a flow chart of a virtual reality augmentation method based on sensor technology. In an embodiment of the present invention, the method includes steps S100 to S400:
step S100: acquiring a scene model in real time, sequentially reading the materials of components in the scene model, and generating a material table; the material table comprises a name item and a material item;
the purpose of step S100 is to analyze each object in the scene model, the components are units for building the scene model, each unit has its own parameters at the beginning of the design, and the technical solution of the present invention is to implement virtual enhancement from the sound effect direction, so that only the material information of each component in the scene model is read.
Step S200: extracting movable parts in the material table according to the name items, traversing the material table based on the movable parts, sequentially determining a collision sound effect, and generating a sound effect table;
the purpose of step S200 is to determine an audio table, in the background of the prior art, the research on the visual aspect is already perfect, but the development of other senses is not perfect, wherein the auditory sense is a sense which is easier to develop, and if virtual reality is desired to be more vivid, the requirement of audio is as real as possible, and it is conceivable that objects of different materials collide with each other, and the audio is different.
Step S300: receiving a control signal input by a user, switching scenes according to the control signal, and monitoring collision information of each component in a scene model in real time; the collision information comprises a collision point, the distance between the collision point and the center point of the scene model and the material of a collision subject;
step S400: generating a vibration signal containing propagation time according to the distance between the collision point and the center point of the scene model, reading a corresponding sound effect type in the sound effect table according to the material of a collision main body in the collision information, and determining the sound effect display time and the volume according to the distance between the collision point and the center point of the scene model;
Steps S300 and S400 are the concrete operating stages. When a user inputs a control signal, various collisions may occur in the scene model, and these collisions generate different sound effects; to make the sound effects more realistic, the propagation process needs to be simulated. It should be noted that all of the above is accomplished within the framework of the scene model.
Fig. 2 shows a first sub-flow block diagram of the virtual reality augmentation method based on sensor technology, where the step of acquiring a scene model in real time, sequentially reading the materials of the components in the scene model, and generating a material table includes steps S101 to S104:
step S101: acquiring virtual position information of a user, and determining a user area according to the virtual position information;
step S102: reading a connection point of the user area, reading an adjacent scene area according to the connection point, and generating a scene model according to the user area and the adjacent scene area;
step S103: sequentially reading initial materials and creation time of components in the scene model, and determining dynamic materials according to the initial materials and the creation time;
step S104: and generating a material table according to the dynamic material.
Steps S101 to S104 further refine the generation of the material table. Unlike the traditional technique, the generated material is dynamic: each component has a service life, and when that life ends, the component falls under gravity or the like; of course, the lifetime is also set on the basis of the scene model.
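The short sketch below illustrates the dynamic-material idea of steps S101 to S104, under the assumption that a dynamic material is simply the initial material annotated with a remaining service life; the ageing rule and the default lifetime are assumptions, since the disclosure only states that the lifetime is set on the basis of the scene model.

```python
def dynamic_material(initial_material, created_at, now, service_life=3600.0):
    """Derive a dynamic material entry from the initial material and creation time."""
    remaining = max(service_life - (now - created_at), 0.0)
    return {
        "material": initial_material,
        "remaining_life": remaining,
        "expired": remaining == 0.0,   # an expired component may fall under gravity
    }

def build_dynamic_material_table(components, now):
    """components: iterable of dicts with name/material/created_at keys (assumed layout)."""
    return {c["name"]: dynamic_material(c["material"], c["created_at"], now)
            for c in components}

if __name__ == "__main__":
    comps = [{"name": "roof_01", "material": "tile", "created_at": 0.0}]
    print(build_dynamic_material_table(comps, now=4000.0))
```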
Fig. 3 shows a second sub-flow block diagram of a virtual reality augmentation method based on a sensor technology, where the method includes extracting a movable component in the material table according to the name item, sequentially determining a collision sound effect based on traversing the material table by the movable component, and generating a sound effect table including steps S201 to S203:
step S201: traversing name items in the material table, and marking movable parts according to the name items; the name item comprises an index item, and the index item of the movable component comprises a movable label;
step S202: sequentially extracting the movable parts marked in the material table to obtain a movable part table;
step S203: generating a sub sound effect table with the movable parts as indexes according to the movable part table and the material table, and connecting the sub sound effect table to obtain a sound effect table;
and when the existence time of the fixed part reaches a preset time threshold value, converting the corresponding fixed part into a movable part.
In the above, the components are first divided into movable parts and fixed parts, and a movable part table and a fixed part table are generated; the movable parts are then extracted in turn from the movable part table and matched in turn against the fixed parts in the fixed part table, the sub sound effect tables are determined, and the sound effect table is generated from them.
It is worth mentioning that a fixed part can be converted into a movable part. For example, when the "life" of a roof is about to end, it is converted into a movable part and collision sound effects between it and the other fixed parts are generated; when its "life" ends, the corresponding collision sound effect is read according to the collision condition.
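The following sketch shows one possible form of this fixed-to-movable conversion: a fixed part whose existence time reaches its time threshold is relabelled as movable so that collision sound effects can be generated for it. The field names and table layout continue the assumptions of the earlier sketches.

```python
def promote_expired_fixed_parts(material_table, existence_time, time_threshold):
    """material_table: name -> {"material": str, "movable": bool}
    existence_time, time_threshold: name -> seconds (assumed units)."""
    for name, entry in material_table.items():
        expired = existence_time.get(name, 0.0) >= time_threshold.get(name, float("inf"))
        if not entry["movable"] and expired:
            entry["movable"] = True   # the part may now collide and emit a sound effect
    return material_table

if __name__ == "__main__":
    table = {"roof_01": {"material": "tile", "movable": False}}
    print(promote_expired_fixed_parts(table, {"roof_01": 7200.0}, {"roof_01": 3600.0}))
```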
Specifically, the step of determining the time threshold includes:
sequentially reading components in the scene model and calculating the stress of the components;
acquiring the material and the size of the component, and calculating the strength of the component based on the material and the size;
determining a connection position based on the components, calculating the stress of the connection position, and acquiring a connection mode and corresponding connection strength;
a time threshold for the component is determined based on the stress of the component, the strength of the component, the stress at the connection, and the strength of the connection.
The parameters in the above are virtual values based on a scene model, which may be greatly different from the data parameters in reality, but the physical principles followed are approximately the same.
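As an illustration only, the sketch below derives a time threshold from the four quantities named above. The disclosure does not give the formula; the inverse-proportional rule and the base lifetime used here are assumptions.

```python
def time_threshold(component_stress, component_strength,
                   connection_stress, connection_strength,
                   base_life=3600.0):
    """Assumed rule: the more heavily loaded a part or its connection, the shorter its virtual life."""
    comp_ratio = min(component_stress / component_strength, 1.0)    # utilisation of the component
    conn_ratio = min(connection_stress / connection_strength, 1.0)  # utilisation of the connection
    worst = max(comp_ratio, conn_ratio)
    return base_life * (1.0 - worst)

if __name__ == "__main__":
    # A part loaded to 60% of its strength whose connection is at 80% of its strength.
    print(time_threshold(60.0, 100.0, 40.0, 50.0))   # -> 720.0 (virtual seconds)
```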
Fig. 4 shows a third sub-flow block diagram of a virtual reality augmentation method based on a sensor technology, where the step of receiving a control signal input by a user, performing scene switching according to the control signal, and monitoring collision information of each component in a scene model in real time includes steps S301 to S303:
step S301: receiving a control signal input by a user, carrying out scene switching according to the control signal, updating an active part related to a user character in real time, and adjusting the connection sequence of a sound effect table according to the active part;
step S302: recording the operation amount of a user, determining the type of the user according to the operation amount, determining the collision probability according to the type of the user, and determining a sound effect table reading channel according to the collision probability;
step S303: and monitoring collision information of all parts in the scene in real time.
The active parts are generally those related to the model operated by the user. When a movable part is related to the model operated by the user, the corresponding sound effect table is more likely to be read, so adjusting it to a position that is easy to read effectively improves working efficiency.
A gentle user and a vigorous user clearly have different operating habits, and the latter produces more collisions than the former. Therefore, when the operation amount shows that the user is a vigorous user, collisions will be very frequent, and a faster sound effect table reading channel is accordingly determined to improve realism.
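A hedged sketch of this classification follows: the recorded operation amount is mapped to a user type, the user type to a collision probability, and the collision probability to a reading channel. The thresholds, probabilities and channel names are assumptions chosen purely for illustration.

```python
def choose_reading_channel(operation_amount):
    """Map a recorded operation amount (e.g. inputs per minute) to a sound effect table reading channel."""
    if operation_amount > 100.0:      # a vigorous user generates many operations
        collision_probability = 0.8
    elif operation_amount > 30.0:     # a normal user
        collision_probability = 0.4
    else:                             # a gentle user
        collision_probability = 0.1
    # A higher collision probability warrants a faster (e.g. pre-cached) reading channel.
    return "fast_cached_channel" if collision_probability >= 0.5 else "standard_channel"

if __name__ == "__main__":
    print(choose_reading_channel(150.0))   # -> fast_cached_channel
```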
Fig. 5 shows a fourth sub-flow block diagram of a virtual reality augmentation method based on a sensor technology, where the step of generating a vibration signal containing propagation time according to a distance between the collision point and a center point of a scene model, reading a corresponding sound effect type in a sound effect table according to a material of a collision subject in the collision information, and determining sound effect display time and volume according to the distance between the collision point and the center point of the scene model includes steps S401 to S404:
step S401: reading the material of the collision body in the collision information, and determining an initial seismic source according to the material;
step S402: reading the distance between the collision point in the collision information and the center point of the scene model, and inputting the initial seismic source and the distance into a trained attenuation model to obtain a vibration signal; wherein the attenuation parameters of the attenuation model are determined by the propagation medium;
step S403: reading the sound effect type in the sound effect table after the connection sequence is adjusted according to the sound effect table reading channel, and calculating the propagation time and the attenuation amplitude of the sound effect according to the distance between the collision point and the center point of the scene model;
step S404: and determining sound effect display time according to the propagation time, and determining volume according to the attenuation amplitude.
Steps S401 to S404 specify the presentation process of the vibration signal and the sound effect. The principle involved is basic mathematics and physics; it should be noted that the parameters of the propagation medium and the like are determined on the basis of the scene model.
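The sketch below illustrates steps S401 to S404 under simple assumptions: straight-line propagation at a medium-dependent speed and exponential amplitude attenuation with distance. The disclosure only states that the attenuation parameters are determined by the propagation medium, so the concrete formulas and numeric constants here are assumptions.

```python
import math

# Assumed medium parameters: propagation speed (scene units/s) and attenuation coefficient (1/unit).
MEDIUM_PARAMS = {
    "air":   {"speed": 343.0,  "alpha": 0.02},
    "water": {"speed": 1480.0, "alpha": 0.005},
}

def propagate(initial_amplitude, distance, medium="air"):
    """Return (propagation time, attenuated amplitude) for a collision at the given distance."""
    p = MEDIUM_PARAMS[medium]
    propagation_time = distance / p["speed"]                           # delays the sound effect display
    attenuated = initial_amplitude * math.exp(-p["alpha"] * distance)  # sets the volume
    return propagation_time, attenuated

if __name__ == "__main__":
    delay, volume = propagate(initial_amplitude=1.0, distance=50.0)
    print(f"play after {delay:.3f} s at relative volume {volume:.2f}")
```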
Further, the method further comprises:
acquiring multi-frame images based on image acquisition equipment, wherein the multi-frame images comprise face information of all current users;
carrying out noise reduction processing on the multi-frame images to obtain multi-frame face images subjected to noise reduction;
and extracting, from the current frame face image, the micro-expression in response to the current feedback information, determining the satisfaction degree of the current frame face image according to the micro-expression, aggregating the satisfaction degrees of the multiple frames of face images, and performing fuzzy screening on the collision information when the satisfaction degree is lower than a threshold value.
The above provides an intelligent matching function. It is understood that sound effects that are particularly dense and particularly realistic are likely to be disliked by a user; therefore, acquiring the user's micro-expression, determining the user's satisfaction from the micro-expression, and then screening the collision information according to that satisfaction makes the technical solution better suited to the user. For example, if the detection result for a user shows dissatisfaction, some collision information may be filtered out randomly or deliberately to reduce the sound effect density.
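Finally, a minimal sketch of the satisfaction-based fuzzy screening: per-frame satisfaction scores (assumed here to come from an upstream micro-expression model) are averaged, and when the average falls below a threshold a fraction of collision events is dropped at random to thin out the sound effects. The threshold and keep ratio are assumptions.

```python
import random

def screen_collisions(collision_events, satisfaction_scores,
                      satisfaction_threshold=0.5, keep_ratio=0.6):
    """Drop part of the collision information when the user's averaged satisfaction is low."""
    average = sum(satisfaction_scores) / len(satisfaction_scores)
    if average >= satisfaction_threshold:
        return collision_events                       # user is satisfied: keep every collision
    # Fuzzy screening: keep each collision event with probability keep_ratio.
    return [event for event in collision_events if random.random() < keep_ratio]

if __name__ == "__main__":
    events = [f"collision_{i}" for i in range(10)]
    print(screen_collisions(events, [0.3, 0.4, 0.2]))
```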
Example 2
Fig. 6 is a block diagram illustrating a structure of a virtual reality augmentation system based on sensor technology, in an embodiment of the present invention, a virtual reality augmentation system based on sensor technology, where the system 10 includes:
the material table generating module 11 is configured to obtain a scene model in real time, sequentially read materials of components in the scene model, and generate a material table; the material table comprises a name item and a material item;
the sound effect table determining module 12 is configured to extract a movable part in the material table according to the name item, traverse the material table based on the movable part, sequentially determine a collision sound effect, and generate a sound effect table;
the collision information monitoring module 13 is configured to receive a control signal input by a user, perform scene switching according to the control signal, and monitor collision information of each component in a scene model in real time; the collision information comprises a collision point, the distance between the collision point and the center point of the scene model and the material of a collision subject;
and the sound effect generation module 14 is used for generating a vibration signal containing propagation time according to the distance between the collision point and the center point of the scene model, reading a corresponding sound effect type in the sound effect table according to the material of the collision subject in the collision information, and determining sound effect display time and volume according to the distance between the collision point and the center point of the scene model.
Fig. 7 is a block diagram illustrating a structure of a collision information monitoring module in a virtual reality augmentation system based on sensor technology, where the collision information monitoring module 13 includes:
a scene updating unit 131, configured to receive a control signal input by a user, perform scene switching according to the control signal, update an active component related to a user character in real time, and adjust a connection sequence of a sound effect table according to the active component;
the channel establishing unit 132 is configured to record an operation amount of a user, determine a user type according to the operation amount, determine a collision probability according to the user type, and determine a sound effect table reading channel according to the collision probability;
and the component monitoring unit 133 is used for monitoring the collision information of each component in the scene in real time.
Fig. 8 is a block diagram illustrating the structure of an audio effect generation module in a virtual reality augmentation system based on sensor technology, wherein the audio effect generation module 14 comprises:
the seismic source determining unit 141 is configured to read a material of a collision subject in the collision information, and determine an initial seismic source according to the material;
the vibration attenuation unit 142 is configured to read a distance between a collision point in the collision information and a center point of the scene model, and input the initial seismic source and the distance into a trained attenuation model to obtain a vibration signal; wherein the attenuation parameters of the attenuation model are determined by the propagation medium;
the sound effect attenuation unit 143 is used for reading the sound effect type in the sound effect table after the connection sequence is adjusted according to the sound effect table reading channel, and calculating the propagation time and the attenuation amplitude of the sound effect according to the distance between the collision point and the center point of the scene model;
and the processing execution unit 144 is configured to determine a sound effect display time according to the propagation time, and determine a volume according to the attenuation amplitude.
The functions that can be implemented by the sensor technology-based virtual reality augmentation method are all performed by a computer device comprising one or more processors and one or more memories, wherein at least one program code is stored in the one or more memories, and the program code is loaded and executed by the one or more processors to implement the functions of the sensor technology-based virtual reality augmentation method.
The processor fetches instructions and analyzes the instructions one by one from the memory, then completes corresponding operations according to the instruction requirements, generates a series of control commands, enables all parts of the computer to automatically, continuously and coordinately act to form an organic whole, realizes the input of programs, the input of data, the operation and the output of results, and the arithmetic operation or the logic operation generated in the process is completed by the arithmetic unit; the Memory comprises a Read-Only Memory (ROM) for storing a computer program, and a protection device is arranged outside the Memory.
Illustratively, a computer program can be partitioned into one or more modules, which are stored in memory and executed by a processor to implement the present invention. One or more of the modules may be a series of computer program instruction segments capable of performing certain functions, which are used to describe the execution of the computer program in the terminal device.
Those skilled in the art will appreciate that the above description of the service device is merely exemplary and not limiting of the terminal device, and may include more or less components than those described, or combine certain components, or different components, such as may include input output devices, network access devices, buses, etc.
The Processor may be a Central Processing Unit (CPU), another general purpose processor, a Digital Signal Processor (DSP), an Application Specific Integrated Circuit (ASIC), a Field-Programmable Gate Array (FPGA) or other programmable logic device, discrete gate or transistor logic, discrete hardware components, etc. The general-purpose processor may be a microprocessor or any conventional processor; it is the control center of the terminal equipment and connects the various parts of the entire user terminal using various interfaces and lines.
The memory may be used to store computer programs and/or modules, and the processor implements the various functions of the terminal device by running or executing the computer programs and/or modules stored in the memory and calling data stored in the memory. The memory mainly comprises a storage program area and a storage data area, wherein the storage program area may store an operating system and the application programs required by at least one function (such as an information acquisition template display function, a product information publishing function, and the like); the storage data area may store data created according to the use of the berth-state display system (e.g., product information acquisition templates corresponding to different product types, product information that needs to be issued by different product providers, etc.), and the like. In addition, the memory may include high speed random access memory, and may also include non-volatile memory, such as a hard disk, an internal memory, a plug-in hard disk, a Smart Media Card (SMC), a Secure Digital (SD) Card, a Flash memory Card (Flash Card), at least one magnetic disk storage device, a Flash memory device, or other non-volatile solid state storage device.
The terminal device integrated modules/units, if implemented in the form of software functional units and sold or used as separate products, may be stored in a computer readable storage medium. Based on such understanding, all or part of the modules/units in the system according to the above embodiment may be implemented by a computer program, which may be stored in a computer-readable storage medium and used by a processor to implement the functions of the embodiments of the system. The computer program comprises computer program code, which may be in the form of source code, object code, an executable file or some intermediate form, etc. The computer readable medium may include: any entity or device capable of carrying computer program code, a recording medium, a USB flash disk, a removable hard disk, a magnetic disk, an optical disk, a computer memory, a Read-Only Memory (ROM), a Random Access Memory (RAM), an electrical carrier wave signal, a telecommunications signal, a software distribution medium, and the like.
It should be noted that, in this document, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising an … …" does not exclude the presence of other like elements in a process, method, article, or apparatus that comprises the element.
The above description is only a preferred embodiment of the present invention, and not intended to limit the scope of the present invention, and all modifications of equivalent structures and equivalent processes, which are made by using the contents of the present specification and the accompanying drawings, or directly or indirectly applied to other related technical fields, are included in the scope of the present invention.
Claims (10)
1. A virtual reality augmentation method based on sensor technology is characterized by comprising the following steps:
acquiring a scene model in real time, sequentially reading the materials of components in the scene model, and generating a material table; the material table comprises a name item and a material item;
extracting movable parts in the material table according to the name items, traversing the material table based on the movable parts, sequentially determining a collision sound effect, and generating a sound effect table;
receiving a control signal input by a user, switching scenes according to the control signal, and monitoring collision information of each component in a scene model in real time; the collision information comprises a collision point, the distance between the collision point and the center point of the scene model and the material of a collision subject;
generating a vibration signal containing propagation time according to the distance between the collision point and the center point of the scene model, reading a corresponding sound effect type in the sound effect table according to the material of a collision subject in the collision information, and determining the sound effect display time and the volume according to the distance between the collision point and the center point of the scene model.
2. The method for enhancing virtual reality based on sensor technology according to claim 1, wherein the step of acquiring a scene model in real time, sequentially reading the material of the components in the scene model, and generating a material table comprises:
acquiring virtual position information of a user, and determining a user area according to the virtual position information;
reading a connection point of the user area, reading an adjacent scene area according to the connection point, and generating a scene model according to the user area and the adjacent scene area;
sequentially reading initial materials and creation time of components in the scene model, and determining dynamic materials according to the initial materials and the creation time;
and generating a material table according to the dynamic material.
3. The method for enhancing virtual reality based on sensor technology according to claim 1, wherein the steps of extracting the active parts in the material table according to the name items, sequentially determining the collision sound effect based on the traversal of the material table by the active parts, and generating the sound effect table include:
traversing name items in the material table, and marking movable parts according to the name items; the name item comprises an index item, and the index item of the movable component comprises a movable label;
sequentially extracting the movable parts marked in the material table to obtain a movable part table;
generating a sub sound effect table with the movable parts as indexes according to the movable part table and the material table, and connecting the sub sound effect table to obtain a sound effect table;
and when the existence time of the fixed part reaches a preset time threshold value, converting the corresponding fixed part into a movable part.
4. The sensor-technology-based virtual reality augmentation method of claim 3, wherein the time threshold determination step comprises:
sequentially reading components in the scene model and calculating the stress of the components;
acquiring the material and the size of the component, and calculating the strength of the component based on the material and the size;
determining a connection position based on the components, calculating the stress of the connection position, and acquiring a connection mode and corresponding connection strength;
a time threshold for the component is determined based on the stress of the component, the strength of the component, the stress at the connection, and the strength of the connection.
5. The method for enhancing virtual reality based on sensor technology according to claim 1, wherein the step of receiving a control signal input by a user, performing scene switching according to the control signal, and monitoring collision information of each component in a scene model in real time comprises:
receiving a control signal input by a user, carrying out scene switching according to the control signal, updating an active part related to a user character in real time, and adjusting the connection sequence of a sound effect table according to the active part;
recording the operation amount of a user, determining the type of the user according to the operation amount, determining the collision probability according to the type of the user, and determining a sound effect table reading channel according to the collision probability;
and monitoring collision information of all parts in the scene in real time.
6. The sensor-technology-based virtual reality augmentation method of claim 5, wherein the step of generating a vibration signal containing propagation time according to the distance between the collision point and the center point of the scene model, reading a corresponding sound effect type in the sound effect table according to the material of a collision subject in the collision information, and determining sound effect display time and volume according to the distance between the collision point and the center point of the scene model comprises:
reading the material of the collision body in the collision information, and determining an initial seismic source according to the material;
reading the distance between the collision point in the collision information and the center point of the scene model, and inputting the initial seismic source and the distance into a trained attenuation model to obtain a vibration signal; wherein the attenuation parameters of the attenuation model are determined by the propagation medium;
reading the sound effect type in the sound effect table after the connection sequence is adjusted according to the sound effect table reading channel, and calculating the propagation time and the attenuation amplitude of the sound effect according to the distance between the collision point and the center point of the scene model;
and determining sound effect display time according to the propagation time, and determining volume according to the attenuation amplitude.
7. The sensor-technology-based virtual reality augmentation method of claim 6, further comprising:
acquiring multi-frame images based on image acquisition equipment, wherein the multi-frame images comprise face information of all current users;
carrying out noise reduction processing on the multi-frame images to obtain multi-frame face images subjected to noise reduction;
and extracting, from the current frame face image, the micro-expression in response to the current feedback information, determining the satisfaction degree of the current frame face image according to the micro-expression, aggregating the satisfaction degrees of the multiple frames of face images, and performing fuzzy screening on the collision information when the satisfaction degree is lower than a threshold value.
8. A virtual reality augmentation system based on sensor technology, the system comprising:
the material table generation module is used for acquiring a scene model in real time, sequentially reading the materials of components in the scene model and generating a material table; the material table comprises a name item and a material item;
the sound effect table determining module is used for extracting movable parts in the material table according to the name items, traversing the material table based on the movable parts, sequentially determining collision sound effects and generating a sound effect table;
the collision information monitoring module is used for receiving a control signal input by a user, switching scenes according to the control signal and monitoring collision information of each component in a scene model in real time; the collision information comprises a collision point, the distance between the collision point and the center point of the scene model and the material of a collision subject;
and the sound effect generation module is used for generating a vibration signal containing propagation time according to the distance between the collision point and the center point of the scene model, reading a corresponding sound effect type in the sound effect table according to the material of the collision subject in the collision information, and determining sound effect display time and volume according to the distance between the collision point and the center point of the scene model.
9. The sensor-technology-based virtual reality augmentation system of claim 8, wherein the collision information monitoring module comprises:
the scene updating unit is used for receiving a control signal input by a user, carrying out scene switching according to the control signal, updating an active part related to a user character in real time, and adjusting the connection sequence of a sound effect table according to the active part;
the channel establishing unit is used for recording the operation amount of a user, determining the type of the user according to the operation amount, determining the collision probability according to the type of the user and determining a sound effect table reading channel according to the collision probability;
and the component monitoring unit is used for monitoring the collision information of each component in the scene in real time.
10. The sensor technology based virtual reality augmentation system of claim 9, wherein the sound effect generation module comprises:
the seismic source determining unit is used for reading the material of the collision body in the collision information and determining an initial seismic source according to the material;
the vibration attenuation unit is used for reading the distance between a collision point in the collision information and the center point of the scene model, and inputting the initial seismic source and the distance into a trained attenuation model to obtain a vibration signal; wherein the attenuation parameters of the attenuation model are determined by the propagation medium;
the sound effect attenuation unit is used for reading the sound effect type in the sound effect table after the connection sequence is adjusted according to the sound effect table reading channel and calculating the propagation time and the attenuation amplitude of the sound effect according to the distance between the collision point and the central point of the scene model;
and the processing execution unit is used for determining the sound effect display time according to the propagation time and determining the volume according to the attenuation amplitude.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202111565758.6A CN114265500A (en) | 2021-12-20 | 2021-12-20 | Virtual reality enhancement method and system based on sensor technology |
Publications (1)
Publication Number | Publication Date |
---|---|
CN114265500A true CN114265500A (en) | 2022-04-01 |
Family
ID=80828153
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202111565758.6A Withdrawn CN114265500A (en) | 2021-12-20 | 2021-12-20 | Virtual reality enhancement method and system based on sensor technology |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN114265500A (en) |
Cited By (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN116091708A (en) * | 2023-04-11 | 2023-05-09 | 深圳朗生整装科技有限公司 | Decoration modeling method and system based on big data |
Legal Events

Date | Code | Title | Description |
---|---|---|---|
| PB01 | Publication | |
| SE01 | Entry into force of request for substantive examination | |
| WW01 | Invention patent application withdrawn after publication | Application publication date: 20220401 |