Disclosure of Invention
In view of the above, an object of the present application is to provide a virtual reality treadmill adaptive method and system based on AI deep learning.
According to a first aspect of the application, an adaptation method of a virtual reality treadmill based on AI deep learning is provided, which is applied to a server, and the method comprises the following steps:
determining a preference intensity corresponding to the treadmill feedback activity data in a treadmill feedback activity data sequence of a target virtual reality treadmill; determining a feedback enhancement focus distribution of the treadmill feedback activity data sequence according to the preference intensity; performing, based on the feedback enhancement focus, content feedback enhancement updating of corresponding virtual reality elements on the target virtual reality treadmill to obtain the virtual reality elements after content feedback enhancement updating; and obtaining running behavior data corresponding to the virtual reality elements after content feedback enhancement updating;
updating the running event label of the running behavior data according to the running posture characteristic extracted from the running behavior data, and outputting a first adaptive characteristic representing an adaptive adjustment event;
performing relevance analysis on the distinguishing feature between the first adaptive feature and a second adaptive feature representing a planned running event against an adaptive feature configured in advance by mode, and outputting relevance analysis information; wherein the pre-mode-configured adaptive feature is determined according to running mode data set data of the running mode data set to which the running behavior data belongs, and an adaptive parameter of the pre-mode-configured adaptive feature is in an association relation with an adaptive parameter of the second adaptive feature;
analyzing, according to the relevance analysis information, whether the obtained first adaptive feature matches an adaptive control requirement; wherein a first adaptive feature that matches the adaptive control requirement is used for adaptively controlling virtual scene elements of the target virtual reality treadmill, and a first adaptive feature that does not match the adaptive control requirement continues to be iteratively updated.
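By way of a purely illustrative sketch (not part of the claimed method), the first two steps of the above aspect, determining a preference intensity per item of feedback activity data and deriving a feedback enhancement focus distribution from those intensities, could be approximated as a scoring pass followed by a softmax normalization. The field names and weights below are hypothetical:

```python
import math

def preference_intensity(feedback_record):
    # Hypothetical scoring: weight dwell time and interaction count.
    return 0.7 * feedback_record["dwell_s"] + 0.3 * feedback_record["interactions"]

def focus_distribution(feedback_sequence):
    # Softmax over per-record preference intensities yields a
    # feedback-enhancement focus distribution that sums to 1.
    scores = [preference_intensity(r) for r in feedback_sequence]
    m = max(scores)
    exps = [math.exp(s - m) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

seq = [
    {"element": "terrain", "dwell_s": 12.0, "interactions": 3},
    {"element": "weather", "dwell_s": 4.0, "interactions": 1},
]
dist = focus_distribution(seq)
```

Virtual reality elements with a higher focus weight would then be prioritized for content feedback enhancement updating.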
According to a second aspect of the present application, there is provided an AI deep learning based virtual reality treadmill adaptive system, which includes a server and a virtual reality treadmill communicatively connected to the server, wherein the server is specifically configured to:
determining a preference intensity corresponding to the treadmill feedback activity data in a treadmill feedback activity data sequence of a target virtual reality treadmill; determining a feedback enhancement focus distribution of the treadmill feedback activity data sequence according to the preference intensity; performing, based on the feedback enhancement focus, content feedback enhancement updating of corresponding virtual reality elements on the target virtual reality treadmill to obtain the virtual reality elements after content feedback enhancement updating; and obtaining running behavior data corresponding to the virtual reality elements after content feedback enhancement updating;
updating a running event label of the running behavior data according to the running posture characteristic extracted from the running behavior data, and outputting a first adaptive characteristic representing an adaptive adjustment event;
performing relevance analysis on the distinguishing feature between the first adaptive feature and a second adaptive feature representing a planned running event against an adaptive feature configured in advance by mode, and outputting relevance analysis information; wherein the pre-mode-configured adaptive feature is determined according to running mode data set data of the running mode data set to which the running behavior data belongs, and an adaptive parameter of the pre-mode-configured adaptive feature is in an association relation with an adaptive parameter of the second adaptive feature;
analyzing, according to the relevance analysis information, whether the obtained first adaptive feature matches an adaptive control requirement; wherein a first adaptive feature that matches the adaptive control requirement is used for adaptively controlling virtual scene elements of the target virtual reality treadmill, and a first adaptive feature that does not match the adaptive control requirement continues to be iteratively updated.
According to a third aspect of the present application, there is provided a server comprising a machine-readable storage medium storing machine-executable instructions and a processor, which when executing the machine-executable instructions, implements the aforementioned AI deep learning based virtual reality treadmill adaptation method.
According to a fourth aspect of the present application, there is provided a computer-readable storage medium having stored therein computer-executable instructions that, when executed, implement the foregoing AI deep learning based virtual reality treadmill adaptation method.
Based on any one of the above aspects, the running event label of the running behavior data is updated according to the running posture feature extracted from the running behavior data, and a first adaptive feature representing an adaptive adjustment event is output. Relevance analysis is then performed on the distinguishing feature between the first adaptive feature and a second adaptive feature representing a planned running event against the pre-mode-configured adaptive feature, and relevance analysis information is output. Whether the obtained first adaptive feature matches the adaptive control requirement is analyzed according to the relevance analysis information, and an adaptive control decision is made accordingly. In this way, the adaptive feature analysis of the adaptive adjustment event is performed in combination with the running posture feature, which can improve the reliability of the adaptive control decision.
Detailed Description
In order to make the purpose, technical solutions, and advantages of the embodiments of the present application clearer, the technical solutions in the embodiments of the present application will be described clearly and completely below with reference to the drawings in the embodiments of the present application, and it should be understood that the drawings in the present application are only for the purpose of illustration and description and are not used to limit the protection scope of the present application. Additionally, it should be understood that the schematic drawings are not necessarily drawn to scale. The flowcharts used in this application illustrate operations implemented according to some of the embodiments of the present application. It should be understood that the operations of the flowcharts may be performed out of order, and steps that have no necessary logical dependency may be performed in reverse order or simultaneously. Under the guidance of this application, one skilled in the art may add one or more other operations to, or remove one or more operations from, the flowcharts.
In addition, the described embodiments are only a part of the embodiments of the present application, and not all of the embodiments. The components of the embodiments of the present application, as generally described and illustrated in the figures herein, could be arranged and designed in a wide variety of different configurations. Thus, the following detailed description of the embodiments of the present application, presented in the accompanying drawings, is not intended to limit the scope of the claimed application, but is merely representative of selected embodiments of the application. All other embodiments, which can be derived by a person skilled in the art from the embodiments of the present application without making any creative effort, shall fall within the protection scope of the present application.
Fig. 1 is a flowchart illustrating a virtual reality treadmill adaptation method based on AI deep learning according to an embodiment of the present application, and it should be understood that in other embodiments, the order of some steps in the virtual reality treadmill adaptation method based on AI deep learning according to the present embodiment may be interchanged according to actual needs, or some steps may be omitted or deleted. The detailed steps of the virtual reality treadmill adaptation method based on AI deep learning are described as follows.
And step S101, obtaining running behavior data.
In some exemplary embodiments, the server may obtain running behavior data. The running behavior data herein may be determined to include behavior data of a virtual reality running scenario. The server may obtain the running behavior data through a behavior data identification step.
And step S102, updating the running event label of the running behavior data according to the running posture characteristic extracted from the running behavior data, and outputting a first adaptive characteristic representing an adaptive adjustment event.
In some exemplary embodiments, after obtaining the running behavior data, the server may perform running posture feature extraction on the running behavior data, and then may update a running event tag of the running behavior data according to the extracted running posture feature, and output a first adaptive feature representing an adaptive adjustment event.
In some exemplary embodiments, a deep learning network model of a different mining modality may be used in each sub-network of the built running posture extraction model. Accordingly, running posture component extraction may be performed on the running behavior data according to the built running posture extraction model. When the running posture components of the running behavior data are extracted, the extraction may proceed as follows: running posture components of the running behavior data are extracted according to the deep learning network models of the different mining modalities, and first running posture component trajectories of the different mining modalities are output; feature cleaning is performed on the obtained first running posture component trajectories, and a second running posture component trajectory of the running behavior data is output; the running posture feature of the running behavior data is then determined according to the second running posture component trajectory. The set of components of each first running posture component trajectory overlaps the set of components of the second running posture component trajectory.
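As an illustrative sketch of the multi-modality extraction and feature cleaning described above (the extractor functions, field names, and component names are hypothetical, not taken from the disclosure), each "mining modality" can be modeled as a function producing a component trajectory, with cleaning implemented as duplicate removal across trajectories:

```python
def extract_components(behavior_data, extractors):
    # Each "mining modality" extractor returns a first running-posture
    # component trajectory, here represented as a list of (name, value) pairs.
    first_trajectories = [fn(behavior_data) for fn in extractors]
    # Feature cleaning: merge the trajectories and drop duplicate components,
    # yielding the second trajectory; its component set overlaps each first one.
    seen, second = set(), []
    for traj in first_trajectories:
        for name, value in traj:
            if name not in seen:
                seen.add(name)
                second.append((name, value))
    return second

# Hypothetical stand-ins for two deep-learning sub-networks of different modalities.
stride = lambda d: [("cadence", d["steps"] / d["secs"]), ("stride", d["dist"] / d["steps"])]
posture = lambda d: [("lean", d["lean_deg"]), ("cadence", d["steps"] / d["secs"])]

data = {"steps": 300, "secs": 180, "dist": 450.0, "lean_deg": 4.5}
features = extract_components(data, [stride, posture])
```

Note that the duplicated "cadence" component appears once in the cleaned second trajectory, so the component sets of the first trajectories each overlap the second, as the embodiment describes.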
Step S103, performing relevance analysis on the distinguishing feature between the first adaptive feature and a second adaptive feature representing a planned running event against the adaptive feature configured in advance by mode, and outputting relevance analysis information; the pre-mode-configured adaptive feature is determined according to running mode data set data of the running mode data set to which the running behavior data belongs.
In some exemplary embodiments, after outputting the first adaptive feature corresponding to the running behavior data according to the running posture feature of the running behavior data, the server may determine the distinguishing feature between the first adaptive feature and the second adaptive feature characterizing the planned running event. After the distinguishing feature between the first adaptive feature and the second adaptive feature is determined, relevance analysis may be performed on the determined distinguishing feature against the pre-mode-configured adaptive feature, and the relevance analysis information is output.
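The disclosure does not fix a concrete relevance measure. As one hypothetical realization, the distinguishing feature can be taken as the element-wise difference between the two adaptive feature vectors, with cosine similarity standing in for the relevance (relation-degree) score:

```python
import math

def distinguishing_feature(first, second):
    # Element-wise difference between the first and second adaptive feature vectors.
    return [a - b for a, b in zip(first, second)]

def relevance(diff, preconfigured):
    # Cosine similarity as a stand-in relevance measure between the
    # distinguishing feature and the pre-mode-configured adaptive feature.
    dot = sum(a * b for a, b in zip(diff, preconfigured))
    na = math.sqrt(sum(a * a for a in diff))
    nb = math.sqrt(sum(b * b for b in preconfigured))
    return dot / (na * nb) if na and nb else 0.0

# Hypothetical feature vectors: the difference [0.6, 0.4] points in the same
# direction as the pre-configured feature, so relevance is maximal.
score = relevance(distinguishing_feature([1.0, 0.5], [0.4, 0.1]), [0.6, 0.4])
```

A score near 1 would then correspond to the distinguishing feature "inheriting" the pre-mode-configured adaptive feature in the sense used below.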
Step S104, analyzing, according to the relevance analysis information, whether the obtained first adaptive feature matches an adaptive control requirement; wherein a first adaptive feature that matches the adaptive control requirement is used for adaptively controlling virtual scene elements of the target virtual reality treadmill, and a first adaptive feature that does not match the adaptive control requirement continues to be iteratively updated.
In some exemplary embodiments, when the distinguishing feature inherits the pre-mode-configured adaptive feature, the first adaptive feature obtained by the analysis matches the adaptive control requirement.
In some exemplary embodiments, when the distinguishing feature between the first adaptive feature and the second adaptive feature does not inherit the pre-mode-configured adaptive feature, the update feature applied in the running event label updating activity may be continuously updated according to that distinguishing feature.
In the above embodiment, if the distinguishing feature between the first adaptive feature and the second adaptive feature characterizing the planned running event inherits the pre-mode-configured adaptive feature, the obtained first adaptive feature can be considered free of abnormality. When the pre-mode-configured adaptive feature is determined, the running mode data set data of the running mode data set to which the running behavior data belongs can be obtained first, and the pre-mode-configured adaptive feature is then determined according to that running mode data set data; the pre-mode-configured adaptive features corresponding to different running mode data sets are different.
In some exemplary embodiments, when the pre-mode-configured adaptive feature is determined from the running mode data set data: when the running mode data set data indicates that the running mode data set is a first running mode data set, the pre-mode-configured adaptive feature is configured as a first pre-mode-configured adaptive feature; when the running mode data set data indicates that the running mode data set is a second running mode data set, the pre-mode-configured adaptive feature is configured as a second pre-mode-configured adaptive feature; and when the specified adaptive parameters corresponding to the first running mode data set cover the specified adaptive parameters corresponding to the second running mode data set, the first pre-mode-configured adaptive feature inherits the second pre-mode-configured adaptive feature.
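A minimal sketch of this selection and cover/inherit relation, assuming each pre-mode-configured adaptive feature is a mapping from specified adaptive parameter names to values (the names and values below are hypothetical):

```python
# Hypothetical per-data-set configurations. The first set's specified
# adaptive parameters cover (are a superset of) the second set's.
PRECONFIGURED = {
    "first": {"speed_gain": 1.2, "incline_gain": 0.8, "haptic_gain": 0.5},
    "second": {"speed_gain": 1.2, "incline_gain": 0.8},
}

def select_preconfigured(data_set_id):
    # Choose the pre-mode-configured adaptive feature for the indicated
    # running mode data set.
    return PRECONFIGURED[data_set_id]

def inherits(first_cfg, second_cfg):
    # The first configuration inherits the second when its specified
    # adaptive parameters cover every parameter of the second.
    return all(k in first_cfg for k in second_cfg)
```

Under these assumptions, the first pre-mode-configured feature inherits the second, but not vice versa, since the second lacks a parameter the first specifies.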
In some exemplary embodiments, the specified adaptive parameter may be determined as the adaptive parameter of the adaptive feature labeled for the running behavior data in the running mode data set. When the running posture extraction model is configured, running behavior data in at least two running mode data sets can be used, and each item of running behavior data carries a labeled adaptive feature.
In some exemplary embodiments, for any given item of running behavior data: if the running mode data set of the running behavior data indicates that the running behavior data comes from the first running mode data set, in which the second adaptive feature serves as a real-time running activity feature quantity (that is, the second adaptive feature labeled for each item of running behavior data in the data set is the real-time running activity feature quantity), the first pre-mode-configured adaptive feature may be configured for the running behavior data; if the running mode data set indicates that the running behavior data comes from the second running mode data set, in which the second adaptive feature serves as a non-real-time running activity feature quantity (that is, the second adaptive feature labeled for each item of running behavior data in the data set carries certain distinguishing features relative to the real-time running activity feature quantity), the second pre-mode-configured adaptive feature may be configured for the running behavior data.
Based on the above steps, in this embodiment, the running event label of the running behavior data is updated according to the running posture feature extracted from the running behavior data, and the first adaptive feature representing the adaptive adjustment event is output. Relevance analysis is performed on the distinguishing feature between the first adaptive feature and the second adaptive feature representing the planned running event against the pre-mode-configured adaptive feature, and the relevance analysis information is output. Whether the obtained first adaptive feature matches the adaptive control requirement is then analyzed according to the relevance analysis information, and the adaptive control decision is made accordingly. The adaptive feature analysis of the adaptive adjustment event is thus performed in combination with the running posture feature, improving the reliability of the adaptive control decision.
An embodiment of the disclosure further describes continuously updating the update feature according to the distinguishing feature. When the update feature applied in the running event label updating activity is continuously updated according to the distinguishing feature between the first adaptive feature and the second adaptive feature, the updating can be achieved through the following steps.
Step S201, when the distinguishing feature between the first adaptive feature and the second adaptive feature does not inherit the adaptive feature configured in advance, determining a scene optimization model corresponding to the first adaptive feature according to the distinguishing feature.
Step S202, the optimization dimension of the scene optimization model about the scene optimization characteristics is analyzed.
Step S203, debugging the scene optimization features according to the optimization dimensions.
When the server determines that the distinguishing feature between the first adaptive feature and the second adaptive feature does not inherit the pre-mode-configured adaptive feature, the server may first determine the scene optimization model corresponding to the first adaptive feature according to that distinguishing feature, then determine the optimization dimension of the scene optimization model with respect to the scene optimization feature, and then continuously update the update feature according to the determined optimization dimension until the scene optimization model, together with the scene optimization feature, meets the requirement. In this way, the distinguishing feature between the first adaptive feature and the second adaptive feature may be determined as a weighted result between the first adaptive feature and the second adaptive feature.
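Steps S201 to S203 can be read as an iterative refinement loop. The sketch below is one hypothetical realization (the tolerance, step size, and vector representation are assumptions, not given in the disclosure): the distinguishing feature plays the role of the residual, the dimensions where it is still large are the optimization dimensions, and "debugging" moves the first adaptive feature along those dimensions:

```python
def iterative_update(first, second, tolerance=0.05, step=0.5, max_iters=100):
    # While the distinguishing feature is too large (does not "inherit" the
    # pre-configured feature), move the first adaptive feature toward the second.
    first = list(first)
    for _ in range(max_iters):
        diff = [a - b for a, b in zip(first, second)]        # S201: distinguishing feature
        dims = [i for i, d in enumerate(diff) if abs(d) > tolerance]  # S202: optimization dimensions
        if not dims:
            break                                             # requirement met
        for i in dims:                                        # S203: debug the scene optimization features
            first[i] -= step * diff[i]
    return first

updated = iterative_update([1.0, 0.5], [0.4, 0.1])
```

Each pass halves the residual on the active dimensions, so the loop terminates once every component of the distinguishing feature is within the tolerance.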
The running event label of the running behavior data can be updated according to the running posture feature extracted from the running behavior data, and the first adaptive feature corresponding to the running behavior data is output. To better perform running posture component extraction on the running behavior data, this embodiment further provides updating the running behavior data before the running posture feature is extracted.
An embodiment of the disclosure accordingly describes updating the running behavior data. Before the running event label of the running behavior data is updated, the following steps may also be included.
Step S301, analyzing the virtual reality scene update node of the running behavior data.
Step S302, the virtual reality scene updating node is connected with a target virtual reality scene updating node in a virtual reality running mode, and node contact information is output.
Step S303, performing virtual reality scene association on the running behavior data according to the node contact information, and obtaining running behavior data after virtual reality scene association.
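A minimal sketch of steps S301 to S303, under the assumption that a scene-update node can be derived from a timestamp-like field of the behavior data and that the node contact information records the relation to a target node (all field names and the derivation rule are hypothetical):

```python
def analyze_update_node(behavior_data):
    # S301: hypothetical derivation of the virtual reality scene-update node
    # from a timestamp field in the running behavior data.
    return {"node_id": behavior_data["scene_ts"] // 10}

def connect_nodes(node, target_node):
    # S302: connect the scene-update node with the target scene-update node
    # in the virtual reality running mode, and output node contact information.
    return {"pair": (node["node_id"], target_node["node_id"]),
            "offset": target_node["node_id"] - node["node_id"]}

def associate_scene(behavior_data, contact):
    # S303: attach the node contact information, producing running behavior
    # data after virtual reality scene association.
    out = dict(behavior_data)
    out["scene_contact"] = contact
    return out

data = {"scene_ts": 42, "cadence": 1.6}
contact = connect_nodes(analyze_update_node(data), {"node_id": 7})
linked = associate_scene(data, contact)
```

The associated data keeps the original behavior fields while carrying the contact information forward to the posture-feature extraction stage.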
Fig. 2 schematically illustrates a server 100 that may be used to implement various embodiments described in the present application.
For one embodiment, fig. 2 illustrates a server 100 having one or more processors 102, a control module (chipset) 104 coupled to at least one of the processor(s) 102, a memory 106 coupled to the control module 104, a non-volatile memory (NVM)/storage 108 coupled to the control module 104, one or more input/output devices 110 coupled to the control module 104, and a network interface 112 coupled to the control module 104.
The processor 102 may include one or more single-core or multi-core processors, and the processor 102 may include any combination of general-purpose or special-purpose processors (e.g., graphics processors, application processors, baseband processors, etc.). In some embodiments, the apparatus 100 can be a server device such as a gateway described in the embodiments of the present application.
In some embodiments, the apparatus 100 may include one or more computer-readable media (e.g., the memory 106 or the NVM/storage 108) having instructions 114 and one or more processors 102 configured to execute the instructions 114 in conjunction with the one or more computer-readable media to implement modules to perform the actions described in this disclosure.
For one embodiment, control module 104 may include any suitable interface controller to provide any suitable interface to at least one of the processor(s) 102 and/or any suitable device or component in communication with control module 104.
Control module 104 may include a memory controller module to provide an interface to memory 106. The memory controller module may be a hardware module, a software module, and/or a firmware module.
Memory 106 may be used, for example, to load and store data and/or instructions 114 for device 100. For one embodiment, memory 106 may comprise any suitable volatile memory, such as suitable DRAM. In some embodiments, the memory 106 may comprise double data rate type four synchronous dynamic random access memory (DDR4 SDRAM).
For one embodiment, control module 104 may include one or more input/output controllers to provide an interface to NVM/storage 108 and input/output device(s) 110.
For example, NVM/storage 108 may be used to store data and/or instructions 114. NVM/storage 108 may include any suitable non-volatile memory (e.g., flash memory) and/or may include any suitable non-volatile storage device(s) (e.g., one or more Hard Disk Drives (HDDs), one or more Compact Disc (CD) drives, and/or one or more Digital Versatile Disc (DVD) drives).
NVM/storage 108 may include storage resources that are physically part of the device on which apparatus 100 is installed, or it may be accessible by that device without necessarily being part of it. For example, NVM/storage 108 may be accessed over a network via the input/output device(s) 110.
The input/output device(s) 110 may provide an interface for the apparatus 100 to communicate with any other suitable device, and the input/output device(s) 110 may include a communication component, an audio component, a sensor component, and so forth. The network interface 112 may provide an interface for the device 100 to communicate over one or more networks, and the device 100 may communicate wirelessly with one or more components of a wireless network in accordance with any of one or more wireless network standards and/or protocols, for example by accessing a communication-standard-based wireless network such as WiFi, 2G, 3G, 4G, 5G, etc., or a combination thereof.
For one embodiment, at least one of the processor(s) 102 may be packaged together with logic for one or more controllers (e.g., memory controller modules) of the control module 104. For one embodiment, at least one of the processor(s) 102 may be packaged together with logic for one or more controllers of the control module 104 to form a System In Package (SiP). For one embodiment, at least one of the processor(s) 102 may be integrated on the same die with logic for one or more controller(s) of the control module 104. For one embodiment, at least one of the processor(s) 102 may be integrated on the same die with logic of one or more controllers of the control module 104 to form a system on a chip (SoC).
In various embodiments, the apparatus 100 may be, but is not limited to being: a server, a desktop computing device, or a mobile computing device (e.g., a laptop computing device, a handheld computing device, a tablet, a netbook, etc.) among other terminal devices. In various embodiments, the apparatus 100 may have more or fewer components and/or different architectures. For example, in some embodiments, device 100 includes one or more cameras, a keyboard, a Liquid Crystal Display (LCD) screen (including a touch screen display), a non-volatile memory port, multiple antennas, a graphics chip, an Application Specific Integrated Circuit (ASIC), and speakers.
An embodiment of the present application provides an electronic device, including: one or more processors; and one or more machine readable media having instructions stored thereon that, when executed by the one or more processors, cause the electronic device to perform a data processing method as described in one or more of the present applications.
For the apparatus embodiment, since it is substantially similar to the method embodiment, the description is relatively simple, and reference may be made to the partial description of the method embodiment for relevant points.
The embodiments in the present specification are described in a progressive manner, each embodiment focuses on differences from other embodiments, and the same and similar parts among the embodiments are referred to each other.
Embodiments of the present application are described with reference to flowchart illustrations and/or block diagrams of methods, terminal devices (systems), and computer program products according to embodiments of the application. It will be understood that each flow and/or block of the flowchart illustrations and/or block diagrams, and combinations of flows and/or blocks in the flowchart illustrations and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general-purpose computer, special-purpose computer, embedded processor, or other programmable data processing terminal to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing terminal, create means for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing terminal to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the function specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be loaded onto a computer or other programmable data processing terminal to cause a series of operational steps to be performed on the computer or other programmable terminal to produce a computer implemented process such that the instructions which execute on the computer or other programmable terminal provide steps for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
While preferred embodiments of the present application have been described, additional variations and modifications of these embodiments may occur to those skilled in the art once they learn of the basic inventive concepts. Therefore, it is intended that the appended claims be interpreted as including the preferred embodiment and all changes and modifications that fall within the true scope of the embodiments of the present application.
Finally, it should also be noted that, in this document, relational terms such as first and second are used solely to distinguish one entity or action from another, and do not necessarily require or imply any actual such relationship or order between such entities or actions. Also, the terms "comprises," "comprising," or any other variation thereof are intended to cover a non-exclusive inclusion, such that a process, method, article, or terminal that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or terminal. Without further limitation, an element defined by the phrase "comprising a ..." does not exclude the presence of additional identical elements in the process, method, article, or terminal device that comprises the element.
The data processing method and apparatus provided by the present application have been introduced in detail above. Specific examples are applied herein to explain the principles and implementation of the present application, and the description of the above embodiments is only intended to help in understanding the method of the present application and its core idea. Meanwhile, for those of ordinary skill in the art, the specific implementation and the scope of application may be changed according to the idea of the present application. In view of the foregoing, this description should not be construed as limiting the present application.