CN113593348A - Virtual training control system, method, device, equipment and storage medium - Google Patents


Info

Publication number: CN113593348A
Application number: CN202110916263.7A
Authority: CN (China)
Prior art keywords: training, displayed, virtual reality, virtual, user
Legal status: Pending (the status listed is an assumption and is not a legal conclusion)
Other languages: Chinese (zh)
Inventors: 邓璐, 王娟, 孔令曼, 谭伟琦, 张先峰
Current Assignee: Dalian Rehabilitation and Convalescence Center of Joint Logistics Support Force of Chinese PLA
Original Assignee: Dalian Rehabilitation and Convalescence Center of Joint Logistics Support Force of Chinese PLA
Application filed by Dalian Rehabilitation and Convalescence Center of Joint Logistics Support Force of Chinese PLA

Classifications

    • G: PHYSICS
    • G09: EDUCATION; CRYPTOGRAPHY; DISPLAY; ADVERTISING; SEALS
    • G09B: EDUCATIONAL OR DEMONSTRATION APPLIANCES; APPLIANCES FOR TEACHING, OR COMMUNICATING WITH, THE BLIND, DEAF OR MUTE; MODELS; PLANETARIA; GLOBES; MAPS; DIAGRAMS
    • G09B 9/00: Simulators for teaching or training purposes

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Business, Economics & Management (AREA)
  • Physics & Mathematics (AREA)
  • Educational Administration (AREA)
  • Educational Technology (AREA)
  • General Physics & Mathematics (AREA)
  • User Interface Of Digital Computer (AREA)

Abstract

The application provides a virtual training control system, method, apparatus, device, and storage medium, relating to the technical field of simulation. The virtual training control system includes a virtual reality wearable device, a motion sensor, and a processing device, where the virtual reality wearable device includes a first display screen. The virtual reality wearable device displays a target training scene on the first display screen; the motion sensor collects motion data of a user during training in the target training scene; the processing device determines the target training scene based on a selection operation of the user and instructs the virtual reality wearable device to display it; the processing device also receives the motion data collected by the motion sensor, analyzes the data to obtain information to be displayed, and instructs the virtual reality wearable device to display that information. The user thus enters the target training scene in virtual reality, which strengthens immersion during training and improves the user's training efficiency.

Description

Virtual training control system, method, device, equipment and storage medium
Technical Field
The present application relates to the field of simulation technologies, and in particular, to a virtual training control system, method, apparatus, device, and storage medium.
Background
Military training is one of the important means of improving the combat capability of troops. Military training injuries are a major factor affecting training effectiveness, and the non-combat attrition they cause has long been a difficult problem in military training. Whether this problem is solved directly affects the arrangement and execution of military tasks and the improvement of combat capability. As special duty tasks increase, the impact of training injuries on the combat capability of special duty troops grows ever more significant, and in-depth research is urgently needed into the occurrence mechanisms, risk factors, and rehabilitation means of training injuries among special duty personnel at sea, on land, and in the air, so as to reduce the occurrence of training injuries and to speed recovery after they occur.
At present, special duty personnel mostly use a traditional rehabilitation training mode: a teaching video is played on an electronic screen, and the trainee follows the demonstration actions of the coach in the video to achieve the purpose of rehabilitation training.
However, the training actions in such teaching videos are single and fixed, so trainees easily become tired and bored during training, which leads to low training efficiency.
Disclosure of Invention
The present application aims to provide a virtual training control system, method, apparatus, device, and storage medium to improve the training efficiency of a user.
In order to achieve the above purpose, the technical solutions adopted in the embodiments of the present application are as follows:
in a first aspect, an embodiment of the present application provides a virtual training control system, including: a virtual reality wearable device, a motion sensor, and a processing device, where the virtual reality wearable device includes a first display screen, and the virtual reality wearable device and the motion sensor are each in communication connection with the processing device;
the virtual reality wearable device is used for displaying a target training scene on the first display screen;
the motion sensor is configured to collect motion data of a user during training performed in the target training scene, where the motion data includes: the position, posture, and angle of each joint of the user;
the processing device is used for determining the target training scene based on the selection operation of the user and instructing the virtual reality wearable device to display the target training scene;
the processing device is further configured to receive the motion data acquired by the motion sensor, analyze the motion data to obtain information to be displayed, and instruct the virtual reality wearable device to display the information to be displayed.
Optionally, the processing device includes: a second display screen;
the processing device is specifically configured to display selection controls on the second display screen and determine the target training scene in response to the user's selection operation on at least one selection control, where the selection controls include: a gender selection control, a service branch selection control, and a scene selection control, and the scene selection control includes: a life scene selection control, an exercise scene selection control, and a battle scene selection control.
Optionally, the target training scene includes: a coach virtual object, a trainee virtual object, and a training background;
the processing device is specifically configured to generate a first video to be displayed according to the coach virtual object, the trainee virtual object and the training background, and send the first video to be displayed to the virtual reality wearable device;
the virtual reality wearable device is specifically used for displaying the first video to be displayed on the first display screen.
Optionally, the processing device is specifically configured to: analyze the motion data, determine a target action of the trainee virtual object according to the analysis result, generate a second video to be displayed according to the target action, a preset action of the coach virtual object, and the training background, and send the second video to be displayed to the virtual reality wearable device;
the virtual reality wearable device is specifically configured to display the second video to be displayed on the first display screen.
Optionally, the processing device is specifically configured to: analyze the motion data at the current moment, determine the target action of the trainee virtual object at the current moment according to the analysis result, generate one frame of the second video to be displayed at the current moment according to that target action, the preset action of the coach virtual object at the current moment, and the training background, and send that frame to the virtual reality wearable device;
the virtual reality wearable device is specifically configured to display, on the first display screen, the frame of the second video to be displayed at the current moment.
Optionally, the processing device is further configured to display the information to be displayed on the second display screen.
In a second aspect, an embodiment of the present application further provides a virtual training control method, applied to the processing device in a virtual training control system, where the system includes: a virtual reality wearable device, a motion sensor, and the processing device, the virtual reality wearable device includes a first display screen, and the virtual reality wearable device and the motion sensor are each in communication connection with the processing device;
the method comprises the following steps:
acquiring motion data of a user during training performed in a target training scene, where the motion data is collected by the motion sensor and includes: the position, posture, and angle of each joint of the user, and the target training scene is a scene displayed on the first display screen;
analyzing the motion data to obtain information to be displayed;
instructing the virtual reality wearable device to display the information to be displayed.
Optionally, the processing device includes: a second display screen on which a selection control is displayed; the method further includes:
determining the target training scene in response to the user's selection operation on at least one selection control, where the selection controls include: a gender selection control, a service branch selection control, and a scene selection control, and the scene selection control includes: a life scene selection control, an exercise scene selection control, and a battle scene selection control.
Optionally, the target training scene includes: a coach virtual object, a trainee virtual object, and a training background; the method further includes:
generating a first video to be displayed according to the coach virtual object, the trainee virtual object and the training background, and sending the first video to be displayed to the virtual reality wearable device, so that the first video to be displayed is displayed on the virtual reality wearable device.
Optionally, analyzing the motion data to obtain the information to be displayed includes:
analyzing the motion data, determining a target action of the trainee virtual object according to the analysis result, generating a second video to be displayed according to the target action, a preset action of the coach virtual object, and the training background, and sending the second video to be displayed to the virtual reality wearable device, so that it is displayed on the virtual reality wearable device.
Optionally, analyzing the motion data, determining the target action of the trainee virtual object according to the analysis result, and generating the second video to be displayed according to the target action, the preset action of the coach virtual object, and the training background includes:
analyzing the motion data at the current moment, determining the target action of the trainee virtual object at the current moment according to the analysis result, generating one frame of the second video to be displayed at the current moment according to that target action, the preset action of the coach virtual object at the current moment, and the training background, and sending that frame to the virtual reality wearable device.
Optionally, the method further comprises:
displaying the information to be displayed on the second display screen.
In a third aspect, an embodiment of the present application further provides a virtual training control apparatus, applied to the processing device in a virtual training control system, where the system includes: a virtual reality wearable device, a motion sensor, and the processing device, the virtual reality wearable device includes a first display screen, and the virtual reality wearable device and the motion sensor are each in communication connection with the processing device;
the device comprises:
an obtaining module, configured to obtain motion data of a user during training performed in a target training scene, where the motion data is collected by the motion sensor and includes: the position, posture, and angle of each joint of the user, and the target training scene is a scene displayed on the first display screen;
an analysis module, configured to analyze the motion data to obtain information to be displayed;
an indicating module, configured to instruct the virtual reality wearable device to display the information to be displayed.
Optionally, the processing device includes: a second display screen on which a selection control is displayed; the apparatus further includes:
a response module, configured to determine the target training scene in response to the user's selection operation on at least one selection control, where the selection controls include: a gender selection control, a service branch selection control, and a scene selection control, and the scene selection control includes: a life scene selection control, an exercise scene selection control, and a battle scene selection control.
Optionally, the target training scene includes: a coach virtual object, a trainee virtual object, and a training background; the apparatus further includes:
a generating module, configured to generate a first video to be displayed according to the coach virtual object, the trainee virtual object, and the training background;
a sending module, configured to send the first video to be displayed to the virtual reality wearable device, so that it is displayed on the virtual reality wearable device.
Optionally, the analysis module is further configured to analyze the motion data;
the generating module is further configured to determine a target action of the trainee virtual object according to the analysis result and to generate a second video to be displayed according to the target action, a preset action of the coach virtual object, and the training background;
the sending module is further configured to send the second video to be displayed to the virtual reality wearable device, so that it is displayed on the virtual reality wearable device.
Optionally, the analysis module is further configured to analyze the motion data at the current moment;
the generating module is further configured to determine the target action of the trainee virtual object at the current moment according to the analysis result and to generate one frame of the second video to be displayed at the current moment according to that target action, the preset action of the coach virtual object at the current moment, and the training background;
the sending module is further configured to send that frame of the second video to be displayed at the current moment to the virtual reality wearable device.
Optionally, the apparatus further comprises:
a display module, configured to display the information to be displayed on the second display screen.
In a fourth aspect, an embodiment of the present application further provides a processing device, including: a processor, a storage medium and a bus, the storage medium storing machine-readable instructions executable by the processor, the processor and the storage medium communicating over the bus when a processing device is running, the processor executing the machine-readable instructions to perform the steps of the method as provided by the second aspect.
In a fifth aspect, the present application further provides a computer-readable storage medium, on which a computer program is stored, where the computer program is executed by a processor to perform the steps of the method as provided in the second aspect.
The beneficial effects of the present application are as follows:
The embodiments of the present application provide a virtual training control system, method, apparatus, device, and storage medium. The virtual training control system includes a virtual reality wearable device, a motion sensor, and a processing device, where the virtual reality wearable device includes a first display screen, and the virtual reality wearable device and the motion sensor are each in communication connection with the processing device. The virtual reality wearable device displays a target training scene on the first display screen. The motion sensor collects motion data of a user during training performed in the target training scene, the motion data including the position, posture, and angle of each joint of the user. The processing device determines the target training scene based on a selection operation of the user and instructs the virtual reality wearable device to display it; the processing device also receives the motion data collected by the motion sensor, analyzes the data to obtain information to be displayed, and instructs the virtual reality wearable device to display that information. Through the cooperation of the virtual reality wearable device, the motion sensor, and the processing device, the processing device can analyze the motion data collected by the motion sensor into the information to be displayed and have it shown on the virtual reality wearable device, so that the complete motion picture of the training process is presented on the device worn by the user. The user thus enters the target training scene in virtual reality and gains an immersive, on-the-spot feeling, which strengthens immersion and interactivity during training, greatly raises the user's enthusiasm for participating in training, and thereby improves training efficiency.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present invention, the drawings needed in the embodiments are briefly described below. It should be understood that the following drawings illustrate only some embodiments of the present invention and therefore should not be regarded as limiting the scope; for those skilled in the art, other related drawings can be obtained from these drawings without inventive effort.
Fig. 1 is a schematic structural diagram of a framework of a virtual training control system according to an embodiment of the present disclosure;
fig. 2 is a first schematic interface diagram of a processing device according to an embodiment of the present disclosure;
fig. 3 is a second schematic interface diagram of a processing device according to an embodiment of the present disclosure;
fig. 4 is a third schematic interface diagram of a processing device according to an embodiment of the present disclosure;
fig. 5 is a schematic flowchart of a virtual training control method according to an embodiment of the present disclosure;
fig. 6 is a schematic structural diagram of a virtual training control apparatus according to an embodiment of the present disclosure;
fig. 7 is a schematic structural diagram of a processing apparatus according to an embodiment of the present application.
Reference numerals: 100 - virtual training control system; 101 - virtual reality wearable device; 102 - motion sensor; 103 - processing device; 201 - second display screen.
Detailed Description
In order to make the purpose, technical solutions, and advantages of the embodiments of the present application clearer, the technical solutions in the embodiments are described below clearly and completely with reference to the drawings. It should be understood that the drawings in the present application are for illustrative and descriptive purposes only and are not used to limit its scope of protection; additionally, the schematic drawings are not necessarily drawn to scale. The flowcharts used in this application illustrate operations implemented according to some embodiments of the present application; it should be understood that the operations of a flowchart may be performed out of order, and steps without a logical dependency may be performed in reverse order or simultaneously. Moreover, under the guidance of this application, one skilled in the art may add one or more other operations to a flowchart or remove one or more operations from it.
In addition, the described embodiments are only a part of the embodiments of the present application, and not all of the embodiments. The components of the embodiments of the present application, generally described and illustrated in the figures herein, can be arranged and designed in a wide variety of different configurations. Thus, the following detailed description of the embodiments of the present application, presented in the accompanying drawings, is not intended to limit the scope of the claimed application, but is merely representative of selected embodiments of the application. All other embodiments, which can be derived by a person skilled in the art from the embodiments of the present application without making any creative effort, shall fall within the protection scope of the present application.
It should be noted that in the embodiments of the present application, the term "comprising" is used to indicate the presence of the features stated hereinafter, but does not exclude the addition of further features.
The framework of the virtual training control system provided by the present application is briefly described below through several embodiments.
Fig. 1 is a schematic structural diagram of the framework of a virtual training control system according to an embodiment of the present disclosure. As shown in fig. 1, the virtual training control system 100 includes: a virtual reality wearable device 101, a motion sensor 102, and a processing device 103. The virtual reality wearable device 101 and the motion sensor 102 are each in communication connection with the processing device 103.
Illustratively, the virtual reality wearable device 101 may be a Virtual Reality (VR) head-mounted display device, such as VR glasses. Specifically, the virtual reality wearable device 101 can be in communication connection with the processing device 103 through a wired or wireless network, so that wearing the device brings the user into a three-dimensional virtual training environment in which the moving picture of the training scene generated by the processing device 103 can be viewed, giving the user an immersive, on-the-spot feeling.
The motion sensor 102 may be an inertial sensor integrating an accelerometer, a gyroscope, a magnetometer, and the like. One motion sensor 102 may be worn on each joint of the user, so that the motion data of each joint during training is collected by the sensors. The motion sensor 102 is in communication connection with the processing device 103 through a matching receiver, so that the collected motion data can be transmitted to the processing device 103, which then analyzes the data to obtain the information to be displayed, i.e., the moving picture of the training scene. A rough sketch of such sensor fusion follows below.
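As an illustration of how an inertial unit of this kind can turn raw accelerometer and gyroscope readings into a joint orientation, the sketch below applies a simple complementary filter. This is an assumption for illustration only; the patent does not specify the fusion algorithm, and all names here are hypothetical.

```python
import math

def complementary_filter(pitch_prev, gyro_rate, accel_y, accel_z, dt, alpha=0.98):
    """Estimate pitch (degrees) by blending the integrated gyroscope rate
    with the gravity direction measured by the accelerometer."""
    gyro_pitch = pitch_prev + gyro_rate * dt                   # integrate angular rate
    accel_pitch = math.degrees(math.atan2(accel_y, accel_z))   # gravity reference
    return alpha * gyro_pitch + (1 - alpha) * accel_pitch

# One update step: 10 ms sample, 5 deg/s rotation, accelerometer nearly level.
pitch = complementary_filter(pitch_prev=0.0, gyro_rate=5.0,
                             accel_y=0.08, accel_z=9.80, dt=0.01)
print(round(pitch, 3))  # small positive pitch estimate
```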
The processing device 103 may be a computer, a mobile internet device, a tablet, a mobile phone, or another device having computing and display functions. For example, a training-teaching simulation application is installed on the processing device 103 in advance, so that the user can enter the target training scene through this application, which then instructs the virtual reality wearable device 101 to display the target training scene.
In this embodiment, for example, the motion sensor 102 is configured to collect motion data of user A during training performed in the target training scene, where the motion data includes information such as the position, posture, and angle of each joint of user A (a sketch of one possible data layout follows below).
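To make the data flow concrete, the following minimal sketch shows one way the per-joint motion data (position, posture, angle) reported to the processing device 103 might be represented. The encoding is not prescribed by the patent; the class and field names are assumptions for illustration.

```python
from dataclasses import dataclass
from typing import Dict

@dataclass
class JointSample:
    """Motion data for one joint at one instant."""
    position: tuple   # (x, y, z) position of the joint
    posture: tuple    # orientation quaternion (w, x, y, z)
    angle: float      # joint angle in degrees

@dataclass
class MotionFrame:
    """One timestamped reading covering every tracked joint of the user."""
    timestamp: float                 # seconds since training started
    joints: Dict[str, JointSample]   # e.g. "left_knee" -> JointSample

# Example frame as the receiver might hand it to the processing device.
frame = MotionFrame(
    timestamp=12.40,
    joints={"left_knee": JointSample((0.41, 0.52, 0.03), (1.0, 0.0, 0.0, 0.0), 23.5)},
)
print(frame.joints["left_knee"].angle)  # -> 23.5
```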
The processing device 103 is configured to determine the target training scene based on the user's selection operation and to instruct the virtual reality wearable device to display it. It then receives the motion data collected by the motion sensor 102 and analyzes the data to obtain the information to be displayed, which may be the moving picture of the user training in the target training scene, and instructs the virtual reality wearable device 101 to display that information, so that the complete motion picture of the training process appears on the first display screen of the device worn by user A. The user thus enters the target training scene in virtual reality and gains an immersive, on-the-spot feeling; immersion and interactivity during training are strengthened, the user's enthusiasm for participating in training is greatly raised, and training efficiency is improved.
To sum up, an embodiment of the present application provides a virtual training control system in which the virtual reality wearable device displays a target training scene on its first display screen; the motion sensor collects the position, posture, and angle of each joint of the user during training performed in that scene; and the processing device determines the target training scene from the user's selection operation, analyzes the collected motion data into information to be displayed, and instructs the virtual reality wearable device to display it. Through the cooperation of these devices, the complete motion picture of the training process is presented on the device worn by the user, who enters the target training scene in virtual reality with an immersive, on-the-spot feeling; immersion and interactivity during training are strengthened, the user's enthusiasm for participating in training is greatly raised, and training efficiency is improved.
The display interface of the processing device 103 in fig. 1 is explained in detail through the following embodiments.
Fig. 2, fig. 3, and fig. 4 are, respectively, the first, second, and third schematic interface diagrams of a processing device according to an embodiment of the present disclosure. As shown in fig. 2 to fig. 4, the processing device 103 includes: a second display screen 201.
Selection controls are displayed on the second display screen of the processing device 103.
In this embodiment, the selection controls displayed on the second display screen include: a gender selection control, a service branch selection control, a scene selection control, and the like.
Wherein, as shown in fig. 2, the gender selection control comprises: a male selection control and a female selection control.
As shown in fig. 3, the service branch selection control includes: a navy selection control, an army selection control, an air force selection control, and the like.
As shown in fig. 4, the scene selection control includes: a life scene selection control, an exercise scene selection control, and a battle scene selection control. The life scene includes, but is not limited to, training actions such as crossing obstacles, lifting heavy objects, and carrying a pack; the exercise scene includes, but is not limited to, training actions such as walking and running; the battle scene includes, but is not limited to, training actions such as the high crawl and crossing obstacles.
It is worth noting that selecting different scene selection controls leads the user into different training scenes, which allows quick switching between the scenes. The user performs the corresponding training actions according to the training content displayed in each scene.
Before training starts, the user performs a selection operation on the controls displayed on the second display screen, so the processing device 103 is also configured to determine, in response to the user's selection operation on at least one control, the target training scene the user will enter.
For example, if the order of the controls selected by user A is: the male selection control, then the army selection control, then the exercise scene selection control, user A is guided into the exercise scene to complete the training tasks there (a sketch of this selection flow follows below).
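The sketch below mirrors this selection flow under assumed option names; the patent does not specify how the controls map to scene assets, so the mapping here is purely illustrative.

```python
# Options offered by each selection control (names are illustrative).
GENDERS = {"male", "female"}
BRANCHES = {"navy", "army", "air_force"}
SCENES = {
    "life": ["crossing obstacles", "lifting heavy objects", "carrying a pack"],
    "exercise": ["walking", "running"],
    "battle": ["high crawl", "crossing obstacles"],
}

def determine_target_scene(gender: str, branch: str, scene: str) -> dict:
    """Mimic the processing device's response to the user's selections."""
    if gender not in GENDERS or branch not in BRANCHES or scene not in SCENES:
        raise ValueError("unknown selection")
    return {"gender": gender, "branch": branch,
            "scene": scene, "actions": SCENES[scene]}

# User A's selections: male -> army -> exercise scene.
print(determine_target_scene("male", "army", "exercise"))
```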
The processing steps by which the user enters the target training scene in fig. 2 to 4 are explained in detail through the following embodiments.
Optionally, the target training scene includes: a coach virtual object, a trainee virtual object, and a training background.
The coach virtual object is a virtual character in the target training scene. It is obtained by collecting in advance the motion data of each joint of a real coach during training in the target training scene and simulating the coach's movements from those data. The real coach's movements during training can also be recorded for use as teaching material.
The trainee virtual object is a virtual object in the virtual training scene generated according to the motion data of each joint of the user during training in the target training scene.
The training background refers to the background picture of the target training scene; for example, it may include trees, hillsides, buildings, and the like.
The processing device 103 is specifically configured to generate a first video to be displayed according to the coach virtual object, the trainee virtual object, and the training background, and send the first video to be displayed to the virtual reality wearable device.
The first video to be displayed is the initial training scene picture shown when the user enters the target training scene, before the user has started to train. The processing device 103 is thus further configured to generate the first video to be displayed, a simulated real training environment, according to the coach virtual object, the trainee virtual object, and the training background, and to instruct the virtual reality wearable device 101 to display it.
After receiving the first video to be displayed from the processing device 103, the virtual reality wearable device 101 is specifically configured to display it on the first display screen, so that the user can watch the simulated real training environment through the device and enter the training process with greater focus; this improves immersion during training and effectively improves the training effect.
The following embodiments explain in detail how the processing device 103 is configured to analyze and process the motion data collected by the motion sensor while the user trains in the target training scene.
Optionally, the processing device 103 is specifically configured to: analyze the motion data, determine the target action of the trainee virtual object according to the analysis result, generate a second video to be displayed according to the target action, the preset action of the coach virtual object, and the training background, and send the second video to be displayed to the virtual reality wearable device.
The second video to be displayed is the training picture shown after the user has entered the target training scene and begun to train by following the training actions of the coach virtual object.
Specifically, the processing device 103 analyzes the motion data generated while the user trains to obtain analysis results such as the position, posture, and angle of each joint of the user at different moments, and then determines from those results the target action performed by the trainee virtual object in the target training scene. For example, the target action may be a training action such as running, crawling forward, or jumping. The device then generates the second video to be displayed from the trainee virtual object's target action, the coach virtual object's preset action, and the training background, and sends the generated video to the virtual reality wearable device 101.
The virtual reality wearable device 101 is specifically configured to display the second video to be displayed on the first display screen, so that the user sees the simulated virtual training scene presented in that video and can train inside it, completing the training process as if playing a game; this improves immersion during training and thereby effectively improves the training effect.
In addition, the user can watch himself or herself training in the virtual scene and track the movement of each joint. The user can also move and rotate within the simulated scene viewed from the virtual reality wearable device 101 so as to switch between different angles, watching the coach's specific exercise details from multiple viewpoints; this improves the standardization of the user's training actions and thereby improves training efficiency.
The following embodiments explain in detail how the processing device 103 is configured to analyze and process the motion data at multiple moments while the user trains in the target training scene.
It can be understood that the motion sensor 102 collects motion data at many moments while the user trains in the target training scene, and the position, posture, and angle of each joint differ from moment to moment. The motion data at each moment therefore needs to be analyzed separately, and the target action of the trainee virtual object in the virtual training scene at each moment is determined from the corresponding analysis result, so that the user's training action is tracked moment by moment.
For convenience of explanation, this embodiment is described using the analysis of the motion data at the current moment as an example.
Thus, the processing device 103 is specifically configured to: analyze the motion data at the current moment, determine the target action of the trainee virtual object at the current moment according to the analysis result, generate one frame of the second video to be displayed at the current moment according to that target action, the preset action of the coach virtual object at the current moment, and the training background, and send that frame to the virtual reality wearable device.
The second video to be displayed consists of multiple frames of images.
After the frame for the current moment has been obtained in this way, the frame for the next moment can be obtained in the same manner. The frames of the second video to be displayed are thus generated from the analysis results of the motion data at each moment and sent, frame by frame, to the virtual reality wearable device 101 (a sketch of this frame loop follows below). By wearing the device, the user watches the frame for the current moment and can correct his or her training action in time by comparing the trainee virtual object's target action with the coach virtual object's preset action at that moment, which improves how standard the user's movements are during training.
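The per-moment pipeline can be pictured as a frame loop: at each tick, analyze the newest motion data, pose the trainee virtual object, look up the coach virtual object's preset pose for that moment, and composite one frame for the headset. The sketch below is a minimal illustration under assumed names and data shapes, not the patent's actual rendering pipeline.

```python
def render_frame(background, trainee_pose, coach_pose):
    """Composite one image of the second video from the training background
    plus both avatars (a plain dict here; a real system would rasterize)."""
    return {"background": background, "trainee": trainee_pose, "coach": coach_pose}

def solve_pose(sample):
    """Placeholder for the motion-data analysis at one moment."""
    return {"t": sample["t"], "angles": sample["angles"]}

def training_loop(sensor_samples, coach_timeline, background="battle"):
    """One rendered frame per motion-data sample, sent to the headset in turn."""
    for sample in sensor_samples:
        trainee_pose = solve_pose(sample)   # target action at the current moment
        coach_pose = coach_timeline[min(sample["t"], len(coach_timeline) - 1)]
        yield render_frame(background, trainee_pose, coach_pose)

coach_timeline = ["stand", "crawl", "crawl", "jump"]     # preset actions
samples = [{"t": t, "angles": {"left_knee": 20 + t}} for t in range(4)]
for image in training_loop(samples, coach_timeline):
    print(image)
```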
Optionally, the processing device 103 is further configured to display the information to be displayed on the second display screen.
The information to be displayed may be the first video to be displayed, generated according to the coach virtual object, the trainee virtual object, and the training background, or the second video to be displayed, generated according to the target action, the preset action of the coach virtual object, and the training background.
To facilitate analysis of the motion data generated by the user during training in the target training scene, the first video to be displayed or the second video to be displayed may also be shown on the second display screen of the processing device 103. The processing device 103 can then compare the trainee virtual object's target action with the coach virtual object's preset action shown in those videos, detect in time any training action the user performs in a non-standard way, and promptly send the user a voice or text prompt (one plausible form of this comparison is sketched below), so that the user corrects the training action in time; this improves how standard the user's movements are during training and improves the training effect.
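One plausible realization of this comparison is a per-joint angle deviation check against the coach's preset action; the tolerance, joint names, and prompt wording below are assumptions, since the patent leaves the comparison method open.

```python
def check_form(trainee_angles: dict, coach_angles: dict, tolerance_deg: float = 15.0):
    """Return the joints whose angle deviates from the coach's preset action
    by more than the tolerance, so a voice or text prompt can be issued."""
    problems = []
    for joint, target in coach_angles.items():
        actual = trainee_angles.get(joint)
        if actual is None or abs(actual - target) > tolerance_deg:
            problems.append(joint)
    return problems

coach = {"left_knee": 90.0, "right_elbow": 45.0}    # preset action
trainee = {"left_knee": 70.0, "right_elbow": 48.0}  # parsed from motion data
for joint in check_form(trainee, coach):
    print(f"Prompt: please correct your {joint} position.")  # -> left_knee
```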
The implementation principles and beneficial effects of the virtual training control method, executed by the processing device in the virtual training control system, are described below through several specific embodiments.
Fig. 5 is a schematic flowchart of a virtual training control method according to an embodiment of the present disclosure. Optionally, the method may be executed by the processing device shown in fig. 1. As shown in fig. 5, the method includes:
s501, acquiring motion data, collected by a motion sensor, of a user in a training process under a target training scene.
Wherein the motion data includes: the position, the posture and the angle of each joint part of the user, and the target training scene are displayed on the first display screen.
Specifically, the motion sensors are worn on the respective joint portions of the user, and the changes of the respective joints caused by the user performing different exercise motions are different. Therefore, the information of the position, the posture and the angle of each joint part of the user when the user performs different training motions can be collected through the motion sensor.
Illustratively, the target training scene may be a battle scene selected by the user, in which the preset actions performed by the coach virtual object at each moment are displayed.
S502, analyzing the motion data to obtain information to be displayed.
The information to be displayed may be a moving picture of the user in the training process performed in the target training scene.
In this embodiment, the motion data may be parsed as follows: first, the positions of the user's joints are traced in sequence to obtain at least one human body line; next, the body lines are rendered to determine the position of the user's trainee virtual object in the target training scene; finally, the posture and angle of each joint are solved using the inertial navigation principle to obtain the trainee virtual object's target training action, from which the information to be displayed is obtained. A minimal sketch of these steps follows below.
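The following minimal sketch walks through those steps under two simplifying assumptions: "tracing" the joints is taken to mean connecting adjacent joint positions into body-line segments, and the pose solve is reduced to computing a joint angle from the two adjacent segments. The patent's actual inertial-navigation solution is not spelled out, so everything here is illustrative.

```python
import math

# Skeleton topology: each "human body line" connects adjacent joints.
BODY_LINES = [("hip", "knee"), ("knee", "ankle")]

def trace_body_lines(positions: dict):
    """Step 1: trace the joint positions in sequence into line segments."""
    return [(positions[a], positions[b]) for a, b in BODY_LINES]

def joint_angle(p_prev, p_joint, p_next):
    """Step 3: the angle at a joint from its two adjacent segments (degrees)."""
    v1 = (p_prev[0] - p_joint[0], p_prev[1] - p_joint[1])
    v2 = (p_next[0] - p_joint[0], p_next[1] - p_joint[1])
    dot = v1[0] * v2[0] + v1[1] * v2[1]
    norm = math.hypot(*v1) * math.hypot(*v2)
    return math.degrees(math.acos(dot / norm))

positions = {"hip": (0.0, 1.0), "knee": (0.1, 0.5), "ankle": (0.0, 0.0)}
lines = trace_body_lines(positions)  # step 2 would render these to place the avatar
knee = joint_angle(positions["hip"], positions["knee"], positions["ankle"])
print(len(lines), round(knee, 1))    # 2 segments, knee angle of about 157 degrees
```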
S503, instructing the virtual reality wearable device to display the information to be displayed.
Building on the above embodiments, the processing device sends the obtained information to be displayed to the virtual reality wearable device and instructs it to display that information, so that the complete motion picture of the training process appears on the device worn by the user. The user thus enters the target training scene in virtual reality and gains an immersive, on-the-spot feeling; immersion and interactivity during training are strengthened, the user's enthusiasm for participating in training is greatly raised, and training efficiency is improved.
To sum up, an embodiment of the present application provides a virtual training control method, including: acquiring the motion data, collected by the motion sensor, of the user during training performed in the target training scene, the motion data including the position, posture, and angle of each joint of the user, with the target training scene displayed on the first display screen; analyzing the motion data to obtain information to be displayed; and instructing the virtual reality wearable device to display that information. In this scheme, the processing device analyzes the motion data collected by the motion sensor into information to be displayed and has it shown on the virtual reality wearable device worn by the user, so the complete motion picture of the training process is presented there; the user enters the target training scene in virtual reality with an immersive, on-the-spot feeling, immersion and interactivity during training are strengthened, enthusiasm for training is greatly raised, and training efficiency is improved.
Optionally, the processing device includes: a second display screen on which a selection control is displayed; the method further includes:
determining the target training scene in response to the user's selection operation on at least one selection control, where the selection controls include: a gender selection control, a service branch selection control, and a scene selection control, and the scene selection control includes: a life scene selection control, an exercise scene selection control, and a battle scene selection control.
The target training scene includes: a coach virtual object, a trainee virtual object, and a training background; the method further includes:
generating a first video to be displayed according to the coach virtual object, the trainee virtual object, and the training background, and sending the first video to be displayed to the virtual reality wearable device, so that it is displayed on the virtual reality wearable device.
Optionally, analyzing the motion data to obtain the information to be displayed includes:
analyzing the motion data, determining the target action of the trainee virtual object according to the analysis result, generating a second video to be displayed according to the target action, the preset action of the coach virtual object, and the training background, and sending the second video to be displayed to the virtual reality wearable device, so that it is displayed on the virtual reality wearable device.
Optionally, this analyzing and generating step includes:
analyzing the motion data at the current moment, determining the target action of the trainee virtual object at the current moment according to the analysis result, generating one frame of the second video to be displayed at the current moment according to that target action, the preset action of the coach virtual object at the current moment, and the training background, and sending that frame to the virtual reality wearable device.
Optionally, the method further includes: displaying the information to be displayed on the second display screen.
In addition, because each person's height and limb proportions differ, information about the user's body may be collected before training begins; this improves the accuracy of the motion data that the motion sensor collects for each joint during training. The virtual training control method provided in the embodiment of the present application therefore further includes:
collecting the user's height and the proportions of key bone segments; then collecting posture information for at least one of the standing, sitting, and squatting postures; and then using that posture information to correct the recorded height and bone-segment proportions, so that the processing device can automatically distinguish the relative proportions of the user's head, limbs, waist, and so on from the motion data collected by the motion sensor. A minimal sketch of this calibration follows below.
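A minimal sketch of this calibration step, assuming it amounts to averaging segment-proportion estimates taken across the reference postures and scaling a nominal skeleton by the user's height; the patent does not detail the correction formula, and the numbers below are illustrative.

```python
def calibrate(height_m: float, nominal_ratios: dict, posture_measurements: list):
    """Correct nominal bone-segment proportions using measurements taken while
    the user holds reference postures (standing, sitting, squatting)."""
    corrected = {}
    for segment, nominal in nominal_ratios.items():
        observed = [m[segment] for m in posture_measurements if segment in m]
        ratio = sum(observed) / len(observed) if observed else nominal
        corrected[segment] = round(ratio * height_m, 3)  # segment length in metres
    return corrected

nominal = {"torso": 0.30, "leg": 0.48, "arm": 0.44}   # fractions of body height
measurements = [
    {"torso": 0.31, "leg": 0.47},   # standing
    {"torso": 0.29, "leg": 0.49},   # sitting
    {"leg": 0.48},                  # squatting
]
print(calibrate(1.75, nominal, measurements))
```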
Optionally, the implementation steps and beneficial effects of the virtual training control method provided in the embodiment of the present application have been described in detail in the specific embodiments above and are not repeated here.
The following describes the virtual training control apparatus and the storage medium provided in the present application for executing the virtual training; for their specific implementation and technical effects, refer to the description above, which is not repeated below.
Fig. 6 is a schematic structural diagram of a virtual training control apparatus according to an embodiment of the present disclosure. Optionally, the apparatus may be implemented in the processing device of the virtual training control system shown in fig. 1. The apparatus includes:
an obtaining module 601, configured to obtain the motion data, collected by the motion sensor, of the user during training performed in the target training scene, where the motion data includes: the position, posture, and angle of each joint of the user, and the target training scene is a scene displayed on the first display screen;
an analysis module 602, configured to analyze the motion data to obtain the information to be displayed;
an indicating module 603, configured to instruct the virtual reality wearable device to display the information to be displayed.
Optionally, the processing device includes: a second display screen on which a selection control is displayed; the apparatus further includes:
a response module, configured to determine the target training scene in response to the user's selection operation on at least one selection control, where the selection controls include: a gender selection control, a service branch selection control, and a scene selection control, and the scene selection control includes: a life scene selection control, an exercise scene selection control, and a battle scene selection control.
Optionally, the target training scene includes: a coach virtual object, a trainee virtual object, and a training background; the apparatus further includes:
a generating module, configured to generate a first video to be displayed according to the coach virtual object, the trainee virtual object, and the training background;
a sending module, configured to send the first video to be displayed to the virtual reality wearable device, so that it is displayed on the virtual reality wearable device.
Optionally, the analysis module 602 is further configured to analyze the motion data;
the generating module is further configured to determine the target action of the trainee virtual object according to the analysis result and to generate a second video to be displayed according to the target action, the preset action of the coach virtual object, and the training background;
the sending module is further configured to send the second video to be displayed to the virtual reality wearable device, so that it is displayed on the virtual reality wearable device.
Optionally, the analysis module 602 is further configured to analyze the motion data at the current moment;
the generating module is further configured to determine the target action of the trainee virtual object at the current moment according to the analysis result and to generate one frame of the second video to be displayed at the current moment according to that target action, the preset action of the coach virtual object at the current moment, and the training background;
the sending module is further configured to send that frame of the second video to be displayed at the current moment to the virtual reality wearable device.
Optionally, the apparatus further comprises:
a display module, configured to display the information to be displayed on the second display screen.
The above-mentioned apparatus is used for executing the method provided by the foregoing embodiment, and the implementation principle and technical effect are similar, which are not described herein again.
These modules may be one or more integrated circuits configured to implement the above methods, for example: one or more Application Specific Integrated Circuits (ASICs), one or more Digital Signal Processors (DSPs), or one or more Field Programmable Gate Arrays (FPGAs). As another example, when one of these modules is implemented by a processing element scheduling program code, the processing element may be a general-purpose processor, such as a Central Processing Unit (CPU) or another processor capable of calling program code. As yet another example, these modules may be integrated together and implemented in the form of a system-on-a-chip (SoC).
Fig. 7 is a schematic structural diagram of a processing device according to an embodiment of the present application. The processing device may be integrated into a terminal device or a chip of a terminal device, where the terminal may be a computing device with data processing functions.
The processing apparatus includes: a processor 701, a memory 702.
The memory 702 is used for storing programs, and the processor 701 calls the programs stored in the memory 702 to execute the above method embodiments. The specific implementation and technical effects are similar, and are not described herein again.
Optionally, the present invention also provides a program product, for example a computer-readable storage medium, including a program that, when executed by a processor, performs the above method embodiments.
In the embodiments provided in the present invention, it should be understood that the disclosed apparatus and method may be implemented in other ways. For example, the above-described apparatus embodiments are merely illustrative, and for example, the division of the units is only one logical division, and other divisions may be realized in practice, for example, a plurality of units or components may be combined or integrated into another system, or some features may be omitted, or not executed. In addition, the shown or discussed mutual coupling or direct coupling or communication connection may be an indirect coupling or communication connection through some interfaces, devices or units, and may be in an electrical, mechanical or other form.
The units described as separate parts may or may not be physically separate, and the parts shown as units may or may not be physical units; they may be located in one place or distributed over a plurality of network units. Some or all of the units may be selected according to actual needs to achieve the purpose of the solution of this embodiment.
In addition, the functional units in the embodiments of the present invention may be integrated into one processing unit, each unit may exist alone physically, or two or more units may be integrated into one unit. The integrated unit may be implemented in the form of hardware, or in the form of hardware plus a software functional unit.
The integrated unit implemented in the form of a software functional unit may be stored in a computer-readable storage medium. The software functional unit is stored in a storage medium and includes several instructions to enable a computer device (which may be a personal computer, a server, or a network device) or a processor to execute some of the steps of the methods according to the embodiments of the present invention. The aforementioned storage medium includes various media capable of storing program code, such as a USB flash drive, a removable hard disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk, or an optical disk.

Claims (10)

1. A virtual training control system, characterized in that the virtual training control system comprises: a virtual reality wearable device, a motion sensor and a processing device, wherein the virtual reality wearable device comprises: a first display screen; and the virtual reality wearable device and the motion sensor are each in communication connection with the processing device;
the virtual reality wearable device is used for displaying a target training scene on the first display screen;
the motion sensor is configured to collect motion data of a user in a training process executed under the target training scenario, where the motion data includes: the position, posture and angle of each joint part of the user;
the processing device is used for determining the target training scene based on the selection operation of the user and instructing the virtual reality wearable device to display the target training scene;
the processing device is further configured to receive the motion data acquired by the motion sensor, analyze the motion data to obtain information to be displayed, and instruct the virtual reality wearable device to display the information to be displayed.
2. The system of claim 1, wherein the processing device comprises: a second display screen;
the processing device is specifically configured to display selection controls on the second display screen and determine the target training scenario in response to the user's selection operation on at least one selection control, where the selection controls include: a gender selection control, a military service selection control and a scene selection control, and the scene selection control includes: a life scene selection control, an exercise scene selection control and a battle scene selection control (a minimal sketch of this selection flow follows the claims).
3. The system of claim 2, wherein the target training scenario comprises: a coach virtual object, a trainee virtual object, and a training background;
the processing device is specifically configured to generate a first video to be displayed according to the coach virtual object, the trainee virtual object and the training background, and send the first video to be displayed to the virtual reality wearable device;
the virtual reality wearable device is specifically used for displaying the first video to be displayed on the first display screen.
4. The system of claim 3, wherein the processing device is specifically configured to: analyze the motion data, determine a target action of the trainee virtual object according to the analysis result, generate a second video to be displayed according to the target action, a preset action of the coach virtual object and the training background, and send the second video to be displayed to the virtual reality wearable device;
the virtual reality wearable device is specifically configured to display the second video to be displayed on the first display screen.
5. The system of claim 4, wherein the processing device is specifically configured to: analyze the motion data at the current moment, determine the target action of the trainee virtual object at the current moment according to the analysis result, generate one frame image of the second video to be displayed at the current moment according to the target action at the current moment, the preset action of the coach virtual object at the current moment and the training background, and send the frame image of the second video to be displayed at the current moment to the virtual reality wearable device;
the virtual reality wearable device is specifically configured to display the frame image of the second video to be displayed at the current moment on the first display screen.
6. The system of any of claims 2-5, wherein the processing device is further configured to: and displaying the information to be displayed on the second display screen.
7. A virtual training control method, characterized in that it is applied to a processing device in a virtual training control system, the system comprising: a virtual reality wearable device, a motion sensor and the processing device, wherein the virtual reality wearable device comprises: a first display screen; and the virtual reality wearable device and the motion sensor are each in communication connection with the processing device;
the method comprises the following steps:
acquiring motion data of a user in a training process executed under a target training scene, wherein the motion data is acquired by the motion sensor and comprises: the position, the posture and the angle of each joint part of the user, and the target training scene is a scene displayed on the first display screen;
analyzing the motion data to obtain information to be displayed;
and instructing the virtual reality wearable device to display the information to be displayed.
8. A virtual training control apparatus, applied to a processing device in a virtual training control system, the system comprising: a virtual reality wearable device, a motion sensor and the processing device, wherein the virtual reality wearable device comprises: a first display screen; and the virtual reality wearable device and the motion sensor are each in communication connection with the processing device;
the device comprises:
an obtaining module, configured to obtain motion data of a user in a training process performed in a target training scenario, where the motion data is collected by the motion sensor, and the motion data includes: the position, the posture and the angle of each joint part of the user, and the target training scene is a scene displayed on the first display screen;
the analysis module is used for analyzing the motion data to obtain information to be displayed;
and an indicating module, configured to instruct the virtual reality wearable device to display the information to be displayed.
9. A processing device, comprising: a processor, a storage medium and a bus, the storage medium storing machine-readable instructions executable by the processor, the processor and the storage medium communicating over the bus when the processing device is operating, the processor executing the machine-readable instructions to perform the steps of the method of claim 7.
10. A computer-readable storage medium, in which a computer program is stored which, when being executed by a processor, carries out the steps of the method as claimed in claim 7.
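To make the scenario-selection flow of claim 2 concrete, here is a minimal sketch; the option values and names below (genders, services, scene types) are hypothetical placeholders, since the application does not enumerate them:

```python
from dataclasses import dataclass

# Hypothetical option tables; the application does not list concrete values.
GENDERS = ("male", "female")
SERVICES = ("army", "navy", "air_force")
SCENES = ("life", "exercise", "battle")

@dataclass(frozen=True)
class TargetTrainingScene:
    gender: str
    service: str
    scene: str

def on_selection(gender: str, service: str, scene: str) -> TargetTrainingScene:
    """Determine the target training scene from the user's selection
    operation on the gender, service and scene selection controls."""
    if gender not in GENDERS or service not in SERVICES or scene not in SCENES:
        raise ValueError("unsupported selection")
    return TargetTrainingScene(gender, service, scene)

# e.g. on_selection("male", "army", "battle") yields the scene whose
# video the processing device would then send to the headset.
```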
CN202110916263.7A 2021-08-11 2021-08-11 Virtual training control system, method, device, equipment and storage medium Pending CN113593348A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110916263.7A CN113593348A (en) 2021-08-11 2021-08-11 Virtual training control system, method, device, equipment and storage medium


Publications (1)

Publication Number Publication Date
CN113593348A true CN113593348A (en) 2021-11-02

Family

ID=78256990

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110916263.7A Pending CN113593348A (en) 2021-08-11 2021-08-11 Virtual training control system, method, device, equipment and storage medium

Country Status (1)

Country Link
CN (1) CN113593348A (en)


Patent Citations (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104991639A (en) * 2015-05-27 2015-10-21 中国康复研究中心 Virtual reality rehabilitation training system and method
WO2017219226A1 (en) * 2016-06-21 2017-12-28 马玉琴 Rehabilitation training system, computer, smart mechanical arm and virtual reality helmet
US20180126241A1 (en) * 2016-11-10 2018-05-10 National Taiwan University Augmented learning system for tai-chi chuan with head-mounted display
CN107221223A (en) * 2017-06-01 2017-09-29 北京航空航天大学 A kind of band is strong/the virtual reality aircraft cockpit system of touch feedback
EP3809967A1 (en) * 2018-07-23 2021-04-28 MVI Health Inc. Systems and methods for physical therapy
CN110302524A (en) * 2019-05-22 2019-10-08 北京百度网讯科技有限公司 Limbs training method, device, equipment and storage medium
CN110413112A (en) * 2019-07-11 2019-11-05 安徽皖新研学教育有限公司 A kind of safety experience educational system and its method based on virtual reality technology
RU2738489C1 (en) * 2020-04-03 2020-12-14 Общество с ограниченной ответственностью "Центр тренажеростроения и подготовки персонала" Educational training-simulator complex for cosmonaut preparation for intra-ship activity
CN112642133A (en) * 2020-11-24 2021-04-13 杭州易脑复苏科技有限公司 Rehabilitation training system based on virtual reality
CN112717343A (en) * 2020-11-27 2021-04-30 杨凯 Method and device for processing sports data, storage medium and computer equipment

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114822127A (en) * 2022-04-20 2022-07-29 深圳市铱硙医疗科技有限公司 Training method and training device based on virtual reality equipment
CN114822127B (en) * 2022-04-20 2024-04-02 深圳市铱硙医疗科技有限公司 Training method and training device based on virtual reality equipment
CN115311918A (en) * 2022-08-01 2022-11-08 广东虚拟现实科技有限公司 Virtual-real fusion training system and method
CN115311918B (en) * 2022-08-01 2023-11-17 广东虚拟现实科技有限公司 Virtual-real fusion training system and method

Similar Documents

Publication Publication Date Title
CN109432753B (en) Action correcting method, device, storage medium and electronic equipment
CN109298779B (en) Virtual training system and method based on virtual agent interaction
CN107005585B (en) Method and system for event mode guided mobile content services
CN113593348A (en) Virtual training control system, method, device, equipment and storage medium
US10661148B2 (en) Dual motion sensor bands for real time gesture tracking and interactive gaming
CN107281728B (en) Sensor-matched augmented reality skiing auxiliary training system and method
CN112198959A (en) Virtual reality interaction method, device and system
US20180272189A1 (en) Apparatus and method for breathing and core muscle training
JP2007264055A (en) Training system and training method
CN108595004A (en) More people's exchange methods, device and relevant device based on Virtual Reality
US20180261120A1 (en) Video generating device, method of controlling video generating device, display system, video generation control program, and computer-readable storage medium
WO2018173383A1 (en) Information processing device, information processing method, and program
Ali et al. Virtual reality as a physical training assistant
CN104511079A (en) Method for removing psychological disorder for patient by virtual technology
CN108665755B (en) Interactive training method and interactive training system
Echeverria et al. KUMITRON: Artificial intelligence system to monitor karate fights that synchronize aerial images with physiological and inertial signals
CN113409651A (en) Live broadcast fitness method and system, electronic equipment and storage medium
CN114758415A (en) Model control method, device, equipment and storage medium
CN117148977B (en) Sports rehabilitation training method based on virtual reality
Echeverria et al. Punch Anticipation in a Karate Combat with Computer Vision
CN113051973A (en) Method and device for posture correction and electronic equipment
CN116099181A (en) Upper limb strength training auxiliary system based on universe and application method thereof
McGregor et al. New approaches for integration: Integration of haptic garments, big data analytics, and serious games for extreme environments
CN108969864A (en) Depression recovery therapeutic equipment and its application method based on VR technology
Paay et al. Weight-Mate: Adaptive training support for weight lifting

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication (application publication date: 20211102)