CN114220312B - Virtual training method, device and virtual training system - Google Patents


Info

Publication number
CN114220312B
Authority
CN
China
Prior art keywords
image
virtual
training
frequency domain
trainer
Prior art date
Legal status
Active
Application number
CN202210072468.6A
Other languages
Chinese (zh)
Other versions
CN114220312A (en)
Inventor
何惠东
韩鹏
张浩
陈丽莉
姜倩文
杜伟华
石娟娟
秦瑞峰
Current Assignee
BOE Technology Group Co Ltd
Beijing BOE Display Technology Co Ltd
Original Assignee
BOE Technology Group Co Ltd
Beijing BOE Display Technology Co Ltd
Priority date
Filing date
Publication date
Application filed by BOE Technology Group Co Ltd and Beijing BOE Display Technology Co Ltd
Priority to CN202210072468.6A
Publication of CN114220312A
Application granted
Publication of CN114220312B
Legal status: Active
Anticipated expiration


Classifications

    • G - PHYSICS
    • G09 - EDUCATION; CRYPTOGRAPHY; DISPLAY; ADVERTISING; SEALS
    • G09B - EDUCATIONAL OR DEMONSTRATION APPLIANCES; APPLIANCES FOR TEACHING, OR COMMUNICATING WITH, THE BLIND, DEAF OR MUTE; MODELS; PLANETARIA; GLOBES; MAPS; DIAGRAMS
    • G09B 9/00 - Simulators for teaching or training purposes
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F 3/00 - Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F 3/01 - Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F 3/011 - Arrangements for interaction with the human body, e.g. for user immersion in virtual reality

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • Business, Economics & Management (AREA)
  • Educational Administration (AREA)
  • Educational Technology (AREA)
  • Human Computer Interaction (AREA)
  • Processing Or Creating Images (AREA)

Abstract

The embodiment of the invention relates to a virtual training method, a device and a virtual training system, wherein the method comprises the following steps: in the process of virtual training of a trainer, acquiring limb posture data of the trainer through wearable equipment, and acquiring a real scene image of the trainer during virtual training through an image acquisition module; generating a virtual training scene matched with a preset training mode based on the real scene image; determining a moving track of a virtual object in the virtual training scene based on the real scene image and the limb posture data; and moving the virtual object in the virtual training scene according to the movement track so as to realize virtual control of the virtual object through limbs of a trainer in the virtual training process. Therefore, a relatively real simulated training effect can be realized.

Description

Virtual training method, device and virtual training system
Technical Field
The embodiment of the invention relates to the field of virtual reality, in particular to a virtual training method, a virtual training device and a virtual training system.
Background
Over the course of long-term social practice, various physical activities have gradually developed; those beneficial to mind and body have been consciously adopted, and from them sports have gradually taken shape. Sports continue to be updated and to evolve with advances in society and science.
For some sports, such as football, basketball and baseball, the athlete (whether a professional or an amateur who loves the game) needs to train in order to perform well, for example by training football tactics, shooting, serving and the like.
However, in a real training scenario, especially for non-professional athletes, their training is easily constrained by a variety of factors (e.g., field, personnel, time, etc.). Therefore, a method for realizing virtual training and enabling a trainer to experience a real simulated training effect is needed.
Disclosure of Invention
In view of this, the embodiment of the invention provides a virtual training method, a virtual training device and a virtual training system, so as to realize a real simulated training effect.
In a first aspect, an embodiment of the present invention provides a virtual training method, where the method includes:
In the process of virtual training of a trainer, acquiring limb posture data of the trainer through wearable equipment, and acquiring a real scene image of the trainer during virtual training through an image acquisition module;
generating a virtual training scene matched with a preset training mode based on the real scene image;
Determining a moving track of a virtual object in the virtual training scene based on the real scene image and the limb posture data;
And moving the virtual object in the virtual training scene according to the movement track so as to realize virtual control of the virtual object through limbs of a trainer in the virtual training process.
Optionally, the method further comprises:
in the process of virtual training of a trainer, head posture data of the trainer are obtained through head-mounted equipment;
The generating a virtual training scene matched with a preset training mode based on the real scene image comprises the following steps:
Generating a virtual scene image based on the head pose data and a preset training mode;
And fusing the virtual scene image and the real scene image to obtain a virtual training scene matched with the training mode, wherein the head gesture data and the real scene image correspond to the same acquisition time.
Optionally, the fusing the virtual scene image and the real scene image includes:
Dividing the virtual scene image and the real scene image respectively based on a preset color space to obtain a first image subset of the virtual scene image in each subspace included in the color space and a second image subset of the real scene image in each subspace;
and respectively fusing each first image subset with a corresponding second image subset, wherein the first image subset and the corresponding second image subset correspond to the same subspace.
Optionally, the dividing the virtual scene image and the real scene image based on a preset color space to obtain a first image subset of the virtual scene image in each subspace included in the color space, and obtaining a second image subset of the real scene image in each subspace includes:
For each pixel point in the virtual scene image and the real scene image, determining a corresponding subspace from a preset color space according to the component value of the pixel point under each color channel, wherein the component value of the pixel point under each color channel respectively falls into the component range of the corresponding subspace under the color channel;
The pixel points corresponding to the same subspace in the virtual scene image are classified into the same first image subset, so that a first image subset of the virtual scene image under each subspace included in the color space is obtained;
and classifying the pixel points corresponding to the same subspace in the real scene image into the same second image subset to obtain a second image subset of the real scene image under each subspace included in the color space.
Optionally, the fusing each of the first image subsets with its corresponding second image subset includes:
For each first image subset, respectively carrying out wavelet transformation on the first image subset and a second image subset corresponding to the first image subset to obtain a frequency domain value of each pixel point in the first image subset and a second image subset corresponding to the first image subset;
According to the frequency domain values, respectively carrying out frequency domain division on the first image subset and the second image subset corresponding to the first image subset to obtain M first frequency domain areas corresponding to the first image subset and M second frequency domain areas corresponding to the second image subset;
And respectively fusing each first frequency domain region with a corresponding second frequency domain region, wherein the first frequency domain region and the corresponding second frequency domain region correspond to the same frequency domain.
Optionally, the fusing each of the first frequency domain regions with a corresponding second frequency domain region includes:
determining a first low-frequency band energy value of the first frequency domain region and a second low-frequency band energy value of a second frequency domain region corresponding to the first frequency domain region for each of the first frequency domain regions;
determining a target image fusion strategy according to the first low-frequency band energy value and the second low-frequency band energy value;
and fusing the first frequency domain region and the second frequency domain region corresponding to the first frequency domain region according to the target image fusion strategy.
Optionally, the determining the target image fusion strategy according to the absolute value of the difference between the first low-frequency band energy value and the second low-frequency band energy value includes:
determining an absolute value of a difference between the first low-band energy value and the second low-band energy value;
Comparing the absolute value with a preset threshold value;
if the absolute value is smaller than the preset threshold value, determining a first image fusion strategy as a target image fusion strategy;
and if the comparison result indicates that the absolute value is greater than or equal to the preset threshold value, determining a second image fusion strategy as the target image fusion strategy.
Optionally, the determining, based on the real scene image and the limb gesture data, a moving track of a virtual object in the virtual training scene includes:
Determining a virtual collision position of limbs of the trainer and the virtual object in the virtual training scene based on the real scene image;
And determining a moving track of the virtual object in the virtual training scene based on the limb gesture data, wherein the moving track takes the virtual collision position as a starting point.
Optionally, the method further comprises:
in the process of virtual training of a trainer, obtaining stress data of the virtual object through the wearable equipment;
Updating the current state transition matrix based on the stress data;
the determining, based on the limb gesture data, a movement track of a virtual object in the virtual training scene includes:
Based on the updated state transition matrix and the limb posture data, determining the current system state of the trainer by using a Kalman filtering algorithm;
And determining the moving track of the virtual object in the virtual training scene based on the current system state of the trainer.
Optionally, the updating the current state transition matrix based on the stress data includes:
constructing a measurement noise covariance matrix based on the stress data;
and performing setting operation on the current state transition matrix by using the measurement noise covariance matrix to obtain an updated state transition matrix.
Optionally, the constructing a measurement noise covariance matrix based on the stress data includes:
and carrying out weighted summation on the measurement noise covariance matrix constructed according to the stress data under each dimension according to preset weights to obtain a final measurement noise covariance matrix.
In a second aspect, an embodiment of the present invention provides a virtual training system, the system including:
The wearable device is used for collecting limb posture data of a trainer in the process of virtual training of the trainer;
The image acquisition module is used for acquiring a real scene image of a trainer during virtual training in the process of the virtual training of the trainer;
The head-mounted display device is used for acquiring limb posture data of a trainer through the wearable device and acquiring a real scene image of the trainer during virtual training through the image acquisition module in the virtual training process of the trainer; generating a virtual training scene matched with a preset training mode based on the real scene image; determining a moving track of a virtual object in the virtual training scene based on the real scene image and the limb posture data; and moving the virtual object in the virtual training scene according to the movement track so as to realize virtual control of the virtual object through limbs of a trainer in the virtual training process.
In a third aspect, an embodiment of the present invention provides a virtual training apparatus, including:
the first acquisition module is used for acquiring limb posture data of a trainer through wearable equipment in the process of virtual training of the trainer;
The second acquisition module is used for acquiring a real scene image of a trainer during virtual training by the trainer through the image acquisition module in the process of virtual training by the trainer;
The virtual scene generation module is used for generating a virtual training scene matched with a preset training mode based on the real scene image;
The track determining module is used for determining the moving track of the virtual object in the virtual training scene based on the real scene image and the limb gesture data;
And the moving module is used for moving the virtual object in the virtual training scene according to the moving track so as to realize virtual control of the virtual object through limbs of a trainer in the virtual training process.
In a fourth aspect, an embodiment of the present invention provides an electronic device, including: a processor and a memory;
The processor is configured to execute a virtual training program stored in the memory to implement the virtual training method according to any one of the first aspects.
In a fifth aspect, embodiments of the present invention provide a computer readable storage medium having a computer program stored therein, which when executed by a processor, implements the steps of the method of any of the first aspects.
According to the technical scheme provided by the embodiment of the invention, in the process of virtual training of a trainer, limb posture data of the trainer is obtained through the wearable equipment, and a real scene image of the trainer during virtual training is obtained through the image acquisition module, a virtual training scene matched with a preset training mode is generated based on the real scene image, a moving track of a virtual object in the virtual training scene is determined based on the real scene image and the limb posture data, the virtual object in the virtual training scene is moved according to the moving track, virtual control of the virtual object through limbs of the trainer in the virtual training process can be realized, so that virtual training is realized, and compared with training in the real scene, the virtual training can save human resources and physical strength of athletes; in addition, the virtual training scene is generated based on the real scene image, so that the fusion of the real scene and the virtual scene can be realized, and the sense of reality of user experience is enhanced; and because the virtual training scene under the preset training mode is generated, the virtual training of multiple modes can be realized, and thus, the more real simulated training effect is achieved.
Drawings
FIG. 1 is a schematic diagram of a virtual training system according to an embodiment of the present invention;
FIG. 2 is a schematic diagram of a special training shoe according to an embodiment of the present invention;
FIG. 3 is a flowchart of an embodiment of a virtual training method according to an embodiment of the present invention;
FIG. 4 is a flowchart of another embodiment of a virtual training method according to an embodiment of the present invention;
FIG. 5 is a flowchart of an embodiment of a method for virtual training according to an embodiment of the present invention;
FIG. 6 is a flowchart of an embodiment of fusing a virtual scene image and a real scene image according to an embodiment of the present invention;
FIG. 7 is an example of an RGB color space;
FIG. 8 is a block diagram of an embodiment of a virtual training apparatus according to an embodiment of the present invention;
Fig. 9 is a schematic structural diagram of an electronic device according to an embodiment of the present invention.
Detailed Description
For the purpose of making the objects, technical solutions and advantages of the embodiments of the present invention more apparent, the technical solutions of the embodiments of the present invention will be clearly and completely described below with reference to the accompanying drawings in the embodiments of the present invention, and it is apparent that the described embodiments are some embodiments of the present invention, but not all embodiments of the present invention. All other embodiments, which can be made by those skilled in the art based on the embodiments of the invention without making any inventive effort, are intended to be within the scope of the invention.
Referring to fig. 1, a schematic architecture diagram of a virtual training system according to an embodiment of the present invention is shown. Included in the virtual training system 100 illustrated in fig. 1 are a wearable device 101, a head mounted display device 102, and an image acquisition module 103.
Wherein the wearable device 101 may be worn on a limb by a trainer. For example, in a football virtual training scenario, wearable device 101 may be embodied as a dedicated training shoe to be worn by a trainer on the foot. Referring to fig. 2, a schematic diagram of a special training shoe according to an embodiment of the present invention is provided. For another example, in a basketball or baseball training scenario, the wearable device 101 may be embodied as a dedicated training glove that is worn by a trainer on the hands.
In the embodiment of the present invention, regardless of the implementation form of the wearable device 101, the wearable device may include the following components: an MCU control chip, a wireless communication module (e.g., a Wi-Fi module), and various sensors (including but not limited to gyroscopes, acceleration sensors, pressure sensors, geomagnetic sensors, etc.). For example, the special training shoe illustrated in fig. 2 is provided with 5 sensors, indicated by reference numerals ① to ⑤.
The MCU control chip can be responsible for providing control signals and clocks of all sensors, receiving and preprocessing data acquired by all sensors, and the like.
The wireless communication module may be responsible for data transmission between the wearable device 101 and the head-mounted display device 102.
The various sensors are responsible for acquiring the limb posture data of the trainer. It will be appreciated that when the wearable device 101 is embodied as a dedicated training shoe, the wearable device 101 is responsible for gathering the foot posture data of the trainer; when the wearable device 101 is embodied as a dedicated training glove, it is responsible for collecting the hand posture data of the trainer. Further, when the plurality of sensors include a gyroscope, the limb posture data of the trainer include rotation angle information of the trainer's limb; when the plurality of sensors include an acceleration sensor, the limb posture data include acceleration information of the trainer's limb; and so on.
The head mounted display device 102 may be worn by a trainer on the head. Alternatively, the head mounted display device 102 may be mixed reality glasses, i.e., VR glasses employing mixed reality technology. The mixed reality technology is a further development of the virtual reality technology, and by presenting virtual scene information in a real scene, an information loop for interactive feedback is built among the real world, the virtual world and a user, so that the sense of reality of user experience is enhanced. The mixed reality technology is used as a universal technology and is widely applied to various fields such as industry, design, exhibition, construction, medical treatment, education and the like based on the characteristics of holographic display, space positioning and the like of the mixed reality technology.
In an embodiment of the present invention, the head-mounted display device 102 may include the following components: processors (with strong image rendering capabilities), display screens, wireless communication modules (e.g., wi-Fi modules), one or more sensors, etc.
The processor may be responsible for powering, interface communication, data processing, image fusion, rendering, etc. of the head mounted display device 102.
The display screen is responsible for displaying virtual scenes, such as virtual training scenes. Alternatively, the display screen may be a high definition display screen.
The wireless communication module may be responsible for data transmission between the head mounted display device 102 and the wearable device 101.
The sensor can be used for collecting head posture data of a trainer. Alternatively, the sensor may comprise a gyroscope.
The image acquisition module 103 is configured to acquire a real scene image, and transmit the acquired real scene image to a processor of the head-mounted display device 102, so that the processor fuses the real scene image with a virtual scene to obtain the virtual scene that can enable a trainer to have a real experience.
Alternatively, the image acquisition module 103 may be integrated into the head-mounted display device 102, for example, the head-mounted display device 102 may further include a high-definition camera, which is the image acquisition module 103. The image acquisition module 103 may also be independent of the head mounted display device 102. The embodiments of the present invention are not limited in this regard.
This completes the description of the system architecture of the virtual training system 100 shown in FIG. 1. The virtual training method provided by the embodiment of the present invention is explained below through specific embodiments based on the virtual training system 100 illustrated in fig. 1; these embodiments do not limit the present invention.
Referring to fig. 3, a flowchart of an embodiment of a virtual training method is provided in an embodiment of the present invention. As shown in fig. 3, the process may include the steps of:
Step 301, in the process of virtual training of a trainer, acquiring limb posture data of the trainer through a wearable device, and acquiring a real scene image of the trainer during virtual training through an image acquisition module.
In practice, when performing virtual training according to the virtual training method provided in the embodiment of the present invention, a trainer first wears the head-mounted display device 102 illustrated in fig. 1 and secures the wearable device 101 on the corresponding limb portion.
After the above-mentioned preparation, in an embodiment, the trainer may also select a training mode through the operation interface of the display screen on the head-mounted display device 102, for example, in the football virtual training scenario, the trainer may select one of the following training modes: tactical training, shooting training, pass training, etc. The trainer then initiates the virtual training system illustrated in FIG. 1 to begin virtual training.
In the process of virtual training of the trainer, the wearable device 101 can acquire limb posture data of the trainer through a plurality of sensors included in the wearable device, and simultaneously acquire real scene images of the trainer during training through the image acquisition module 103.
In an embodiment, the wearable device 101 and the image acquisition module 103 may periodically acquire limb posture data and real scene images of a trainer while training, respectively, at respective acquisition frequencies.
Step 302, generating a virtual training scene matched with a preset training mode based on the real scene image.
First, in the embodiment of the present invention, the virtual training scene may be generated according to a training mode selected by a trainer. Wherein the training patterns are different, and the corresponding virtual training scenes may be different. For example, in tactical training mode, multiple virtual players may be included in the virtual training scene, and in different tactical training modes, the trajectory of movement of the virtual players in the virtual training field may be different; for another example, in a goal training mode, a goal, a virtual goalkeeper, etc. may be included in the virtual training scene.
Therefore, the virtual training method provided by the embodiment of the invention can render virtual training scenes under different training modes, so that virtual training in multiple modes can be realized, such as football tactical training, pass training, shooting training and the like; and compared with training in a real scene, it can save human resources and the athletes' physical strength.
Secondly, as can be seen from the description of step 302, in the embodiment of the present invention, the virtual training scene matching the preset training pattern is generated based on the real scene image, which can realize the fusion of the real scene and the virtual scene, so as to enhance the sense of reality of the user experience. As to how to generate a virtual training scene matching a preset training pattern based on a real scene image in particular, the following description will be given by way of the embodiment shown in fig. 4, which will not be described in detail here.
Step 303, determining a moving track of the virtual object in the virtual training scene based on the real scene image and limb posture data of the trainer.
And 304, moving the virtual object in the virtual training scene according to the movement track so as to realize virtual control of the virtual object through limbs of a trainer in the virtual training process.
Step 303 and step 304 are collectively described below:
Taking the football virtual training scenario as an example, in practice, a trainer may use the feet to virtually control the football in the virtual training scene displayed by the head-mounted display device 102, that is, in the virtual training scene being experienced, so as to implement virtual training. That is, in the football virtual training scenario, the virtual object refers to a virtual football, and the trainer virtually controls the virtual object through a limb during the virtual training.
Of course, in different virtual training scenarios, the virtual objects are different. For example, in a basketball virtual training scenario, a virtual object refers to a virtual basketball. The embodiment of the invention does not limit the specific form of the virtual object.
In a real scene, when a foot of a trainer touches a football, the football will move. Based on this, in the embodiment of the present invention, when it is determined that the foot of the trainer touches the virtual object (the touch here is also a virtual touch), a movement track of the virtual object in the virtual training scene may be determined based on the real scene image acquired at the touch time and the limb gesture data of the trainer, and then the virtual object in the virtual training scene may be moved according to the movement track. Thus, the real training effect can be simulated.
As to how to determine the movement track of the virtual object in the virtual training scene based on the real scene image and the limb posture data of the trainer, the embodiment shown in fig. 5 will be described below, which will not be described in detail.
According to the technical scheme provided by the embodiment of the invention, in the process of virtual training of a trainer, limb posture data of the trainer is obtained through the wearable equipment, and a real scene image of the trainer during virtual training is obtained through the image acquisition module, a virtual training scene matched with a preset training mode is generated based on the real scene image, a moving track of a virtual object in the virtual training scene is determined based on the real scene image and the limb posture data, the virtual object in the virtual training scene is moved according to the moving track, virtual control of the virtual object through limbs of the trainer in the virtual training process can be realized, so that virtual training is realized, and compared with training in the real scene, the virtual training can save human resources and physical strength of athletes; in addition, the virtual training scene is generated based on the real scene image, so that the fusion of the real scene and the virtual scene can be realized, and the sense of reality of user experience is enhanced; and because the virtual training scene under the preset training mode is generated, the virtual training of multiple modes can be realized, and thus, the more real simulated training effect is achieved.
Referring to fig. 4, a flowchart of an embodiment of another virtual training method according to an embodiment of the present invention is provided. The process shown in fig. 4, on the basis of the process shown in fig. 3, focusing on how to generate a virtual training scene matching with a preset training pattern based on the real scene image, may include the following steps:
Step 401, acquiring head posture data of a trainer through a head-mounted device in the process of virtual training of the trainer.
Step 402, generating a virtual scene image based on the head gesture data and a preset training mode.
In the training scenarios related to the embodiment of the invention, a trainer needs to run, and as the trainer runs, the trainer's viewing angle and the picture being watched also change. Therefore, the virtual scene image is generated based on the head posture data of the trainer during the virtual training and the preset training mode. The virtual scene image generated through this processing can satisfy the preset training mode and match the trainer's real-time viewing angle, thereby enhancing the sense of reality of the user experience.
Step 403, fusing the virtual scene image and the real scene image to obtain a virtual training scene matched with the training mode, wherein the head gesture data and the real scene image correspond to the same acquisition time.
In an embodiment, a commonly used image processing algorithm, such as the Laplacian pyramid, a neural network, or the discrete wavelet transform, may be applied to fuse the virtual scene image and the real scene image, so as to obtain a virtual training scene matched with the training mode.
Such conventional image processing algorithms can achieve good fusion performance: they decompose the original images into components at different scales, which improves the definition and recognition rate of image textures and yields a good visual effect. However, when small targets are present in the original images, the image fused by these algorithms tends to have unclear edge contours, poor contrast and insufficiently rich details.
In this regard, another embodiment of the present invention is provided to implement fusion of a virtual scene image and a real scene image, so as to obtain a virtual training scene that matches with a training pattern. Specifically, as shown in fig. 6, a flowchart of an embodiment of fusing a virtual scene image and a real scene image according to an embodiment of the present invention includes the following steps:
Step 601, dividing the virtual scene image and the real scene image based on a preset color space, respectively, to obtain a first image subset of the virtual scene image in each subspace included in the color space, and to obtain a second image subset of the real scene image in each subspace.
Alternatively, the color space may be an RGB color space.
In one embodiment, as shown in FIG. 7, the RGB color space may be divided into 8 subspaces, the component ranges for the 8 subspaces for each color channel (including R channel, G channel, and B channel) are shown in Table 1 below.
TABLE 1
It should be understood that the above description of dividing the RGB color space into 8 subspaces is merely illustrative, and in practical applications, the RGB color space may be divided into more subspaces, which is not limited by the embodiment of the present invention.
Based on the above-mentioned color space, in step 601, based on the preset color space, the dividing the virtual scene image and the real scene image respectively to obtain a first image subset of the virtual scene image in each subspace included in the color space, and the specific implementation of obtaining a second image subset of the real scene image in each subspace may include: for each pixel point in the virtual scene image and the real scene image, determining a corresponding subspace from a preset color space according to the component value of the pixel point under each color channel, wherein the component value of the pixel point under each color channel respectively falls into the component range of the corresponding subspace under the color channel. The pixel points corresponding to the same subspace in the virtual scene image are classified into the same first image subset, and a first image subset of the virtual scene image in each subspace included in the color space is obtained; and classifying the pixel points corresponding to the same subspace in the real scene image into the same second image subset to obtain a second image subset of the real scene image in each subspace included in the color space.
For example, assume that a pixel value of a certain pixel point in the virtual scene image is (24, 24, 24). According to the above description, the subspace Black corresponding to the pixel may be determined, so that the pixel is classified into the first image subset under the subspace Black.
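To make the subspace assignment concrete, the following sketch (in Python) maps each pixel to one of 8 RGB subspaces; since the component ranges of Table 1 are not reproduced above, it assumes each channel is split at 128 and uses the conventional octant names, both of which are illustrative assumptions only.

```python
import numpy as np

# Assumed octant names for the 8 RGB subspaces (Table 1 is not reproduced here).
SUBSPACE_NAMES = ["Black", "Blue", "Green", "Cyan", "Red", "Magenta", "Yellow", "White"]

def subspace_index(pixel):
    """Map an (R, G, B) pixel to a subspace index, assuming each channel splits at 128."""
    r, g, b = (int(c) >= 128 for c in pixel)
    return (r << 2) | (g << 1) | int(b)

def split_into_subsets(image):
    """Group the pixel coordinates of an H x W x 3 image by color subspace.

    Returns a dict mapping each subspace name to the list of (row, col) positions
    of the pixels that fall into it, i.e. the "image subset" for that subspace.
    """
    subsets = {name: [] for name in SUBSPACE_NAMES}
    height, width, _ = image.shape
    for y in range(height):
        for x in range(width):
            subsets[SUBSPACE_NAMES[subspace_index(image[y, x])]].append((y, x))
    return subsets

# Example from the text: a pixel with value (24, 24, 24) falls into the Black subspace.
assert SUBSPACE_NAMES[subspace_index((24, 24, 24))] == "Black"

img = np.full((2, 2, 3), 24, dtype=np.uint8)  # a tiny all-dark test image
subsets = split_into_subsets(img)             # every pixel lands in the Black subset
```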
Step 602, fusing each first image subset with a corresponding second image subset, wherein the first image subset and the corresponding second image subset correspond to the same subspace.
As can be seen from the description of step 602, in the embodiment of the present invention, image fusion is performed in the subset of images, that is, in the color subspace, so that the image fusion accuracy can be improved. And, the greater the number of color subspaces, the higher the image fusion accuracy.
Specifically, the basic idea of fusing the first image subset with its corresponding second image subset is to fuse image areas whose content details are very similar. A specific implementation may include: respectively carrying out wavelet transformation on the first image subset and the second image subset corresponding to the first image subset to obtain a frequency domain value of each pixel point in the first image subset and in the corresponding second image subset, and then respectively carrying out frequency domain division on the first image subset and the corresponding second image subset according to the frequency domain values to obtain M first frequency domain regions corresponding to the first image subset and M second frequency domain regions corresponding to the second image subset. Finally, each first frequency domain region is fused with the corresponding second frequency domain region, where "corresponding" means that the first frequency domain region and its corresponding second frequency domain region correspond to the same frequency domain.
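The wavelet-based frequency-domain division can be sketched as follows. As illustrative assumptions (not prescribed above), each image subset is represented as a full-size grayscale array in which pixels outside the subset are zeroed, PyWavelets is used for the 2-D transform, and the four subbands of a single decomposition level are treated as the M = 4 frequency domain regions.

```python
import numpy as np
import pywt  # PyWavelets

def subset_to_array(image_gray, subset_pixels):
    """Embed an image subset into a full-size array, zeroing pixels outside the subset.

    image_gray: H x W grayscale array; subset_pixels: iterable of (row, col) positions.
    """
    arr = np.zeros_like(image_gray, dtype=float)
    for y, x in subset_pixels:
        arr[y, x] = image_gray[y, x]
    return arr

def frequency_domain_regions(subset_array, wavelet="haar"):
    """One-level 2-D wavelet transform of an image subset.

    The approximation (low-frequency) band and the three detail bands are each
    treated here as one frequency domain region, giving M = 4 regions.
    """
    cA, (cH, cV, cD) = pywt.dwt2(subset_array, wavelet)
    return {"low": cA, "horizontal": cH, "vertical": cV, "diagonal": cD}
```

Corresponding regions of the first and second image subsets (for example, both "low" bands) are then fused pairwise, as described next.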
Further, in the embodiment of the invention, when the first frequency domain region and the second frequency domain region corresponding to the first frequency domain region are fused, the method for judging the local energy based on the image frequency domain is used, so that the texture contour of the original image can be well reflected, the visual effect of the image is reflected, and the scene fusion effect is improved.
Specifically, the implementation of fusing the first frequency domain region with the corresponding second frequency domain region may include: for each first frequency domain region, determining a first low-frequency band energy value of the first frequency domain region and a second low-frequency band energy value of the second frequency domain region corresponding to it, respectively, according to the following formula (I).
In the above formula (I), q is a point in the region Q, w(q) represents a weight, C_J(X, q) represents a pixel value, and E(X, Q) represents a low-frequency band energy value.
Then, a target image fusion strategy is determined according to the first low-frequency band energy value and the second low-frequency band energy value. And finally, fusing the first frequency domain region and the second frequency domain region corresponding to the first frequency domain region according to a target image fusion strategy.
Determining the target image fusion strategy according to the first and second low-frequency band energy values may include: determining the absolute value of the difference between the first low-frequency band energy value and the second low-frequency band energy value, and comparing the absolute value with a preset threshold. If the absolute value is smaller than the preset threshold, the similarity between the first frequency domain region and the second frequency domain region is high, so the first image fusion strategy can be determined as the target image fusion strategy. Here, the first image fusion strategy refers to taking either the first frequency domain region or the second frequency domain region as the fusion result. Conversely, if the absolute value is greater than or equal to the preset threshold, the similarity between the first frequency domain region and the second frequency domain region is low, so the second image fusion strategy can be determined as the target image fusion strategy, where the second image fusion strategy refers to combining the first frequency domain region and the second frequency domain region on the principle of retaining pixel points. Through this processing, the original pixel points are retained when the similarity between the two regions is low, the loss of image contour details is avoided, and the image fusion accuracy is improved.
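A minimal sketch of this region-level decision is given below. Since the exact weighting of formula (I) is not reproduced above, the low-frequency band energy is computed here with uniform weights, and the "retaining pixel points" strategy is illustrated by keeping, at each position, the coefficient with the larger magnitude; both choices are assumptions for illustration.

```python
import numpy as np

def low_band_energy(region, weights=None):
    """Weighted sum of squared coefficients over a frequency domain region.

    Uniform weights are assumed when none are given; formula (I) defines the
    actual weighting w(q) used in the embodiment.
    """
    region = np.asarray(region, dtype=float)
    if weights is None:
        weights = np.ones_like(region)
    return float(np.sum(weights * region ** 2))

def fuse_regions(first_region, second_region, threshold):
    """Fuse corresponding frequency domain regions of the two image subsets."""
    first = np.asarray(first_region, dtype=float)
    second = np.asarray(second_region, dtype=float)
    e1, e2 = low_band_energy(first), low_band_energy(second)
    if abs(e1 - e2) < threshold:
        # First strategy: the regions are highly similar, so either one can serve
        # as the fusion result (here, the one with the larger energy is kept).
        return first if e1 >= e2 else second
    # Second strategy (assumed reading of "retaining pixel points"): keep, at each
    # position, the coefficient with the larger magnitude so that contour details
    # from both regions are preserved.
    return np.where(np.abs(first) >= np.abs(second), first, second)
```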
By the process shown in fig. 4, the generation of the virtual training scene matched with the preset training mode based on the real scene image is realized, wherein when the real scene image and the virtual scene image are fused, the image fusion is performed in the image subset, namely, the color subspace, so that the image fusion precision can be improved, and the finally generated virtual training scene can have higher real experience.
Referring to fig. 5, a flowchart of an embodiment of another virtual training method according to an embodiment of the present invention is provided. The process shown in fig. 5, on the basis of the process shown in fig. 3, focuses on how to determine the movement track of the virtual object in the virtual training scene based on the real scene image and the limb posture data of the trainer, and may include the following steps:
Step 501, determining a virtual collision position of limbs of a trainer and a virtual object in a virtual training scene based on a real scene image.
After the virtual training scene is rendered, the trainer can see objects such as virtual training sites, virtual team members, virtual objects (e.g., virtual football) and the like through the display screen of the head mounted display device 102. Taking the football virtual training scenario as an example, a trainer may lift his feet to touch a virtual football in the virtual training scenario (of course, a virtual touch here). When the trainer lifts the foot to touch the virtual football, the real scene image acquired by the image acquisition module 103 can contain the foot of the trainer, so that the virtual training scene at the moment also contains a picture corresponding to the foot of the trainer. Thus, the virtual collision position of the limb of the trainer and the virtual object can be positioned according to the real scene image.
Step 502, in the process of virtual training of a trainer, obtaining stress data of the virtual object through a wearable device.
Step 503, updating the current state transition matrix based on the stress data.
Step 504, determining the current system state of the trainer by using a Kalman filtering algorithm based on the updated state transition matrix and the limb posture data of the trainer.
Step 505, determining a moving track of the virtual object in the virtual training scene based on the current system state of the trainer, wherein the moving track takes the virtual collision position as a starting point.
Steps 502 to 505 are collectively described below:
First, as can be seen from fig. 1, there are multiple sensors on the wearable device. For a multi-sensor system, the data are diverse and complex, so the basic requirements for the data fusion algorithm are robustness and parallel processing capability.
In the prior art, a Kalman filtering algorithm is mainly used for fusing low-level real-time dynamic multi-sensor redundant data, and when the error between the system state and the sensor accords with a Gaussian white noise model, the Kalman filtering can provide optimal estimation for the fused data. However, the accuracy and robustness of this approach can be limited in the presence of uncertainty in the system model and noise statistics.
In this regard, the embodiment of the invention proposes that, on the basis of the traditional Kalman filtering algorithm, a multivariate error observation equation is introduced for the limb posture data of the trainer and the measurement noise covariance matrix is adaptively adjusted for compensation, so that the Kalman filtering algorithm is optimized and, in actual use, can maintain higher accuracy and robustness in the face of external environmental influences.
Specifically, as described in the above steps 502 to 505, during the virtual training of the trainer, the stress data of the virtual object is obtained through the wearable device. In a real training scenario, the external force applied to the football includes not only the impact force of the trainer but also the geomagnetic attraction force, so the stress data here may include: geomagnetic data collected by a geomagnetic sensor, acceleration data collected by an acceleration sensor, pressure data collected by a pressure sensor and the like. Then, the current state transition matrix is updated based on the stress data, and then the current system state of the trainer is determined based on the updated state transition matrix and limb posture data of the trainer by using a Kalman filtering algorithm as exemplified by the following formula (II). Finally, based on the current system state of the trainer, determining the moving track of the virtual object in the virtual training scene, wherein the moving track takes the virtual collision position as a starting point.
x_k = A·x_{k-1} + Q_k    formula (II)
In the above formula (II), x_k is the system state variable at time k, x_{k-1} is the system state variable at time k-1, A is the state transition matrix responsible for linking the states at times k and k-1, and Q_k represents a Gaussian noise term with a mean value of 0 and a variance of Q. It should be noted that, in the ideal case, the system noise conforms to a Gaussian distribution.
The specific implementation of updating the current state transition matrix based on the stress data comprises the following steps: and constructing a measurement noise covariance matrix based on the stress data, and performing setting operation on the current state transition matrix by using the measurement noise covariance matrix based on the following formula (III) to obtain an updated state transition matrix.
A' = R·A·R^T    formula (III)
In the above formula (III), A' is the updated state transition matrix, R is the measurement noise covariance matrix, and A is the current state transition matrix.
In the above description, the specific implementation of constructing the measurement noise covariance matrix based on the stress data may include: and carrying out weighted summation on the measurement noise covariance matrix constructed according to the stress data under each dimension according to preset weights to obtain a final measurement noise covariance matrix.
Taking the noise introduced by the accelerometer and the geomagnetic meter as an example, assuming that R_a and R_m are the measurement noise covariance matrices in the acceleration dimension and in the magnetic force dimension, respectively, the final measurement noise covariance matrix can be calculated according to the following formula (IV).
R = w_a·R_a + w_m·R_m    formula (IV)
In the above formula (IV), w_a and w_m are weights between 0 and 1. In practice, these weights may be calibrated according to human dynamics.
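The adaptive adjustment expressed by formulas (II) to (IV) can be sketched as follows; the state dimension, the weight values and the numerical covariances below are placeholder assumptions, not values taken from the embodiment.

```python
import numpy as np

def weighted_measurement_noise(R_a, R_m, w_a=0.6, w_m=0.4):
    """Formula (IV): R = w_a * R_a + w_m * R_m (the weights here are placeholders)."""
    return w_a * np.asarray(R_a) + w_m * np.asarray(R_m)

def update_state_transition(A, R):
    """Formula (III): A' = R * A * R^T."""
    return np.asarray(R) @ np.asarray(A) @ np.asarray(R).T

def predict_state(A_updated, x_prev, Q):
    """Formula (II): x_k = A' * x_{k-1} + Q_k, with Q_k drawn from N(0, Q)."""
    noise = np.random.multivariate_normal(np.zeros(len(x_prev)), Q)
    return A_updated @ np.asarray(x_prev) + noise

# Illustrative use with an assumed 3-dimensional limb-pose state.
A = np.eye(3)                       # current state transition matrix
R_a = 0.01 * np.eye(3)              # accelerometer noise covariance (assumed)
R_m = 0.05 * np.eye(3)              # geomagnetic noise covariance (assumed)
R = weighted_measurement_noise(R_a, R_m)
A_updated = update_state_transition(A, R)
x_k = predict_state(A_updated, x_prev=np.zeros(3), Q=0.001 * np.eye(3))
```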
In addition, it should be noted that the order of steps 501 to 505 is only schematically illustrated, and other execution orders consistent with reasonable execution logic are also within the scope of the embodiments of the present invention.
Through the flow shown in fig. 5, the determination of the moving track of the virtual object in the virtual training scene based on the real scene image and the limb posture data of the trainer is realized. When the moving track of the virtual object in the virtual training scene is determined according to the limb posture data of the trainer, a multivariate error observation equation is introduced, and the measurement noise covariance matrix is adaptively adjusted to compensate, so that a Kalman filtering algorithm is optimized, and in actual use, higher accuracy and robustness can be maintained against the influence of external environment.
Finally, taking a football virtual training scene as an example to explain the technical scheme provided by the embodiment of the invention:
The virtual training method provided by the embodiment of the invention can, based on mixed reality technology, fuse rendered virtual players, a football and other elements into a real court. Through software, a trainer can set different numbers of players and running positions (i.e., training modes) according to different tactics, and wear the head-mounted display device 102 illustrated in fig. 1 and the special training shoes illustrated in fig. 2.
In the training process, the image acquisition module 103 may acquire real scene images in real time and transmit the real scene images to the head-mounted display device 102, and the processor of the head-mounted display device 102 constructs virtual training scenes according to a preset training mode and displays the constructed virtual training scenes through the display screen, so that a trainer can watch the virtual training scenes and touch virtual football in the scenes by using special training shoes.
When a trainer uses the special training shoes to touch the virtual football in the scene, foot posture data are collected through the sensors on the special training shoes, and the virtual contact position between the special training shoes and the virtual football is located through the image acquisition module 103, so that the movement track of the virtual football is determined (a football track database can be preset according to the actual shooting force, speed, contact position and the like), and the virtual football is rendered in the virtual training scene according to the movement track. In this way, football tactical training and the simulation of passing and shooting drills can be achieved. Compared with VR football training games, this achieves a more realistic simulation effect, and compared with training purely in a real scene, it can save human resources and the athletes' physical strength.
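As a purely hypothetical illustration of such a preset football track database, the sketch below selects the stored trajectory whose (force, speed, contact position) key is nearest to the measured values and shifts it to start at the located virtual collision position; the keys, units and waypoints are invented for illustration.

```python
import math

# Hypothetical preset track database: (kick force, ball speed, contact height)
# -> a list of 3-D waypoints describing the ball's path, relative to the kick point.
TRACK_DATABASE = {
    (50.0, 10.0, 5.0): [(0.0, 0.0, 0.0), (2.0, 0.0, 0.5), (5.0, 0.0, 0.2)],
    (120.0, 22.0, 10.0): [(0.0, 0.0, 0.0), (4.0, 0.5, 2.0), (11.0, 1.0, 0.5)],
}

def lookup_trajectory(force, speed, contact, start):
    """Pick the preset trajectory with the nearest key and shift it so that it
    begins at the virtual collision position `start`."""
    key = min(TRACK_DATABASE, key=lambda k: math.dist(k, (force, speed, contact)))
    sx, sy, sz = start
    return [(sx + x, sy + y, sz + z) for x, y, z in TRACK_DATABASE[key]]

# Example: a moderate kick whose trajectory starts at the located collision position.
trajectory = lookup_trajectory(force=60.0, speed=11.0, contact=6.0, start=(1.0, 2.0, 0.0))
```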
Referring to fig. 8, a block diagram of an embodiment of a virtual training apparatus according to an embodiment of the present invention is provided. As shown in fig. 8, the apparatus includes: a first acquisition module 81, a second acquisition module 82, a virtual scene generation module 83, a trajectory determination module 84, and a movement module 85.
The first obtaining module 81 is configured to obtain, through a wearable device, limb posture data of a trainer during a virtual training process of the trainer;
The second obtaining module 82 is configured to obtain, through the image collecting module, an image of a real scene when the trainer performs virtual training during the virtual training of the trainer;
A virtual scene generating module 83, configured to generate a virtual training scene that matches a preset training mode based on the real scene image;
A trajectory determination module 84, configured to determine a movement trajectory of a virtual object in the virtual training scene based on the real scene image and the limb posture data;
and the moving module 85 is configured to move the virtual object in the virtual training scene according to the movement track, so as to implement virtual control on the virtual object through limbs of a trainer in the virtual training process.
Optionally, the apparatus further comprises (not shown):
The third acquisition module is used for acquiring head posture data of a trainer through head-mounted equipment in the process of virtual training of the trainer;
the virtual scene generation module 83 includes (not shown in the figure):
The generation sub-module is used for generating a virtual scene image based on the head gesture data and a preset training mode;
And the fusion sub-module is used for fusing the virtual scene image and the real scene image to obtain a virtual training scene matched with the training mode, wherein the head gesture data and the real scene image correspond to the same acquisition time.
Optionally, the fusion submodule includes (not shown in the figure):
the dividing sub-module is used for dividing the virtual scene image and the real scene image respectively based on a preset color space to obtain a first image subset of the virtual scene image in each subspace included in the color space and a second image subset of the real scene image in each subspace;
and the first processing sub-module is used for respectively fusing each first image subset with a corresponding second image subset, wherein the first image subset and the corresponding second image subset correspond to the same subspace.
Optionally, the dividing submodule is specifically configured to:
For each pixel point in the virtual scene image and the real scene image, determining a corresponding subspace from a preset color space according to the component value of the pixel point under each color channel, wherein the component value of the pixel point under each color channel respectively falls into the component range of the corresponding subspace under the color channel;
The pixel points corresponding to the same subspace in the virtual scene image are classified into the same first image subset, so that a first image subset of the virtual scene image under each subspace included in the color space is obtained;
and classifying the pixel points corresponding to the same subspace in the real scene image into the same second image subset to obtain a second image subset of the real scene image under each subspace included in the color space.
Optionally, the first processing submodule includes (not shown in the figure):
the wavelet transformation submodule is used for carrying out wavelet transformation on each first image subset and each corresponding second image subset to obtain a frequency domain value of each pixel point in the first image subset and each corresponding second image subset;
The frequency domain dividing sub-module is used for respectively carrying out frequency domain division on the first image subset and the second image subset corresponding to the first image subset according to the frequency domain value to obtain M first frequency domain areas corresponding to the first image subset and M second frequency domain areas corresponding to the second image subset;
And the frequency domain fusion submodule is used for respectively fusing each first frequency domain region with a corresponding second frequency domain region, wherein the first frequency domain region and the corresponding second frequency domain region correspond to the same frequency domain.
Optionally, the frequency domain fusion submodule includes (not shown in the figure):
An energy determination submodule, configured to determine, for each of the first frequency-domain regions, a first low-frequency-band energy value of the first frequency-domain region, and determine a second low-frequency-band energy value of a second frequency-domain region corresponding to the first frequency-domain region;
A fusion strategy determination submodule, configured to determine a target image fusion strategy according to the first low-band energy value and the second low-band energy value;
And the second processing sub-module is used for fusing the first frequency domain area with the corresponding second frequency domain area according to the target image fusion strategy.
Optionally, the fusion policy determining submodule is specifically configured to:
Determining an absolute value of a difference between the first low-band energy value and the second low-band energy value; comparing the absolute value with a preset threshold value; if the absolute value is smaller than the preset threshold value, determining a first image fusion strategy as a target image fusion strategy; and if the absolute value is larger than or equal to the preset threshold value as a comparison result, determining the second image fusion strategy as a target image fusion strategy.
Optionally, the trajectory determination module 84 includes (not shown):
a starting point determining sub-module, configured to determine a virtual collision position of the limb of the trainer and the virtual object in the virtual training scene based on the real scene image;
and the determining submodule is used for determining the moving track of the virtual object in the virtual training scene based on the limb posture data, wherein the moving track takes the virtual collision position as a starting point.
Optionally, the apparatus further includes (not shown in the figure):
a fourth acquisition module, configured to acquire stress data of the virtual object through the wearable equipment in the process of virtual training of the trainer;
and an updating module, configured to update the current state transition matrix based on the stress data.
The determination submodule includes (not shown in the figure):
a state determination submodule, configured to determine the current system state of the trainer by using a Kalman filtering algorithm based on the updated state transition matrix and the limb posture data;
and a track determination submodule, configured to determine the moving track of the virtual object in the virtual training scene based on the current system state of the trainer. (A compact Kalman filtering sketch is given below.)
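A compact Kalman filtering sketch is given below to make the data flow concrete: the updated state transition matrix enters the prediction step and the limb posture data enter the update step. The state layout, the observation matrix, and the noise parameters are assumptions of the sketch, not requirements of the embodiment.

```python
import numpy as np

def kalman_step(x, P, z, F, Q, R, H):
    """One predict/update cycle of a linear Kalman filter.

    x: state estimate, P: state covariance, z: measurement vector (derived
    from the limb posture data), F: state transition matrix (the updated
    matrix described above), Q: process noise covariance, R: measurement
    noise covariance, H: observation matrix.
    """
    # Predict with the (updated) state transition matrix.
    x_pred = F @ x
    P_pred = F @ P @ F.T + Q
    # Update with the limb posture measurement.
    S = H @ P_pred @ H.T + R
    K = P_pred @ H.T @ np.linalg.inv(S)
    x_new = x_pred + K @ (z - H @ x_pred)
    P_new = (np.eye(len(x)) - K @ H) @ P_pred
    return x_new, P_new
```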
Optionally, the updating module includes (not shown in the figure):
a construction submodule, configured to construct a measurement noise covariance matrix based on the stress data;
and an operation submodule, configured to perform a preset operation on the current state transition matrix by using the measurement noise covariance matrix to obtain an updated state transition matrix.
Optionally, the construction submodule is specifically configured to: perform weighted summation, according to preset weights, on the measurement noise covariance matrices constructed from the stress data in each dimension, to obtain a final measurement noise covariance matrix.
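The weighted summation described above might look like the following. The per-dimension covariance construction (a diagonal matrix whose entry grows with the squared stress reading) and the example weights are assumptions made for illustration, as is the placeholder shown for the preset operation on the state transition matrix.

```python
import numpy as np

def build_measurement_noise_cov(stress_data, weights):
    """Weighted sum of per-dimension measurement noise covariance matrices.

    stress_data: (D,) stress readings, one per dimension.
    weights:     (D,) preset weights, one per dimension.
    Each dimension contributes a diagonal covariance whose d-th entry grows
    with the squared reading in that dimension (an assumed construction).
    """
    dim = len(stress_data)
    R = np.zeros((dim, dim))
    for d, (s, w) in enumerate(zip(stress_data, weights)):
        R_d = np.zeros((dim, dim))
        R_d[d, d] = s ** 2            # per-dimension covariance (assumed form)
        R += w * R_d                  # weighted summation with preset weights
    return R

def update_transition_matrix(F, R, alpha=0.01):
    """Placeholder for the preset operation: gently scale F by the overall
    measurement noise level (purely illustrative; the operation itself is
    not specified in this document)."""
    return F * (1.0 + alpha * float(np.trace(R)) / R.shape[0])

# Example: three stress dimensions with preset weights summing to 1.
# R = build_measurement_noise_cov(np.array([0.4, 1.2, 0.7]),
#                                 weights=np.array([0.2, 0.5, 0.3]))
```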
An embodiment of the present invention further provides an electronic device, as shown in fig. 9, including a processor 901, a communication interface 902, a memory 903, and a communication bus 904, where the processor 901, the communication interface 902, and the memory 903 communicate with each other through the communication bus 904;
the memory 903 is configured to store a computer program;
the processor 901 is configured to execute the program stored in the memory 903, thereby implementing the following steps:
in the process of virtual training of a trainer, acquiring limb posture data of the trainer through wearable equipment, and acquiring a real scene image of the trainer during the virtual training through an image acquisition module;
generating a virtual training scene matched with a preset training mode based on the real scene image;
determining a moving track of a virtual object in the virtual training scene based on the real scene image and the limb posture data;
and moving the virtual object in the virtual training scene according to the moving track, so as to realize virtual control of the virtual object through limbs of the trainer in the virtual training process. (A high-level sketch of this overall flow is given below.)
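Purely as an orientation aid, the processor steps above can be strung together roughly as follows. Every object and method name here (wearable, camera, hmd, get_limb_posture, capture_real_image, get_head_posture, generate_virtual_scene, fuse_images, determine_track, move_object, training_in_progress) is a hypothetical placeholder for the corresponding module described in this document, not an actual API.

```python
def virtual_training_loop(wearable, camera, hmd, training_mode):
    """High-level sketch of the claimed flow using hypothetical placeholders."""
    while hmd.training_in_progress():
        limb_posture = wearable.get_limb_posture()      # limb posture data
        real_image = camera.capture_real_image()         # real scene image
        head_posture = hmd.get_head_posture()            # head posture data

        # Generate and fuse the virtual and real scene images.
        virtual_image = hmd.generate_virtual_scene(head_posture, training_mode)
        training_scene = hmd.fuse_images(virtual_image, real_image)

        # Determine the moving track and move the virtual object accordingly.
        track = hmd.determine_track(real_image, limb_posture, training_scene)
        hmd.move_object(training_scene, track)
```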
The communication bus mentioned for the above electronic device may be a Peripheral Component Interconnect (PCI) bus, an Extended Industry Standard Architecture (EISA) bus, or the like. The communication bus may be divided into an address bus, a data bus, a control bus, and so on. For ease of illustration, only one bold line is shown in the figure, but this does not mean that there is only one bus or only one type of bus.
The communication interface is used for communication between the above electronic device and other devices.
The memory may include a Random Access Memory (RAM), or may include a non-volatile memory, such as at least one disk storage. Optionally, the memory may also be at least one storage device located remotely from the aforementioned processor.
The processor may be a general-purpose processor, including a Central Processing Unit (CPU), a Network Processor (NP), and the like; it may also be a Digital Signal Processor (DSP), an Application-Specific Integrated Circuit (ASIC), a Field-Programmable Gate Array (FPGA) or other programmable logic device, a discrete gate or transistor logic device, or a discrete hardware component.
The present application also provides a computer readable storage medium having stored thereon a computer program which, when executed by a processor, implements the steps of the virtual training method provided by the embodiments of the present application.
Those of skill in the art will further appreciate that the various illustrative elements and algorithm steps described in connection with the embodiments disclosed herein may be implemented as electronic hardware, computer software, or a combination of both. To clearly illustrate this interchangeability of hardware and software, the various illustrative elements and steps have been described above generally in terms of their functionality. Whether such functionality is implemented as hardware or software depends upon the particular application and the design constraints imposed on the solution. Skilled artisans may implement the described functionality in different ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present invention.
The steps of a method or algorithm described in connection with the embodiments disclosed herein may be embodied directly in hardware, in a software module executed by a processor, or in a combination of the two. The software module may reside in a Random Access Memory (RAM), a memory, a Read-Only Memory (ROM), an electrically programmable ROM, an electrically erasable programmable ROM, a register, a hard disk, a removable disk, a CD-ROM, or any other form of storage medium known in the art.
The foregoing description of the embodiments has been provided for the purpose of illustrating the general principles of the invention and is not meant to limit the scope of the invention to the particular embodiments; any modifications, equivalents, improvements, etc. that fall within the spirit and principles of the invention are intended to be included within the scope of the invention.

Claims (12)

1. A virtual training method, the method comprising:
in the process of virtual training of a trainer, acquiring limb posture data of the trainer through wearable equipment, and acquiring a real scene image of the trainer during the virtual training through an image acquisition module;
generating a virtual training scene matched with a preset training mode based on the real scene image;
determining a moving track of a virtual object in the virtual training scene based on the real scene image and the limb posture data;
and moving the virtual object in the virtual training scene according to the moving track, so as to realize virtual control of the virtual object through limbs of the trainer in the virtual training process;
wherein the method further comprises:
in the process of virtual training of the trainer, obtaining head posture data of the trainer through head-mounted equipment;
the generating a virtual training scene matched with a preset training mode based on the real scene image comprises:
generating a virtual scene image based on the head posture data and the preset training mode;
and fusing the virtual scene image and the real scene image to obtain a virtual training scene matched with the training mode, wherein the head posture data and the real scene image correspond to the same acquisition time;
the fusing the virtual scene image and the real scene image comprises:
dividing the virtual scene image and the real scene image respectively based on a preset color space, to obtain a first image subset of the virtual scene image in each subspace included in the color space and a second image subset of the real scene image in each subspace;
and fusing each first image subset with a corresponding second image subset, wherein the first image subset and the corresponding second image subset correspond to the same subspace;
the fusing each first image subset with the corresponding second image subset respectively comprises:
for each first image subset, respectively performing wavelet transformation on the first image subset and the second image subset corresponding to the first image subset, to obtain a frequency domain value of each pixel point in the first image subset and the second image subset corresponding to the first image subset;
according to the frequency domain values, respectively performing frequency domain division on the first image subset and the second image subset corresponding to the first image subset, to obtain M first frequency domain regions corresponding to the first image subset and M second frequency domain regions corresponding to the second image subset;
and respectively fusing each first frequency domain region with a corresponding second frequency domain region, wherein the first frequency domain region and the corresponding second frequency domain region correspond to the same frequency domain.
2. The method according to claim 1, wherein the dividing the virtual scene image and the real scene image respectively based on a preset color space, to obtain a first image subset of the virtual scene image in each subspace included in the color space and a second image subset of the real scene image in each subspace, comprises:
for each pixel point in the virtual scene image and the real scene image, determining a corresponding subspace from the preset color space according to the component value of the pixel point under each color channel, wherein the component value of the pixel point under each color channel falls into the component range of the corresponding subspace under that color channel;
classifying the pixel points in the virtual scene image that correspond to the same subspace into the same first image subset, to obtain a first image subset of the virtual scene image under each subspace included in the color space;
and classifying the pixel points in the real scene image that correspond to the same subspace into the same second image subset, to obtain a second image subset of the real scene image under each subspace included in the color space.
3. The method of claim 1, wherein the fusing each of the first frequency domain regions with its corresponding second frequency domain region comprises:
determining a first low-frequency band energy value of the first frequency domain region and a second low-frequency band energy value of a second frequency domain region corresponding to the first frequency domain region for each of the first frequency domain regions;
determining a target image fusion strategy according to the first low-frequency band energy value and the second low-frequency band energy value;
and fusing the first frequency domain region and the second frequency domain region corresponding to the first frequency domain region according to the target image fusion strategy.
4. The method according to claim 3, wherein the determining a target image fusion strategy according to the first low-frequency band energy value and the second low-frequency band energy value comprises:
determining an absolute value of a difference between the first low-frequency band energy value and the second low-frequency band energy value;
comparing the absolute value with a preset threshold;
if the absolute value is smaller than the preset threshold, determining a first image fusion strategy as the target image fusion strategy;
and if the absolute value is greater than or equal to the preset threshold, determining a second image fusion strategy as the target image fusion strategy.
5. The method of claim 1, wherein the determining a moving track of a virtual object in the virtual training scene based on the real scene image and the limb posture data comprises:
determining a virtual collision position of a limb of the trainer and the virtual object in the virtual training scene based on the real scene image;
and determining the moving track of the virtual object in the virtual training scene based on the limb posture data, wherein the moving track takes the virtual collision position as a starting point.
6. The method according to claim 1, wherein the method further comprises:
in the process of virtual training of the trainer, obtaining stress data of the virtual object through the wearable equipment;
and updating the current state transition matrix based on the stress data;
the determining a moving track of a virtual object in the virtual training scene based on the limb posture data comprises:
determining the current system state of the trainer by using a Kalman filtering algorithm based on the updated state transition matrix and the limb posture data;
and determining the moving track of the virtual object in the virtual training scene based on the current system state of the trainer.
7. The method of claim 6, wherein the updating the current state transition matrix based on the stress data comprises:
constructing a measurement noise covariance matrix based on the stress data;
and performing a preset operation on the current state transition matrix by using the measurement noise covariance matrix to obtain an updated state transition matrix.
8. The method of claim 7, wherein the constructing a measurement noise covariance matrix based on the stress data comprises:
performing weighted summation, according to preset weights, on the measurement noise covariance matrices constructed from the stress data in each dimension, to obtain a final measurement noise covariance matrix.
9. A virtual training system, the system comprising:
The wearable device is used for collecting limb posture data of a trainer in the process of virtual training of the trainer;
The image acquisition module is used for acquiring a real scene image of a trainer during virtual training in the process of the virtual training of the trainer;
The head-mounted display device is used for acquiring limb posture data of a trainer through the wearable device and acquiring a real scene image of the trainer during virtual training through the image acquisition module in the virtual training process of the trainer; generating a virtual training scene matched with a preset training mode based on the real scene image; determining a moving track of a virtual object in the virtual training scene based on the real scene image and the limb posture data; moving the virtual object in the virtual training scene according to the movement track so as to realize virtual control of the virtual object through limbs of a trainer in the virtual training process;
The head-mounted display device is also used for acquiring head posture data of the trainer in the process of virtual training of the trainer; generating a virtual scene image based on the head posture data and a preset training mode; and fusing the virtual scene image and the real scene image to obtain a virtual training scene matched with the training mode, wherein the head posture data and the real scene image correspond to the same acquisition time;
The head-mounted display device is specifically configured to divide the virtual scene image and the real scene image respectively based on a preset color space, to obtain a first image subset of the virtual scene image in each subspace included in the color space and a second image subset of the real scene image in each subspace; for each first image subset, respectively perform wavelet transformation on the first image subset and the second image subset corresponding to the first image subset, to obtain a frequency domain value of each pixel point in the first image subset and the second image subset corresponding to the first image subset;
according to the frequency domain values, respectively perform frequency domain division on the first image subset and the second image subset corresponding to the first image subset, to obtain M first frequency domain regions corresponding to the first image subset and M second frequency domain regions corresponding to the second image subset;
and fuse each first frequency domain region with a corresponding second frequency domain region, wherein the first frequency domain region and the corresponding second frequency domain region correspond to the same frequency domain, and the first image subset corresponds to the same subspace as the corresponding second image subset.
10. A virtual training apparatus, the apparatus comprising:
The first acquisition module is used for acquiring limb posture data of a trainer through wearable equipment in the process of virtual training of the trainer;
The second acquisition module is used for acquiring a real scene image of the trainer during the virtual training through the image acquisition module in the process of virtual training of the trainer;
The virtual scene generation module is used for generating a virtual training scene matched with a preset training mode based on the real scene image;
The track determining module is used for determining the moving track of the virtual object in the virtual training scene based on the real scene image and the limb posture data;
The moving module is used for moving the virtual object in the virtual training scene according to the moving track, so as to realize virtual control of the virtual object through limbs of the trainer in the virtual training process;
The virtual scene generation module is specifically used for generating a virtual scene image based on head posture data of the trainer and the preset training mode; and fusing the virtual scene image and the real scene image to obtain a virtual training scene matched with the training mode, wherein the head posture data and the real scene image correspond to the same acquisition time;
The virtual scene generation module is specifically configured to divide the virtual scene image and the real scene image respectively based on a preset color space, to obtain a first image subset of the virtual scene image in each subspace included in the color space and a second image subset of the real scene image in each subspace; for each first image subset, respectively perform wavelet transformation on the first image subset and the second image subset corresponding to the first image subset, to obtain a frequency domain value of each pixel point in the first image subset and the second image subset corresponding to the first image subset; according to the frequency domain values, respectively perform frequency domain division on the first image subset and the second image subset corresponding to the first image subset, to obtain M first frequency domain regions corresponding to the first image subset and M second frequency domain regions corresponding to the second image subset; and respectively fuse each first frequency domain region with a corresponding second frequency domain region, wherein the first frequency domain region and the corresponding second frequency domain region correspond to the same frequency domain, and the first image subset corresponds to the same subspace as the corresponding second image subset.
11. An electronic device, comprising: a processor and a memory;
the processor is configured to execute a virtual training program stored in the memory to implement the virtual training method of any of claims 1-8.
12. A computer readable storage medium, characterized in that the computer readable storage medium has stored therein a computer program which, when executed by a processor, implements the steps of the method according to any of claims 1-8.
CN202210072468.6A 2022-01-21 2022-01-21 Virtual training method, device and virtual training system Active CN114220312B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210072468.6A CN114220312B (en) 2022-01-21 2022-01-21 Virtual training method, device and virtual training system

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202210072468.6A CN114220312B (en) 2022-01-21 2022-01-21 Virtual training method, device and virtual training system

Publications (2)

Publication Number Publication Date
CN114220312A CN114220312A (en) 2022-03-22
CN114220312B true CN114220312B (en) 2024-05-07

Family

ID=80708534

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210072468.6A Active CN114220312B (en) 2022-01-21 2022-01-21 Virtual training method, device and virtual training system

Country Status (1)

Country Link
CN (1) CN114220312B (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117456611B (en) * 2023-12-22 2024-03-29 拓世科技集团有限公司 Virtual character training method and system based on artificial intelligence

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103226830A (en) * 2013-04-25 2013-07-31 北京大学 Automatic matching correction method of video texture projection in three-dimensional virtual-real fusion environment
CN107613224A (en) * 2017-09-11 2018-01-19 广东欧珀移动通信有限公司 Image processing method and device, electronic installation and computer-readable recording medium
CN111179679A (en) * 2019-12-31 2020-05-19 广东虚拟现实科技有限公司 Shooting training method and device, terminal equipment and storage medium


Also Published As

Publication number Publication date
CN114220312A (en) 2022-03-22

Similar Documents

Publication Publication Date Title
CN109191588B (en) Motion teaching method, motion teaching device, storage medium and electronic equipment
Craig Understanding perception and action in sport: how can virtual reality technology help?
Sheng et al. GreenSea: visual soccer analysis using broad learning system
Jain et al. Three-dimensional CNN-inspired deep learning architecture for Yoga pose recognition in the real-world environment
Wu et al. Spinpong-virtual reality table tennis skill acquisition using visual, haptic and temporal cues
CN105229666B (en) Motion analysis in 3D images
CN108369473A (en) Influence the method for the virtual objects of augmented reality
US20140078137A1 (en) Augmented reality system indexed in three dimensions
Suda et al. Prediction of volleyball trajectory using skeletal motions of setter player
CN111527520A (en) Extraction program, extraction method, and information processing device
KR102242994B1 (en) Method and device for recommending customized golf clubs using artificial neural networks
Elaoud et al. Skeleton-based comparison of throwing motion for handball players
Cordeiro et al. ARZombie: A mobile augmented reality game with multimodal interaction
CN114220312B (en) Virtual training method, device and virtual training system
Liu et al. A survey on location and motion tracking technologies, methodologies and applications in precision sports
Shen et al. Posture-based and action-based graphs for boxing skill visualization
Petri et al. Improvement of early recognition of attacks in karate kumite due to training in virtual reality
Yang et al. Research on face recognition sports intelligence training platform based on artificial intelligence
CN109407826B (en) Ball game simulation method and device, storage medium and electronic equipment
Sykora et al. Advances in sports informatics research
JP2021531057A (en) Dynamic region determination
Du RETRACTED: Preventive monitoring of basketball players' knee pads based on IoT wearable devices
KR102095647B1 (en) Comparison of operation using smart devices Comparison device and operation Comparison method through dance comparison method
CN115475373B (en) Display method and device of motion data, storage medium and electronic device
Barioni et al. BalletVR: a Virtual Reality System for Ballet Arm Positions Training

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant