CN114721497A - Training method based on mixed reality, electronic equipment and storage medium

Info

Publication number
CN114721497A
CN114721497A
Authority
CN
China
Prior art keywords
training
mixed reality
information
target object
dimensional
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202011530770.9A
Other languages
Chinese (zh)
Inventor
付博
何小波
彭炬
陈正兵
杜先祥
蓝燕生
张龙
袁锋
杨钦
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Hongfucheng Precision Electronics Chengdu Co ltd
Hon Hai Precision Industry Co Ltd
Original Assignee
Hongfucheng Precision Electronics Chengdu Co ltd
Hon Hai Precision Industry Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Hongfucheng Precision Electronics Chengdu Co ltd, Hon Hai Precision Industry Co Ltd filed Critical Hongfucheng Precision Electronics Chengdu Co ltd
Priority to CN202011530770.9A priority Critical patent/CN114721497A/en
Priority to TW109146747A priority patent/TWI794715B/en
Publication of CN114721497A publication Critical patent/CN114721497A/en
Pending legal-status Critical Current

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00 Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01 Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/011 Arrangements for interaction with the human body, e.g. for user immersion in virtual reality
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F21/00 Security arrangements for protecting computers, components thereof, programs or data against unauthorised activity
    • G06F21/30 Authentication, i.e. establishing the identity or authorisation of security principals
    • G06F21/31 User authentication
    • G06F21/32 User authentication using biometric data, e.g. fingerprints, iris scans or voiceprints
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F2203/00 Indexing scheme relating to G06F3/00 - G06F3/048
    • G06F2203/01 Indexing scheme relating to G06F3/01
    • G06F2203/012 Walk-in-place systems for allowing a user to walk in a virtual environment while constraining him to a given position in the physical environment

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • General Engineering & Computer Science (AREA)
  • Computer Security & Cryptography (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Computer Hardware Design (AREA)
  • Software Systems (AREA)
  • Human Computer Interaction (AREA)
  • Processing Or Creating Images (AREA)

Abstract

The application provides a training method based on mixed reality, an electronic device, and a storage medium. The method includes: receiving face information of a target object sent by a user terminal; determining from the face information whether the target object is legal, and if so, sending login control information to a mixed reality device; after receiving successful login information fed back by the mixed reality device, starting a mixed reality training process, which includes setting the position of a three-dimensional virtual material within the spatial information of the real training scene and sending the three-dimensional virtual material and the set position information to the mixed reality device; sending training step prompt information to the mixed reality device; receiving, in real time, limb three-dimensional information of the target object sent by the mixed reality device; and determining a training result according to the limb three-dimensional information. The method can improve training efficiency and reduce training cost.

Description

Training method based on mixed reality, electronic equipment and storage medium
Technical Field
The application relates to the technical field of mixed reality, in particular to a training method based on mixed reality, electronic equipment and a storage medium.
Background
Currently, most personnel training is on-site training delivered by professional technicians. Different fields require different specialists, so cross-field and cross-region training cannot be realized, generality is low, and a large amount of manpower and material resources is consumed, making training expensive. Moreover, hands-on training in many trades carries real danger: in electrical maintenance training, for example, trainees unfamiliar with the operations can easily be injured by mishandling electrically controlled equipment. The traditional training mode is therefore time-consuming and labor-intensive, and the safety of personnel cannot be guaranteed.
Disclosure of Invention
In view of the above, there is a need for a training method based on mixed reality, an electronic device, and a storage medium that can improve training efficiency, reduce training cost, and ensure the safety of the training process.
A first aspect of the application provides a mixed reality-based training method, comprising:
receiving face information of a target object sent by the user terminal;
judging whether the target object is legal or not according to the face information;
if the target object is legal, sending login control information to mixed reality equipment;
after receiving the successful login information fed back by the mixed reality equipment, starting a mixed reality training process, comprising the following steps:
acquiring a three-dimensional virtual material corresponding to a material to be used in a training process;
after receiving the spatial information of the real training scene sent by the mixed reality equipment, setting the position of the three-dimensional virtual material in the spatial information, and sending the three-dimensional virtual material and the set position information to the mixed reality equipment;
sending prompt information of a plurality of training steps to the mixed reality equipment;
receiving limb three-dimensional information of the target object, which is sent by the mixed reality equipment in real time, wherein the limb three-dimensional information is information obtained when the target object executes training according to prompt information of each training step;
and determining a training result of the target object according to the limb three-dimensional information of the target object.
In one possible implementation, the method further includes:
and updating the set position information according to the received limb three-dimensional information, and sending the updated position information to the mixed reality equipment.
In a possible implementation manner, the determining whether the target object is legal according to the face information includes:
extracting face feature data of the target object based on the face information;
if the extracted face feature data is matched with face feature data stored in advance, outputting legal prompt information of the target object;
and if the extracted face feature data are not matched with the face feature data stored in advance, outputting illegal prompt information of the target object.
In one possible implementation manner, the determining the training result of the target object according to the limb three-dimensional information of the target object includes:
acquiring a training result of the target object in each training step to obtain a plurality of training results;
determining a training result for the target object based on the plurality of training results.
In one possible implementation, the acquiring a training result of the target object at each training step to obtain a plurality of training results includes:
extracting a plurality of first key frame images of limb three-dimensional information of the target object in each training step and a plurality of second key frame images of standard operation limb three-dimensional information which is stored in advance;
detecting a plurality of first limb key points in each first key frame image and a plurality of second limb key points in each second key frame image;
acquiring a first three-dimensional coordinate point of each first limb key point in the first key frame image;
acquiring a second three-dimensional coordinate point of each second limb key point in the second key frame image;
calculating the difference degree according to the first three-dimensional coordinate point and the second three-dimensional coordinate point;
and respectively determining the training result of the target object in each training step according to the plurality of difference degrees.
In one possible implementation manner, the calculating the degree of difference according to the first three-dimensional coordinate point and the second three-dimensional coordinate point includes:
associating the first three-dimensional coordinate point and the second three-dimensional coordinate point according to the sequence of a time axis;
and calculating the Euclidean distance between the first three-dimensional coordinate point and the second three-dimensional coordinate point after correlation to obtain the difference degree.
In one possible implementation manner, the determining the training result of the target object at each training step according to the plurality of differences respectively includes:
respectively calculating the variance of the difference degree in each training step;
and respectively determining the score of the target object in each training step according to the variance and a preset threshold interval, wherein each threshold interval corresponds to one score.
In one possible implementation, the training result of the target object is determined according to each training result and a corresponding weight, wherein the weight represents the criticality of each training step.
A second aspect of the application provides an electronic device comprising a processor and a memory, the processor being configured to implement the mixed reality based training method when executing a computer program stored in the memory.
A third aspect of the application provides a computer-readable storage medium having stored thereon a computer program which, when executed by a processor, implements the mixed reality based training method.
The training method based on the mixed reality, the electronic device and the storage medium can improve training efficiency and reduce training cost.
Drawings
Fig. 1 is an application environment diagram of a mixed reality-based training method disclosed in the present application.
Fig. 2 is a schematic structural diagram of a user terminal according to a preferred embodiment of the present application, which implements the mixed reality-based training method.
Fig. 3 is a schematic structural diagram of a mixed reality device according to a preferred embodiment of the present application, which implements a mixed reality-based training method.
Fig. 4 is a schematic structural diagram of an electronic device according to a preferred embodiment of the present application, which implements a mixed reality-based training method.
FIG. 5 is a flow chart of a preferred embodiment of a mixed reality based training method disclosed herein.
Fig. 6 is a flowchart illustrating a mixed reality training process that is initiated after receiving successful login information fed back by the mixed reality device.
Description of the main elements
User terminal 1
Mixed reality device 2
Electronic device 3
Memory 31
Processor 32
Computer program 33
Communication bus 34
The following detailed description will further illustrate the invention in conjunction with the above-described figures.
Detailed Description
The technical solutions in the embodiments will be described clearly and completely below with reference to the accompanying drawings. The described embodiments are only some, not all, of the embodiments of the present application. All other embodiments obtained by a person of ordinary skill in the art based on the embodiments herein without creative effort shall fall within the protection scope of the present application.
Unless defined otherwise, all technical and scientific terms used herein have the same meaning as commonly understood by one of ordinary skill in the art to which this application belongs. The terminology used herein in the description of the present application is for the purpose of describing particular embodiments only and is not intended to be limiting of the application.
Referring to fig. 1, fig. 1 is an application environment diagram of a training method based on mixed reality according to an embodiment of the present application. The training method based on mixed reality is applied to an electronic device 3. The electronic device 3 is a device capable of automatically performing numerical calculation and/or information processing according to preset or stored instructions; its hardware includes, but is not limited to, a microprocessor, an application-specific integrated circuit (ASIC), a field-programmable gate array (FPGA), a digital signal processor (DSP), an embedded device, and the like. The electronic device may also include a network device and/or a user device. The network device includes, but is not limited to, a single network device, a server group consisting of a plurality of network devices, or a cloud-computing-based cloud consisting of a large number of hosts or network devices, where cloud computing is a kind of distributed computing: a super virtual computer composed of a group of loosely coupled computers. The user device includes, but is not limited to, any electronic product that can interact with a user through a keyboard, a mouse, a remote controller, a touch pad, or a voice control device, for example, a personal computer, a tablet computer, a smart phone, a personal digital assistant (PDA), and the like.
The electronic device 3 is in communication connection with the user terminal 1 and the mixed reality device 2.
As shown in fig. 2, the user terminal 1 includes a camera 10, a communication unit 11, a memory 12, and a processor 13. The camera 10, the communication unit 11, the memory 12 and the processor 13 are electrically connected. In the present embodiment, the camera 10 is used to capture face information of a target object. The communication unit 11 is configured to provide a network communication function for the user terminal 1. For example, the user terminal 1 is communicatively connected to the electronic device 3 through the communication unit 11. The user terminal 1 includes, but is not limited to, any electronic product that can interact with a user through a keyboard, a mouse, a remote controller, a touch pad, or a voice control device, for example, a personal computer, a tablet computer, a smart phone, a Personal Digital Assistant (PDA), and the like.
As shown in fig. 3, the mixed reality device 2 includes, but is not limited to, a camera 20, a communication unit 21, a memory 22, a processor 23, a sensor 24, and a display 25, all electrically connected. The mixed reality device 2 is an integrated head-mounted device that can operate independently and is responsible for the input and output functions of human-computer interaction, such as the HoloLens device from Microsoft Corporation.
The camera 20 is used for shooting a training evaluation video; the communication unit 21 is configured to provide a network communication function for the mixed reality device 2. For example, the mixed reality device 2 is communicatively connected to the electronic device 3 through the communication unit 21. The display 25 is used for displaying a training teaching video, and a user can conveniently train based on the training teaching video. It should be noted that the mixed reality device 2 may further include a headset and a wireless connection adapter, which are not shown in the figure.
As shown in fig. 4, the electronic device 3 comprises a memory 31, at least one processor 32, a computer program 33 stored in the memory 31 and executable on the at least one processor 32, at least one communication bus 34 and a communication unit 35.
It will be understood by those skilled in the art that the schematic diagram shown in fig. 4 is only an example of the electronic device 3 and does not constitute a limitation on it; the electronic device 3 may include more or fewer components than shown, combine some components, or use different components. For example, the electronic device 3 may further include an input/output device, a network access device, and the like.
The at least one processor 32 may be a central processing unit (CPU), another general-purpose processor, a digital signal processor (DSP), an application-specific integrated circuit (ASIC), a field-programmable gate array (FPGA) or other programmable logic device, a transistor logic device, a discrete hardware component, or the like. The processor 32 may also be a microprocessor or any conventional processor. The processor 32 is the control center of the electronic device 3 and connects the various parts of the whole electronic device 3 through various interfaces and lines.
The memory 31 may be used to store the computer program 33 and/or modules/units; the processor 32 implements the various functions of the electronic device 3 by running or executing the computer programs and/or modules/units stored in the memory 31 and calling the data stored there. The memory 31 may mainly include a program storage area and a data storage area: the program storage area may store an operating system and the application programs required by at least one function (such as a sound playing function or an image playing function); the data storage area may store data created through the use of the electronic device 3. In addition, the memory 31 may include non-volatile memory, such as a hard disk, a plug-in hard disk, a Smart Media Card (SMC), a Secure Digital (SD) card, a Flash Card, at least one magnetic disk storage device, a flash memory device, and the like.
The communication unit 35 is configured to provide a network communication function for the electronic device 3. For example, the electronic device 3 is communicatively connected to the mixed reality device 2 through the communication unit 35.
The memory 31 in the electronic device 3 stores instructions to implement a mixed reality based training method as shown in fig. 5 below.
Referring to fig. 5, fig. 5 is a flowchart illustrating a training method based on mixed reality according to an embodiment of the present disclosure. The order of the steps in the flow chart may be changed and some steps may be omitted according to different needs. The execution subject of the mixed reality-based training method may be the electronic device.
And S11, receiving the face information of the target object sent by the user terminal.
Before training, the identity of the target object needs to be authenticated to prevent illegitimate participation, for example a person outside the training roster joining privately, and to confirm whether the target object is a legal user; only legal users may be trained, which avoids information leakage. In this embodiment, face authentication is used to determine whether the target object is a legal user. Specifically, the user terminal collects the face information of the target object and sends the face information to the electronic device.
As an optional implementation manner, before the step S11, the method further includes:
creating a training person database, wherein the database is used to store information of training persons, such as face feature information.
And S12, judging whether the target object is legal or not according to the face information.
In an embodiment of the application, after receiving the face information of the target object sent by the user terminal, the electronic device determines whether the target object is legal according to the face information. Specifically:
extracting face feature data of the target object based on the face information;
comparing the extracted face feature data with the face feature data in the database;
if the extracted face feature data is matched with the face feature data in the database, outputting the legal prompt information of the target object;
and if the extracted face feature data are not matched with the face feature data in the database, outputting illegal prompt information of the target object.
In this embodiment, a match between the extracted face feature data and the face feature data in the database means that the Euclidean distance between them is smaller than a preset value.
It should be noted that face feature extraction is prior art and is not described in detail in this application.
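With feature extraction treated as prior art, the matching rule itself reduces to a threshold test. Below is a minimal sketch, assuming pre-extracted feature vectors and an assumed preset distance value (the application fixes neither the extractor nor the threshold):

```python
import numpy as np

# A minimal sketch of the matching rule above. The feature extractor, the
# feature dimensionality, and the preset distance value are NOT specified
# by this application; all are assumptions for illustration.
PRESET_DISTANCE = 0.6  # assumed preset value

def is_legal(extracted: np.ndarray, enrolled: list) -> bool:
    """The target object is legal if the Euclidean distance between its
    extracted face feature vector and any feature vector stored in the
    training person database is smaller than the preset value."""
    return any(np.linalg.norm(extracted - f) < PRESET_DISTANCE for f in enrolled)
```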
And S13, if the target object is legal, sending login control information to the mixed reality equipment.
In one embodiment of the application, a legal target object needs to log in to the mixed reality device it wears before starting to participate in training.
In this embodiment, the electronic device sends login control information to the mixed reality device, where the login control information includes personal information of the target object, for example a name, a job number, and the like. According to the login control information, the electronic device controls the mixed reality device to enter the personal information of the target object so as to perform the login. After receiving the login control information, the mixed reality device sends successful login information to the electronic device.
In addition, in other embodiments, besides authenticating the trainee's identity at the beginning to identify whether the target object is legal, the mixed reality-based training method may collect face information of the target object at preset time intervals during the training process to determine whether the trainee has been replaced during training.
And S14, after successful login information fed back by the mixed reality equipment is received, starting a mixed reality training process.
In one embodiment of the present application, the target object is trained through the mixed reality device, which reduces training cost while letting the target object learn the operation process as if personally on the scene.
With reference to fig. 6, starting the mixed reality training process after receiving the successful login information fed back by the mixed reality device includes:
and S41, obtaining a three-dimensional virtual material corresponding to the material to be used in the training process.
In this embodiment, a material is an item needed in the training of the target object.
The three-dimensional virtual material is a three-dimensional model corresponding to the material and may be constructed with any three-dimensional modeling software, such as BIM or SolidWorks, which is not limited in this application.
S42, receiving the spatial information of the real training scene sent by the mixed reality equipment, setting the position information of the three-dimensional virtual material in the spatial information, and sending the three-dimensional virtual material and the set position information to the mixed reality equipment.
In one embodiment of the present application, in order to accurately place the three-dimensional virtual material in a real training scene, spatial information extraction needs to be performed on the real training scene.
In this embodiment, the electronic device scans the real training scene through the sensors of the mixed reality device and acquires the spatial information of the real scene based on simultaneous localization and mapping (SLAM). The position of the three-dimensional virtual material within the spatial information is then set according to the specific training content and can follow the actual requirements of the user. The electronic device sends the three-dimensional virtual material obtained in step S41 and the set position information to the mixed reality device, which presents the virtual material to the target object through the display. A data-shape sketch is given below.
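The embodiment leaves the data format of the spatial information and of the set position open. The sketch below assumes a simple mapping from named spatial anchors to poses; every name in it is hypothetical:

```python
from dataclasses import dataclass

@dataclass
class Pose:
    """Assumed position (and yaw) of a virtual material in scene coordinates."""
    x: float
    y: float
    z: float
    yaw: float = 0.0

@dataclass
class PlacedMaterial:
    model_uri: str  # reference to the three-dimensional virtual material
    pose: Pose      # the position set within the spatial information

def place_materials(anchors: dict, plan: dict) -> list:
    """Attach each virtual material to the spatial anchor chosen for the
    specific training content; per the embodiment, the choice may follow
    the actual requirements of the user."""
    return [PlacedMaterial(model_uri=uri, pose=anchors[name])
            for name, uri in plan.items()]

# Hypothetical usage: one anchor from the SLAM scan, one virtual material.
placed = place_materials({"workbench": Pose(0.5, 0.9, 1.2)},
                         {"workbench": "models/breaker_cabinet.glb"})
```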
And S43, sending prompting information of a plurality of training steps to the mixed reality equipment.
In this embodiment, the electronic device splits an operation flow into a plurality of training steps according to training contents, and generates training step prompt information, where the training step prompt information includes at least one of voice, text, numbers, graphics, and direction indications.
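The prompt types enumerated above could be carried in a structure such as the following; the field names and example steps are assumptions for illustration and are not part of this application:

```python
from dataclasses import dataclass
from enum import Enum

class PromptKind(Enum):
    VOICE = "voice"
    TEXT = "text"
    NUMBER = "number"
    GRAPHIC = "graphic"
    DIRECTION = "direction"  # direction indication

@dataclass
class TrainingStepPrompt:
    step_index: int
    kind: PromptKind
    payload: str  # text to display, or a URI for audio/graphic content

# Hypothetical electrical-maintenance flow split into training steps:
steps = [
    TrainingStepPrompt(1, PromptKind.TEXT, "Switch off power at the main breaker"),
    TrainingStepPrompt(2, PromptKind.DIRECTION, "arrow://breaker_panel"),
    TrainingStepPrompt(3, PromptKind.VOICE, "audio://verify_zero_voltage.wav"),
]
```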
The target object operates according to the training step prompt information. Specifically, the target object interacts with the three-dimensional virtual material using physical equipment, thereby completing standardized training of workstation operation steps and processes. The mixed reality device captures the three-dimensional limb information of the target object at each training step in real time through its camera and sends the three-dimensional limb information to the electronic device.
S44, receiving limb three-dimensional information of the target object, which is sent by the mixed reality equipment in real time, wherein the limb three-dimensional information is information obtained when the target object performs training according to prompt information of each training step.
In an embodiment of the application, the electronic device performs subsequent operations according to the received limb three-dimensional information of the target object.
As an optional implementation manner, after the step S44, the method further includes:
and updating the set position information according to the received limb three-dimensional information, and sending the updated position information to the mixed reality equipment.
Specifically, the electronic device updates the position information of the three-dimensional virtual material in the space information in real time according to the received three-dimensional information of the limb, and sends the updated position information to the mixed reality device. And the mixed reality equipment transmits the updated three-dimensional virtual material to the target object through the display.
By combining the mixed reality device and the electronic device, a mixed reality training scene built from the real scene and virtual materials can be established, with the virtual materials updated in real time in response to the target object's operations; the target object thus obtains a mixed reality training experience comparable to, and closest in effect to, on-site training.
S45, determining a training result of the target object according to the limb three-dimensional information of the target object.
In this embodiment, the limb three-dimensional information may be selected according to the user's requirements; for example, the three-dimensional hand information of the target object may be selected as the basis for evaluating training.
In this embodiment, the determining a training result of the target object according to the limb three-dimensional information of the target object includes:
acquiring a training result of the target object in each training step to obtain a plurality of training results;
determining a training result for the target object based on the plurality of training results.
Specifically:
(1) Extract a plurality of first key frame images of the limb three-dimensional information of the target object at each training step and a plurality of second key frame images of the pre-stored standard training operation limb three-dimensional information.
(2) Detect a plurality of first limb key points in each first key frame image and a plurality of second limb key points in each second key frame image. Specifically, each first key frame image and each second key frame image may be input to a pre-trained limb key point detection model, which detects the first limb key points in the first key frame images and the second limb key points in the second key frame images. The training process of such a model is prior art and is not elaborated here; a hedged sketch using a public pose model follows this list.
(3) Calculate the difference degree between each first limb key point and the corresponding second limb key point to obtain a plurality of difference degrees. Specifically, obtain a first three-dimensional coordinate point of each first limb key point in the first key frame image and a second three-dimensional coordinate point of each second limb key point in the second key frame image, associate the first and second three-dimensional coordinate points in time-axis order, and compute the Euclidean distance between each associated pair to obtain the difference degree (see the second sketch after this list).
(4) Determine the training result of the target object at each training step from the plurality of difference degrees. Specifically, compute the variance of the difference degrees within each training step and determine the score of the target object from the preset threshold interval in which the variance falls, each threshold interval corresponding to one score. Illustratively, if the variance of a step lies in [0, 0.1), the step's result is output as "training score 90 points, training qualified"; if in [0.1, 0.2), "training score 80 points, training qualified"; if in [0.2, 0.3), "training score 70 points, training qualified"; and if in [0.3, +∞), "training unqualified".
(5) Determine the training result of the target object from each training result and its corresponding weight, where the weight represents the criticality of each training step and the weights of all training steps sum to 1. The weight of each training step may be set according to the actual requirements of the user; for example, if a step's weight is 0.3 and its training score is 90 points, its contribution to the final result is 90 × 0.3 = 27 points. The scores of all training steps are added to obtain the training result of the target object (the third sketch after this list puts this scoring into code).
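The application treats the limb key point detection model in step (2) as prior art. As one publicly available stand-in (an assumption of this sketch, not the model used by the application), MediaPipe Pose returns three-dimensional landmarks per key frame image:

```python
import cv2
import mediapipe as mp

def detect_limb_keypoints(frame_bgr):
    """Return (x, y, z) limb key points for one key frame image.
    MediaPipe Pose is substituted here purely for illustration."""
    with mp.solutions.pose.Pose(static_image_mode=True) as pose:
        results = pose.process(cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2RGB))
        if results.pose_landmarks is None:
            return []
        return [(lm.x, lm.y, lm.z) for lm in results.pose_landmarks.landmark]
```

Once the first and second coordinate points are associated along the time axis as in step (3), the difference degrees reduce to pair-wise Euclidean distances. A minimal sketch, assuming both sequences have shape (frames, keypoints, 3):

```python
import numpy as np

def difference_degrees(first: np.ndarray, second: np.ndarray) -> np.ndarray:
    """Euclidean distance between each associated pair of first (trainee)
    and second (standard) three-dimensional coordinate points."""
    return np.linalg.norm(first - second, axis=-1).ravel()
```

Finally, the scoring of steps (4) and (5), with the threshold intervals and the weighted sum taken from the worked example above:

```python
from typing import Optional
import numpy as np

def step_score(diffs: np.ndarray) -> Optional[int]:
    """Map the variance of one step's difference degrees to a score;
    None marks an unqualified step."""
    v = float(np.var(diffs))
    if v < 0.1:
        return 90
    if v < 0.2:
        return 80
    if v < 0.3:
        return 70
    return None  # variance in [0.3, +inf): training unqualified

def overall_result(scores: list, weights: list) -> float:
    """Weighted sum of per-step scores; the weights encode step criticality
    and sum to 1, e.g. weight 0.3 and score 90 contribute 90 * 0.3 = 27."""
    assert abs(sum(weights) - 1.0) < 1e-9
    return sum(s * w for s, w in zip(scores, weights))
```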
As an optional implementation manner, before the step S45, the method further includes:
performing image processing on the first key frame image and the second key frame image, wherein the image processing comprises: image alignment processing and dynamic time alignment processing.
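The term "dynamic time alignment" is not expanded further; assuming it refers to dynamic time warping (an assumption, not a statement of the application's method), a minimal alignment cost between two key-frame sequences of different lengths can be computed as follows:

```python
import numpy as np

def dtw_cost(a: np.ndarray, b: np.ndarray) -> float:
    """Classic dynamic time warping over two sequences of shape
    (frames, features); aligns sequences of different lengths before
    the frame-wise comparison used for the difference degrees."""
    n, m = len(a), len(b)
    cost = np.full((n + 1, m + 1), np.inf)
    cost[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d = float(np.linalg.norm(a[i - 1] - b[j - 1]))
            cost[i, j] = d + min(cost[i - 1, j],
                                 cost[i, j - 1],
                                 cost[i - 1, j - 1])
    return float(cost[n, m])
```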
In an embodiment, the method may further include the step of transmitting the training results to the user terminal. Therefore, the target object can conveniently check the training result through the user terminal.
In the method flows described in fig. 5 and fig. 6, the face information of the target object sent by the user terminal may be received; judging whether the target object is legal or not according to the face information; if the target object is legal, sending login control information to mixed reality equipment; after receiving the successful login information fed back by the mixed reality equipment, starting a mixed reality training process, wherein the mixed reality training process comprises the steps of obtaining a three-dimensional virtual material corresponding to a material to be used in the training process; receiving spatial information of a real training scene sent by the mixed reality equipment, setting the position of the three-dimensional virtual material in the spatial information, and sending the three-dimensional virtual material and the set position information to the mixed reality equipment; sending prompt information of a plurality of training steps to the mixed reality equipment; receiving limb three-dimensional information of the target object, which is sent by the mixed reality equipment in real time, wherein the limb three-dimensional information is information obtained when the target object executes training according to prompt information of each training step; and determining a training result of the target object according to the limb three-dimensional information of the target object. The training efficiency can be improved, the training cost is reduced, and meanwhile, the safety of the training process is guaranteed.
It should be noted that the integrated modules/units of the electronic device 3, if implemented in the form of software functional units and sold or used as independent products, may be stored in a computer-readable storage medium. Based on this understanding, all or part of the flow of the methods of the embodiments described above can be realized by a computer program, which can be stored in a computer-readable storage medium and, when executed by a processor, realizes the steps of the method embodiments described above. The computer program code may be in source code form, object code form, an executable file, some intermediate form, and the like. The computer-readable medium may include: any entity or device capable of carrying the computer program code, a recording medium, a USB flash disk, a removable hard disk, a magnetic disk, an optical disk, a computer memory, a read-only memory (ROM).
In the several embodiments provided in the present application, it should be understood that the disclosed system, apparatus and method may be implemented in other manners. For example, the above-described apparatus embodiments are merely illustrative, and for example, the division of the modules is only one logical functional division, and other divisions may be realized in practice.
The modules described as separate parts may or may not be physically separate, and parts displayed as modules may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the modules may be selected according to actual needs to achieve the purpose of the solution of the present embodiment.
In addition, functional modules in the embodiments of the present application may be integrated into one processing unit, or each unit may exist alone physically, or two or more units are integrated into one unit. The integrated unit can be realized in a form of hardware, or in a form of hardware plus a software functional module.
It will be evident to those skilled in the art that the present application is not limited to the details of the foregoing illustrative embodiments, and that the present application may be embodied in other specific forms without departing from its spirit or essential attributes. The present embodiments are therefore to be considered in all respects as illustrative and not restrictive, the scope of the application being indicated by the appended claims rather than by the foregoing description; all changes which come within the meaning and range of equivalency of the claims are therefore intended to be embraced therein. Any reference signs in the claims shall not be construed as limiting the claim concerned. Furthermore, the word "comprising" does not exclude other elements or steps, and the singular does not exclude the plural. A plurality of units or means recited in the system claims may also be implemented by one unit or means in software or hardware. Terms such as first and second are used to denote names and do not indicate any particular order.
Finally, it should be noted that the above embodiments are only used for illustrating the technical solutions of the present application and not for limiting, and although the present application is described in detail with reference to the preferred embodiments, it should be understood by those skilled in the art that modifications or equivalent substitutions can be made on the technical solutions of the present application without departing from the spirit and scope of the technical solutions of the present application.

Claims (10)

1. A training method based on mixed reality is applied to electronic equipment, and the electronic equipment is in communication connection with a user terminal and the mixed reality equipment, and is characterized in that the training method based on mixed reality comprises the following steps:
receiving face information of a target object sent by the user terminal;
judging whether the target object is legal or not according to the face information;
if the target object is legal, sending login control information to mixed reality equipment;
after receiving the successful login information fed back by the mixed reality equipment, starting a mixed reality training process, comprising the following steps:
acquiring a three-dimensional virtual material corresponding to a material to be used in a training process;
receiving spatial information of a real training scene sent by the mixed reality equipment, setting the position of the three-dimensional virtual material in the spatial information, and sending the three-dimensional virtual material and the set position information to the mixed reality equipment;
sending prompt information of a plurality of training steps to the mixed reality equipment;
receiving limb three-dimensional information of the target object, which is sent by the mixed reality equipment in real time, wherein the limb three-dimensional information is information obtained when the target object executes training according to prompt information of each training step;
and determining a training result of the target object according to the limb three-dimensional information of the target object.
2. The mixed reality-based training method of claim 1, further comprising:
and updating the set position information according to the received limb three-dimensional information, and sending the updated position information to the mixed reality equipment.
3. The mixed reality-based training method of claim 1, wherein the determining whether the target object is legal according to the face information comprises:
extracting face feature data of the target object based on the face information;
if the extracted face feature data is matched with face feature data stored in advance, outputting legal prompt information of the target object;
and if the extracted face feature data are not matched with the face feature data stored in advance, outputting illegal prompt information of the target object.
4. The mixed reality-based training method of claim 1, wherein the determining the training result of the target object according to the three-dimensional information of the limbs of the target object comprises:
acquiring a training result of the target object in each training step to obtain a plurality of training results;
determining a training result for the target object based on the plurality of training results.
5. The mixed reality-based training method of claim 4, wherein the obtaining training results of the target object at each training step comprises:
extracting a plurality of first key frame images of limb three-dimensional information of the target object in each training step and a plurality of second key frame images of standard operation limb three-dimensional information which is stored in advance;
detecting a plurality of first limb key points in each first key frame image and a plurality of second limb key points in each second key frame image;
acquiring a first three-dimensional coordinate point of each first limb key point in the first key frame image;
acquiring a second three-dimensional coordinate point of each second limb key point in the second key frame image;
calculating the difference degree according to the first three-dimensional coordinate point and the second three-dimensional coordinate point;
and respectively determining the training result of the target object in each training step according to the plurality of difference degrees.
6. The mixed reality-based training method of claim 5, wherein the calculating the degree of dissimilarity from the first three-dimensional coordinate point and the second three-dimensional coordinate point comprises:
associating the first three-dimensional coordinate point and the second three-dimensional coordinate point according to the sequence of a time axis;
and calculating the Euclidean distance between the first three-dimensional coordinate point and the second three-dimensional coordinate point after correlation to obtain the difference degree.
7. The mixed reality-based training method of claim 5, wherein the determining the training result of the target object at each training step according to the plurality of difference degrees respectively comprises:
respectively calculating the variance of the difference degree in each training step;
and respectively determining the score of the target object in each training step according to the variance and a preset threshold interval, wherein each threshold interval corresponds to one score.
8. The mixed reality-based training method of claim 4, wherein the training result of the target object is determined according to each training result and a corresponding weight, wherein the weight represents the criticality of each training step.
9. An electronic device, comprising a processor and a memory, wherein the processor is configured to execute a computer program stored in the memory to implement the mixed reality based training method of any of claims 1-8.
10. A computer-readable storage medium storing at least one instruction which, when executed by a processor, implements a mixed reality based training method as recited in any one of claims 1-8.
CN202011530770.9A 2020-12-22 2020-12-22 Training method based on mixed reality, electronic equipment and storage medium Pending CN114721497A (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
CN202011530770.9A CN114721497A (en) 2020-12-22 2020-12-22 Training method based on mixed reality, electronic equipment and storage medium
TW109146747A TWI794715B (en) 2020-12-22 2020-12-29 Training method based on mixed reality, electronic equipment and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202011530770.9A CN114721497A (en) 2020-12-22 2020-12-22 Training method based on mixed reality, electronic equipment and storage medium

Publications (1)

Publication Number Publication Date
CN114721497A (en) 2022-07-08

Family

ID=82229917

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202011530770.9A Pending CN114721497A (en) 2020-12-22 2020-12-22 Training method based on mixed reality, electronic equipment and storage medium

Country Status (2)

Country Link
CN (1) CN114721497A (en)
TW (1) TWI794715B (en)

Family Cites Families (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CA3031771A1 (en) * 2016-07-25 2018-02-01 Magic Leap, Inc. Imaging modification, display and visualization using augmented and virtual reality eyewear
US20190130792A1 (en) * 2017-08-30 2019-05-02 Truinject Corp. Systems, platforms, and methods of injection training
CN111626234A (en) * 2020-05-29 2020-09-04 江苏中车数字科技有限公司 Implementation method of intelligent manufacturing system platform

Also Published As

Publication number Publication date
TW202228009A (en) 2022-07-16
TWI794715B (en) 2023-03-01


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination