CN115268626A - Industrial simulation system - Google Patents

Industrial simulation system

Info

Publication number
CN115268626A
CN115268626A
Authority
CN
China
Prior art keywords
dimensional
target object
host
hand
scene
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202210611796.9A
Other languages
Chinese (zh)
Inventor
严小天
郭秋华
张晴晴
刘浩然
刘鲁峰
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Goertek Inc
Original Assignee
Goertek Inc
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Goertek Inc
Priority to CN202210611796.9A
Publication of CN115268626A
Legal status: Pending

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 3/00 Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F 3/01 Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F 3/017 Gesture based interaction, e.g. based on a set of recognized hand gestures
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 17/00 Three dimensional [3D] modelling, e.g. data description of 3D objects
    • G PHYSICS
    • G09 EDUCATION; CRYPTOGRAPHY; DISPLAY; ADVERTISING; SEALS
    • G09B EDUCATIONAL OR DEMONSTRATION APPLIANCES; APPLIANCES FOR TEACHING, OR COMMUNICATING WITH, THE BLIND, DEAF OR MUTE; MODELS; PLANETARIA; GLOBES; MAPS; DIAGRAMS
    • G09B 9/00 Simulators for teaching or training purposes

Abstract

The invention provides an industrial simulation system comprising a host and a head-mounted display device that are communicatively connected, the head-mounted display device comprising a somatosensory controller. The host is configured to build, in response to a request for building a three-dimensional scene of a target object, the three-dimensional scene of the target object, the three-dimensional scene including at least a three-dimensional model of the target object. The somatosensory controller is configured to determine the three-dimensional spatial position information of the wearer's hand when the wearer of the head-mounted display device performs a target operation on the target object, and to send the three-dimensional spatial position information of the hand to the host. The host controls the three-dimensional model to simulate the target operation in the three-dimensional scene according to the three-dimensional spatial position information of the hand.

Description

Industrial simulation system
Technical Field
Embodiments of the present disclosure relate to the technical field of intelligent electronic devices, and in particular to an industrial simulation system.
Background
Industrial simulation is now widely applied by many enterprises across various industrial processes, and plays an important role in improving development efficiency, strengthening data acquisition, analysis, and processing capabilities, reducing decision errors, and lowering enterprise risk.
Currently, industrial simulation is mainly based on Virtual Reality (VR) technology, in which a computer generates a simulated environment that immerses the user. Virtual scenes built with this technology can be used for industrial simulation and, together with a dedicated display device, create for the user a feeling of being on site. However, because the real world is reproduced entirely by computer technology, the device isolates the user's senses from the real world: the user cannot perceive the surrounding real environment, which degrades the user experience.
Disclosure of Invention
It is an object of the disclosed embodiments to provide a new technical solution for an industrial simulation system.
According to a first aspect of embodiments of the present disclosure, there is provided an industrial simulation system comprising a host and a head-mounted display device that are communicatively connected, the head-mounted display device comprising a somatosensory controller, wherein:
the host is configured to build, in response to a request for building a three-dimensional scene of a target object, the three-dimensional scene of the target object, the three-dimensional scene comprising at least a three-dimensional model of the target object;
the somatosensory controller is configured to determine the three-dimensional spatial position information of the hand of a wearer of the head-mounted display device when the wearer performs a target operation on the target object, and to send the three-dimensional spatial position information of the hand to the host; and
the host is configured to control the three-dimensional model to simulate the target operation in the three-dimensional scene according to the three-dimensional spatial position information of the hand.
Optionally, the somatosensory controller being configured to determine the three-dimensional spatial position information of the hand of the wearer when the wearer of the head-mounted display device performs a target operation on the target object specifically includes:
tracking hand motion information of the wearer when the wearer performs a target operation on the target object;
locating three-dimensional spatial position information of a plurality of key points of the hand according to the hand motion information; and
determining the three-dimensional spatial position information of the hand of the wearer according to the three-dimensional spatial position information of the key points.
Optionally, the host being configured to build, in response to a request for building a three-dimensional scene of a target object, the three-dimensional scene of the target object specifically includes:
acquiring, in response to the request, an initial three-dimensional model of the target object and an initial scene model corresponding to the initial three-dimensional model;
performing model development on the initial three-dimensional model to obtain the three-dimensional model of the target object; and
performing logic development on the three-dimensional model and the initial scene model to obtain the three-dimensional scene.
Optionally, performing model development on the initial three-dimensional model includes setting the material of the initial three-dimensional model and/or setting a UI control of the initial three-dimensional model.
Optionally, a first collision volume is bound to the hand of the wearer, and a second collision volume is arranged on the target object;
the somatosensory controller is configured to determine the three-dimensional spatial position information of the hand of the wearer when the wearer of the head-mounted display device performs a target operation on the target object, and to send the three-dimensional spatial position information of the hand to the host; and
the host is configured to synchronize the three-dimensional spatial position information of the hand to the first collision volume, and to execute logic code to present an operation result when a target operation performed by the wearer on the target object causes the first collision volume to collide with the second collision volume.
Optionally, the head-mounted display device includes a display module;
the host is further configured to render a virtual hand according to the three-dimensional spatial position information of the hand and to send the rendered virtual hand to the display module; and
the display module is configured to display the rendered virtual hand.
Optionally, first identification information is provided on the target object, and the head-mounted display device further includes a camera module;
the camera module is configured to acquire a scene image of the real scene where the wearer is located and to send the scene image to the host; and
the host is configured to recognize the first identification information in the scene image, acquire three-dimensional spatial position information of the target object, and
determine the current state of the target object according to the three-dimensional spatial position information of the target object.
Optionally, the first identification information includes two-color identification information.
Optionally, the host provides at least one of a training mode, an exercise mode, and an assessment mode for the target object;
wherein, in the training mode, the host provides a voice explanation of the operations performed on the target object;
in the exercise mode, the host provides the operation sequence of the operations performed on the target object; and
in the assessment mode, the host provides a score for the operations performed on the target object.
Optionally, second identification information is provided on the target object, and the head-mounted display device further includes a camera module and a display module;
the camera module is used for acquiring a scene image of a real scene where the wearer is located and sending the scene image to the host;
the host is configured to recognize the second identification information in the scene image, fuse the scene image and the three-dimensional scene according to the second identification information to obtain a fused image, and output the fused image to the display module; and
and the display module is used for displaying the fused image.
One beneficial effect of the disclosed embodiments is that the provided industrial simulation system includes a host and a head-mounted display device on which a somatosensory controller is arranged. The host can build a three-dimensional scene for a target object, the three-dimensional scene including a three-dimensional model of the target object. When the user performs a target operation on the target object, the somatosensory controller determines the three-dimensional spatial position information of the user's hand and sends it to the host, and the host controls the three-dimensional model to simulate the target operation in the three-dimensional scene according to that information. In other words, with this industrial simulation system, real components are operated in coordination with the virtual scene, which significantly improves the realism of practical training, effectively overcomes the problems of training in a real environment, improves training efficiency, and at the same time enhances the user's experience of, and enthusiasm for, the system.
Other features of the present description and advantages thereof will become apparent from the following detailed description of exemplary embodiments thereof, which proceeds with reference to the accompanying drawings.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments of the specification and together with the description, serve to explain the principles of the specification.
FIG. 1 is a hardware configuration schematic of an industrial simulation system according to an embodiment of the present disclosure;
FIG. 2 is a schematic diagram of the operation of a host according to an embodiment of the present disclosure;
FIG. 3 is a schematic diagram of a hand keypoint according to an embodiment of the present disclosure;
FIG. 4 is a schematic diagram of a rendered virtual hand according to an embodiment of the present disclosure;
FIG. 5 is a schematic diagram of a target object according to an embodiment of the present disclosure;
FIG. 6 is a schematic diagram of a target object according to another embodiment of the present disclosure;
FIG. 7 is a hardware configuration schematic of an industrial simulation system according to another embodiment of the present disclosure;
FIG. 8 is a schematic diagram of a target object according to another embodiment of the present disclosure.
Detailed Description
Various exemplary embodiments of the present disclosure will now be described in detail with reference to the accompanying drawings. It should be noted that: the relative arrangement of parts and steps, numerical expressions, and numerical values set forth in these embodiments do not limit the scope of the embodiments of the present disclosure unless specifically stated otherwise.
The following description of at least one exemplary embodiment is merely illustrative in nature and is in no way intended to limit the disclosure, its application, or uses.
Techniques, methods, and apparatus known to those of ordinary skill in the relevant art may not be discussed in detail, but are to be considered part of the specification where appropriate.
In all examples shown and discussed herein, any particular value should be construed as exemplary only and not as limiting. Thus, other examples of the exemplary embodiments may have different values.
It should be noted that: like reference numbers and letters refer to like items in the following figures, and thus, once an item is defined in one figure, it need not be discussed further in subsequent figures.
< apparatus embodiment >
Please refer to FIG. 1, which is a schematic diagram of the hardware structure of an industrial simulation system according to an embodiment of the present application. As shown in FIG. 1, the industrial simulation system 1 includes a host 11 and a head-mounted display device 12 that are communicatively connected, and the head-mounted display device 12 includes a somatosensory controller 121.
The head-mounted display device 12 may be an Augmented Reality (AR) device that adopts Video See-Through (VST) technology. In VST, real-time scene information is acquired by a camera module on the device, combined with virtual information generated by the host, and then transmitted to the display module of the device for output to the wearer's eyes. VST technology guarantees the wearer a sufficient field of view and thus a good experience; at the same time, because the virtual and real content are combined before being presented to the human eye, the occlusion problem is effectively resolved. Of course, the head-mounted display device 12 may also be another type of device; this embodiment is not limited in this respect.
A motion sensing controller (Leap Motion) 121 is disposed on the head-mounted display device 12; the somatosensory controller 121 can capture the hand motion information of the wearer of the head-mounted display device 12.
The host 11 may be a mobile phone, a tablet computer, a notebook computer, or the like; it uses a CPU (Central Processing Unit) or a GPU (Graphics Processing Unit) as its computing device and implements various augmented reality applications through software algorithms. Referring to FIG. 2, a VST development engine runs on the host 11 and can build the three-dimensional scene corresponding to a target object in the real scene. A Leap Motion plugin is also installed on the host 11; it manages the data transmitted by the somatosensory controller 121, realizing data transmission between the VST development engine and the somatosensory controller so that the virtual hand corresponds to the real hand. The VST development engine can also exchange data with the camera module to determine the position of the target object.
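The data path just described can be summarized in a short sketch. The following is a minimal Python illustration, in which every class and method name (HandFrame, DevelopmentEngine, MotionPlugin, and so on) is a hypothetical stand-in rather than the actual Leap Motion or engine API:

```python
from dataclasses import dataclass
from typing import List, Tuple

Vec3 = Tuple[float, float, float]

@dataclass
class HandFrame:
    """One tracking frame from the somatosensory controller."""
    keypoints: List[Vec3]  # three-dimensional positions of the hand key points

class DevelopmentEngine:
    """Stand-in for the VST development engine on the host."""
    def update_virtual_hand(self, frame: HandFrame) -> None:
        # The real engine would re-pose the virtual hand model here.
        print(f"virtual hand updated from {len(frame.keypoints)} key points")

class MotionPlugin:
    """Stand-in for the plugin that relays controller data to the engine."""
    def __init__(self, engine: DevelopmentEngine):
        self.engine = engine

    def on_frame(self, frame: HandFrame) -> None:
        # Data transmission: somatosensory controller -> plugin -> engine.
        self.engine.update_virtual_hand(frame)

if __name__ == "__main__":
    plugin = MotionPlugin(DevelopmentEngine())
    plugin.on_frame(HandFrame(keypoints=[(0.0, 0.1, 0.3)] * 21))
```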
In this embodiment, the host 11 is configured to build a three-dimensional scene of a target object in response to a request for building the three-dimensional scene of the target object. Wherein at least a three-dimensional model of the target object is included in the three-dimensional scene.
In a specific embodiment, the host 11 is configured to, in response to a request for building a three-dimensional scene of a target object, build the three-dimensional scene of the target object, and specifically includes: responding to a request for building a three-dimensional scene of a target object, and acquiring an initial three-dimensional model of the target object and an initial scene model corresponding to the initial three-dimensional model; carrying out model development on the initial three-dimensional model to obtain a three-dimensional model of the target object; and carrying out logic development on the three-dimensional model and the initial scene model to obtain the three-dimensional scene.
The target object may be an object in the real scene where the wearer of the head-mounted display device 12 is currently located, that is, the object actually operated by the wearer. The target object may be a component such as a valve or a mold temperature controller.
Specifically, referring to FIG. 2, SolidWorks software and 3ds Max software are installed on the host 11. First, an initial three-dimensional model of the target object and the corresponding initial scene model can be established with SolidWorks, and the polygon count of the models can be reduced according to the actual operation process. The initial three-dimensional model and the corresponding initial scene model are then optimized with 3ds Max and imported into the VST development engine for development. Development in the VST development engine is divided into two stages: the first stage is model development, and the second stage is logic development on the models.
Model development involves both the initial three-dimensional model and the initial scene model. Performing model development on the initial three-dimensional model includes setting the material of the initial three-dimensional model and/or setting its UI controls; it may also involve creating animations and special effects for the three-dimensional model, voice explanations, and the like. Model development for the initial scene model includes setting the material of the initial scene model, creating animations and special effects for it, and so on. It should be understood that, in the embodiments of the present disclosure, the initial three-dimensional model of the target object after model development is referred to as the three-dimensional model of the target object.
Logic development involves developing logic for the three-dimensional model of the target object, for example establishing a correspondence between each UI control and the actual button of the target object in the real scene, and establishing business-logic relationships among the UI controls. It also involves logic development between the three-dimensional model of the target object and the model-developed initial scene model, for example establishing the interaction and business-logic relationships between them.
Illustratively, taking target object 2 as the valve shown in FIG. 8 as an example: first, an initial three-dimensional model of the valve is created with SolidWorks, together with initial three-dimensional models of the pipe connected to the valve and of the other valves connected to that pipe. These initial models are then optimized with 3ds Max. Finally, the initial three-dimensional model of the valve is developed into the three-dimensional model of the valve, interactive logic such as the UI controls of the valve's three-dimensional model is developed, and the interaction between the valve's three-dimensional model and the optimized models of the pipe and the other valves is developed logically, yielding the three-dimensional scene of the valve. A sketch of this logic-development stage follows.
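As a rough illustration of the logic-development stage, the following Python sketch binds UI controls of the three-dimensional model to business-logic callbacks. The control names and the simple open/close state are hypothetical examples, not part of the patent:

```python
from typing import Callable, Dict, Optional

class UIControl:
    """A UI control of the three-dimensional model, mirroring a physical button."""
    def __init__(self, name: str):
        self.name = name
        self._handler: Optional[Callable[[], None]] = None

    def bind(self, handler: Callable[[], None]) -> None:
        # Logic development: attach business logic to the control.
        self._handler = handler

    def press(self) -> None:
        if self._handler is not None:
            self._handler()

def build_valve_logic() -> Dict[str, UIControl]:
    state = {"open": False}
    open_ctrl = UIControl("open_valve")
    close_ctrl = UIControl("close_valve")
    # Business-logic relationships between the UI controls and the model state.
    open_ctrl.bind(lambda: state.update(open=True))
    close_ctrl.bind(lambda: state.update(open=False))
    return {"open": open_ctrl, "close": close_ctrl}
```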
In this embodiment, the somatosensory controller 121 is configured to determine the three-dimensional spatial position information of the wearer's hand when the wearer of the head-mounted display device performs a target operation on the target object, and to send that information to the host.
In a specific embodiment, the somatosensory controller 121 is configured to determine three-dimensional spatial position information of a hand of a wearer of the head-mounted display device when the wearer performs a target operation on the target object, and specifically includes: tracking hand motion information of a wearer of the head-mounted display device in a case where the wearer performs a target operation on the target object; according to the hand motion information, positioning three-dimensional space position information of a plurality of key points of the hand; and determining the three-dimensional spatial position information of the hand of the wearer according to the three-dimensional spatial position information of the key points of the hand.
Specifically, when the wearer of the head-mounted display device 12 operates the target object in the real scene, the somatosensory controller 121 obtains the hand motion information within its field of view and, from that information, locates the three-dimensional spatial position information of 21 hand key points; the 21 hand key points are shown in FIG. 3. The three-dimensional spatial position information of the wearer's hand is then determined from the three-dimensional spatial position information of those 21 key points.
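As a minimal sketch of this last step, the hand's overall position can be summarized from the 21 key points, for instance as their centroid. The key-point layout (e.g., index 12 as the middle fingertip) is an assumed convention for illustration; the patent does not fix one:

```python
from typing import List, Tuple

Vec3 = Tuple[float, float, float]

MIDDLE_FINGERTIP = 12  # assumed index of the middle fingertip key point

def hand_position(keypoints: List[Vec3]) -> Vec3:
    """Summarize the hand's overall 3D position as the centroid of the key points."""
    assert len(keypoints) == 21, "expected 21 hand key points"
    n = len(keypoints)
    return (
        sum(p[0] for p in keypoints) / n,
        sum(p[1] for p in keypoints) / n,
        sum(p[2] for p in keypoints) / n,
    )

def fingertip_position(keypoints: List[Vec3]) -> Vec3:
    """The fingertip position later synchronized to the first collision volume."""
    return keypoints[MIDDLE_FINGERTIP]
```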
In this embodiment, the host 11 is configured to control the three-dimensional model to simulate the target operation in the three-dimensional scene according to the three-dimensional spatial position information of the hand.
In a specific embodiment, a first collision volume is bound to the wearer's hand and a second collision volume is disposed on the target object.
Specifically, the target object includes a target operation position and the related target UI controls; correspondingly, second collision volumes may be bound to the target operation position and the related target UI controls. The second collision volumes may be the cube collision bodies 2 and 3 shown in FIG. 6.
Specifically, a first collision volume is bound at the fingertip position of the wearer's hand; its shape matches the size of the fingertip, and it may be the spherical collision body 1 shown in FIG. 6. For example, a first collision volume whose shape conforms to the size of the middle fingertip may be bound at the wearer's middle fingertip position.
In this embodiment, the somatosensory controller 121 is configured to, when the wearer performs a target operation on the target object, determine three-dimensional spatial position information of the hand of the wearer, and send the three-dimensional spatial position information of the hand to the host computer.
Specifically, the somatosensory controller acquires the three-dimensional spatial position information of the wearer's hand when the wearer performs a target operation on the target operation position of the target object; this information includes the fingertip position information of the hand, which is sent to the host.
In this embodiment, the host 11 is configured to synchronize the three-dimensional spatial position information of the hand to the first collision volume, and to execute logic code to present an operation result when a target operation performed by the wearer on the target object causes the first collision volume to collide with the second collision volume.
Specifically, after receiving the three-dimensional spatial position information of the wearer's hand, the host synchronizes the fingertip position information to the first collision volume, whose shape conforms to the size of a fingertip; that is, the host drives the position of the first collision volume from the fingertip position information. When the wearer performs a target operation on the target object and the first collision volume collides with the second collision volume at the target operation position, a logic judgment is made and the operation result is presented.
Illustratively, taking the target object 2 as the mold temperature controller shown in FIG. 5 as an example, as shown in FIG. 6, a cube collision body 2 is added to the button controlling the heat-up function, and a cube collision body 3 is added to the button controlling the cool-down function.
For example, when the wearer clicks a button of the mold temperature controller in the real scene with the real hand, the three-dimensional position information of the wearer's hand is transmitted to the host 11 by the worn somatosensory controller 121. The host 11 assigns the fingertip position information to a spherical collider sized to match the fingertip, namely collider 1, so that collider 1 always follows the coordinate position of the fingertip. When the fingertip touches the heat-up button, the host 11 detects the collision between collider 1 and the cube collider 2 corresponding to that button, and executes the collision logic to raise the virtual temperature.
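The collision test itself can be sketched compactly. The following Python illustration uses a standard sphere-versus-axis-aligned-box overlap check to stand in for the engine's collision detection; the collider sizes, button boxes, and temperature step are illustrative assumptions:

```python
from dataclasses import dataclass
from typing import Tuple

Vec3 = Tuple[float, float, float]

@dataclass
class SphereCollider:  # collider 1, bound to the fingertip
    center: Vec3
    radius: float

@dataclass
class BoxCollider:     # cube colliders 2 and 3, bound to the buttons
    min_corner: Vec3
    max_corner: Vec3

def sphere_hits_box(s: SphereCollider, b: BoxCollider) -> bool:
    """Standard sphere-vs-AABB overlap test."""
    d2 = 0.0
    for c, lo, hi in zip(s.center, b.min_corner, b.max_corner):
        nearest = max(lo, min(c, hi))  # closest point on the box to the center
        d2 += (c - nearest) ** 2
    return d2 <= s.radius ** 2

virtual_temperature = 25.0  # assumed starting value shown by the virtual panel

def on_fingertip_moved(fingertip: Vec3,
                       heat_btn: BoxCollider, cool_btn: BoxCollider) -> None:
    """Synchronize the fingertip to collider 1 and run the collision logic."""
    global virtual_temperature
    finger = SphereCollider(center=fingertip, radius=0.008)  # ~fingertip size
    if sphere_hits_box(finger, heat_btn):
        virtual_temperature += 1.0   # logic code for the heat-up button
    elif sphere_hits_box(finger, cool_btn):
        virtual_temperature -= 1.0   # logic code for the cool-down button
```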
In this embodiment, referring to fig. 7, the head-mounted display device 12 further includes a display module 123.
The host 11 is further configured to render a virtual hand according to the three-dimensional spatial position information of the hand, and send the rendered virtual hand to the display module.
The display module 123 is configured to display the rendered virtual hand.
Continuing the above example, when the wearer clicks the cool-down button of the mold temperature controller in the real scene with the real hand, the three-dimensional position information of the wearer's hand is transmitted to the host 11 by the worn somatosensory controller 121. The host renders a virtual hand according to the three-dimensional spatial position information of the hand; the rendered virtual hand is shown in FIG. 4.
According to the embodiments of the present application, the provided industrial simulation system includes a host and a head-mounted display device on which a somatosensory controller is arranged. The host can build a three-dimensional scene for a target object, the three-dimensional scene including a three-dimensional model of the target object. When the user performs a target operation on the target object, the somatosensory controller determines the three-dimensional spatial position information of the user's hand and sends it to the host, and the host controls the three-dimensional model to simulate the target operation in the three-dimensional scene according to that information. That is, with this industrial simulation system, real components are operated in coordination with the virtual scene, which significantly improves the realism of practical training, effectively overcomes the problems of training in a real environment, improves training efficiency, and enhances the user's experience of, and enthusiasm for, the system.
In one embodiment, first identification information is further provided on the target object 2; the first identification information may be two-color identification information. Referring to FIG. 8, the target object 2 is a valve, and its first identification information may be a first color 21 and a second color 22 applied to the valve, where the first and second colors differ; for example, the first color may be red and the second color blue. The two colors distinguish the valve from other objects in the real scene, such as the pipe and the other valves, which facilitates tracking and positioning.
Referring to fig. 7, the head mounted display device 12 further includes a camera module 122, and the camera module 122 may be a binocular camera.
In this embodiment, the camera module 122 is configured to obtain a scene image of a real scene where the wearer is located, and send the scene image to the host.
In this embodiment, the host 11 is configured to recognize the first identification information in the scene image, acquire the three-dimensional spatial position information of the target object, and determine the current state of the target object according to that information.
Illustratively, continuing with the valve shown in FIG. 8 as an example, the camera module 122 acquires a scene image of the real scene where the valve is located and sends it to the host 11. The host 11 detects and tracks the first identification information of the valve in the scene image, calculates the three-dimensional spatial position information of the valve, and then judges from that information whether the wearer's operation is correct.
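As a minimal sketch of the detection step, the two-color marker can be located in a camera frame by HSV thresholding with OpenCV. The red/blue HSV ranges and the use of the mask centroid as the valve's image position are illustrative assumptions; the patent does not prescribe a specific detection algorithm:

```python
import cv2
import numpy as np
from typing import Optional, Tuple

def find_marker_centroid(frame_bgr: np.ndarray) -> Optional[Tuple[float, float]]:
    """Locate the two-color marker and return its pixel centroid, if visible."""
    hsv = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2HSV)
    # Assumed HSV ranges for the first (red) and second (blue) colors.
    red = cv2.inRange(hsv, np.array([0, 120, 70]), np.array([10, 255, 255]))
    blue = cv2.inRange(hsv, np.array([100, 120, 70]), np.array([130, 255, 255]))
    mask = cv2.bitwise_or(red, blue)
    m = cv2.moments(mask)
    if m["m00"] == 0:
        return None  # marker not visible in this frame
    return (m["m10"] / m["m00"], m["m01"] / m["m00"])  # (x, y) in pixels
```

Because the camera module 122 may be a binocular camera, running this detection on both views and triangulating the two centroids would yield the marker's three-dimensional spatial position.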
According to the embodiments of the present application, tracking is performed with artificial markers: by recognizing features such as specific colors, fast and accurate tracking is achieved. Compared with prior approaches that acquire the pose through sensors and trackers, recognizing the pose of the real object through artificial markers greatly reduces the cost of software development.
In one embodiment, the target object 2 is further provided with second identification information, which is different from the first identification information.
In this embodiment, the camera module 122 is configured to obtain a scene image of a real scene where the wearer is located, and send the scene image to the host 11.
In this embodiment, the host 11 is configured to recognize the second identification information in the scene image and, according to the second identification information, fuse the scene image and the three-dimensional scene to obtain a fused image that is output to the display module.
Specifically, three-dimensional registration aligns the virtual-world coordinate system with the real-world coordinate system, mapping the information of the virtual environment and the information of the real environment into the same space. A common three-dimensional registration method is tracking registration based on hardware sensors; however, such hardware is costly, so the embodiments of the present disclosure adopt a three-dimensional registration method based on artificial markers. In this method, the host recognizes the second identification information in the scene image and, according to it, maps the information of the scene image and the information of the three-dimensional scene into the same space, obtaining a fused image that is output to the display module.
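Once registration fixes the alignment, the fusion itself reduces to compositing the rendered virtual layer over the camera frame. The alpha-mask compositing below is an assumed, simplified stand-in for the engine's video synthesis:

```python
import numpy as np

def fuse(camera_frame: np.ndarray, virtual_layer: np.ndarray,
         alpha: np.ndarray) -> np.ndarray:
    """Composite the registered virtual layer over the real camera frame.

    camera_frame, virtual_layer: HxWx3 uint8 images in the same, already
    registered coordinate space; alpha: HxW float mask in [0, 1], 1 where
    virtual content should occlude the real scene.
    """
    a = alpha[..., None]  # broadcast the mask over the color channels
    fused = (a * virtual_layer.astype(np.float32)
             + (1.0 - a) * camera_frame.astype(np.float32))
    return fused.astype(np.uint8)
```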
In this embodiment, the display module 123 is configured to display the fused image.
According to the embodiments of the present disclosure, the industrial simulation system is based on video see-through technology: the real picture acquired by the camera module and the digitally generated picture are fused by video synthesis, which allows a larger viewing-angle range. At the same time, because VST displays the content directly as video, the requirement on positioning accuracy is low, and since the virtual and real content are combined before being presented to the human eye, the occlusion problem is effectively resolved.
In one embodiment, the host 11 further provides at least one of a training mode, an exercise mode, and an assessment mode for the target object.
Wherein, in the training mode, the host provides a voice explanation of the operations performed on the target object;
in the exercise mode, the host provides the operation sequence of the operations performed on the target object; and
in the assessment mode, the host provides a score for the operations performed on the target object.
In this embodiment, a selection interface is provided on the host 11; the wearer can select one of the training mode, the exercise mode, and the assessment mode as the current target mode according to actual needs, so as to realize interaction between the real scene and the virtual scene in that mode.
According to the embodiments of the present application, three different modes can be provided: a training mode, an exercise mode, and an assessment mode. Interaction between the real scene and the virtual scene can be realized in each mode, achieving direct, natural operation, good immersive experience, and high operating efficiency. A sketch of the mode dispatch follows.
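The following Python sketch shows one plausible dispatch among the three modes; the hooks for voice explanation, sequence prompts, and scoring are hypothetical placeholders:

```python
from enum import Enum, auto
from typing import List

class Mode(Enum):
    TRAINING = auto()    # voice explanation of each operation
    EXERCISE = auto()    # prompts for the operation sequence
    ASSESSMENT = auto()  # silent scoring of each operation

def on_operation(mode: Mode, step: int, correct: bool, scores: List[int]) -> None:
    """Dispatch a completed operation to the behavior of the current mode."""
    if mode is Mode.TRAINING:
        print(f"voice explanation for step {step}")          # placeholder hook
    elif mode is Mode.EXERCISE:
        print(f"next expected operation: step {step + 1}")   # sequence prompt
    elif mode is Mode.ASSESSMENT:
        scores.append(1 if correct else 0)                   # accumulate score
```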
In the description herein, reference to the description of the terms "one embodiment," "some embodiments," "an illustrative embodiment," "an example," "a specific example," or "some examples" or the like means that a particular feature, structure, material, or characteristic described in connection with the embodiment or example is included in at least one embodiment or example of the application. In this specification, the schematic representations of the terms used above do not necessarily refer to the same embodiment or example. Furthermore, the particular features, structures, materials, or characteristics described may be combined in any suitable manner in any one or more embodiments or examples.
While embodiments of the present application have been shown and described, it will be understood by those of ordinary skill in the art that: various changes, modifications, substitutions and alterations can be made to the embodiments without departing from the principles and spirit of the application, the scope of which is defined by the claims and their equivalents.

Claims (10)

1. An industrial simulation system, characterized by comprising a host and a head-mounted display device that are communicatively connected, the head-mounted display device comprising a somatosensory controller, wherein:
the host is configured to build, in response to a request for building a three-dimensional scene of a target object, the three-dimensional scene of the target object, the three-dimensional scene comprising at least a three-dimensional model of the target object;
the somatosensory controller is configured to determine the three-dimensional spatial position information of the hand of a wearer of the head-mounted display device when the wearer performs a target operation on the target object, and to send the three-dimensional spatial position information of the hand to the host; and
the host is configured to control the three-dimensional model to simulate the target operation in the three-dimensional scene according to the three-dimensional spatial position information of the hand.
2. The system according to claim 1, wherein the somatosensory controller being configured to determine the three-dimensional spatial position information of the hand of the wearer when the wearer of the head-mounted display device performs the target operation on the target object specifically includes:
tracking hand motion information of the wearer when the wearer performs a target operation on the target object;
locating three-dimensional spatial position information of a plurality of key points of the hand according to the hand motion information; and
determining the three-dimensional spatial position information of the hand of the wearer according to the three-dimensional spatial position information of the key points.
3. The system according to claim 1, wherein the host being configured to build, in response to a request for building a three-dimensional scene of a target object, the three-dimensional scene of the target object specifically includes:
acquiring, in response to the request, an initial three-dimensional model of the target object and an initial scene model corresponding to the initial three-dimensional model;
performing model development on the initial three-dimensional model to obtain the three-dimensional model of the target object; and
performing logic development on the three-dimensional model and the initial scene model to obtain the three-dimensional scene.
4. The system of claim 3, wherein performing model development on the initial three-dimensional model comprises setting the material of the initial three-dimensional model and/or setting a UI control of the initial three-dimensional model.
5. The system of claim 4, wherein a first collision volume is bound to the hand of the wearer, and a second collision volume is arranged on the target object;
the somatosensory controller is configured to determine the three-dimensional spatial position information of the hand of the wearer when the wearer of the head-mounted display device performs a target operation on the target object, and to send the three-dimensional spatial position information of the hand to the host; and
the host is configured to synchronize the three-dimensional spatial position information of the hand to the first collision volume, and to execute logic code to present an operation result when a target operation performed by the wearer on the target object causes the first collision volume to collide with the second collision volume.
6. The system of claim 5, wherein the head-mounted display device comprises a display module;
the host is further configured to render a virtual hand according to the three-dimensional spatial position information of the hand and to send the rendered virtual hand to the display module; and
the display module is configured to display the rendered virtual hand.
7. The system of claim 1, wherein first identification information is provided on the target object, and the head-mounted display device further comprises a camera module;
the camera module is configured to acquire a scene image of the real scene where the wearer is located and to send the scene image to the host; and
the host is configured to recognize the first identification information in the scene image, acquire three-dimensional spatial position information of the target object, and
determine the current state of the target object according to the three-dimensional spatial position information of the target object.
8. The system of claim 7, wherein the first identification information comprises two-color identification information.
9. The system of claim 1, wherein the host provides at least one of a training mode, an exercise mode, and an assessment mode for the target object;
wherein, in the training mode, the host provides a voice explanation of the operations performed on the target object;
in the exercise mode, the host provides the operation sequence of the operations performed on the target object; and
in the assessment mode, the host provides a score for the operations performed on the target object.
10. The system of claim 1, wherein second identification information is provided on the target object, and the head-mounted display device further comprises a camera module and a display module;
the camera module is configured to acquire a scene image of the real scene where the wearer is located and to send the scene image to the host;
the host is configured to recognize the second identification information in the scene image, fuse the scene image and the three-dimensional scene according to the second identification information to obtain a fused image, and output the fused image to the display module; and
the display module is configured to display the fused image.
CN202210611796.9A 2022-05-31 2022-05-31 Industrial simulation system Pending CN115268626A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210611796.9A CN115268626A (en) 2022-05-31 2022-05-31 Industrial simulation system


Publications (1)

Publication Number Publication Date
CN115268626A 2022-11-01

Family

ID=83759854

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210611796.9A Pending CN115268626A (en) 2022-05-31 2022-05-31 Industrial simulation system

Country Status (1)

Country Link
CN (1) CN115268626A (en)

Similar Documents

Publication Publication Date Title
CN109976519B (en) Interactive display device based on augmented reality and interactive display method thereof
CN110716645A (en) Augmented reality data presentation method and device, electronic equipment and storage medium
CN110163942B (en) Image data processing method and device
CN111158469A (en) Visual angle switching method and device, terminal equipment and storage medium
CN107004279A (en) Natural user interface camera calibrated
JP2022505998A (en) Augmented reality data presentation methods, devices, electronic devices and storage media
EP3973453A1 (en) Real-world object recognition for computing device
WO2020061432A1 (en) Markerless human movement tracking in virtual simulation
CN107357434A (en) Information input equipment, system and method under a kind of reality environment
CN106293099A (en) Gesture identification method and system
KR102442637B1 (en) System and Method for estimating camera motion for AR tracking algorithm
WO2017061890A1 (en) Wireless full body motion control sensor
Bellarbi et al. A 3d interaction technique for selection and manipulation distant objects in augmented reality
US20230244354A1 (en) 3d models for displayed 2d elements
WO2023250267A1 (en) Robotic learning of tasks using augmented reality
TWI694355B (en) Tracking system, tracking method for real-time rendering an image and non-transitory computer-readable medium
CN111881807A (en) VR conference control system and method based on face modeling and expression tracking
CN115268626A (en) Industrial simulation system
US20220050527A1 (en) Simulated system and method with an input interface
KR20190036614A (en) Augmented reality image display system and method using depth map
Sobota et al. Mixed reality: a known unknown
Jiang et al. A brief analysis of gesture recognition in VR
Nivedha et al. Enhancing user experience through physical interaction in handheld augmented reality
Bai Mobile augmented reality: Free-hand gesture-based interaction
CN214959905U (en) Multi-camera combined imaging technology applied to VR (virtual reality), AR (augmented reality) or MR (magnetic resonance)

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination