CN113223182A - Learning terminal applied to automobile industry based on MR (magnetic resonance) glasses technology - Google Patents

Learning terminal applied to automobile industry based on MR (magnetic resonance) glasses technology Download PDF

Info

Publication number
CN113223182A
CN113223182A CN202110465049.4A CN202110465049A CN113223182A CN 113223182 A CN113223182 A CN 113223182A CN 202110465049 A CN202110465049 A CN 202110465049A CN 113223182 A CN113223182 A CN 113223182A
Authority
CN
China
Prior art keywords
gesture
unit
fixed point
student
picture
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202110465049.4A
Other languages
Chinese (zh)
Other versions
CN113223182B (en
Inventor
汤富斌
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shenzhen Simaiyun Technology Co ltd
Original Assignee
Shenzhen Simaiyun Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shenzhen Simaiyun Technology Co ltd filed Critical Shenzhen Simaiyun Technology Co ltd
Priority to CN202110465049.4A priority Critical patent/CN113223182B/en
Publication of CN113223182A publication Critical patent/CN113223182A/en
Application granted granted Critical
Publication of CN113223182B publication Critical patent/CN113223182B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Links

Images

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T19/00Manipulating 3D models or images for computer graphics
    • G06T19/006Mixed reality
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06QINFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q50/00Information and communication technology [ICT] specially adapted for implementation of business processes of specific business sectors, e.g. utilities or tourism
    • G06Q50/10Services
    • G06Q50/20Education
    • G06Q50/205Education administration or guidance
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/20Movements or behaviour, e.g. gesture recognition
    • G06V40/28Recognition of hand or arm movements, e.g. recognition of deaf sign language

Landscapes

  • Engineering & Computer Science (AREA)
  • Business, Economics & Management (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Tourism & Hospitality (AREA)
  • Strategic Management (AREA)
  • General Health & Medical Sciences (AREA)
  • Health & Medical Sciences (AREA)
  • Educational Administration (AREA)
  • Educational Technology (AREA)
  • Human Resources & Organizations (AREA)
  • Primary Health Care (AREA)
  • Computer Hardware Design (AREA)
  • Economics (AREA)
  • General Engineering & Computer Science (AREA)
  • Software Systems (AREA)
  • Marketing (AREA)
  • Computer Graphics (AREA)
  • General Business, Economics & Management (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Psychiatry (AREA)
  • Social Psychology (AREA)
  • Human Computer Interaction (AREA)
  • Multimedia (AREA)
  • Image Analysis (AREA)

Abstract

The invention discloses a learning terminal applied to the automobile industry based on an MR (magnetic resonance) glasses technology, which relates to the technical field of education in the automobile industry, and comprises a virtual transformation module, a virtual transformation module and a virtual transformation module, wherein the virtual transformation module is used for transforming the content of a real world and mutually fusing the content with the virtual world; the comment interpretation module is used for interpreting the teaching contents of the virtual world; the gesture analysis module is used for analyzing and judging gestures of the trainee for controlling the virtual world; a response terminal for displaying virtual world content and responding the gesture command of the student; according to the gesture recognition method and device, gesture movement information of the student is collected through the gesture collection unit, and fixed point coordinates of each movement gesture are analyzed and calculated through the position positioning unit, so that accurate recognition and judgment of the gestures of the student can be achieved under the condition that the student does not wear wearable equipment, the burden of the student can be relieved, and the learning efficiency is improved.

Description

Learning terminal applied to automobile industry based on MR (magnetic resonance) glasses technology
Technical Field
The invention relates to the technical field of automobile industry education, in particular to a learning terminal applied to the automobile industry based on an MR (magnetic resonance) glasses technology.
Background
The MR interactive display system adopts the most advanced mixed reality technology in the world at present, and realizes interaction with holographic images in a real environment on the basis of a traditional physical sand table by using simple gesture actions through a wearable mobile interactive terminal to form an interactive 3D holographic scene;
the education of the automobile industry generally adopts on-site teaching at present, the teaching environment is poor, the learning content is narrow, the detailed introduction of key contents and difficult contents which a student wants to know cannot be well carried out completely depending on the oral description of a teaching worker, and the learning period is long;
how to improve the learning efficiency and learning interest of the trainees and improve the learning environment of the trainees becomes a problem to be solved urgently, so that a learning terminal applied to the automobile industry based on the MR glasses technology is urgently needed to solve the problem.
Disclosure of Invention
The invention aims to provide a learning terminal applied to the automobile industry based on the MR glasses technology, so as to solve the problems in the prior art.
In order to achieve the purpose, the invention provides the following technical scheme: a learning terminal based on MR (magnetic resonance) glasses technology applied to the automobile industry comprises a virtual conversion module used for converting the content of a real world and mutually fusing with a virtual world;
the learner can learn knowledge in the automobile industry in the virtual world, the limitation of the original teaching field is broken, and the learning environment is improved;
the comment interpretation module is used for interpreting the teaching contents of the virtual world;
the student can selectively know key and difficult knowledge considered by the student according to the actual situation of the student in the learning process, so that the learning efficiency of the student is improved;
the gesture analysis module is used for analyzing and judging gestures of the trainee for controlling the virtual world;
the number of wearing devices is reduced, the control gesture of the student is not sensed through the wearing devices, but the control gesture is directly analyzed, the control command is judged, the learning pressure of the student is relieved, and the gesture control is more convenient;
a response terminal for displaying virtual world content and responding the gesture command of the student;
the output end of the comment interpretation module is connected with the input end of the virtual conversion module, the gesture analysis module is connected with the virtual conversion module, the output end of the virtual conversion module is connected with the input end of the response terminal, and the gesture analysis module is connected with the response terminal.
According to the technical scheme, the virtual conversion module comprises a picture capturing unit, a 3D modeling unit, a 3D model and a virtual fusion unit;
the picture capturing unit is used for capturing a 2D picture of real teaching contents in the automobile industry, so that the combination of a virtual world and the real world can be realized, the authenticity is ensured, and the controllability is improved;
the output end of the picture capturing unit is connected with the input end of the 3D modeling unit, the 3D modeling unit outputs a 3D model, the output end of the 3D model is connected with the virtual fusion unit, and the virtual fusion unit is connected with the real world and the virtual world.
According to the technical scheme, the comment interpretation module comprises a voice recognition unit and an information input unit;
the voice recognition unit is used for recognizing voice signals of a teacher in the automobile industry teaching process and displaying the voice signals in a text form, so that the voice signals of the teacher can be displayed in a 3D model in a commenting form, the impression of a student is enhanced, the student can know the comment explanation of each part in the 3D model at any time according to actual requirements, and the information input unit is used for inputting the comment information of each part in the 3D model, so that the student can know the detailed information of each part at any time according to own interest when learning automobile industry knowledge;
and the output ends of the voice recognition unit and the information input unit are connected with the 3D model.
According to the technical scheme, the gesture analysis module comprises a coordinate system establishing unit, a gesture collecting unit, a position positioning unit and a gesture analysis unit;
the coordinate system establishing unit is used for establishing a three-dimensional rectangular coordinate system of the 3D model, so that coordinate values of gestures of a student can be positioned, the gestures of the student can be conveniently analyzed and understood, the gesture collecting unit is used for collecting gesture information of the virtual world controlled by the student, the virtual world can be controlled according to the gesture information of the student, the position positioning unit is used for positioning the spatial position of the gestures of the student in the virtual world, the gesture analyzing unit is used for analyzing the gesture information of the student, the intention of the student is determined, the virtual world can be controlled according to the intention of the gestures of the student, and the student can learn more knowledge of the automobile industry;
the output ends of the coordinate system establishing unit and the gesture collecting unit are connected with the 3D model, the output end of the 3D model is connected with the input end of the position positioning unit, and the output end of the position positioning unit is connected with the input end of the gesture analyzing unit.
According to the technical scheme, the response terminal comprises a central control unit, a head-mounted display device and an instruction execution unit;
the central control unit is used for intelligently controlling the whole learning terminal, the head-mounted display equipment is used for displaying a fusion world of a virtual world and a real world, and the instruction execution unit is used for executing an instruction issued by the central control unit and controlling a display picture in the virtual world;
the output end of the posture analysis unit is connected with the input end of the central control unit, the output end of the central control unit is connected with the input ends of the head-mounted display device and the execution unit, and the output end of the instruction execution unit is connected with the input end of the 3D model.
According to the above technical scheme, the postureThe state acquisition unit is two acquisition cameras A and B, two the acquisition cameras are installed respectively in the front end middle part and the front end upper portion of wearing display device for the mobile information to student's the gesture of controlling gathers information, the gesture of controlling of gathering the camera collection is the single frame picture, makes through the analysis to each frame picture, reachs student's gesture change orbit, and the distance between acquisition camera A and the acquisition camera B is LABThe included angle between the shooting directions of the acquisition camera A and the acquisition camera B is theta, and the shooting directions of the acquisition camera A and the acquisition camera B are unchanged.
According to the technical scheme, the position positioning unit takes a certain point of the gesture of the student as a fixed point, the gesture fixed point of the student is placed at the intersection point of the shooting directions of the acquisition camera A and the acquisition camera B, the control on the 3D model is started, the coordinate system establishing unit establishes a three-dimensional rectangular coordinate system by taking the intersection point of the shooting directions of the acquisition camera A and the acquisition camera B as the circle center, and the three-dimensional coordinate value of the intersection point of the shooting directions of the acquisition camera A and the acquisition camera B is (0,0,0) because the learning terminal is convenient to know the requirement that the student needs to control the 3D model;
the acquisition camera A and the acquisition camera B acquire N pictures to respectively form a picture set A of the acquisition camera ACollection={A1,A2,A3,…,ANAnd a photo set B of the acquisition camera BCollection={B1,B2,B3,…,BNIn which ANRepresents the Nth picture acquired by the acquisition camera A, BNRepresenting the Nth picture taken by the pick-up camera B, the set of pictures ACollectionAnd a set of photographs BCollectionThe first photo A in (1)1And B1The coordinate values of the central point positions in the three-dimensional image are (0,0, 0);
the ratio of the size of the pictures shot by the acquisition camera A and the acquisition camera B to the size of the actual scene is 1: m;
the position positioning unit collects the photos ACollectionA in (A)1,A2,A3,…,ANTwo adjacent pictures in the picture group are sequentially overlapped to form a gesture fixed point moving picture of an X-Z plane and an (X-Y)/cos theta plane;
the position locating unit confirms the X-axis and Z-axis coordinates of the gesture fixed point after moving according to the following steps:
s1, measuring the distance between the gesture fixed point of the ith picture and the gesture fixed point of the (i + 1) th picture along the X-axis direction on the X-Z plane after superposition
Figure BDA0003043489360000061
S2, measuring the distance between the gesture fixed point of the ith picture and the gesture fixed point of the (i + 1) th picture along the Z-axis direction on the X-Z plane after superposition
Figure BDA0003043489360000062
The coordinate value of the gesture fixed point of the (i + 1) th photo on the X-Z plane is
Figure BDA0003043489360000063
Wherein (X)i,Yi) Coordinate values of the gesture fixed point of the ith photo on the X-Z plane can be determined by coordinate values of the i-1 th photo, and so on, and the 1 st photo A is known1The coordinate value of the gesture fixed point in the X-Z plane design line is (0, 0).
Through the technical scheme, the moving distance and the moving direction of the student gesture fixed point on the X-Z plane can be determined, the moving distance of the student gesture fixed point on the X axis and the Y axis is calculated, and the coordinate values of the student gesture fixed point on the X axis and the Y axis after the student gesture fixed point moves are determined.
According to the technical scheme, the position positioning unit confirms the X-axis coordinates and the Y-axis coordinates of the gesture fixed point after moving according to the following steps:
t1, measuring the distance between the gesture fixed point of the ith picture and the gesture fixed point of the (i + 1) th picture along the X-axis direction on the (X-Y)/cos theta plane after superposition
Figure BDA0003043489360000064
T2, measuring the distance between the gesture fixed point of the ith picture and the gesture fixed point of the (i + 1) th picture along the Z-axis direction on the (X-Y)/cos theta plane after superposition
Figure BDA0003043489360000071
T3, measuring the straight-line distance L between the gesture fixed point of the ith picture and the gesture fixed point of the (i + 1) th picture on the (X-Y)/cos theta plane after superpositionii+1
Calculating the actual moving distance S of the gesture fixed point of the ith photo and the gesture fixed point of the (i + 1) th photo in space according to the following formulaii+1
Figure BDA0003043489360000072
Through the formula, the moving direction and distance of the student gesture fixed point on the (X-Y)/cos theta plane can be converted to the X-Y plane, so that the moving distance of the student gesture fixed point on the X-Y axis can be accurately confirmed, and the moving direction and distance of the student gesture fixed point on the X-Y axis can be confirmed.
The distance between the gesture fixed point of the ith picture and the gesture fixed point of the (i + 1) th picture which are confirmed by the position positioning unit moves on the Y axis is
Figure BDA0003043489360000073
Figure BDA0003043489360000074
Then, the coordinate value of the (i + 1) th photo in the three-dimensional rectangular coordinate system is
Figure BDA0003043489360000075
Wherein Z isiGesture for representing ith photoThe coordinate value of the gesture fixed point of the ith photo on the Y axis of the (X-Y)/cos theta plane can be determined by the coordinate value of the ith-1 photo, and so on, knowing that the 1 st photo A1The coordinate value of the gesture fixed point on the (X-Y)/cos theta flat Y-axis plane is 0.
Through the calculation mode, the moving direction and the moving distance of the student gesture fixed point on the X-Y axis can be converted into the moving distance of the student gesture fixed point on the Y axis, and then the moving distance of the student gesture fixed point on the Y axis can be obtained, so that the coordinate value of the point on the Y axis can be determined according to the coordinate value of the last student gesture fixed point on the Y axis.
According to the technical scheme, the position positioning unit inputs coordinate value information of each movement of the student gesture into the gesture analysis unit, the gesture analysis unit fits N coordinate values to form a student gesture curve, the gesture analysis unit conveys the fitted student gesture curve to the central control unit, the central control unit sends an operation command to the command execution unit according to the fitted student gesture curve, and the command execution unit adjusts the 3D model to complete the student gesture operation process.
Compared with the prior art, the invention has the beneficial effects that:
1. according to the gesture recognition method and device, gesture movement information of the student is collected through the gesture collection unit, and fixed point coordinates of each movement gesture are analyzed and calculated through the position positioning unit, so that accurate recognition and judgment of the gestures of the student can be achieved under the condition that the student does not wear wearable equipment, the burden of the student can be relieved, and the learning efficiency is improved.
2. According to the invention, the voice signal of the instructor is recognized, the voice signal of the instructor is converted into the text information, and the interpretation and annotation are carried out in the fused 3D model, so that the learner can pay attention to and learn the key contents according to the self condition during learning, and the learning efficiency of the learner is improved.
3. The invention applies the MR glasses technology to the teaching of the automobile industry, so that students get rid of the severe learning environment, the learning efficiency of the students is improved, and the interest and the fun of the students in learning are increased.
Drawings
FIG. 1 is a schematic diagram of a simulation process of a learning terminal applied to the automobile industry based on the MR glasses technology according to the present invention;
FIG. 2 is a schematic diagram of module connection of a learning terminal applied to the automobile industry based on the MR glasses technology according to the present invention;
FIG. 3 is a schematic diagram of student gesture fixed-point analysis of a learning terminal applied to the automobile industry based on the MR glasses technology according to the present invention;
FIG. 4 is an X-Z plane gesture fixed point distribution diagram of a learning terminal applied to the automobile industry based on the MR glasses technology;
FIG. 5 is a (X-Y)/cos θ planar gesture fixed-point distribution diagram of a learning terminal applied to the automobile industry based on the MR glasses technology.
Detailed Description
The technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are only a part of the embodiments of the present invention, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
As shown in fig. 1-2, the present invention provides a learning terminal applied to the automobile industry based on the MR glasses technology, which includes a virtual transformation module for transforming the contents of the real world and fusing with the virtual world;
the learner can learn knowledge in the automobile industry in the virtual world, the limitation of the original teaching field is broken, and the learning environment is improved;
the comment interpretation module is used for interpreting the teaching contents of the virtual world;
the student can selectively know key and difficult knowledge considered by the student according to the actual situation of the student in the learning process, so that the learning efficiency of the student is improved;
the gesture analysis module is used for analyzing and judging gestures of the trainee for controlling the virtual world;
the number of wearing devices is reduced, the control gesture of the student is not sensed through the wearing devices, but the control gesture is directly analyzed, the control command is judged, the learning pressure of the student is relieved, and the gesture control is more convenient;
a response terminal for displaying virtual world content and responding the gesture command of the student;
the output end of the comment interpretation module is connected with the input end of the virtual conversion module, the gesture analysis module is connected with the virtual conversion module, the output end of the virtual conversion module is connected with the input end of the response terminal, and the gesture analysis module is connected with the response terminal.
The virtual conversion module comprises a picture capturing unit, a 3D modeling unit, a 3D model and a virtual fusion unit;
the picture capturing unit is used for capturing a 2D picture of real teaching contents in the automobile industry, so that the combination of a virtual world and a real world can be realized, the authenticity is ensured, and the controllability is improved, for example: the virtual fusion unit is used for fusing the 3D model of the real teaching content output by the 3D modeling unit into a virtual world, so that the picture of the real world can be enhanced by using the virtual content, the experience of a student is improved, and the learning efficiency and interest of the student are improved;
the output end of the picture capturing unit is connected with the input end of the 3D modeling unit, the 3D modeling unit outputs a 3D model, the output end of the 3D model is connected with the virtual fusion unit, and the virtual fusion unit is connected with the real world and the virtual world.
The comment interpretation module comprises a voice recognition unit and an information input unit;
the voice recognition unit is used for recognizing voice signals of a teacher in the automobile industry teaching process and displaying the voice signals in a text mode, so that the voice signals of the teacher can be displayed in a 3D model in a comment mode, the impression of a student is enhanced, the student can know the comment explanation of each part in the 3D model at any time according to actual requirements, the information input unit is used for inputting comment information of each part in the 3D model, and the student can know detailed information of each part at any time according to own interests when learning automobile industry knowledge, for example: when the student clicks the gear, the information of the number, torque, strength and the like of the gear is displayed in a commenting mode;
and the output ends of the voice recognition unit and the information input unit are connected with the 3D model.
The gesture analysis module comprises a coordinate system establishing unit, a gesture acquisition unit, a position positioning unit and a gesture analysis unit;
the coordinate system establishing unit is used for establishing a three-dimensional rectangular coordinate system of a 3D model, so that coordinate values of gestures of a student can be positioned, the meaning of the gestures of the student can be analyzed and known conveniently, the gesture collecting unit is used for collecting gesture information of the student for controlling a virtual world, the virtual world can be controlled according to the gesture information of the student, the position positioning unit is used for positioning the spatial position of the gestures of the student in the virtual world, the gesture analyzing unit is used for analyzing the meaning of the gesture information of the student, the intention of the student is determined, the virtual world can be controlled according to the intention of the gestures of the student, and the student can learn more knowledge of the automobile industry, for example: the gesture analysis unit analyzes that the student clicks the automobile gearbox, and then all annotation information of the gearbox is displayed;
the output ends of the coordinate system establishing unit and the gesture collecting unit are connected with the 3D model, the output end of the 3D model is connected with the input end of the position positioning unit, and the output end of the position positioning unit is connected with the input end of the gesture analyzing unit.
The response terminal comprises a central control unit, a head-mounted display device and an instruction execution unit;
the central control unit is used for intelligently controlling the whole learning terminal, the head-mounted display device is used for displaying a fusion world of a virtual world and a real world, and the instruction execution unit is used for executing an instruction issued by the central control unit and controlling a display picture in the virtual world, for example: the central control unit issues an instruction to display the information of the automobile gearbox, and the instruction execution unit controls the 3D model to display the annotation information of the gearbox;
the output end of the posture analysis unit is connected with the input end of the central control unit, the output end of the central control unit is connected with the input ends of the head-mounted display device and the execution unit, and the output end of the instruction execution unit is connected with the input end of the 3D model.
Gesture collection unit is two collection cameras A and B, two collection camera installs respectively in the front end middle part and the front end upper portion of wearing display device for the mobile information to student's the gesture of controlling carries out acquisition information and gathers, what collection camera gathered controls the gesture and is the single frame picture for through the analysis to each frame picture, reachs student's gesture change orbit, for example: the acquisition camera carries out acquisition of once controlling the gesture every ts, and the distance between acquisition camera A and acquisition camera B is LABThe included angle between the shooting directions of the acquisition camera A and the acquisition camera B is theta, and the shooting directions of the acquisition camera A and the acquisition camera B are unchanged.
The utility model discloses a teaching aid, including position location unit, camera A and collection camera B, the position location unit uses a certain point of student's gesture as the fixed point, student's gesture fixed point is placed and is opened the control to the 3D model when gathering camera A and gathering camera B and shoot the crossing point of direction, the coordinate system is established the unit and is used gathering camera A and gather camera B and shoot the crossing point of direction and establish three-dimensional rectangular coordinate system as the centre of a circle, gather camera A and gather camera B and shoot the three-dimensional coordinate value of the crossing point of direction and be (0,0,0), because make things convenient for the study terminal to know the demand that the student need be controlled the 3D model, for example: the method comprises the following steps of (1) positioning the gesture position by taking the index finger tip of a student as a fixed point, wherein the specific fixed point determination mode can adopt the existing image processing technology;
the acquisition camera A and the acquisition camera B acquire N pictures to respectively form a picture set A of the acquisition camera ACollection={A1,A2,A3,…,ANAnd a photo set B of the acquisition camera BCollection={B1,B2,B3,…,BNIn which ANRepresents the Nth picture acquired by the acquisition camera A, BNRepresenting the Nth picture taken by the pick-up camera B, the set of pictures ACollectionAnd a set of photographs BCollectionThe first photo A in (1)1And B1The coordinate values of the central point positions in the three-dimensional image are (0,0, 0);
the ratio of the size of the pictures shot by the acquisition camera A and the acquisition camera B to the size of the actual scene is 1: m;
the position positioning unit collects the photos ACollectionA in (A)1,A2,A3,…,ANTwo adjacent photos are overlapped in sequence to form a gesture fixed point moving picture of an X-Z plane and an (X-Y)/cos theta plane, as shown in FIGS. 4 and 5;
the position locating unit confirms the X-axis and Z-axis coordinates of the gesture fixed point after moving according to the following steps:
s1, measuring the distance between the gesture fixed point of the ith picture and the gesture fixed point of the (i + 1) th picture along the X-axis direction on the X-Z plane after superposition
Figure BDA0003043489360000141
S2, measuring the distance between the gesture fixed point of the ith picture and the gesture fixed point of the (i + 1) th picture along the Z-axis direction on the X-Z plane after superposition
Figure BDA0003043489360000142
The coordinate value of the gesture fixed point of the (i + 1) th photo on the X-Z plane is
Figure BDA0003043489360000143
Wherein (X)i,Yi) Coordinate values of the gesture fixed point of the ith photo on the X-Z plane can be determined by coordinate values of the i-1 th photo, and so on, and the 1 st photo A is known1The coordinate value of the gesture fixed point in the X-Z plane design line is (0, 0).
Through the technical scheme, the moving distance and the moving direction of the student gesture fixed point on the X-Z plane can be determined, the moving distance of the student gesture fixed point on the X axis and the Y axis is calculated, and the coordinate values of the student gesture fixed point on the X axis and the Y axis after the student gesture fixed point moves are determined.
The position positioning unit confirms the X-axis and Y-axis coordinates of the gesture fixed point after moving according to the following steps:
t1, measuring the distance between the gesture fixed point of the ith picture and the gesture fixed point of the (i + 1) th picture along the X-axis direction on the (X-Y)/cos theta plane after superposition
Figure BDA0003043489360000151
T2, measuring the distance between the gesture fixed point of the ith picture and the gesture fixed point of the (i + 1) th picture along the Z-axis direction on the (X-Y)/cos theta plane after superposition
Figure BDA0003043489360000152
T3, measuring the straight-line distance L between the gesture fixed point of the ith picture and the gesture fixed point of the (i + 1) th picture on the (X-Y)/cos theta plane after superpositionii+1
Calculating the actual moving distance S of the gesture fixed point of the ith photo and the gesture fixed point of the (i + 1) th photo in space according to the following formulaii+1
Figure BDA0003043489360000153
The specific calculation process is shown in fig. 3;
through the formula, the moving direction and distance of the student gesture fixed point on the (X-Y)/cos theta plane can be converted to the X-Y plane, so that the moving distance of the student gesture fixed point on the X-Y axis can be accurately confirmed, and the moving direction and distance of the student gesture fixed point on the X-Y axis can be confirmed.
The distance between the gesture fixed point of the ith picture and the gesture fixed point of the (i + 1) th picture which are confirmed by the position positioning unit moves on the Y axis is
Figure BDA0003043489360000154
Figure BDA0003043489360000155
Then, the coordinate value of the (i + 1) th photo in the three-dimensional rectangular coordinate system is
Figure BDA0003043489360000156
Wherein Z isiThe coordinate value of the gesture fixed point of the ith photo on the Y axis of the (X-Y)/cos theta plane can be determined by the coordinate value of the ith-1 photo, and so on, and the 1 st photo A is known1The coordinate value of the gesture fixed point on the (X-Y)/cos theta flat Y-axis plane is 0.
Through the calculation mode, the moving direction and the moving distance of the student gesture fixed point on the X-Y axis can be converted into the moving distance of the student gesture fixed point on the Y axis, and then the moving distance of the student gesture fixed point on the Y axis can be obtained, so that the coordinate value of the point on the Y axis can be determined according to the coordinate value of the last student gesture fixed point on the Y axis.
The gesture control system comprises a position positioning unit, a gesture analysis unit, a central control unit and an instruction execution unit, wherein the position positioning unit inputs coordinate value information of each movement of a student gesture into the gesture analysis unit, the gesture analysis unit fits N coordinate values to form a student gesture curve, the gesture analysis unit conveys the fitted student gesture curve to the central control unit, the central control unit issues an operation instruction to the instruction execution unit according to the fitted student gesture curve, and the instruction execution unit adjusts a 3D model to complete a student gesture operation process.
Example (b):
the gesture collection unit is two collection cameras A and B, two the collection cameras are respectively installed at the middle part and the upper part of the front end of the head-mounted display device and used for collecting the information of the movement information of the student for controlling the gesture, the control gesture collected by the collection cameras is a single-frame picture, and the distance between the collection camera A and the collection camera B is LABThe included angle between the shooting directions of the acquisition camera A and the acquisition camera B is 30 degrees which is 3cm, and the shooting directions of the acquisition camera A and the acquisition camera B are unchanged.
The position positioning unit takes a certain point of the gesture of the student as a fixed point, the gesture fixed point of the student is placed at the intersection point of the shooting directions of the acquisition camera A and the acquisition camera B, the control on the 3D model is started, the coordinate system establishing unit establishes a three-dimensional rectangular coordinate system by taking the intersection point of the shooting directions of the acquisition camera A and the acquisition camera B as the circle center, and the three-dimensional coordinate value of the intersection point of the shooting directions of the acquisition camera A and the acquisition camera B is (0,0, 0);
the acquisition camera A and the acquisition camera B acquire N pictures to respectively form a picture set A of the acquisition camera ACollection={A1,A2,A3,…,ANAnd a photo set B of the acquisition camera BCollection={B1,B2,B3,…,BNIn which ANRepresents the Nth picture acquired by the acquisition camera A, BNRepresenting the Nth picture taken by the pick-up camera B, the set of pictures ACollectionAnd a set of photographs BCollectionThe first photo A in (1)1And B1The coordinate values of the central point positions in the three-dimensional image are (0,0, 0);
the ratio of the size of the pictures shot by the acquisition camera A and the acquisition camera B to the size of the actual scene is 1: 5;
the position positioning unit collects the photos ACollectionA in (A)1,A2,A3,…,ANTwo adjacent pictures in the picture group are sequentially overlapped to form a gesture fixed point moving picture of an X-Z plane and an (X-Y)/cos theta plane;
the position locating unit confirms the X-axis and Z-axis coordinates of the gesture fixed point after moving according to the following steps:
s1, measuring the distance between the gesture fixed point of the ith picture and the gesture fixed point of the (i + 1) th picture along the X-axis direction on the X-Z plane after superposition
Figure BDA0003043489360000171
S2, measuring the distance between the gesture fixed point of the ith picture and the gesture fixed point of the (i + 1) th picture along the Z-axis direction on the X-Z plane after superposition
Figure BDA0003043489360000181
The coordinate value of the gesture fixed point of the (i + 1) th photo on the X-Z plane is
Figure BDA0003043489360000182
Wherein (X)i,Yi) The coordinate value of the gesture fixed point of the ith photo on the X-Z plane is represented as (1.5,2.3), the coordinate value of the gesture fixed point of the ith photo on the X-Z plane can be determined by the coordinate value of the ith-1 photo, and so on, and the 1 st photo A is known1The coordinate value of the gesture fixed point in the X-Z plane design line is (0, 0).
The position positioning unit confirms the X-axis and Y-axis coordinates of the gesture fixed point after moving according to the following steps:
t1, measuring the distance between the gesture fixed point of the ith picture and the gesture fixed point of the (i + 1) th picture along the X-axis direction on the (X-Y)/cos theta plane after superposition
Figure BDA0003043489360000183
T2, measuring the distance between the gesture fixed point of the ith picture and the gesture fixed point of the (i + 1) th picture along the Z-axis direction on the (X-Y)/cos theta plane after superposition
Figure BDA0003043489360000184
T3, measuring the straight-line distance L between the gesture fixed point of the ith picture and the gesture fixed point of the (i + 1) th picture on the (X-Y)/cos theta plane after superpositionii+1=1.21cm;
Calculating the actual moving distance S of the gesture fixed point of the ith photo and the gesture fixed point of the (i + 1) th photo in space according to the following formulaii+1
Figure BDA0003043489360000185
Figure BDA0003043489360000191
The distance between the gesture fixed point of the ith picture and the gesture fixed point of the (i + 1) th picture which are confirmed by the position positioning unit moves on the Y axis is
Figure BDA0003043489360000192
Figure BDA0003043489360000193
Then, the coordinate value of the (i + 1) th photo in the three-dimensional rectangular coordinate system is
Figure BDA0003043489360000194
The student's gesture pointing moves to the (5.75,5.1,19.3) position on the (i + 1) th picture.
The gesture control system comprises a position positioning unit, a gesture analysis unit, a central control unit and an instruction execution unit, wherein the position positioning unit inputs coordinate value information of each movement of a student gesture into the gesture analysis unit, the gesture analysis unit fits N coordinate values to form a student gesture curve, the gesture analysis unit conveys the fitted student gesture curve to the central control unit, the central control unit issues an operation instruction to the instruction execution unit according to the fitted student gesture curve, and the instruction execution unit adjusts a 3D model to complete a student gesture operation process.
It will be evident to those skilled in the art that the invention is not limited to the details of the foregoing illustrative embodiments, and that the present invention may be embodied in other specific forms without departing from the spirit or essential attributes thereof. The present embodiments are therefore to be considered in all respects as illustrative and not restrictive, the scope of the invention being indicated by the appended claims rather than by the foregoing description, and all changes which come within the meaning and range of equivalency of the claims are therefore intended to be embraced therein. Any reference sign in a claim should not be construed as limiting the claim concerned.

Claims (9)

1. The utility model provides a study terminal based on MR glasses technique is applied to automotive industry which characterized in that: the learning terminal comprises a virtual conversion module which is used for converting the content of the real world and mutually fusing the content with the virtual world;
the comment interpretation module is used for interpreting the teaching contents of the virtual world;
the gesture analysis module is used for analyzing and judging gestures of the trainee for controlling the virtual world;
a response terminal for displaying virtual world content and responding the gesture command of the student;
the output end of the comment interpretation module is connected with the input end of the virtual conversion module, the gesture analysis module is connected with the virtual conversion module, the output end of the virtual conversion module is connected with the input end of the response terminal, and the gesture analysis module is connected with the response terminal.
2. The learning terminal applied to the automobile industry based on the MR glasses technology as claimed in claim 1, wherein: the virtual conversion module comprises a picture capturing unit, a 3D modeling unit, a 3D model and a virtual fusion unit;
the image capturing unit is used for capturing a 2D image of real teaching content in the automobile industry, the 3D modeling unit is used for converting the 2D real teaching content image captured by the image capturing unit into a 3D model and outputting the 3D model, and the virtual fusion unit is used for fusing the 3D model of the real teaching content output by the 3D modeling unit into a virtual world;
the output end of the picture capturing unit is connected with the input end of the 3D modeling unit, the 3D modeling unit outputs a 3D model, the output end of the 3D model is connected with the virtual fusion unit, and the virtual fusion unit is connected with the real world and the virtual world.
3. The learning terminal applied to the automobile industry based on the MR glasses technology as claimed in claim 2, wherein: the comment interpretation module comprises a voice recognition unit and an information input unit;
the voice recognition unit is used for recognizing voice signals of a teacher in the automobile industry teaching process and displaying the voice signals in a text form, and the information input unit is used for inputting annotation information of each part in the 3D model;
and the output ends of the voice recognition unit and the information input unit are connected with the 3D model.
4. The learning terminal applied to the automobile industry based on the MR glasses technology as claimed in claim 3, wherein: the gesture analysis module comprises a coordinate system establishing unit, a gesture acquisition unit, a position positioning unit and a gesture analysis unit;
the coordinate system establishing unit is used for establishing a three-dimensional rectangular coordinate system of the 3D model, the gesture collecting unit is used for collecting gesture information of a student for controlling the virtual world, the position locating unit is used for locating the spatial position of the gesture of the student in the virtual world, and the gesture analyzing unit is used for analyzing the meaning of the gesture information of the student and determining the intention of the student;
the output ends of the coordinate system establishing unit and the gesture collecting unit are connected with the 3D model, the output end of the 3D model is connected with the input end of the position positioning unit, and the output end of the position positioning unit is connected with the input end of the gesture analyzing unit.
5. The learning terminal applied to the automobile industry based on the MR glasses technology as claimed in claim 4, wherein: the response terminal comprises a central control unit, a head-mounted display device and an instruction execution unit;
the central control unit is used for intelligently controlling the whole learning terminal, the head-mounted display equipment is used for displaying a fusion world of a virtual world and a real world, and the instruction execution unit is used for executing an instruction issued by the central control unit and controlling a display picture in the virtual world;
the output end of the posture analysis unit is connected with the input end of the central control unit, the output end of the central control unit is connected with the input ends of the head-mounted display device and the execution unit, and the output end of the instruction execution unit is connected with the input end of the 3D model.
6. The learning terminal applied to the automobile industry based on the MR glasses technology as claimed in claim 5, wherein: the gesture collection unit is two collection cameras A and B, two the collection cameras are respectively installed at the middle part and the upper part of the front end of the head-mounted display device and used for collecting the information of the movement information of the student for controlling the gesture, the control gesture collected by the collection cameras is a single-frame picture, and the distance between the collection camera A and the collection camera B is LABThe included angle between the shooting directions of the acquisition camera A and the acquisition camera B is theta, and the shooting directions of the acquisition camera A and the acquisition camera B are unchanged.
7. The learning terminal applied to the automobile industry based on the MR glasses technology as claimed in claim 6, wherein: the position positioning unit takes a certain point of the gesture of the student as a fixed point, the gesture fixed point of the student is placed at the intersection point of the shooting directions of the acquisition camera A and the acquisition camera B, the control on the 3D model is started, the coordinate system establishing unit establishes a three-dimensional rectangular coordinate system by taking the intersection point of the shooting directions of the acquisition camera A and the acquisition camera B as the circle center, and the three-dimensional coordinate value of the intersection point of the shooting directions of the acquisition camera A and the acquisition camera B is (0,0, 0);
the acquisition camera A and the acquisition camera B acquire N pictures to respectively form a picture set A of the acquisition camera ACollection={A1,A2,A3,…,ANAnd a photo set B of the acquisition camera BCollection={B1,B2,B3,…,BNIn which ANRepresents the Nth picture acquired by the acquisition camera A, BNRepresenting the Nth picture taken by the pick-up camera B, the set of pictures ACollectionAnd a set of photographs BCollectionThe first photo A in (1)1And B1The coordinate values of the central point positions in the three-dimensional image are (0,0, 0);
the ratio of the size of the pictures shot by the acquisition camera A and the acquisition camera B to the size of the actual scene is 1: m;
the position positioning unit collects the photos ACollectionA in (A)1,A2,A3,…,ANTwo adjacent pictures in the picture group are sequentially overlapped to form a gesture fixed point moving picture of an X-Z plane and an (X-Y)/cos theta plane;
the position locating unit confirms the X-axis and Z-axis coordinates of the gesture fixed point after moving according to the following steps:
s1, measuring the distance between the gesture fixed point of the ith picture and the gesture fixed point of the (i + 1) th picture along the X-axis direction on the X-Z plane after superposition
Figure FDA0003043489350000041
S2, measuring the ith sheetThe distance between the gesture fixed point of the picture and the gesture fixed point of the (i + 1) th picture in the Z-axis direction on the X-Z plane after superposition
Figure FDA0003043489350000042
The coordinate value of the gesture fixed point of the (i + 1) th photo on the X-Z plane is
Figure FDA0003043489350000043
Wherein (X)i,Yi) Coordinate values of the gesture fixed point of the ith photo on the X-Z plane can be determined by coordinate values of the i-1 th photo, and so on, and the 1 st photo A is known1The coordinate value of the gesture fixed point in the X-Z plane design line is (0, 0).
8. The learning terminal applied to the automobile industry based on the MR glasses technology as claimed in claim 7, wherein: the position positioning unit confirms the X-axis and Y-axis coordinates of the gesture fixed point after moving according to the following steps:
t1, measuring the distance between the gesture fixed point of the ith picture and the gesture fixed point of the (i + 1) th picture along the X-axis direction on the (X-Y)/cos theta plane after superposition
Figure FDA0003043489350000051
T2, measuring the distance between the gesture fixed point of the ith picture and the gesture fixed point of the (i + 1) th picture along the Z-axis direction on the (X-Y)/cos theta plane after superposition
Figure FDA0003043489350000052
T3, measuring the straight-line distance L between the gesture fixed point of the ith picture and the gesture fixed point of the (i + 1) th picture on the (X-Y)/cos theta plane after superpositionii+1
Calculating the gesture fixed point and the gesture fixed point of the ith photo according to the following formulaActual moving distance S of gesture fixed point of i +1 photos in spaceii+1
Figure FDA0003043489350000053
The distance between the gesture fixed point of the ith picture and the gesture fixed point of the (i + 1) th picture which are confirmed by the position positioning unit moves on the Y axis is
Figure FDA0003043489350000054
Figure FDA0003043489350000055
Then, the coordinate value of the (i + 1) th photo in the three-dimensional rectangular coordinate system is
Figure FDA0003043489350000056
Wherein Z isiThe coordinate value of the gesture fixed point of the ith photo on the Y axis of the (X-Y)/cos theta plane can be determined by the coordinate value of the ith-1 photo, and so on, and the 1 st photo A is known1The coordinate value of the gesture fixed point on the (X-Y)/cos theta flat Y-axis plane is 0.
9. The learning terminal applied to the automobile industry based on the MR glasses technology as claimed in claim 8, wherein: the gesture control system comprises a position positioning unit, a gesture analysis unit, a central control unit and an instruction execution unit, wherein the position positioning unit inputs coordinate value information of each movement of a student gesture into the gesture analysis unit, the gesture analysis unit fits N coordinate values to form a student gesture curve, the gesture analysis unit conveys the fitted student gesture curve to the central control unit, the central control unit issues an operation instruction to the instruction execution unit according to the fitted student gesture curve, and the instruction execution unit adjusts a 3D model to complete a student gesture operation process.
CN202110465049.4A 2021-04-28 2021-04-28 Learning terminal applied to automobile industry based on MR (magnetic resonance) glasses technology Active CN113223182B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110465049.4A CN113223182B (en) 2021-04-28 2021-04-28 Learning terminal applied to automobile industry based on MR (magnetic resonance) glasses technology

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202110465049.4A CN113223182B (en) 2021-04-28 2021-04-28 Learning terminal applied to automobile industry based on MR (magnetic resonance) glasses technology

Publications (2)

Publication Number Publication Date
CN113223182A true CN113223182A (en) 2021-08-06
CN113223182B CN113223182B (en) 2024-05-14

Family

ID=77089443

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110465049.4A Active CN113223182B (en) 2021-04-28 2021-04-28 Learning terminal applied to automobile industry based on MR (magnetic resonance) glasses technology

Country Status (1)

Country Link
CN (1) CN113223182B (en)

Citations (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101581569A (en) * 2009-06-17 2009-11-18 北京信息科技大学 Calibrating method of structural parameters of binocular visual sensing system
EP2908219A1 (en) * 2014-02-14 2015-08-19 Omron Corporation Gesture recognition apparatus and control method of gesture recognition apparatus
CN104932691A (en) * 2015-06-19 2015-09-23 中国航天员科研训练中心 Real-time gesture interaction system with tactile perception feedback
CN106843456A (en) * 2016-08-16 2017-06-13 深圳超多维光电子有限公司 A kind of display methods, device and virtual reality device followed the trail of based on attitude
CN107331220A (en) * 2017-09-01 2017-11-07 国网辽宁省电力有限公司锦州供电公司 Transformer O&M simulation training system and method based on augmented reality
CN109448126A (en) * 2018-09-06 2019-03-08 国营芜湖机械厂 A kind of aircraft equipment repairing auxiliary system and its application method based on mixed reality
CN110491233A (en) * 2019-08-23 2019-11-22 北京枭龙科技有限公司 A kind of new-energy automobile disassembly system and method based on mixed reality
CN111191322A (en) * 2019-12-10 2020-05-22 中国航空工业集团公司成都飞机设计研究所 Virtual maintainability simulation method based on depth perception gesture recognition
CN112513787A (en) * 2020-07-03 2021-03-16 华为技术有限公司 Interaction method, electronic device and system for in-vehicle isolation gesture

Patent Citations (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101581569A (en) * 2009-06-17 2009-11-18 北京信息科技大学 Calibrating method of structural parameters of binocular visual sensing system
EP2908219A1 (en) * 2014-02-14 2015-08-19 Omron Corporation Gesture recognition apparatus and control method of gesture recognition apparatus
CN104932691A (en) * 2015-06-19 2015-09-23 中国航天员科研训练中心 Real-time gesture interaction system with tactile perception feedback
CN106843456A (en) * 2016-08-16 2017-06-13 深圳超多维光电子有限公司 A kind of display methods, device and virtual reality device followed the trail of based on attitude
CN107331220A (en) * 2017-09-01 2017-11-07 国网辽宁省电力有限公司锦州供电公司 Transformer O&M simulation training system and method based on augmented reality
CN109448126A (en) * 2018-09-06 2019-03-08 国营芜湖机械厂 A kind of aircraft equipment repairing auxiliary system and its application method based on mixed reality
CN110491233A (en) * 2019-08-23 2019-11-22 北京枭龙科技有限公司 A kind of new-energy automobile disassembly system and method based on mixed reality
CN111191322A (en) * 2019-12-10 2020-05-22 中国航空工业集团公司成都飞机设计研究所 Virtual maintainability simulation method based on depth perception gesture recognition
CN112513787A (en) * 2020-07-03 2021-03-16 华为技术有限公司 Interaction method, electronic device and system for in-vehicle isolation gesture

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
尚倩等: "双目立体视觉的目标识别与定位", 《智能系统学报》, vol. 06, no. 04, pages 303 - 311 *
王天明: "基于头戴式摄像头的手势识别技术研究与应用", 《中国优秀硕士学位论文全文数据库 信息科技辑》, vol. 978, no. 03, pages 138 - 5107 *

Also Published As

Publication number Publication date
CN113223182B (en) 2024-05-14

Similar Documents

Publication Publication Date Title
CN106023692A (en) AR interest learning system and method based on entertainment interaction
CN106325509A (en) Three-dimensional gesture recognition method and system
CN105278685B (en) A kind of assisted teaching system and teaching method based on EON
CN206105869U (en) Quick teaching apparatus of robot
CN107967057B (en) Leap Motion-based virtual assembly teaching method
CN111028579A (en) Vision teaching system based on VR reality
CN108986577A (en) A kind of design method of the mobile augmented reality type experiment based on forward type
Huang et al. An approach for augmented learning of finite element analysis
CN106293099A (en) Gesture identification method and system
CN208351776U (en) A kind of chip circuit tutoring system based on VR virtual reality technology
CN112331001A (en) Teaching system based on virtual reality technology
CN110288861A (en) A method of based on teaching material virtual display course content in kind
CN205540577U (en) Live device of virtual teaching video
CN113223182B (en) Learning terminal applied to automobile industry based on MR (magnetic resonance) glasses technology
Patil et al. E-learning system using Augmented Reality
CN103971551A (en) Bidirectional audio-visual teaching education conduction marketing system
CN116645247A (en) Panoramic view-based augmented reality industrial operation training system and method
CN110262662A (en) A kind of intelligent human-machine interaction method
Qu et al. Design and Implementation of Teaching Assistant System for Mechanical Course based on Mobile AR Technology
CN115984437A (en) Interactive three-dimensional stage simulation system and method
Wang et al. Augmented Reality and Quick Response Code Technology in Engineering Drawing Course
CN112825215A (en) Nuclear power plant anti-anthropogenic training system and method based on virtual reality technology
CN111369854A (en) Vr virtual reality laboratory operating system and method
CN204759788U (en) Teaching device
Zhao et al. Practice and Exploration of Blended Teaching Based on VR Animation Laws of Motion Course

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
GR01 Patent grant