CN113110742A - AR multi-person interaction industrial robot teaching system based on SLAM positioning technology - Google Patents

AR multi-person interaction industrial robot teaching system based on SLAM positioning technology Download PDF

Info

Publication number
CN113110742A
Authority
CN
China
Prior art keywords
module
model
server
client
working principle
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202110402698.XA
Other languages
Chinese (zh)
Inventor
张配雪
吴水龙
彭纪国
关靖丹
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Hangzhou Hangzhou Iron And Steel Incandescent Orange Intelligent Technology Co ltd
Original Assignee
Hangzhou Hangzhou Iron And Steel Incandescent Orange Intelligent Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Hangzhou Hangzhou Iron And Steel Incandescent Orange Intelligent Technology Co ltd filed Critical Hangzhou Hangzhou Iron And Steel Incandescent Orange Intelligent Technology Co ltd
Priority to CN202110402698.XA priority Critical patent/CN113110742A/en
Publication of CN113110742A publication Critical patent/CN113110742A/en
Pending legal-status Critical Current

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00: Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01: Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/011: Arrangements for interaction with the human body, e.g. for user immersion in virtual reality
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06Q: INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q50/00: Systems or methods specially adapted for specific business sectors, e.g. utilities or tourism
    • G06Q50/10: Services
    • G06Q50/20: Education
    • G06Q50/205: Education administration or guidance
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T17/00: Three dimensional [3D] modelling, e.g. data description of 3D objects
    • G06T17/05: Geographic models
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T19/00: Manipulating 3D models or images for computer graphics
    • G06T19/006: Mixed reality
    • G: PHYSICS
    • G09: EDUCATION; CRYPTOGRAPHY; DISPLAY; ADVERTISING; SEALS
    • G09B: EDUCATIONAL OR DEMONSTRATION APPLIANCES; APPLIANCES FOR TEACHING, OR COMMUNICATING WITH, THE BLIND, DEAF OR MUTE; MODELS; PLANETARIA; GLOBES; MAPS; DIAGRAMS
    • G09B5/00: Electrically-operated educational appliances
    • G09B5/08: Electrically-operated educational appliances providing for individual presentation of information to a plurality of student stations
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F2203/00: Indexing scheme relating to G06F3/00 - G06F3/048
    • G06F2203/01: Indexing scheme relating to G06F3/01
    • G06F2203/012: Walk-in-place systems for allowing a user to walk in a virtual environment while constraining him to a given position in the physical environment
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2219/00: Indexing scheme for manipulating 3D models or images for computer graphics
    • G06T2219/20: Indexing scheme for editing of 3D models
    • G06T2219/2004: Aligning objects, relative positioning of parts

Abstract

The invention relates to the technical field of AR teaching, and in particular to an AR multi-person interactive industrial robot teaching system based on SLAM positioning technology. The system comprises a server and clients with an embedded SLAM system module; a model full-view display module, a part disassembly exercise module, a part voice-explanation cognition module, a working-principle voice-explanation module, a working-principle animation display module and a part installation exercise module are arranged in the server. A plurality of clients are provided, one of which is set as the main client while the others are synchronous clients, and all clients communicate with the server. The system improves the teaching experience and the teaching quality.

Description

AR multi-person interaction industrial robot teaching system based on SLAM positioning technology
Technical Field
The invention relates to the technical field of AR teaching, in particular to an AR multi-person interactive industrial robot teaching system based on an SLAM positioning technology.
Background
In recent years, education authorities have made it clear that the capability of information technology to support and lead the innovative development of intelligent education should be comprehensively improved, the informatization of the education sector accelerated, the application of new technologies such as cloud computing, big data, the Internet of Things, virtual reality/augmented reality and artificial intelligence expanded, and distinctive educational offerings created. Around emerging industries represented by advanced and intelligent manufacturing, the state has strengthened policy guidance, mobilized all parties to deepen education reform and innovation, and guided industrial enterprises to participate deeply in the cultivation and training of technically skilled talent. In fields where such talent is in short supply, such as advanced manufacturing, various resources are being integrated to push colleges and universities to strengthen professional programs, deepen curriculum reform, enrich practical training content, raise the level of teaching staff and funding, and comprehensively improve the quality of education and teaching.
The prior art has the following disadvantages:
(1) Traditional practical teaching faces insufficient time and resources for structure cognition and skills training. With conventional, flat theoretical teaching, learning outcomes vary because students differ in comprehension ability, and many students have limited spatial imagination.
(2) A single-user AR presentation, with its fixed viewing angle, cannot serve every student in a class. An AR display in a multi-user interactive mode allows all students to observe the industrial robot from different viewing angles and to operate it synchronously.
(3) The challenging difficulties of AR technology are a positioning means that can accurately align virtual objects with the real environment, and a display device that can merge the virtual scene and the real environment into one. The positioning means must not only meet the accuracy, data refresh rate and latency required of orientation tracking in a VR system, but must also capture the relationships among the virtual environment coordinate system, the real environment coordinate system and the user's own visual coordinate system, achieve precise alignment among them, maintain that alignment during movement, and keep the motion of virtual objects accurate both within the virtual environment coordinate system and relative to the real environment coordinate system.
Disclosure of Invention
In view of the shortcomings of the prior art, the invention aims to provide an AR multi-person interactive industrial robot teaching system based on SLAM positioning technology. Taking the digital teaching of industrial robots as its starting point, it combines AR digital technology based on SLAM positioning with professional theoretical teaching and practical training, so as to stimulate students' interest in learning, reduce teaching costs, and create a teaching mode that combines theoretical teaching with AR digital resources.
In order to achieve the above purpose, the invention provides the following technical solution: an AR multi-person interactive industrial robot teaching system based on SLAM positioning technology comprises a server and clients with an embedded SLAM system module. A model full-view display module, a part disassembly exercise module, a part voice-explanation cognition module, a working-principle voice-explanation module, a working-principle animation display module and a part installation exercise module are arranged in the server, and data for a plurality of models are stored in the server. One client is set as the main client, the other clients are synchronous clients, and all clients communicate with the server. The model full-view display module is used for displaying a model stored in the server and is in data communication with all clients; the part disassembly exercise module is used for disassembly interaction with a model stored in the server; the part voice-explanation cognition module is used for structure-cognition teaching of the corresponding parts of a model stored in the server; the working-principle voice-explanation module is used for voice-explanation teaching of the working principle of a model stored in the server; the working-principle animation display module is used for animated display teaching of the working principle of a model stored in the server; and the part installation exercise module is used for part-assembly interaction with a model stored in the server.
The teaching system has the following operation mode:
(1) the main client and all synchronous clients connect to the same server to form the same teaching scene;
(2) in accordance with a prearranged real scene, the main client recognizes an identifier placed at a specified position in the real scene through its SLAM system module and, once the identifier is recognized, establishes a target coordinate system with the identifier as the target;
(3) the main client pulls the required model from the model data stored in the server, displays it in the coordinate-system scene of the client, and synchronizes the display target to all synchronous clients;
(4) any client can operate the model, and the change of the model target is synchronized through the server to the other clients in the same teaching scene;
(5) after the model has been placed and accurately positioned, the model full-view display module keeps the model's position fixed, and the form of the model from different views is observed by moving the camera position of a client;
(6) any client operates the part disassembly exercise module, which pulls the disassembly data of the corresponding model from the server, and part-disassembly operation of the model is realized on all clients;
(7) any client operates the part voice-explanation cognition module, which pulls the part structure-cognition data of the corresponding model from the server, and part structure-cognition teaching of the model is realized on all clients;
(8) any client operates the working-principle voice-explanation module, which pulls the working-principle explanation data of the corresponding model from the server, and working-principle cognition teaching of the model is realized on all clients;
(9) any client operates the working-principle animation display module, which pulls the working-principle animation data of the corresponding model from the server, and working-principle cognition teaching of the model is realized on all clients;
(10) any client operates the part installation exercise module, which pulls the part-assembly teaching data of the corresponding model from the server, and part assembly, installation and display of the model are realized on all clients.
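The synchronization idea behind steps (1) to (10) can be pictured with the following minimal sketch, in which a server rebroadcasts every client operation to the other clients in the same teaching scene; the class and method names are illustrative assumptions, not taken from the patent.

```python
import json
from dataclasses import dataclass, field

@dataclass
class TeachingServer:
    models: dict = field(default_factory=dict)    # model data that clients pull
    clients: list = field(default_factory=list)   # connected main + synchronous clients

    def register(self, client):
        self.clients.append(client)

    def broadcast(self, sender, change: dict):
        # Synchronize a model change (pose, disassembly step, animation state, ...)
        # to every other client in the same teaching scene.
        message = json.dumps(change)
        for client in self.clients:
            if client is not sender:
                client.apply(json.loads(message))

@dataclass
class Client:
    name: str
    server: TeachingServer = None
    scene_state: dict = field(default_factory=dict)

    def connect(self, server):
        self.server = server
        server.register(self)

    def operate(self, change: dict):
        # A local operation is applied first, then pushed to the server, as in step (4).
        self.apply(change)
        self.server.broadcast(self, change)

    def apply(self, change: dict):
        self.scene_state.update(change)

if __name__ == "__main__":
    server = TeachingServer()
    main, sync1 = Client("main"), Client("sync-1")
    main.connect(server)
    sync1.connect(server)
    main.operate({"model": "industrial_robot", "step": "disassemble_joint_3"})
    print(sync1.scene_state)  # the synchronous client now shows the same step
```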
Preferably, the server is provided with corresponding interactive buttons for the model full-view display module, the part disassembly exercise module, the part voice-explanation cognition module, the working-principle voice-explanation module, the working-principle animation display module and the part installation exercise module, and these interactive buttons are displayed on all clients.
Preferably, the client is an AR wearable glasses device with an embedded SLAM module.
Preferably, the server is a local area network server.
Preferably, the server is a virtual server arranged in the main client, and the other synchronous clients communicate with the main client through a local area network.
Preferably, the model data stored in the server include model structure display data, model part disassembly data, model part structure-cognition data, model working-principle cognition explanation data, model working-principle animation display data, and model assembly interaction data.
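One possible way to group these per-model data categories into a single record is sketched below; the field names are assumptions chosen for readability, not the patent's data format.

```python
from dataclasses import dataclass, field

@dataclass
class ModelData:
    # One record per industrial-robot model stored in the server (field names assumed).
    structure_display: str                                   # displayable 3D model, e.g. an .fbx file
    disassembly_steps: list = field(default_factory=list)    # ordered part-disassembly data
    part_cognition: dict = field(default_factory=dict)       # part name -> structure-cognition audio/text
    principle_explanation: str = ""                          # working-principle voice-explanation clip
    principle_animation: str = ""                            # working-principle animation clip
    assembly_steps: list = field(default_factory=list)       # ordered part-assembly interaction data
```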
Preferably, the client is provided with a visual odometry module, a loop detection module and a nonlinear optimization module, wherein
the visual odometry module estimates the camera motion from adjacent frame images captured by the client's camera and builds a local map;
the loop detection module performs loop detection to judge whether the model displayed in the client has returned to a previous position, and if a loop is detected, provides the information to the nonlinear optimization module so that position drift accumulated over time can be corrected;
and the nonlinear optimization module receives the camera poses measured by the visual odometry at different times together with the loop-detection information, and optimizes and de-noises the acquired data to obtain a globally consistent trajectory and map.
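The role of the nonlinear optimization module can be made concrete with the following toy sketch, which is an assumption rather than the patent's algorithm: noisy odometry increments are fused with a single loop-closure constraint in a least-squares problem, producing a globally consistent trajectory.

```python
import numpy as np
from scipy.optimize import least_squares

# Noisy odometry increments around a square path (drift accumulates with each step).
odom = np.array([[1.02, 0.00], [0.00, 1.03], [-1.04, 0.00], [0.00, -0.98]])
loop = np.array([0.0, 0.0])          # loop detection: pose 4 should coincide with pose 0

def residuals(x):
    p = x.reshape(5, 2)              # five 2-D camera positions
    r = [p[0]]                       # prior: anchor the first pose at the origin
    r += [(p[i + 1] - p[i]) - odom[i] for i in range(4)]   # odometry constraints
    r += [(p[4] - p[0]) - loop]                            # loop-closure constraint
    return np.concatenate(r)

x0 = np.zeros(10)
sol = least_squares(residuals, x0)
print(sol.x.reshape(5, 2))           # optimized, globally consistent trajectory
```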
Compared with the prior art, the invention has the beneficial effects that:
(1) the invention supports multi-user interaction, which facilitates small-group teaching and training and promotes cooperation among students;
(2) the invention is extensible and supports incorporating teaching content for different models; in particular, expensive equipment can be modeled virtually in the teaching process, reducing teaching costs;
(3) the invention offers flexibility: with the mobile device's camera and AR technology, teaching models can be placed at any coordinates, reducing the space requirements of teaching equipment;
(4) the invention is based on SLAM positioning technology and therefore offers stable positioning with no drift of the target coordinates;
(5) the invention improves the effectiveness of professional learning and skills training for industrial robots, cultivating high-quality, highly skilled professionals for the intelligent manufacturing industry who understand the equipment and its structure and can operate and apply it.
Drawings
FIG. 1 is a schematic diagram of a teaching system of the present invention;
FIG. 2 is a flow chart of the visual SLAM framework of the present invention;
fig. 3 is a schematic diagram of multi-person interactive communication according to the present invention.
Detailed Description
The technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are only a part of the embodiments of the present invention, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
SLAM is the abbreviation of Simultaneous Localization and Mapping. It refers to a moving subject that carries environment-sensing sensors, estimates its own pose changes and motion trajectory from environmental observations, and at the same time builds a map of the environment. The concept was originally proposed by Hugh Durrant-Whyte and John J. Leonard. SLAM can be applied in both 2D and 3D motion domains. When the two tasks of localization and mapping are accomplished with a camera, it is called visual SLAM. As shown in fig. 2, the classic visual SLAM framework contains five modules: sensor data acquisition, visual odometry, back-end nonlinear optimization, loop detection and mapping:
(1) Sensor data acquisition module: in visual SLAM this mainly reads and preprocesses the camera image information.
(2) Visual Odometry module: the visual odometry is concerned with the camera motion between adjacent images; the camera motion is estimated from adjacent frame images captured by the camera, and a local map is built.
(3) Nonlinear optimization module: the back end receives the camera poses measured by the visual odometry at different times together with the loop-detection information, optimizes and de-noises them, and obtains a globally consistent trajectory and map.
(4) Loop detection module: loop detection judges whether the camera has returned to a previously visited position; if a loop is detected, the information is provided to the back-end optimization module so that position drift accumulated over time can be corrected.
(5) Mapping module: a map matching the task requirements is built from the estimated trajectory; the map may be sparse or dense.
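The hand-off between these five modules can be summarized in the following structural sketch; every function body is a placeholder assumption, not a working SLAM implementation.

```python
def read_frame(source):                  # (1) sensor data acquisition
    return next(source, None)

def visual_odometry(prev, curr):         # (2) estimate motion between adjacent frames
    return {"relative_pose": (0.0, 0.0, 0.0)}

def detect_loop(keyframes, curr):        # (4) has the camera returned to a known place?
    return None                          # a real detector would compare frame descriptors

def optimize(poses, loop_constraint):    # (3) back end: fuse VO and loop constraints
    return poses                         # e.g. pose-graph optimization / bundle adjustment

def update_map(world_map, pose, frame):  # (5) build the sparse or dense map
    world_map.append(pose)

def run_slam(frames):
    poses, keyframes, world_map = [{"relative_pose": (0, 0, 0)}], [], []
    prev = read_frame(frames)
    while True:
        curr = read_frame(frames)
        if curr is None:
            break
        poses.append(visual_odometry(prev, curr))
        loop = detect_loop(keyframes, curr)
        poses = optimize(poses, loop)
        update_map(world_map, poses[-1], curr)
        keyframes.append(curr)
        prev = curr
    return world_map

print(run_slam(iter(range(5))))          # dummy "frames" just to exercise the data flow
```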
AR is short for Augmented Reality. AR uses wearable glasses devices to superimpose information about virtual objects on the real world, supplementing the content of the real world. It is a technology that seamlessly integrates real-world information with virtual-world information: physical information (visual, auditory, gustatory, tactile and so on) that would otherwise be difficult to experience within a certain region of space and time in the real world is simulated with computers and other technologies and then superimposed, so that virtual information is applied to the real world and perceived by the human senses, producing a sensory experience beyond reality. The real environment and the virtual objects are superimposed in real time on the same image or in the same space and exist simultaneously. Augmented reality therefore presents not only real-world information but also virtual information at the same time, the two kinds of information complementing and overlaying each other. In visual augmented reality, the user sees the surrounding real world through a display device that composites the real world with computer graphics.
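Superimposing a virtual point onto the real camera image, which the above registration between virtual and real ultimately requires, can be sketched with a pinhole-camera projection; all numeric values below are assumptions.

```python
import numpy as np

K = np.array([[800.0, 0.0, 320.0],            # assumed camera intrinsics
              [0.0, 800.0, 240.0],
              [0.0, 0.0, 1.0]])
R, t = np.eye(3), np.array([0.0, 0.0, 0.0])   # camera pose as tracked by SLAM (assumed identity)

def project(point_world):
    p_cam = R @ point_world + t               # world -> camera coordinates
    p_img = K @ p_cam                         # camera -> homogeneous pixel coordinates
    return p_img[:2] / p_img[2]

virtual_anchor = np.array([0.1, -0.05, 2.0])  # a point on the virtual robot model
print(project(virtual_anchor))                # pixel where the virtual point is drawn
```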
Referring to figs. 1 to 3, the present invention provides a technical solution: an AR multi-person interactive industrial robot teaching system based on SLAM positioning technology comprises a server and clients with an embedded SLAM system module. A model full-view display module, a part disassembly exercise module, a part voice-explanation cognition module, a working-principle voice-explanation module, a working-principle animation display module and a part installation exercise module are arranged in the server, and data for a plurality of models are stored in the server. One client is set as the main client, the other clients are synchronous clients, and all clients communicate with the server. The model full-view display module is used for displaying a model stored in the server and is in data communication with all clients; the part disassembly exercise module is used for disassembly interaction with a model stored in the server; the part voice-explanation cognition module is used for structure-cognition teaching of the corresponding parts of a model stored in the server; the working-principle voice-explanation module is used for voice-explanation teaching of the working principle of a model stored in the server; the working-principle animation display module is used for animated display teaching of the working principle of a model stored in the server; and the part installation exercise module is used for part-assembly interaction with a model stored in the server.
The teaching system has the following operation mode:
(1) the main client and all synchronous clients connect to the same server to form the same teaching scene;
(2) in accordance with a prearranged real scene, the main client recognizes an identifier placed at a specified position in the real scene through its SLAM system module and, once the identifier is recognized, establishes a target coordinate system with the identifier as the target;
(3) the main client pulls the required model from the model data stored in the server, displays it in the coordinate-system scene of the client, and synchronizes the display target to all synchronous clients;
(4) any client can operate the model, and the change of the model target is synchronized through the server to the other clients in the same teaching scene;
(5) after the model has been placed and accurately positioned, the model full-view display module keeps the model's position fixed, and the form of the model from different views is observed by moving the camera position of a client;
(6) any client operates the part disassembly exercise module, which pulls the disassembly data of the corresponding model from the server, and part-disassembly operation of the model is realized on all clients;
(7) any client operates the part voice-explanation cognition module, which pulls the part structure-cognition data of the corresponding model from the server, and part structure-cognition teaching of the model is realized on all clients;
(8) any client operates the working-principle voice-explanation module, which pulls the working-principle explanation data of the corresponding model from the server, and working-principle cognition teaching of the model is realized on all clients;
(9) any client operates the working-principle animation display module, which pulls the working-principle animation data of the corresponding model from the server, and working-principle cognition teaching of the model is realized on all clients;
(10) any client operates the part installation exercise module, which pulls the part-assembly teaching data of the corresponding model from the server, and part assembly, installation and display of the model are realized on all clients.
The server is provided with corresponding interactive buttons for the model full-view display module, the part disassembly exercise module, the part voice-explanation cognition module, the working-principle voice-explanation module, the working-principle animation display module and the part installation exercise module, and these interactive buttons are displayed on all clients.
The client is an AR wearable glasses device with an embedded SLAM module.
The server is a local area network server. When the usage environment is diversified, a local area network server can be set up according to actual needs; all model data are stored in the local area network server, and all clients call the resources in the local area network server to carry out teaching operations.
The server may also be a virtual server arranged in the main client, with the other synchronous clients communicating with the main client through a local area network. When the teaching environment is simple, the main client can serve as the server, and the other clients form the same teaching scene with the main client.
The model data stored in the server comprises model structure display data, model part disassembly data, model part structure cognition data, model working principle cognition explanation data, model working principle animation display data and model assembly interaction data.
The client is internally provided with a visual odometry module, a loop detection module and a nonlinear optimization module, wherein
the visual odometry module estimates the camera motion from adjacent frame images captured by the client's camera and builds a local map;
the loop detection module performs loop detection to judge whether the model displayed in the client has returned to a previous position, and if a loop is detected, provides the information to the nonlinear optimization module so that position drift accumulated over time can be corrected;
and the nonlinear optimization module receives the camera poses measured by the visual odometry at different times together with the loop-detection information, and optimizes and de-noises the acquired data to obtain a globally consistent trajectory and map.
By adopting the above technical solution, the system provides:
AR multi-person interactive collaboration: the same AR scene is presented in different clients, and multiple students can operate the model at the same time. The clients form a networked environment over a local area network, and one client acts as the server responsible for synchronizing the AR scene.
Target coordinate system synchronization: after the AR scene is positioned, different clients present different viewing angles according to their camera positions. Each device participating online establishes its own local coordinate system from the client's position and the gyroscope angle. One device is selected as the master reference device and the others as slave devices; the master device's coordinate system is taken as the master reference coordinate system, and the coordinate systems of the other slave devices are corrected against it. Each slave device obtains the position information of the master reference device and converts it into slave-device coordinates based on the master reference coordinate system.
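A hedged sketch of this coordinate correction follows: assuming each device tracks its pose relative to the shared target (identifier) frame as a homogeneous transform, a point expressed in a slave device's local frame can be re-expressed in the master reference frame by chaining transforms. The pose values are invented for illustration.

```python
import numpy as np

def make_pose(yaw_deg, tx, ty, tz):
    """4x4 homogeneous transform from a yaw angle (gyroscope) and a position."""
    c, s = np.cos(np.radians(yaw_deg)), np.sin(np.radians(yaw_deg))
    T = np.eye(4)
    T[:3, :3] = [[c, -s, 0], [s, c, 0], [0, 0, 1]]
    T[:3, 3] = [tx, ty, tz]
    return T

T_target_master = make_pose(0.0, 0.0, 0.0, 0.0)   # master device pose in the target frame
T_target_slave = make_pose(90.0, 1.0, 0.5, 0.0)   # slave device pose in the target frame

# Transform that maps slave-local coordinates into the master reference frame.
T_master_slave = np.linalg.inv(T_target_master) @ T_target_slave

point_in_slave = np.array([0.2, 0.0, 0.0, 1.0])   # a model point seen by the slave device
print(T_master_slave @ point_in_slave)            # the same point in master coordinates
```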
Localization optimization based on visual SLAM: once the coordinates are determined, the AR scene neither drifts nor jitters. Positioning accuracy is improved by adding loop detection on top of the visual-odometry motion estimation and optimizing the resulting nonlinear multi-constraint problem. In visual motion estimation, to address the high mismatch rate of visual feature points, an ORB feature-point clustering, sampling, matching and tracking method is proposed. For pose-graph optimization, an improved loop detection method is proposed that reduces the likelihood of the two kinds of mismatches. Finally, visual SLAM is combined with inertial navigation, which improves the stability and positioning accuracy of the system.
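For the feature-matching step, the sketch below uses OpenCV's standard ORB detector and a Hamming-distance brute-force matcher with cross-checking; the synthetic frames are assumptions, and the patent's specific clustering/sampling refinements and improved loop detection are not reproduced here.

```python
import cv2
import numpy as np

def synthetic_frame(shift):
    # A synthetic image standing in for a real camera frame (assumed test data).
    img = np.zeros((240, 320), dtype=np.uint8)
    cv2.rectangle(img, (40 + shift, 60), (120 + shift, 140), 255, -1)
    cv2.rectangle(img, (180 + shift, 90), (260 + shift, 170), 180, -1)
    cv2.circle(img, (160 + shift, 40), 20, 120, -1)
    return img

frame1, frame2 = synthetic_frame(0), synthetic_frame(5)   # small camera motion between frames

orb = cv2.ORB_create(nfeatures=500)
kp1, des1 = orb.detectAndCompute(frame1, None)
kp2, des2 = orb.detectAndCompute(frame2, None)

if des1 is not None and des2 is not None:
    # Hamming-distance brute-force matching with cross-checking rejects many mismatches.
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = sorted(matcher.match(des1, des2), key=lambda m: m.distance)
    print(f"{len(matches)} cross-checked ORB matches between adjacent frames")
```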
Model disassembly based on AR technology: in the AR scene, the disassembly steps of the industrial robot are displayed to achieve structure-cognition teaching. Interaction hotspots are created on the industrial robot; clicking a hotspot presents the corresponding disassembly step, and voice narration serves the teaching purpose.
Model animation display based on AR technology: in the AR scene, action animations of the industrial robot model are displayed to achieve working-principle cognition teaching. The model uses the fbx format with built-in animations, which are played during teaching.
Model assembly based on AR technology: in the AR scene, the assembly steps of the industrial robot are displayed to achieve structure-cognition teaching. Interaction hotspots are created on the industrial robot; clicking a hotspot presents the corresponding assembly step, and voice narration serves the teaching purpose.
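A small sketch of the hotspot mechanism shared by the disassembly and assembly items above is given below; all names and file paths are invented for illustration. Clicking a hotspot selects a teaching step, whose animation and voice clip would then be synchronized to every client.

```python
from dataclasses import dataclass

@dataclass
class Hotspot:
    part: str            # which part of the industrial robot the hotspot is attached to
    step_animation: str  # animation clip for this disassembly/assembly step
    voice_clip: str      # voice explanation played with the step

hotspots = {
    "joint_3": Hotspot("joint_3", "disassemble_joint_3.anim", "joint_3_explain.wav"),
    "base": Hotspot("base", "assemble_base.anim", "base_explain.wav"),
}

def on_hotspot_clicked(name, broadcast):
    hs = hotspots[name]
    # In the teaching system this change would be synchronized to every client;
    # here "broadcast" is just a stand-in callback.
    broadcast({"play_animation": hs.step_animation, "play_voice": hs.voice_clip})

on_hotspot_clicked("joint_3", broadcast=print)
```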
Although embodiments of the present invention have been shown and described, it will be appreciated by those skilled in the art that changes, modifications, substitutions and alterations can be made in these embodiments without departing from the principles and spirit of the invention, the scope of which is defined in the appended claims and their equivalents.

Claims (7)

1. An AR multi-person interactive industrial robot teaching system based on SLAM positioning technology, characterized in that: the system comprises a server and clients with an embedded SLAM system module; a model full-view display module, a part disassembly exercise module, a part voice-explanation cognition module, a working-principle voice-explanation module, a working-principle animation display module and a part installation exercise module are arranged in the server; a plurality of clients are provided, one client is set as the main client, the other clients are synchronous clients, and all clients communicate with the server; the model full-view display module is used for displaying a model stored in the server and is in data communication with all clients; the part disassembly exercise module is used for disassembly interaction with a model stored in the server; the part voice-explanation cognition module is used for structure-cognition teaching of the corresponding parts of a model stored in the server; the working-principle voice-explanation module is used for voice-explanation teaching of the working principle of a model stored in the server; the working-principle animation display module is used for animated display teaching of the working principle of a model stored in the server; and the part installation exercise module is used for part-assembly interaction with a model stored in the server;
the teaching system has the following operation mode:
(1) the main client and all synchronous clients connect to the same server to form the same teaching scene;
(2) in accordance with a prearranged real scene, the main client recognizes an identifier placed at a specified position in the real scene through its SLAM system module and, once the identifier is recognized, establishes a target coordinate system with the identifier as the target;
(3) the main client pulls the required model from the model data stored in the server, displays it in the coordinate-system scene of the client, and synchronizes the display target to all synchronous clients;
(4) any client can operate the model, and the change of the model target is synchronized through the server to the other clients in the same teaching scene;
(5) after the model has been placed and accurately positioned, the model full-view display module keeps the model's position fixed, and the form of the model from different views is observed by moving the camera position of a client;
(6) any client operates the part disassembly exercise module, which pulls the disassembly data of the corresponding model from the server, and part-disassembly operation of the model is realized on all clients;
(7) any client operates the part voice-explanation cognition module, which pulls the part structure-cognition data of the corresponding model from the server, and part structure-cognition teaching of the model is realized on all clients;
(8) any client operates the working-principle voice-explanation module, which pulls the working-principle explanation data of the corresponding model from the server, and working-principle cognition teaching of the model is realized on all clients;
(9) any client operates the working-principle animation display module, which pulls the working-principle animation data of the corresponding model from the server, and working-principle cognition teaching of the model is realized on all clients;
(10) any client operates the part installation exercise module, which pulls the part-assembly teaching data of the corresponding model from the server, and part assembly, installation and display of the model are realized on all clients.
2. The AR multi-person interactive industrial robot teaching system based on SLAM positioning technology of claim 1, characterized in that: the server is provided with corresponding interactive buttons for the model full-view display module, the part disassembly exercise module, the part voice-explanation cognition module, the working-principle voice-explanation module, the working-principle animation display module and the part installation exercise module, and these interactive buttons are displayed on all clients.
3. The AR multi-person interactive industrial robot teaching system based on SLAM positioning technology of claim 1, characterized in that: the client is an AR wearable glasses device with an embedded SLAM module.
4. The AR multi-person interactive industrial robot teaching system based on SLAM positioning technology of claim 1, characterized in that: the server is a local area network server.
5. The AR multi-person interactive industrial robot teaching system based on SLAM positioning technology of claim 1, characterized in that: the server is a virtual server arranged in the main client, and other synchronous clients communicate with the main client through a local area network.
6. The AR multi-person interactive industrial robot teaching system based on SLAM positioning technology of claim 1, characterized in that: the model data stored in the server comprises model structure display data, model part disassembly data, model part structure cognition data, model working principle cognition explanation data, model working principle animation display data and model assembly interaction data.
7. The AR multi-person interactive industrial robot teaching system based on SLAM positioning technology of claim 1, characterized in that: the client is internally provided with a visual odometry module, a loop detection module and a nonlinear optimization module, wherein
the visual odometry module estimates the camera motion from adjacent frame images captured by the client's camera and builds a local map;
the loop detection module performs loop detection to judge whether the model displayed in the client has returned to a previous position, and if a loop is detected, provides the information to the nonlinear optimization module so that position drift accumulated over time can be corrected;
and the nonlinear optimization module receives the camera poses measured by the visual odometry at different times together with the loop-detection information, and optimizes and de-noises the acquired data to obtain a globally consistent trajectory and map.
CN202110402698.XA 2021-04-14 2021-04-14 AR multi-person interaction industrial robot teaching system based on SLAM positioning technology Pending CN113110742A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110402698.XA CN113110742A (en) 2021-04-14 2021-04-14 AR multi-person interaction industrial robot teaching system based on SLAM positioning technology

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202110402698.XA CN113110742A (en) 2021-04-14 2021-04-14 AR multi-person interaction industrial robot teaching system based on SLAM positioning technology

Publications (1)

Publication Number Publication Date
CN113110742A true CN113110742A (en) 2021-07-13

Family

ID=76717617

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110402698.XA Pending CN113110742A (en) 2021-04-14 2021-04-14 AR multi-person interaction industrial robot teaching system based on SLAM positioning technology

Country Status (1)

Country Link
CN (1) CN113110742A (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114202620A (en) * 2022-02-16 2022-03-18 杭州杰牌传动科技有限公司 Transmission product instant design display system and design method for realizing design interactive linkage
WO2024055396A1 (en) * 2022-09-14 2024-03-21 上海智能制造功能平台有限公司 System and method for interactive teaching of parts assembly

Similar Documents

Publication Publication Date Title
CN110233841B (en) Remote education data interaction system and method based on AR holographic glasses
CN106530894B (en) A kind of virtual head up display method and system of flight training device
CN106293087B (en) A kind of information interacting method and electronic equipment
CN114327060B (en) Working method of virtual teaching system based on AI assistant
CN113110742A (en) AR multi-person interaction industrial robot teaching system based on SLAM positioning technology
US20180367787A1 (en) Information processing device, information processing system, control method of an information processing device, and parameter setting method
Fang et al. An augmented reality-based method for remote collaborative real-time assistance: from a system perspective
CN114401414B (en) Information display method and system for immersive live broadcast and information pushing method
CN109901713B (en) Multi-person cooperative assembly system and method
CN109828666B (en) Mixed reality interaction system and method based on tangible user interface
CN105183161A (en) Synchronized moving method for user in real environment and virtual environment
CN110427107A (en) Virtually with real interactive teaching method and system, server, storage medium
CN114998063B (en) Immersion type classroom construction method, system and storage medium based on XR technology
CN114092290A (en) Teaching system in educational meta universe and working method thereof
CN112509401A (en) Remote real-practice teaching method and system based on augmented reality projection interaction
CN115933868A (en) Three-dimensional comprehensive teaching field system of turnover platform and working method thereof
CN110444061A (en) Internet of Things teaching one-piece
Dutta Augmented reality for e-learning
CN205540577U (en) Live device of virtual teaching video
CN110794952A (en) Virtual reality cooperative processing method, device and system
Fadzli et al. A robust real-time 3D reconstruction method for mixed reality telepresence
CN103700128A (en) Mobile equipment and enhanced display method thereof
Lu et al. An immersive telepresence system using rgb-d sensors and head mounted display
CN115576427A (en) XR-based multi-user online live broadcast and system
Ercan et al. On sensor fusion for head tracking in augmented reality applications

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination