CN111597674A - Intelligent engine maintenance method based on man-machine cooperation - Google Patents


Info

Publication number
CN111597674A
CN111597674A
Authority
CN
China
Prior art keywords
user
engine
model
information
maintenance
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201910129572.2A
Other languages
Chinese (zh)
Other versions
CN111597674B (en)
Inventor
吕舒亚
滕东兴
高庆
马翠霞
单钰皓
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Institute of Software of CAS
Original Assignee
Institute of Software of CAS
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Institute of Software of CAS filed Critical Institute of Software of CAS
Priority to CN201910129572.2A priority Critical patent/CN111597674B/en
Publication of CN111597674A publication Critical patent/CN111597674A/en
Application granted granted Critical
Publication of CN111597674B publication Critical patent/CN111597674B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical


Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00: Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01: Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/011: Arrangements for interaction with the human body, e.g. for user immersion in virtual reality
    • G06F3/013: Eye tracking input arrangements
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00: Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01: Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/017: Gesture based interaction, e.g. based on a set of recognized hand gestures

Abstract

The invention provides an intelligent engine maintenance method based on man-machine cooperation, which comprises the following steps: collecting data information of the engine, modeling all parts of the engine, determining the dimensional correlations and dependency relationships among the parts, establishing a U3D model of the engine, and storing the model in a database; performing object recognition with the Vuforia technology, establishing an object database for the model, and matching engine images against the model; in the augmented reality application, querying according to the user's input, or judging through the object recognition function of the ARCamera, to analyze the user's requirements; and after the user enters the maintenance scene, acquiring the parent and child nodes of the operated object according to the user's requirements, and recommending a scheme according to the logical relationships between objects, so as to guide the user through correct operations and realize intelligent maintenance of the engine.

Description

Intelligent engine maintenance method based on man-machine cooperation
Technical Field
The invention belongs to the field of human-computer interaction, and particularly relates to an intelligent engine maintenance method based on human-computer cooperation.
Background
With the rapid development of the industrial internet, industrial production is developing toward digitalization, automation, and intelligence. Industrial production places ever higher demands on the personnel who manage, operate, maintain, and overhaul equipment and products, and traditional operation and maintenance methods can no longer meet production and cost requirements. Facing stricter reliability and predictive-maintenance requirements and more varied fault triggers, industrial enterprises urgently need an operation and maintenance mode that saves cost and time, speeds up processing, and is socialized and informatized (Tibetan definition, Yangmeng wave. Predictive maintenance strategy for intelligent maintenance under the condition of industrial internet [J]. Equipment Management and Maintenance, 2017, 19: 62-63).
Taking the automobile manufacturing industry as an example, the field of automobile engine maintenance requires a large number of professionals in automobile repair, automobile parts, and related areas who have mastered engine maintenance technology well. Because the labor market has little theoretical research on the assembly and disassembly of new engines and cannot keep pace with automobile engine technology, professionals must spend more time learning and mastering this knowledge, while non-professional users incur high time costs waiting for maintenance. In addition, operator error and wear on the equipment itself raise costs and increase both the difficulty of maintenance and the accuracy required of operations. In a fast-paced market environment, to address long maintenance times and high maintenance costs, assisted maintenance in a man-machine cooperation mode is needed to improve engine maintenance efficiency.
Man-machine cooperation emphasizes that the machine automatically coordinates with a person's work and automatically adapts to environmental changes. The man-machine cooperation mode replaces the simple, crude substitution of robots for finely divided assembly-line work with a 'human-oriented' organizational mode, so that machines and people each do what they are best at: machines take on more of the repetitive, tedious, and dangerous work, while people take on more of the creative work (Lidawn, Wucapo Yang. The essence of 'artificial intelligence + manufacturing' is 'man-machine cooperation' [N]. Economic Daily, 2018-09-28 (015)). Realizing efficient and natural interaction between humans and intelligent machine systems is the goal of research on human-machine intelligent collaboration. Therefore, developing services oriented toward human-machine intelligent cooperation, using an intelligent cooperative control method and intelligent interaction technology, can effectively solve many problems in the industrial manufacturing field.
Disclosure of Invention
The invention aims to realize an intelligent engine maintenance method based on man-machine cooperation. The method realizes true man-machine cooperation; effectively solves problems in the engine maintenance process such as difficulty in finding information, unintuitive knowledge, and complex, error-prone operations; shortens the time for a user to acquire required information; improves maintenance efficiency; and greatly reduces the user's load during operation.
In order to achieve the purpose, the invention adopts the following technical scheme:
an intelligent engine maintenance method based on man-machine cooperation comprises the following steps:
collecting data information of the engine, modeling all parts of the engine, determining the size correlation and the dependency relationship of the parts, establishing a U3D model of the engine, and storing the model in a database;
adopting a Vuforia technology to identify objects, establishing an object database of the model, and realizing information matching between an engine image and the model;
in augmented reality application, inquiring according to input information of a user, or judging through an object recognition function of an ARCamera camera, and analyzing the requirements of the user;
and after the user enters the maintenance scene, acquiring the parent and child nodes of the operated object according to the user's requirements, and recommending a scheme according to the logical relationships between objects, so as to guide the user through correct operations and realize intelligent maintenance of the engine.
Further, the data information of the engine includes category, model, appearance, size, material, and color.
Further, the modeling method comprises the following steps: performing three-dimensional modeling of the engine parts with SolidWorks so that the model fits the real object; adjusting the light-source and material attributes of the generated model with 3Dmax and determining the relative positions and connection relationships of the parts; and importing the finally generated model files into Unity and integrating them with the data information of the engine.
Further, the data stored in the database further includes:
menu options including model options, tool options, problems to be solved, help guidance required;
model exhibition, including automobile model, engine model, parts model;
the text description comprises information marking and scheme description;
the animation comprises disassembling and assembling the animation;
and the AR instruction comprises information types for identifying an object to be operated, indicating a position and guiding a step.
Further, when the Vuforia technology is used to perform object recognition on real engines, multi-angle photographs of each engine must be acquired, and the photographs must clearly show both the overall outline and the details; the SDK is imported and matched with the corresponding model; the ARCamera is used to frame, track, and recognize the image object, image recognition is performed through ImageTarget, and the captured image is matched against the pictures in the object database. If a match is found, the model information corresponding to the picture is provided to the user; if no match is found, tracking, shooting, and recognition continue until a result in the object database is matched; otherwise, the system may hold no information for that type of engine.
Further, the Vuforia technology includes the Vuforia recognition algorithm, which completes recognition by detecting and matching natural feature points: the feature points detected from images in the Target Manager are stored in a database, feature points in the live image are detected in real time, and they are matched against the feature-point data of the template pictures in the database.
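The patent does not disclose Vuforia's internal matcher (it is proprietary); as a rough illustration of matching natural feature points against a template database, a nearest-neighbour descriptor search with a ratio test might look like the following sketch (function names, the descriptor representation, and the ratio threshold are all assumptions):

```python
import math

def match_features(frame_desc, template_desc, ratio=0.8):
    """Match frame feature descriptors against a template's descriptors
    using a nearest-neighbour ratio test (a common simplification; the
    actual Vuforia matcher is proprietary). Returns (frame_i, template_j)
    index pairs for confident matches."""
    matches = []
    for i, f in enumerate(frame_desc):
        # Distances from this frame descriptor to every template descriptor.
        dists = sorted((math.dist(f, t), j) for j, t in enumerate(template_desc))
        # Accept only if the best match is clearly better than the second best.
        if len(dists) >= 2 and dists[0][0] < ratio * dists[1][0]:
            matches.append((i, dists[0][1]))
    return matches
```

A frame would be declared "matched" to a template picture when enough such correspondences survive the ratio test.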
Further, the Vuforia technology includes the Vuforia tracking algorithm, a model-based tracking algorithm comprising three main tasks: modeling, visual-information processing, and tracking. On the basis of the established model and the processed visual information, prior information about the tracked object is used as an initial value, and the acquired model information is matched with the input data to determine the pose of the tracked object in space, thereby realizing tracking of the target.
Further, when the ARCamera performs real-time framing, the problems of object occlusion and camera tracking are solved by linear interpolation: keeping the distance between the camera and the target object unchanged, interpolation is performed over the quarter circle directly above and behind the object; when an occluding object exists, the camera searches among the preset points for one from which the target object is visible, then continues to follow the target to find the optimal camera position.
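The quarter-circle interpolation described above can be sketched as follows; the coordinate convention (y up, negative z behind the object) and the `is_occluded` query are assumptions, the latter standing in for a scene raycast:

```python
import math

def quarter_circle_positions(target, radius, steps=16):
    """Sample camera positions on the quarter circle from directly behind
    (theta = 0) to directly above (theta = pi/2) the target, keeping the
    camera-to-target distance fixed at `radius`."""
    tx, ty, tz = target
    positions = []
    for i in range(steps + 1):
        theta = (math.pi / 2) * i / steps
        positions.append((tx,
                          ty + radius * math.sin(theta),   # height above target
                          tz - radius * math.cos(theta)))  # offset behind target
    return positions

def find_camera_position(target, radius, is_occluded, steps=16):
    """Return the first sampled point with a clear line of sight to the
    target; `is_occluded(point, target)` is a hypothetical scene query
    (a raycast in a real engine)."""
    for p in quarter_circle_positions(target, radius, steps):
        if not is_occluded(p, target):
            return p
    return None  # no unoccluded viewpoint on this arc
```

Every candidate point lies at the same distance from the target, so only the viewing angle changes while searching past the occluder.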
Further, if the user does not operate according to the guidance, the object the user is actually operating is determined and the sub-objects of this new object are identified, so that the user obtains correct step guidance in real time under the new condition; if the user does not perform the correct operation, this step cannot be completed.
Further, a recursive query algorithm is used to acquire the parent and child nodes of the operated object, and the uppermost parent node of the operated object is obtained through its transform. For acquisition of child nodes, the search starts from the child objects of the current object and returns a linked list of the active objects marked by the given tag; if none is found, the list is empty. The object is traversed recursively as a parent object: if a child object's tag matches the given tag, the child object is stored in the linked-list array; if the child object itself has children, its children are traversed recursively.
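A minimal sketch of this recursive parent/child query, using a stand-in `Node` class in place of Unity's scene-graph transforms (the class and function names are illustrative, not the patent's code):

```python
class Node:
    """Minimal stand-in for a Unity scene-graph transform (hypothetical;
    real code would use UnityEngine.Transform and GameObject tags)."""
    def __init__(self, name, tag, active=True, parent=None):
        self.name, self.tag, self.active = name, tag, active
        self.parent, self.children = parent, []
        if parent:
            parent.children.append(self)

def root_of(node):
    """Uppermost parent node (analogue of walking up via the transform)."""
    while node.parent is not None:
        node = node.parent
    return node

def find_children_by_tag(node, tag, found=None):
    """Recursively collect active descendants whose tag matches, as the
    child-node query above describes; returns an empty list if none."""
    if found is None:
        found = []
    for child in node.children:
        if child.active and child.tag == tag:
            found.append(child)
        if child.children:  # child is itself a parent: recurse
            find_children_by_tag(child, tag, found)
    return found
```

For a disassembly step, `root_of` identifies which assembly the touched part belongs to, and `find_children_by_tag` lists the sub-parts (e.g. all "bolt"-tagged objects) that the next operation concerns.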
Furthermore, in the augmented reality application, the user interacts through gaze and gestures: for the object the user gazes at, the model is dragged and zoomed according to the user's gestures, and menu options and tools are selected, so that the user can view any angle and part detail, disassemble and adjust the model, and be assisted in acquiring knowledge and analyzing data. The gestures include tap, long-press, operation, and navigation gestures.
An intelligent engine maintenance system based on man-machine cooperation adopting the above method comprises:
the data layer module comprises a user instruction database, a model information database and a maintenance information database, and is respectively used for storing a user instruction, model data of the engine and maintenance information;
the service layer module comprises an information collection system, a scheme recommendation system and a teaching guidance system, and is respectively used for providing services of information collection, scheme recommendation and teaching guidance for the user;
the application layer module is used for realizing acquisition of instruction information input by a user, operation feedback of the system by the user, recommendation of reasonable schemes and step guidance to the user and demonstration of teaching guidance preset by the system to the user;
and the user layer module, the human-computer interaction interface between the user and the system, implemented as a HoloLens-based augmented reality application.
The main content of the invention comprises:
1. Visualization method from engine parameters to a U3D model
Models that can stand in for real objects are built with modeling software such as SolidWorks and 3Dmax; they serve as objects in the augmented reality scene, correspond to real objects in the real scene, and carry physical attributes, geometric feature parameters, and other relevant information. According to the characteristics of engine product and part data, the invention adopts a visualization technique that proceeds from two dimensions to three dimensions and from parts to the whole according to subordinate correlations, finally generating a U3D model. Modeling is performed step by step with cumulative superposition, and the degrees of freedom of the parts are limited according to their relative positions, ensuring that the parts form a whole product under their mutual constraints and fit the real engine closely.
2. Vuforia object recognition technology
The present invention uses the Vuforia augmented reality software development kit (Vuforia SDK) to recognize and capture planar images or simple three-dimensional objects in real time using computer vision techniques, then places virtual objects in the camera viewfinder and adjusts their position against the real background in front of the lens. Object recognition of real engines is performed using the Vuforia technology together with the ARCamera of the HoloLens augmented reality headset. The Vuforia recognition and tracking algorithms help the user track, view, and match with the camera in real time. The user frames the view with the HoloLens camera; the image object is tracked and recognized in real time, the captured image is matched against the pictures in the object database, and the corresponding model information is provided to the user. This technology assists users in finding engine information even when they know nothing about the engine, offers reasonable guesses based on the machine's understanding, and reduces the time users spend screening the required information out of a large volume of data.
3. Plan recommendation algorithm
For the process of guiding the user through disassembly, a scheme recommendation algorithm is adopted: the upward and downward node objects of the operated object are queried recursively, stored in a linked list, and invoked in combination with user instructions. According to the user's current object, the operation performed, the result of that operation, the object's children, and other conditions, the system computes the operation the user should complete next, determines the next object to operate on, and provides maintenance guidance and prompts. If, in guidance mode, the user does not operate according to the guidance, the system determines which part the user is actually operating and identifies the new sub-objects by tag, so the user obtains correct step guidance in real time under the new condition; if the user completes the correct operation as prompted, the part is marked as successfully disassembled and the user is guided through the following steps. This realizes timely processing of user information and real-time recommendation of operation guidance.
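A simplified sketch of this recommendation step (the step data, field names, and deviation handling are hypothetical; the patent's actual algorithm also consults the scene-graph queries described earlier):

```python
def next_step(plan, completed, current_part):
    """Given an ordered disassembly plan, the set of parts already removed,
    and the part the user is currently handling, return guidance for the
    next operation, or None when the plan is finished."""
    if current_part is not None and current_part not in [s["part"] for s in plan]:
        # User deviated from the guidance: re-orient around the new object.
        return {"part": current_part,
                "hint": "identify sub-objects of " + current_part}
    for step in plan:
        if step["part"] not in completed:
            return step  # first step not yet marked as disassembled
    return None

# Illustrative plan data (not from the patent).
plan = [
    {"part": "valve cover", "tool": "socket wrench", "hint": "remove 4 bolts"},
    {"part": "camshaft", "tool": "torque wrench", "hint": "loosen caps evenly"},
]
```

Each completed operation adds its part to `completed`, so the next call naturally advances the guidance; a part outside the plan triggers re-identification instead of a stale prompt.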
4. Gesture interaction method based on augmented reality scene
In the augmented reality scene, an information channel is established between the person and the machine for bidirectional communication: the user issues instructions to the system through selection, gestures, and similar methods, and the system lets the user easily acquire and freely process information according to their needs. The system provides several gesture operations. During interaction, the user only needs to wear the AR glasses, with no need to search for information on a computer, consult a manual, or dig through a toolbox, truly freeing both hands. The augmented reality scene also gives the user an immersive experience, making the engine model and its information more intuitive and easier to understand. This technology realizes low-load, natural, and efficient interaction.
5. Man-machine cooperative intelligent auxiliary maintenance method
The engine maintenance industry is highly complex: a product is often made up of many parts, and different enterprises use different production processes and parts when producing the same product. Because of differing production processes and equipment interfaces, maintainers of engine products need considerable technical skill and knowledge, and machines must help solve the many complex problems of the maintenance process. The man-machine cooperation method truly makes the assistance process intelligent rather than merely automated. Automation pursues automatic machine production; its essence is machines replacing people, with the emphasis on large-scale machine production. Intelligence instead pursues flexible machine production, with the machine automatically coordinating with people's work and adapting to environmental changes. The man-machine cooperative working mode is not a simple, crude swap of robots for people, but a new human-oriented organizational mode in which robots and people each do the work they are best at: robots take on more of the repetitive, tedious, and dangerous work, and people take on more of the creative work. Through man-machine cooperation, intelligent services are provided, the product state is monitored in real time, user needs are answered, and services such as fault prediction, maintenance guidance, and integrated solutions are offered, achieving assistance that matches users' real needs.
Compared with the prior art, the invention has the advantages and positive effects as follows:
1. the invention provides an engine-oriented man-machine cooperative intelligent maintenance system. By visualizing engine information and presenting it as a 3D model, it supports viewing from multiple angles and in detail, reduces the complexity of the information, lightens the user's cognitive burden, makes the information more intuitive, and helps the user understand the knowledge;
2. the invention realizes intelligent scheme recommendation through a recommendation algorithm, helping the user quickly obtain the required information in line with the user's cognitive psychology. The human-machine cooperative working mode lets people do the thinking work they are good at while the machine intelligently 'understands' their intent, assists according to their thoughts and needs, intelligently provides guidance, helps with information organization and decision making, reduces the user's memory burden, and saves labor and time costs;
3. the invention applies augmented reality technology to the assisted maintenance system: the user can acquire information through multiple modalities such as gesture selection and object recognition, interact naturally with the model, and cooperate naturally and efficiently with the machine, realizing true man-machine cooperation. Compared with traditional assisted maintenance methods, the user's load is lower and working efficiency is higher.
Drawings
FIG. 1 is a hierarchy frame diagram of an intelligent maintenance system for an engine based on human-computer coordination;
FIG. 2 is a diagram of a method for intelligent engine maintenance based on human-machine coordination;
FIG. 3 is a process diagram of data visualization and U3D modeling;
FIG. 4 is a diagram of an object recognition and tracking algorithm;
FIG. 5 is a schematic illustration of an interpolation method;
FIG. 6 is a visual layout of a model and interactive interface;
FIG. 7 is a diagram of Vuforia object recognition effects;
FIG. 8 is a product story map;
FIG. 9 is a diagram of the scheme recommendation effect during engine disassembly, including tool-use prompts, prompts for the component to be operated, prompts for the component to be removed, etc.;
FIG. 10 is a diagram of the scheme recommendation effect during engine assembly, including tool-use tips, installation location indications, etc.;
fig. 11 is a system teaching mode effect diagram including a model disassembling animation, a character information explanation, and the like.
Detailed Description
In order to make the present invention better understood by those skilled in the art, the following describes an intelligent engine maintenance method based on human-machine cooperation in detail with reference to the accompanying drawings, but the present invention is not limited thereto.
Taking the maintenance of a specific engine model as an example, intelligent man-machine cooperative maintenance of the engine is realized through a HoloLens-based augmented reality application. The AR device provides real-time maintenance guidance; in the cooperative working mode, the machine adapts to the human's way of organizing work and assists the user with problems encountered during maintenance, greatly reducing the user's memory burden. According to the user's needs and interaction, information is displayed as models and images, reducing the user's cognitive burden and giving a convenient, intuitive, comprehensive view of product information. Machine and human work together to help the user organize information, analyze problems, and make decisions, so that the product data and information the user acquires are highly intelligent and visualized; in the augmented reality scene, the user interacts efficiently with the system and obtains a better interactive experience.
An industrial product is selected for intelligent assisted maintenance; this embodiment takes the data of an existing C-130 engine as the object. As shown in FIG. 1, the hierarchical framework of the man-machine cooperative intelligent engine maintenance system comprises the following four parts:
1) data layer: a database storing user instructions, model data, maintenance information, and other data;
2) service layer: provides the user with services such as information acquisition, scheme recommendation, and teaching guidance;
3) application layer: collects instruction information input by the user, feeds back the user's operations on the system, recommends reasonable schemes and step guidance to the user, and demonstrates the system's preset teaching guidance;
4) user layer: the human-computer interaction interface between the user and the system, an augmented reality scene based on HoloLens.
As shown in FIG. 2, the flow diagram of the intelligent engine maintenance method based on man-machine cooperation comprises the following steps:
1) information acquisition:
A) the user enters the three-dimensional holographic maintenance scene through the HoloLens-based system and selects the maintenance object and requirement as needed; the system presents the 3D model of the product and its maintenance information according to the user's selection;
B) the user can use the ARCamera to track and view an object and recognize its information; the object tracking and recognition method is shown in fig. 3;
C) the user selects functions in the scene: in guided maintenance mode, the system recommends a reasonable scheme to guide the user's operations; in self-operation mode, the model can be freely disassembled and dragged to view part details from multiple angles, and the user can select different tools to operate on and change different parts of the model;
D) the user's operations in the system are fed back in real time, and whether the user follows the guidance affects the system's prompt at each step, so that the user can find problems in time and locate the cause of a fault.
The above system and method are described in detail as follows:
1) a hierarchical framework of the maintenance system is designed according to industrial production and maintenance requirements. The system is divided into four parts: data layer, service layer, application layer, and user layer. The data layer stores and provides the knowledge base needed for assisted maintenance; the service layer processes the collected user information and recommends schemes to the user; the application layer provides the user's operation information and feedback, screening of maintenance schemes, and selection of maintenance scenes; and the user layer, a HoloLens-based augmented reality application, provides the user with the operating platform, the model, and the presentation of the guidance scheme;
2) a visualization method suited to the data characteristics of engine products and parts is adopted, which must fit the real elements closely and meet a high precision standard. The engine model is parametric and built up cumulatively, so all parts of the engine are modeled first, then the correlations among key dimensions and the dependencies among parts are processed, the U3D model of the engine is established, and information visualization of the engine model is realized;
3) combining 1) and 2), the visualization process from engine product data to model is completed, the model is stored in the system database, and the information is integrated. The Vuforia technology is used to match and recognize objects against images and models. The system queries according to the user's input or judges through the camera's object recognition function, analyzes the user's requirements, and provides query results and solutions;
4) schemes are recommended according to the user's requirements: a recursive query algorithm acquires the parent and child nodes of the operated object, and recommendations follow the logical relationships between objects. When the user enters the maintenance scene, the system provides the model and related information, analyzes the current problem and solution according to the user's function selection and real-time operations, intelligently recommends the next operation prompt, and guides the user through correct operations;
5) a gaze- and gesture-based interaction method is designed for the augmented reality scene. For the object the user gazes at, the user's gestures drag and zoom the model and select menu options and tools, making it convenient to view any angle and part detail and to disassemble and adjust the model, assisting the user in acquiring knowledge and analyzing data.
(1) Visualizing engine data:
the data information of the engine products and parts comprises category information, model information, appearance information, size information, material information, color information and the like, and a corresponding U3D model is established according to the information, wherein the model comprises all relevant data information.
Modeling parts by using SolidWorks, constructing a two-dimensional graph of the engine parts, performing three-dimensional processing such as rotation and stretching on the two-dimensional graph to form a three-dimensional graph, and finishing details such as size, so that the model is more accurate and is more fitted with a real object; the 3Dmax is used for further adjusting attributes such as light sources and materials of the generated model, and the relative positions and the relation between the parts are determined; and finally, importing the finally generated model file into Unity, integrating the finally generated model file with the related information of the U3D model, and constructing a data layer of the system.
Compared with the traditional approach of converting data into listed images and textual descriptions, the visualization method adopted by the system is better suited to the characteristics of engine product and part data. A U3D model is generated progressively, from nothing to a whole and from two dimensions to three, following the subordinate relationships between parts. Three modeling tools are used in successive, cumulative steps; the degrees of freedom of each part are constrained by the relative positions of the parts, ensuring that under these relative constraints the parts form a complete product that closely matches the real engine.
(2) Establishing a database of models and information:
The data stored in the data layer's system database includes menu options (model options, tool options, problems to be solved, and required help guidance), model displays (automobile models, engine models, and part models), text descriptions (information labels and scheme descriptions), animations (disassembly and assembly animations), and AR instructions (identification of the object to be operated, position indication, and step guidance).
The user can search for the desired information in various ways. For example, an engine model is matched in the data layer based on information such as the engine's model number, size, and appearance. If the user knows the engine's model number, the corresponding engine model is selected directly and the related maintenance scene is entered. If the user knows nothing about the engine to be maintained, the AR device can be worn to photograph and scan it using object recognition; the engine's information is estimated from its appearance, the search range is narrowed through a combination of human judgment and option prompts, and the target information is finally determined.
Vuforia is used for object recognition of real engines. An object database is built, and images of real engines are uploaded to it for storage; multi-angle photographs must be collected for each engine, and the overall outline and details in each picture must be clear, to improve recognition accuracy and efficiency. The SDK is imported into the system data layer and matched with the corresponding models. The user frames the scene with the ARCamera, which tracks and identifies the image object; the ImageTarget performs image recognition, matching the captured image against the pictures in the object database. If a match is found, the model information corresponding to that picture is provided to the user; if not, tracking and recognition continue until a result in the object database is matched. Otherwise, the system may simply not store information for that type of engine.
The Vuforia recognition algorithm works by detecting matches between natural feature points. The feature points detected in the images in the Target Manager are stored in a database, and the feature points detected in the live image are then matched in real time against the feature-point data of the template pictures in the database. The system uses Vuforia to recognize and match the original image and present the result to the user; because this process is affected by many factors, the target recognition rate can be low.
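Vuforia's matcher is proprietary, but the flow described above (detect feature points, then match live-frame points against stored template points in real time) can be illustrated with a minimal, hypothetical sketch using ORB/BRIEF-style binary descriptors compared by Hamming distance. All names and thresholds here are illustrative assumptions, not the Vuforia API:

```python
# Hypothetical sketch of natural-feature-point matching, analogous in spirit to
# matching live-frame feature points against a template image in the object
# database. Binary descriptors are packed into ints; distance is Hamming.

def hamming(a: int, b: int) -> int:
    """Hamming distance between two binary descriptors packed as ints."""
    return bin(a ^ b).count("1")

def match_features(frame_desc, template_desc, max_dist=10, ratio=0.8):
    """Return (frame_idx, template_idx) pairs that pass a ratio test."""
    matches = []
    for i, d in enumerate(frame_desc):
        dists = sorted((hamming(d, t), j) for j, t in enumerate(template_desc))
        best = dists[0]
        second = dists[1] if len(dists) > 1 else (float("inf"), -1)
        # Accept only unambiguous matches: the best candidate must be close
        # enough and clearly better than the runner-up.
        if best[0] <= max_dist and best[0] < ratio * second[0]:
            matches.append((i, best[1]))
    return matches

def is_recognized(frame_desc, template_desc, min_matches=3):
    """A template counts as recognized when enough feature points match."""
    return len(match_features(frame_desc, template_desc)) >= min_matches
```

This also makes concrete why recognition degrades under blur or poor lighting: fewer stable feature points survive in the live frame, so fewer pairs pass the thresholds.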
Without autofocus, the captured live scene is blurry, which severely affects target recognition and greatly reduces detection and tracking performance. Several methods of adjusting camera focus are therefore needed:
1. Continuous autofocus (FOCUS_MODE_CONTINUOUS_AUTO): the device focuses automatically on the current scene;
2. Other Vuforia focus modes: not all devices support continuous autofocus, so other focus modes may need to be enabled;
3. Triggered autofocus (FOCUS_MODE_TRIGGER_AUTO): tapping the screen triggers autofocus.
In AR algorithms, lighting is also a problem that cannot be ignored: illumination conditions largely determine detection and tracking performance. Some methods of adjusting the lighting are needed:
1. Ensure sufficient ambient light so that the camera can clearly capture the information in the image.
2. Ensure that the lighting is stable and controllable. Indoor and outdoor AR differ somewhat algorithmically, and Vuforia is mostly used indoors.
3. When the device serves as the ARCamera and the application must operate in a dark environment, the flashlight should be turned on as a fill light:
Vuforia API:
CameraDevice.Instance.SetFlashTorchMode(true);
in Unity3D:
CameraDevice.Instance.SetFlashTorchMode(true);
The Vuforia tracking algorithm is a model-based tracking algorithm comprising three main tasks: model, visual information processing, and tracking, as shown in fig. 4. Building on the established model and on visual information processing, prior information about the tracked object serves as the initial value; the model information obtained through visual processing is matched against the input data to determine the object's pose in space and thereby track the target. Even when the original picture in the object database is clear and easy to recognize, real-time framing with the AR camera can suffer from lighting, shadows, viewing angle, object occlusion and overlap, and lens distortion caused by device shake; all of these can make recognition inaccurate or wrong, so manual selection and correction are needed. Fig. 7 shows the Vuforia object recognition result.
Interpolation can, to a certain extent, solve the ARCamera's object-occlusion and tracking problems, as shown in fig. 5. Without changing the distance between the camera and the Player, interpolation is performed within the quarter circle ranging from directly above to directly behind the Player: when an obstruction appears, the camera finds in advance a preset point from which the Player is visible, then continues to follow the Player, settling on the optimal ARCamera position. Linear interpolation joins adjacent sample points in sequence with line segments and takes the height of a point on each segment as the interpolated value. When the left endpoint of a segment is (xi, yi) and the right endpoint is (x(i+1), y(i+1)), the height y of a point with abscissa x in the range [xi, x(i+1)] is:
y = yi + (y(i+1) - yi) * (x - xi) / (x(i+1) - xi)

The symmetric form is:

y = yi * (x(i+1) - x) / (x(i+1) - xi) + y(i+1) * (x - xi) / (x(i+1) - xi)

Here the two coefficients multiplying yi and y(i+1) are called basis functions; they sum to 1 and represent the respective weights of yi and y(i+1) in the height of the interpolation point.
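The two forms of linear interpolation above translate directly into code. This is a minimal sketch (function names are illustrative):

```python
# Linear interpolation as described above: adjacent sample points are joined by
# line segments, and the interpolated height is read off the segment.

def lerp(x, xi, yi, xj, yj):
    """Direct form: y = yi + (yj - yi) * (x - xi) / (xj - xi)."""
    return yi + (yj - yi) * (x - xi) / (xj - xi)

def lerp_basis(x, xi, yi, xj, yj):
    """Symmetric 'basis function' form; the two weights always sum to 1."""
    wi = (xj - x) / (xj - xi)   # weight of yi
    wj = (x - xi) / (xj - xi)   # weight of yj
    return wi * yi + wj * yj
```

Both forms give the same value; the symmetric form makes the weights explicit, which is convenient when blending camera positions along the follow path.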
(3) Recommending schemes based on user requirements:
The system communicates bidirectionally with the user: it collects information such as the user's selections, functional requirements, and real-time usage feedback, processes user instructions, returns the results to the user, and makes real-time scheme recommendations based on the user's operation feedback during guidance.
the scheme recommending function provides maintenance guide information and prompts recommended according to the requirements of the user under the specific scene and model state. Specifically, the intelligent recommendation system searches parts needing to be operated and tools needed to be used for completing the operation, and prompts a user to perform the next operation; if the user does not perform additional operation according to guidance in the guidance mode, the system judges the object which is operated by the user, identifies the sub-object of the new object, and enables the user to obtain correct step guidance in real time under the new condition; if the user does not perform the correct operation, the operation of this step cannot be completed.
When recommending a scheme to the user, a recursive query algorithm is used. For the parent node, the topmost parent of the object is obtained through the Transform hierarchy. For the child nodes, the search starts from the current object's children and returns a linked list of the active objects marked with the given tag; if none is found, the returned list is empty.
When every term of a recurrence relation is of degree one, the relation is a linear recurrence. A linear recurrence that expresses the next term through the preceding k consecutive terms is called a k-th-order linear recurrence. Its general form is:

a(n+k) = m1*a(n+k-1) + m2*a(n+k-2) + … + mk*a(n)

For example, the sequence a(n) = n can be written as the first-order recurrence a(n+1) = a(n) + 1 with initial condition a(1) = 1, or as the second-order recurrence a(n+2) = 2*a(n+1) - a(n) with initial conditions a(1) = 1 and a(2) = 2.
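A k-th-order linear recurrence of the general form above can be evaluated iteratively. The sketch below is illustrative (the function name is an assumption); the test values reproduce the second-order representation of a(n) = n, with coefficients 2, -1 and initial values 1, 2:

```python
# Evaluate a k-th-order linear recurrence
#   a(t+k) = coeffs[0]*a(t+k-1) + coeffs[1]*a(t+k-2) + ... + coeffs[k-1]*a(t)
# given the k initial terms a(1)..a(k).

def linear_recurrence(coeffs, initial, n):
    """Return the first n terms of the sequence."""
    seq = list(initial)
    while len(seq) < n:
        # coeffs[0] multiplies the most recent term, coeffs[k-1] the oldest.
        nxt = sum(c * seq[-1 - i] for i, c in enumerate(coeffs))
        seq.append(nxt)
    return seq[:n]
```

For instance, `linear_recurrence([2, -1], [1, 2], 6)` yields 1, 2, 3, 4, 5, 6, matching a(n) = n, and `[1, 1]` coefficients with initial terms 1, 1 give the Fibonacci numbers.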
Recursive traversal is performed with the object as the parent: if a child object's tag matches the given tag, the child is stored in the linked-list array; if a child itself has children, those children are traversed recursively in turn.
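The recursive child-node traversal just described can be sketched outside Unity with a plain tree standing in for the Transform hierarchy. `Node` and its fields are hypothetical stand-ins for illustration, not the Unity API:

```python
# A minimal tree mimicking the parent/child object hierarchy: collect all
# descendants whose tag matches (the child-node query), and walk upward to
# the topmost parent (the parent-node query).

class Node:
    def __init__(self, name, tag, parent=None):
        self.name, self.tag, self.parent = name, tag, parent
        self.children = []
        if parent is not None:
            parent.children.append(self)

def find_children_with_tag(obj, tag):
    """Recursively collect descendants of obj whose tag matches."""
    found = []
    for child in obj.children:
        if child.tag == tag:
            found.append(child)
        # If the child itself has children, traverse them recursively too.
        found.extend(find_children_with_tag(child, tag))
    return found

def find_root(node):
    """Walk upward to the topmost parent of the object."""
    while node.parent is not None:
        node = node.parent
    return node
```

In the maintenance scene this corresponds to, say, finding every part tagged as a fastener under the currently selected assembly, and locating the assembly an individual part belongs to.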
In maintenance-guided mode, the current mode is identified by the global variable tutorialMode serving as a tag. When the user operates an Object and removes it, its dependency on its parent Object is released and the ObjectAttached attribute becomes false, indicating that the step is complete; at the same time the system checks whether tutorialMode is true. Both conditions must hold, that is:
!ObjectAttached && GameObject.Find("RealEncabulatorModel2").GetComponent<EncabulatorLogic>().tutorialMode == true;
The next operation object is then searched for according to the logical relationships, the sub-object's parameter information is obtained, and the sub-object's loader model is modified.
Scheme recommendation is entirely human-centered, and the machine cooperates with the person throughout maintenance according to the person's psychology and needs. While the person thinks, the machine follows that thinking, retrieves the information the user needs, and offers it for viewing and selection. While the person analyzes, the machine lays out the analyzed content, understands and visualizes the user's analysis and thinking, and feeds it back as models, information, and guidance. While the person conceives and chooses, the machine records these actions and uses the record for analysis, scheme recommendation, and step guidance.
(4) Gesture interaction in an augmented reality scene:
FIG. 6 shows the visual layout of the model and the interactive interface. The system uses a gaze-and-gesture-based interaction method. Gaze is the primary HoloLens input mode; its function is similar to the cursor of a desktop system, and it is used to select and manipulate holographic objects. Gaze is implemented by casting a forward ray from between the eyes of the user's head; this ray identifies the object it hits. In Unity, the Main Camera represents the position and orientation of the user's head, and a CursorPrefab component is added to the scene. When the user gazes at a holographic object, a blue ring appears on its surface, indicating that the object is currently gazed at; when the ray leaves the object, the Cursor becomes a point light source, distinguishing gazed from non-gazed objects. Once a hologram has been located with gaze, gesture input lets the user interact with it directly using the hands. HoloLens implements gesture recognition by tracking the hands. Hand motion is tracked within a cone in front of the device called the gesture frame, which extends beyond the upper, lower, left, and right boundaries of the holographic display view. For each recognized hand, its position (without orientation) and click state are obtained.
The hologram responds 1:1 to the user's hand movement, so hand gestures can be used to move, zoom, or rotate it; one use of this is letting the user draw or paint in the world. As with all gestures, the initial target of a manipulation gesture is selected by gaze. Once the tap gesture begins, any manipulation of the object through hand movement can be processed. When the user enters the system, it automatically runs the StartCapturingGestures function and captures in real time any gesture appearing in the ARCamera view; the captured gesture is recognized and classified with the GestureRecognizer, the GazeManager is called to determine the gaze coordinates, the object at those coordinates is taken as the selected object, and the corresponding operation is applied to it according to the recognition result. The system provides several gesture operations:
1. Tap: a tap gesture. It can select any option in the system as needed, invoking the selected component's OnTappedEvent to query the required information, obtain operation guidance, and so on; it can also select a component to operate on.
2. Hold: a long-press gesture, used to trigger secondary behaviors. After the user selects a component, if the gesture is held rather than released, the component enters the 'OnHoldStarted' state and follows the gesture through operations such as translation, zooming, and rotation. In the augmented reality scene, holding the selection gesture while moving in the vertical plane moves the selected object in that plane, keeping its size unchanged while its height and lateral position change; holding the selection gesture while moving off the vertical plane changes the selected object's relative distance to the user and hence its apparent size, producing a zoom.
3. Manipulation: a manipulation gesture. For a component with unfixed coordinates, if the gesture is released after a series of operations, the component enters the 'OnHoldCompleted' state and no longer follows the gesture; its displacement relative to the initial state is then calculated, the object's position coordinates are updated, and the newly changed view is presented to the user.
4. Navigation: a navigation gesture. It acts like a virtual joystick and can be used for UI control navigation, such as an arc menu. The gesture starts with a tap, after which the hand moves within a standard cubic space centered on the tap point. The user can move the hand along the X, Y, or Z axis, producing a value that varies from -1 to 1, with an initial value of 0. Navigation gestures can be used to build velocity-based continuous scrolling or zooming, similar to scrolling a 2D UI up and down by holding the mouse wheel.
The user's gesture operations include dragging and translating, zooming, rotating, and selecting menu options, tools, and parts. By controlling the model's coordinates and its distance from the user, the presented scale and angle are changed, so the model can be inspected and operated from different angles, layers, and levels of detail.
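The hold-gesture behaviour described above (movement in the vertical plane translates the object at constant size; movement off that plane changes its distance and hence its apparent size) can be sketched as a small state update. The coordinate convention and the 1/d apparent-size rule are assumptions for illustration, not HoloLens or system behaviour:

```python
# Hypothetical sketch of the hold-gesture mapping. position is (x, y) on the
# vertical plane; distance is the model-user distance; hand_delta is
# (dx, dy, dz), where dz is movement toward (-) or away from (+) the user.

def apply_hold_gesture(position, distance, hand_delta):
    dx, dy, dz = hand_delta
    x, y = position
    if dz == 0:
        # Movement within the vertical plane: translate, size (distance) fixed.
        return (x + dx, y + dy), distance
    # Movement off the vertical plane: change relative distance -> zoom.
    return (x, y), max(0.1, distance + dz)

def apparent_scale(distance, base_distance=1.0):
    """Apparent size falls off with distance (simple 1/d model)."""
    return base_distance / distance
```

Dragging in the plane thus changes only height and lateral position, while pulling the hand closer halves the distance and doubles the apparent size under this simple model.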
The system adopts an augmented-reality interaction mode: information provided by the computer system augments the user's perception of the real world, virtual content is applied to the real world, and computer-generated virtual objects, scenes, and digital content are superimposed on it to achieve augmented reality. Augmented reality technology mainly comprises the following four aspects:
1) Display technology: a HoloLens head-mounted display presents the 3D model, AR instructions, guidance, and other information to the user.
2) Tracking and positioning technology: the position, line of sight, and motion of participants in the scene are detected in real time, helping the system display virtual objects.
3) Virtual-real fusion technology: the video of the real environment captured by the camera is fused with the computer-generated three-dimensional virtual information, and real-time requirements are met by controlling the display frequency of the video and of the virtual object information.
4) User interaction technology: interaction control data are acquired through tracking and positioning, so that the user's real-time behavioral instructions on virtual objects can be executed.
(5) The working method for realizing man-machine cooperation comprises the following steps:
1) Engine information is processed visually and presented as a 3D model, which the user can examine from multiple angles, at multiple scales, and in detail. This reduces the complexity of the information, makes it more intuitive, lightens the user's cognitive load, and helps the user understand the knowledge.
2) Based on the user's selections and operations, a recommendation algorithm performs intelligent scheme recommendation, providing the maintenance guidance and step prompts the user needs. This helps the user obtain the required information quickly, assists with organizing information and making decisions, relieves the user's memory load, and saves time.
3) Augmented reality is applied to the assisted-maintenance system: the user interacts simply and naturally through gaze, gestures, and similar means in a three-dimensional holographic maintenance scene. Compared with traditional interaction methods, this reduces the user's interaction load and improves working efficiency.
The user performs assisted maintenance fully immersed in the augmented reality environment. During the interaction, the user only needs to wear the AR glasses; there is no need to consult a computer, a manual, and a toolbox at the same time. The user operates directly on the real engine, and the entire maintenance process proceeds in the augmented reality scene through human-machine collaboration. This provides an efficient interaction method in which the maintenance object is understood more intuitively and information is acquired more easily.
The product is designed around human needs and conforms to human cognitive psychology. As shown in fig. 8, take a specific scene as an example:
(1) Current state: at a vehicle service point, a new type of engine has failed, but the service personnel are unfamiliar with this engine and do not know its disassembly process.
(2) Problem: learning the engine's structure and disassembly method requires searching for material online, which takes time and effort and is inefficient; finding other professional maintenance personnel would take even more time and labor.
(3) Method: provide maintenance personnel with timely, professional maintenance guidance, so that they need not search online or consult a manual, spend no time on study, and simply follow the system's guidance.
(4) Competing products: online consultation, telephone consultation, and various assisted-maintenance apps.
(5) Competitive advantage: with online expert consultation or a maintenance hotline, the user must quickly find a qualified consultant, remote guidance is highly limited, and errors are likely both when describing the problem and when following instructions. With the various assisted-maintenance apps, the content found online is more professional and easier to search than a manual, but the knowledge provided must still be understood and memorized, and neither the user's operation nor the guidance process is intuitive. The present method provides real-time solutions and operation guidance in augmented reality; the user simply follows the guidance and operates without interruption, much like on-site guidance from an expert, which reduces cognitive and memory load and improves maintenance efficiency.
(6) Result: the user rapidly solves the problem of disassembling this engine.
(7) Goal achieved: a user-centered, human-machine collaborative assisted-maintenance process that efficiently helps the user solve problems.
In a human-machine collaborative interaction environment, a reasonable division of labor is for the person to think without interruption while the machine filters and processes information for the person and presents it in a form decision makers can readily use. During information presentation, a natural interaction mode is provided for the decision maker in line with how people think during decision making, so that basic information activities such as comparing and exploring information fit naturally into the thinking process, freeing the person's energy for thinking itself. The human-machine collaboration method of the present invention searches for the user's queries, lays out the user's analyses, finds the user's associations, records and analyzes the user's thinking and choices, and guides the user's operations in real time, thereby maximally assisting the user through the complete decision-making and maintenance process.
When recommending a scheme, the system processes and analyzes the user's instructions and intelligently provides services according to the user's needs.
As shown in fig. 9, in the scheme recommendation and step guidance for disassembling the engine, the user enters the guided disassembly scene. Fig. 9a prompts the nut-removal operation: the nut on the upper left of the engine is marked in red (dotted circle), prompting the user to remove it with the marked tool. Figs. 9b and 9c illustrate separating components: after the nut is removed in fig. 9b, the body below the engine is marked in blue (dotted circle), prompting the user to translate the part by gesture dragging and separate it from the top cover; after the screw is removed in fig. 9c, the middle embedded body is marked in blue (dotted circle), prompting the user to translate the part by gesture dragging and separate it from the base. Fig. 9d prompts a screw operation (the preceding removal step with the hex wrench is omitted): the screw of the embedded component is marked in green (white dots in the dotted circle), prompting the user to remove it with the marked tool.
As shown in fig. 10, the scheme recommendation and step guidance for engine assembly resemble the disassembly guidance. In the guided assembly scene, figs. 10a and 10c prompt assembly operations: in fig. 10a, the lower engine body and the embedded component are both marked in blue (dotted circles), prompting the user to translate the part by gesture dragging and assemble the two; in fig. 10c, the lower engine body and the upper cover are both marked in blue (dotted circles), prompting the user to translate the part by gesture dragging and assemble the two. Figs. 10b and 10d prompt placement operations: in fig. 10b, the user selects the embedded component to move, and a downward white arrow appears on the base below, prompting the user to place the component above the base; in fig. 10d, the user selects the lower body to move, and an upward white arrow appears under the upper cover, prompting the user to place the body under the cover.
As shown in fig. 11, for teaching the disassembly of locomotive components and the repair of specific faults, the system automatically plays the vehicle's disassembly animation for the user, with accompanying text beside the model explaining the detailed steps.
The intelligent engine maintenance method based on human-machine collaboration has been described above in detail, but the specific implementation of the present invention is obviously not limited thereto. Various obvious changes may be made by those skilled in the art without departing from the spirit of the method of the invention and the scope of the claims.

Claims (10)

1. An intelligent engine maintenance method based on man-machine cooperation comprises the following steps:
collecting data information of the engine, modeling all parts of the engine, determining the size correlation and the dependency relationship of the parts, establishing a U3D model of the engine, and storing the model in a database;
carrying out object recognition on the engine, establishing an object database of an engine model, and realizing information matching between an engine image and the model;
in an augmented reality application, querying according to the user's input information, or judging through the object recognition function of the ARCamera, to analyze the user's engine maintenance requirements;
and after the user enters a maintenance scene, acquiring the parent and child nodes of the operation object according to the user's requirements, and recommending schemes according to the logical relationships between objects, so as to guide the user to operate correctly and realize intelligent engine maintenance.
2. The method of claim 1, wherein the data information of the engine includes a category, a model, an appearance, a size, a material, a color;
the data stored in the database further comprises: menu options, model display, text description, animation and AR instructions, wherein the menu options comprise model options, tool options, problems to be solved and required help guidance; the model display comprises an automobile model, an engine model and a part model; the text description comprises information labels and scheme descriptions; the animation comprises disassembly animation and assembly animation; the AR instruction comprises information types for identifying an object to be operated, indicating a position and guiding a step.
3. The method of claim 1, wherein the modeling method is: carrying out three-dimensional modeling on parts of the engine by using SolidWorks to fit a model with a real object; adjusting the light source and material attributes of the generated model by using 3Dmax, and determining the relative position and connection relation between parts; and importing the finally generated model file into Unity.
4. The method of claim 1, wherein, when Vuforia is used for object recognition of real engines, multi-angle pictures of the real object are collected for each engine; the SDK is imported and matched with the corresponding model; the ARCamera tracks, captures, and recognizes the image object, matching the captured image against the pictures in the object database; if a result is matched, the model information corresponding to the picture is provided to the user; if not, tracking, capturing, and recognition continue until a result in the object database is matched; otherwise, it is determined that the system does not store information for that engine.
5. The method of claim 1 or 4, wherein object recognition is performed using a Vuforia technique, said Vuforia technique comprising a Vuforia recognition algorithm and a Vuforia tracking algorithm.
6. The method of claim 1, wherein the ARCamera uses linear interpolation to alleviate object occlusion and camera tracking problems during live view.
7. The method of claim 1, wherein, if the user operates outside the guidance, the object being operated by the user is determined and the child objects of the new object are identified, so that the user obtains correct step guidance in real time under the new situation; if the user does not perform the correct operation, the current step cannot be completed.
8. The method of claim 1, wherein the parent and child nodes of the operands are obtained using an algorithm of recursive queries.
9. The method of claim 1, wherein in an augmented reality application, a user employs a gaze and gesture based interaction method.
10. An intelligent maintenance system for an engine based on human-machine coordination, which adopts the method of any one of the preceding claims 1 to 9, comprising:
the data layer module comprises a user instruction database, a model information database and a maintenance information database, and is respectively used for storing a user instruction, model data of the engine and maintenance information;
the service layer module comprises an information collection system, a scheme recommendation system and a teaching guidance system, and is respectively used for providing services of information collection, scheme recommendation and teaching guidance for the user;
the application layer module is used for realizing acquisition of instruction information input by a user, operation feedback of the system by the user, recommendation of reasonable schemes and step guidance to the user and demonstration of teaching guidance preset by the system to the user;
and the user layer module is a man-machine interaction interface between a user and the system and is applied to augmented reality based on Hololens.
CN201910129572.2A 2019-02-21 2019-02-21 Intelligent engine maintenance method based on man-machine cooperation Active CN111597674B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910129572.2A CN111597674B (en) 2019-02-21 2019-02-21 Intelligent engine maintenance method based on man-machine cooperation

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201910129572.2A CN111597674B (en) 2019-02-21 2019-02-21 Intelligent engine maintenance method based on man-machine cooperation

Publications (2)

Publication Number Publication Date
CN111597674A true CN111597674A (en) 2020-08-28
CN111597674B CN111597674B (en) 2023-07-04

Family

ID=72183156

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910129572.2A Active CN111597674B (en) 2019-02-21 2019-02-21 Intelligent engine maintenance method based on man-machine cooperation

Country Status (1)

Country Link
CN (1) CN111597674B (en)

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20150371455A1 (en) * 2014-06-23 2015-12-24 GM Global Technology Operations LLC Augmented reality based interactive troubleshooting and diagnostics for a vehicle
CN106203446A (en) * 2016-07-05 2016-12-07 中国人民解放军63908部队 Three dimensional object recognition positioning method for augmented reality auxiliary maintaining system
WO2017041372A1 (en) * 2015-09-07 2017-03-16 百度在线网络技术(北京)有限公司 Man-machine interaction method and system based on artificial intelligence
CN107798391A (en) * 2016-08-31 2018-03-13 王振福 A kind of analysis of equipment fault using augmented reality and maintenance system
CN108090574A (en) * 2018-01-05 2018-05-29 中邮建技术有限公司 A kind of smart city Information Management System based on augmented reality

Non-Patent Citations (5)

* Cited by examiner, † Cited by third party
Title
OSCAR DANIELSSON et al.: "Operators perspective on augmented reality as a support tool in engine assembly", Procedia CIRP, vol. 72, pages 45-50 *
代云霏: "Research on aviation maintenance skills teaching based on augmented reality technology", 科技与创新, no. 5, pages 52-53 *
任栋梁; 关守东; 杨佳强; 宋时雨; 李梓航: "Research on an augmented-reality-based auxiliary maintenance system for aircraft engines", 科技创新导报, no. 06, pages 22-24 *
滕东兴 et al.: "A workflow visualization method based on the pipeline metaphor", 计算机工程与设计, vol. 34, no. 1, pages 327-332 *
王超; 王诗然; 王呼生: "Application of virtual reality technology in the computer hardware repair and maintenance course", 信息与电脑(理论版), no. 04, pages 234-235 *

Cited By (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112330818A (en) * 2020-11-03 2021-02-05 北京卫星环境工程研究所 Unmanned aerial vehicle part manual layering auxiliary system and method based on augmented reality
CN112330818B (en) * 2020-11-03 2021-06-22 北京卫星环境工程研究所 Unmanned aerial vehicle part manual layering auxiliary system and method based on augmented reality
CN112685860A (en) * 2021-01-08 2021-04-20 深圳睿晟自动化技术有限公司 Curved surface attitude detection method and device, terminal equipment and storage medium
CN113902287A (en) * 2021-09-30 2022-01-07 海南电网有限责任公司电力科学研究院 Power safety production inspection method, terminal and system
CN114371804A (en) * 2021-12-03 2022-04-19 国家能源集团新能源技术研究院有限公司 Electronic drawing browsing method and system

Also Published As

Publication number Publication date
CN111597674B (en) 2023-07-04

Similar Documents

Publication Publication Date Title
CN111597674B (en) Intelligent engine maintenance method based on man-machine cooperation
US20200218359A1 (en) Visual collaboration interface
US10157502B2 (en) Method and apparatus for sharing augmented reality applications to multiple clients
CN101690165B (en) Control method based on a voluntary ocular signal, particularly for filming
Posada et al. Graphics and media technologies for operators in industry 4.0
EP3284078B1 (en) Augmented interface authoring
US20160358383A1 (en) Systems and methods for augmented reality-based remote collaboration
Zollmann et al. Interactive 4D overview and detail visualization in augmented reality
JP2011022984A (en) Stereoscopic video interactive system
CN106030610A (en) Real-time 3D gesture recognition and tracking system for mobile devices
Tang et al. GrabAR: Occlusion-aware grabbing virtual objects in AR
CN111124117B (en) Augmented reality interaction method and device based on sketch of hand drawing
CN105589553A (en) Gesture control method and system for intelligent equipment
CN112527112B (en) Visual man-machine interaction method for multichannel immersion type flow field
WO2021041755A1 (en) Semantically supported object recognition to provide knowledge transfer
Schütt et al. Semantic interaction in augmented reality environments for microsoft hololens
Fang et al. Head-mounted display augmented reality in manufacturing: A systematic review
CN113688290A (en) Interactive electronic maintenance system for vehicle chassis
Fiorentino et al. Natural interaction for online documentation in industrial maintenance
CN115294308A (en) Augmented reality auxiliary assembly operation guiding system based on deep learning
CN113570732A (en) Shield maintenance auxiliary method and system based on AR technology
CN109118584A (en) Method, control system and the computer program product of control automation system
Zohra et al. An overview of interaction techniques and 3d representations for data mining
Osimani et al. Point Cloud Deep Learning Solution for Hand Gesture Recognition
Hoesl et al. TrackLine: Refining touch-to-track Interaction for Camera Motion Control on Mobile Devices

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant