CN117541219A - Visual maintenance auxiliary system based on augmented reality

Visual maintenance auxiliary system based on augmented reality

Info

Publication number
CN117541219A
Authority
CN
China
Prior art keywords
maintenance
image
augmented reality
function module
visual
Prior art date
Legal status
Pending
Application number
CN202311428648.4A
Other languages
Chinese (zh)
Inventor
杨鸣
张泽群
唐敦兵
朱海华
蔡祺祥
宗陆杰
Current Assignee
Nanjing University of Aeronautics and Astronautics
Original Assignee
Nanjing University of Aeronautics and Astronautics
Application filed by Nanjing University of Aeronautics and Astronautics
Priority to CN202311428648.4A
Publication of CN117541219A


Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06Q: INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q 10/00: Administration; Management
    • G06Q 10/20: Administration of product repair or maintenance
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F 3/00: Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F 3/01: Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F 3/011: Arrangements for interaction with the human body, e.g. for user immersion in virtual reality
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F 9/00: Arrangements for program control, e.g. control units
    • G06F 9/06: Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F 9/44: Arrangements for executing specific programs
    • G06F 9/451: Execution arrangements for user interfaces
    • G06F 9/453: Help systems
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06Q: INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q 50/00: Information and communication technology [ICT] specially adapted for implementation of business processes of specific business sectors, e.g. utilities or tourism
    • G06Q 50/04: Manufacturing
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 19/00: Manipulating 3D models or images for computer graphics
    • G06T 19/006: Mixed reality

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Business, Economics & Management (AREA)
  • General Engineering & Computer Science (AREA)
  • Software Systems (AREA)
  • Human Resources & Organizations (AREA)
  • Economics (AREA)
  • Tourism & Hospitality (AREA)
  • Strategic Management (AREA)
  • General Business, Economics & Management (AREA)
  • Marketing (AREA)
  • Human Computer Interaction (AREA)
  • Operations Research (AREA)
  • Health & Medical Sciences (AREA)
  • General Health & Medical Sciences (AREA)
  • Primary Health Care (AREA)
  • Computer Graphics (AREA)
  • Computer Hardware Design (AREA)
  • Manufacturing & Machinery (AREA)
  • Quality & Reliability (AREA)
  • Entrepreneurship & Innovation (AREA)
  • Processing Or Creating Images (AREA)

Abstract

The application discloses a visual maintenance assistance system based on augmented reality, comprising a visual maintenance guidance function module, a fault information virtual-real mapping function module, and a remote expert collaboration function module. The visual maintenance guidance function module superimposes a virtual model on the real workshop scene based on a three-dimensional registration algorithm oriented to the maintenance scene, enabling rapid positioning and guidance of the parts to be maintained; it also displays an electronic manual and virtual-model positioning indications as operation guidance. The fault information virtual-real mapping function module displays fault information and maintenance tasks through a maintenance task pushing mechanism and reminds personnel to handle faults promptly. The remote expert collaboration function module is built on a WebRTC remote expert system and enables communication between experts and on-site maintenance personnel.

Description

Visual maintenance auxiliary system based on augmented reality
Technical Field
The invention relates to the field of intelligent manufacturing, in particular to a visual maintenance auxiliary system based on augmented reality.
Background
With the continuous development of the manufacturing industry, the number and variety of equipment keep growing and the production rhythm keeps accelerating, so manufacturing workshops are increasingly sensitive to equipment faults and the capacity losses they cause, and the demands on maintenance efficiency keep rising. The complexity of workshop maintenance scenes puts great pressure on maintenance personnel, and the traditional manual maintenance mode can hardly meet the requirement of improving maintenance efficiency. Traditional maintenance aids include experience exchange between personnel, paper documents, and electronic documents on tablets and similar devices; these aids fall short in portability and reliability and do little to improve maintenance efficiency. Relying on experience exchange between personnel, operation details are easily missed or the operation sequence mistaken, standardization is difficult, and details are easily forgotten over time. Relying on paper documents or electronic devices such as tablets, portability is poor, and maintenance personnel must consult the material while operating, so they cannot work with both hands. Therefore, a system is needed that gives maintenance personnel an immersive information experience during maintenance and lets them keep both hands free for the work while receiving visual maintenance instructions, without having to hold a device.
Disclosure of Invention
In order to solve the above technical problems, the invention aims to provide a maintenance assistance system based on augmented reality that helps personnel improve maintenance efficiency and accuracy.
The application provides the following scheme:
an augmented reality-based visual maintenance assistance system comprises a visual maintenance guidance function module, a fault information virtual-real mapping function module, and a remote expert collaboration function module, wherein:
the visual maintenance guidance function module superimposes a virtual model on the real workshop scene based on a three-dimensional registration algorithm oriented to the maintenance scene, enabling rapid positioning and guidance of the parts to be maintained, and also displays an electronic manual and virtual-model positioning indications as operation guidance;
the fault information virtual-real mapping function module displays fault information and maintenance tasks through a maintenance task pushing mechanism and reminds personnel to handle faults promptly;
the remote expert collaboration function module is built on a WebRTC remote expert system and enables communication between experts and on-site maintenance personnel.
Further, the visual maintenance guidance function module comprises the following units:
an ORB feature extraction unit, which first performs sharpening adjustment based on an information entropy threshold and then extracts ORB features from the image;
a map coordinate system construction and conversion unit, which determines key frames according to preset rules and maintains the key frames and map points; after a key frame is inserted, the world coordinate system is mapped onto the maintenance scene to build the local map;
a global pose solving unit, which follows the least-squares solving process of the EPnP global pose solving method based on Gauss-Newton optimization to compute the pose of the mobile device camera from the three-dimensional coordinates of the target and its two-dimensional imaging in a key frame, then solves the pose change matrix from the matching relation between the camera pose and the world coordinate system, thereby computing the real-time pose of the wearable AR device on the workshop maintenance site and highlighting the information of the part to be maintained according to that pose for visual maintenance guidance.
Further, the ORB feature extraction unit performs the following operations to obtain the ORB features of the image:
acquiring an image of the equipment, determining the feature points of the image with the FAST algorithm, and describing the feature points with the BRIEF algorithm;
applying high-pass filtering in the spatial domain, convolving every pixel of the image with a Laplacian operator to increase the variance between pixels and sharpen the image;
performing sharpening adjustment based on the information entropy threshold.
Further, describing the feature points with the BRIEF algorithm is specifically as follows:
first, the neighborhood of a feature point p is determined and n pairs of pixel points x and y are selected; the τ operation of formula 1 yields 0 or 1 for each pair, and the feature descriptor f(p) of the feature point p is then computed by formula 2; finally, a similarity threshold is input, and the similarity of two feature descriptors is computed through the exclusive-OR operation on the image feature descriptors, so as to judge whether the matching succeeds:

τ(p; x, y) = 1 if p(x) < p(y), otherwise 0    (formula 1)

f_n(p) = Σ_{i=1}^{n} 2^{i-1} τ(p; x_i, y_i)    (formula 2)

wherein p is the current feature point, p(x) is the gray value of p at point x, and p(y) is the gray value of p at point y;
the rotation angle is determined from the change of the centroid during rotation, so that the coordinate system of the rotated image is updated; the calculation is as follows:

m_{pq} = Σ_{x,y} x^p y^q I(x, y),  C = (m_10 / m_00, m_01 / m_00),  θ = arctan(m_01 / m_10)

where p and q are the orders of the image moments taken over the two-dimensional image, I(x, y) is the gray value at coordinates (x, y), and C is the centroid from which the coordinate system of the rotated image is updated.
Further, high-pass filtering is applied in the spatial domain, and a Laplacian operator is convolved with every pixel of the image to increase the variance between pixels and sharpen the image; the specific process is as follows:
let the variables of the convolution be the sequences x(n) and h(n); the result of the convolution is

y(n) = Σ_i x(i) h(n - i)    (formula 7)

each pixel gray value is multiplied by the corresponding value on the convolution kernel, and all the products are summed as the gray value of the image pixel corresponding to the middle of the kernel; the expression of the convolution function is as follows:

dst(x, y) = Σ_{(x', y')} kernel(x', y') src(x + x' - anchor.x, y + y' - anchor.y)    (formula 8)

in formula 8, anchor is the reference point of the kernel; kernel is the convolution kernel.
Further, the sharpening adjustment based on the information entropy threshold is specifically as follows:
the information entropy is computed as

H = -Σ_{i=0}^{255} p(x_i) log2 p(x_i)    (formula 10)

in formula 10, p(x_i) is the probability of a pixel in the image having gray level i (i = 0...255);
if the amount of information contained in an image is expressed by information entropy, the entropy of an M×N image is defined as

H = -Σ_{i=1}^{M} Σ_{j=1}^{N} P_ij log2 P_ij

wherein P_ij is the result of normalizing f(i, j).
Further, the convolution template of the convolution function is the matrix form of a Laplacian variant operator, which uses the second-derivative information of the image and is isotropic; its specific expression is the 8-neighborhood template

[ 1  1  1
  1 -8  1
  1  1  1 ]
further, the ORB feature extracting unit further includes a process of setting and adjusting a threshold of the information entropy, specifically:
in formula 13, E 0 Is the information entropy threshold of the scene, H (i) ave Is the average value of the information entropy in the scene, i is the number of frames of the video sequence, and delta is the correction factor.
As a preferred embodiment of the present application, the effect is optimal when the correction factor δ is 0.5.
The beneficial effects are that:
the invention develops a maintenance auxiliary system based on an augmented reality technology, which is used in the field of intelligent manufacturing, and realizes deployment and auxiliary personnel maintenance by using the augmented reality technology. The advantages are that:
1. the ORB-SLAM2 algorithm improved based on the sharpening adjustment algorithm of the information entropy threshold enables the glasses to correctly identify the maintenance scene, enables the virtual model to be in a correct position and not interfered by a true object, and keeps the relative position stable.
2. Image sharpening is introduced, so that the edges and the contours of the image are clear, the details of the image are enhanced, and the textures of the image are enriched.
3. TCP is adopted as a communication protocol in the maintenance task pushing mechanism, so that the stability of a connection process can be ensured, and the instruction is not lost. And the designed maintenance task instruction format can better meet the pushing of maintenance tasks and the feedback requirement of maintenance results, and after the maintenance task instruction is stored in the database, the maintenance task instruction format can be used as an equipment maintenance record, so that the next maintenance is convenient.
4. The establishment of the local maintenance expert experience library can fix maintenance details and operation sequences in a text or picture mode to serve as maintenance operation standards, reduce the possibility of misoperation of maintenance personnel, and avoid the loss of maintenance experience caused by human factors.
5. The introduction of the remote expert coordination function enables on-site personnel to clearly acquire the guidance of the expert, and the expert can clearly see the real scene and virtual information of the site on the webpage. In the communication process, maintenance personnel can vacate both hands to continue operation, and faults which are never encountered can be solved better.
6. The user realizes man-machine interaction through the man-machine interaction interface of the mobile augmented reality device, has great advantages in the aspect of wearability and portability, and can present virtual reality information to the user in a manner of high immersion in the interaction process. Meanwhile, in the use process, maintenance personnel do not need to hold or touch equipment, and the influence of the equipment in the operation process is small.
Drawings
FIG. 1 is a diagram of an augmented reality based maintenance assistance system architecture;
FIG. 2 is a flow chart of the maintenance part registration and positioning method based on the improved ORB-SLAM2;
FIG. 3 is a diagram of a maintenance task instruction architecture;
FIG. 4 is an augmented reality based maintenance assistance flow chart;
FIG. 5 is a diagram of a manufacturing shop repair assistance system expert interface.
Detailed Description
The technical scheme of the invention is further described in detail below with reference to the accompanying drawings:
this invention may be embodied in many different forms and should not be construed as limited to the embodiments set forth herein. Rather, these embodiments are provided so that this disclosure will be thorough and complete and will fully convey the scope of the invention to those skilled in the art.
visual maintenance auxiliary system based on augmented reality, staff can realize visual maintenance guide, fault information virtual-real mapping, remote expert cooperation through deploying in the man-machine interaction interface of augmented reality glasses.
The visual maintenance guide is used for displaying operation guidance such as an electronic manual, a virtual model positioning instruction and the like, and is established on the basis of a three-dimensional registration algorithm facing a maintenance scene. Meanwhile, the fault guidance content in the maintenance expert experience library is called under the support of the three-dimensional registration algorithm, and visual maintenance assistance is provided for personnel. The fault information virtual-real mapping realizes the display of fault information and maintenance tasks through a maintenance task pushing mechanism, and reminds personnel to timely handle faults. And the maintenance task pushing mechanism comprises: information protocol design, maintenance task pushing, fault information pushing and maintenance result feedback. In addition, the system builds a remote expert system based on WebRTC, and realizes the communication between the expert and the field maintenance personnel.
To overlay virtual maintenance operation information seamlessly on the real workshop scene and achieve rapid positioning and guidance of maintenance components, while avoiding the loss of system pose tracking that occurs when sufficiently stable matching point pairs cannot be obtained in maintenance scenes whose textures are not rich, the ORB-SLAM2 algorithm is optimized and improved. The framework of the improved ORB-SLAM2 tracking registration algorithm consists of three parts: feature point extraction, map coordinate system construction and conversion, and global pose solving. The calculation process provided in this embodiment is performed based on the parameters of Table 1.
TABLE 1
The optimization process is as follows.
1. ORB feature extraction based on sharpening adjustment and adaptive information entropy:
After the device captures an image, the feature point extraction part first performs sharpening adjustment based on the information entropy threshold and then extracts ORB features from the image. The ORB algorithm, after determining the feature points of the image with the FAST (Features from Accelerated Segment Test) algorithm, needs to describe them with the BRIEF (Binary Robust Independent Elementary Features) algorithm to distinguish different objects. The BRIEF algorithm produces a binary feature descriptor: first the neighborhood of the feature point p is determined and n pairs of pixel points x and y are selected (n is generally 256); the value of each pair is computed by the τ operation of formula 1, and the feature descriptor f(p) of the feature point p is then computed by formula 2. Finally, a similarity threshold is input, and the similarity of two feature descriptors is computed through the exclusive-OR operation on the image feature descriptors, so as to judge whether the matching succeeds. The calculation is as follows:

τ(p; x, y) = 1 if p(x) < p(y), otherwise 0    (formula 1)

f_n(p) = Σ_{i=1}^{n} 2^{i-1} τ(p; x_i, y_i)    (formula 2)
Because the BRIEF algorithm has no rotation invariance, the original coordinate system is unchanged after the image rotates, so the selected pixel pairs differ and the object's feature descriptors change accordingly even though the object is essentially the same. ORB measures the change of the object's rotation angle with the intensity centroid method: assuming the object rotates by some angle around its center, the coordinate system is re-established after rotation so that it rotates together with the object, which keeps the selected pixel pairs consistent. The rotation angle is determined from the change of the centroid during rotation, and the coordinate system is updated accordingly; the calculation is as follows:

m_{pq} = Σ_{x,y} x^p y^q I(x, y),  C = (m_10 / m_00, m_01 / m_00),  θ = arctan(m_01 / m_10)

where p and q are the orders of the image moments taken over the two-dimensional image, I(x, y) is the gray value at coordinates (x, y), and C is the centroid from which the coordinate system of the rotated image is updated.
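For concreteness, this pipeline maps directly onto OpenCV's ORB implementation, which combines FAST detection, the intensity-centroid orientation, and BRIEF description. The following Python sketch is illustrative only; the file names and the Hamming threshold of 40 are assumptions, not values from the patent:

```python
import cv2

# FAST corner detection + rotation-aware BRIEF description via OpenCV's ORB.
# ORB already applies the intensity-centroid orientation described above,
# and its descriptors are 256-bit binary strings (n = 256 pairs).
orb = cv2.ORB_create(nfeatures=500)

img1 = cv2.imread("scene_frame.png", cv2.IMREAD_GRAYSCALE)      # placeholder file
img2 = cv2.imread("device_template.png", cv2.IMREAD_GRAYSCALE)  # placeholder file

kp1, des1 = orb.detectAndCompute(img1, None)
kp2, des2 = orb.detectAndCompute(img2, None)

# The XOR similarity test of formulas 1-2 corresponds to Hamming distance.
matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
matches = matcher.match(des1, des2)

# Keep only matches under a similarity threshold; 40 is an assumed value.
good = [m for m in matches if m.distance < 40]
print(f"{len(good)} matches passed the threshold")
```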
Moreover, image definition may degrade during the image transmission and conversion of the AR wearable device. Image sharpening is therefore introduced to make the edges and contours of the image clear and to enhance image details. High-pass filtering is adopted in the spatial domain, and a Laplacian operator is convolved with every pixel of the image to increase the variance between pixels and present a clear image. If the variables of the convolution are the sequences x(n) and h(n), then the result of the convolution is

y(n) = Σ_i x(i) h(n - i)    (formula 7)

The convolution of a segmented image block is in fact a convolution kernel sliding over the image: each pixel gray value is multiplied by the corresponding value on the kernel, and all the products are summed as the gray value of the image pixel corresponding to the middle of the kernel. Thus, the expression of the convolution function is as follows:

dst(x, y) = Σ_{(x', y')} kernel(x', y') src(x + x' - anchor.x, y + y' - anchor.y)    (formula 8)

In formula 8, anchor is the reference point of the kernel; kernel is the convolution kernel. Here the convolution template is the matrix form of a Laplacian variant operator, which uses the second-derivative information of the image and is isotropic; its specific expression is the 8-neighborhood template

[ 1  1  1
  1 -8  1
  1  1  1 ]
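As a sketch, the sharpening step might look as follows in Python with OpenCV; treating the 8-neighborhood kernel below as the patent's template is an assumption, since the template image is not reproduced in the source text:

```python
import cv2
import numpy as np

# Assumed template: the common isotropic 8-neighborhood Laplacian variant,
# matching the "second derivative, isotropic" description in the text.
laplacian_variant = np.array([[1,  1, 1],
                              [1, -8, 1],
                              [1,  1, 1]], dtype=np.float32)

img = cv2.imread("frame.png", cv2.IMREAD_GRAYSCALE).astype(np.float32)

# Convolve every pixel with the kernel (formula 8); the anchor defaults
# to the kernel centre in filter2D.
response = cv2.filter2D(img, ddepth=-1, kernel=laplacian_variant)

# The kernel centre is negative, so subtracting the response boosts local
# contrast at edges and contours, i.e. sharpens the image.
sharpened = np.clip(img - response, 0, 255).astype(np.uint8)
```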
In feature point extraction, the information entropy reflects how rich the texture information contained in a partial image is, and how strongly the image pixel gradients vary. The larger the entropy value, the richer the image texture information and the more pronounced the changes of the pixel gradients. The information entropy is computed as

H = -Σ_{i=0}^{255} p(x_i) log2 p(x_i)    (formula 10)

In formula 10, p(x_i) is the probability of a pixel in the image having gray level i (i = 0...255); the closer the probability is to 1, the smaller the uncertainty of the information. For example, when the amount of information contained in an image is expressed by information entropy, the entropy of an image of size M×N is defined as

H = -Σ_{i=1}^{M} Σ_{j=1}^{N} P_ij log2 P_ij

where P_ij is the result of normalizing f(i, j), and f(i, j) is the feature point coordinate.
However, since the information entropy is closely tied to the scene, different video sequences in different scenes have different information richness, so the information entropy threshold necessarily differs between scenes. In each scene, repeated experiments would be needed, setting the information entropy threshold many times and running the matching calculation to obtain the corresponding threshold; yet the thresholds of different scenes differ significantly. Such a threshold therefore has no generality, which affects image preprocessing and feature extraction and prevents quickly obtaining good matching results. To address this, an adaptive information entropy threshold method is proposed that adjusts the threshold according to the scene:

E_0 = δ · H(i)_ave    (formula 13)

In formula 13, H(i)_ave is the average information entropy of the scene, obtained by taking the information entropy of each frame in the video of the first run through the scene and dividing by the number of frames; i is the number of frames of the video sequence and δ is the correction factor. Repeated experiments show the effect is best when the correction factor δ is 0.5. The E_0 calculated by the above formula is the information entropy threshold of the scene.
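A small Python sketch of the entropy computation and the adaptive threshold; reading formula 13 as E_0 = δ · H_ave is an assumption consistent with the variable definitions above, since the formula image itself is not reproduced:

```python
import numpy as np

def image_entropy(gray: np.ndarray) -> float:
    """Information entropy of formula 10: H = -sum p(x_i) * log2 p(x_i)."""
    hist = np.bincount(gray.ravel(), minlength=256).astype(np.float64)
    p = hist / hist.sum()
    p = p[p > 0]  # zero-probability gray levels contribute no entropy
    return float(-(p * np.log2(p)).sum())

def adaptive_entropy_threshold(frames, delta: float = 0.5) -> float:
    """E_0 = delta * H_ave over the first run through the scene (formula 13,
    as read from the variable definitions above; delta = 0.5 per the text)."""
    h_ave = float(np.mean([image_entropy(f) for f in frames]))
    return delta * h_ave
```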
2. Construction and conversion of the maintenance scene map coordinate system:
During tracking, the SLAM world coordinate system is mapped onto the maintenance scene through the camera pose matrix, realizing real-time viewpoint tracking. Let the world coordinate of a target point in the virtual scene be P_N = (X_N, Y_N, Z_N) and its camera coordinate on the wearable AR device be P_M = (X_M, Y_M, Z_M); then there exist a rotation matrix R and a translation vector t such that P_M = R P_N + t. The mapping between the world coordinate system and the camera coordinate system is therefore

[P_M; 1] = [R t; o^T 1] [P_N; 1]    (formula 14)

In formula 14, R is a 3×3 orthogonal rotation matrix, t = (t_x, t_y, t_z)^T is the translation vector, and o = (0, 0, 0)^T.
Let the coordinates of the target point in the pixel coordinate system be (μ, γ) and the origin of the pixel coordinate system be (μ_0, γ_0); the mapping between the camera coordinate system and the pixel coordinate system is

Z_M [μ, γ, 1]^T = [f_x 0 μ_0; 0 f_y γ_0; 0 0 1] [X_M, Y_M, Z_M]^T    (formula 15)

In formula 15, f_x = α·f and f_y = β·f, where α is the horizontal size of a pixel, β is the vertical size of a pixel, and f is the focal length of the wearable AR device camera. Combining the world-to-camera conversion of formula 14, the pixel coordinates of the target point are obtained as

μ = μ_0 + f_x X_M / Z_M,  γ = γ_0 + f_y Y_M / Z_M    (formula 16)

Knowing the camera intrinsic matrix, the extrinsic matrix, and the world coordinates of the target, the imaging position (μ, γ) of the target in the camera can be found by formula 16. The camera registration process of the augmented reality device is completed by copying the intrinsic parameters of the real camera to the camera in the virtual maintenance scene.
3. EPnP global pose solving based on Gauss-Newton optimization:
To further improve the accuracy of the pose solving result, the pose estimation problem is converted into a least-squares problem, and the least-squares solving in the EPnP (Efficient Perspective-n-Point) method is then optimized by the Gauss-Newton method. The EPnP global pose solving method based on Gauss-Newton optimization computes the pose of the mobile device camera from the three-dimensional coordinates of the target and its two-dimensional imaging in a key frame. The EPnP method represents a target three-dimensional point as a homogeneous linear combination of 4 control points; the conversion between the target point coordinates and the control point coordinates is

P_i^N = Σ_{j=1}^{4} α_ij C_j^N,  Σ_{j=1}^{4} α_ij = 1    (formula 17)

In formula 17, P_i^N is the world coordinate of target three-dimensional point i, C_j^N are the three-dimensional coordinates of the 4 control points, and α_ij are homogeneous barycentric coordinates expressing the linear relation between the target point and the control points. Let the camera pose matrix of the AR wearable device to be solved be [R|t]; then, in the camera coordinate system, the three-dimensional point coordinate P_i^M is expressed as

P_i^M = Σ_{j=1}^{4} α_ij C_j^M    (formula 18)

In formula 18, C_j^M are the coordinates of the 4 control points in the camera coordinate system, which are related to the image through the camera projection model:

ω_i [μ_i, γ_i, 1]^T = K Σ_{j=1}^{4} α_ij C_j^M    (formula 19)

In formula 19, ω_i is the scalar projection parameter, (μ_i, γ_i) is the imaging of target three-dimensional point i at the optical center, and K is the intrinsic matrix of the AR wearable device camera.
From formula 19, two linear equations can be derived for each point, as shown in formula 20:

Σ_{j=1}^{4} α_ij f_x C_{j,x}^M + α_ij (μ_0 - μ_i) C_{j,z}^M = 0
Σ_{j=1}^{4} α_ij f_y C_{j,y}^M + α_ij (γ_0 - γ_i) C_{j,z}^M = 0    (formula 20)

Concatenating the n points yields a linear system of equations, as shown in formula 21:

M x = 0    (formula 21)

where x = [C_1^M; C_2^M; C_3^M; C_4^M]^T is the 12-vector of control point coordinates in the AR device camera coordinate system. Since x ∈ ker(M), it can be expressed by the following relation:

x = Σ_{i=1}^{N} β_i V_i    (formula 22)

In formula 22, V_i are the eigenvectors corresponding to the N zero eigenvalues of M; for the j-th control point,

C_j^M = Σ_{i=1}^{N} β_i V_i^[j]    (formula 23)

In formula 23, V_i^[j] is the j-th 3-vector block of V_i. The V_i are known, so obtaining accurate coordinates in the AR device camera coordinate system amounts to finding the optimal values of β. The objective function iteratively optimized by the Gauss-Newton method, which keeps the distances between control points consistent in both coordinate systems, is expressed as follows:

min_β Σ_{(i,j), i<j} ( ||C_i^M - C_j^M||² - ||C_i^N - C_j^N||² )²

Since α_ij is obtained in the world coordinate system of formula 17, and ω_i, (μ_i, γ_i) and the camera intrinsics are known, the control point coordinates C_j^M in the camera coordinate system can be found using formulas 19 and 23. Substituting them into formula 18, with α_ij known, gives the coordinates P_i^M of the target three-dimensional points in the camera coordinate system. The pose change matrix [R|t] is then solved from the matching relation between the camera coordinate system and the world coordinate system. The real-time pose of the wearable AR device on the workshop maintenance site is thus calculated, and the information of the part to be maintained is highlighted according to that pose for visual maintenance guidance.
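In a concrete implementation this chain is available off the shelf: the sketch below uses OpenCV's EPnP solver followed by its Levenberg-Marquardt refiner (OpenCV exposes LM rather than plain Gauss-Newton; it plays the same iterative refinement role described above). The point data and intrinsics are placeholders:

```python
import cv2
import numpy as np

# Placeholder data: known 3D points on the part (world frame) and their
# detected 2D positions in the current key frame.
object_points = np.array([[0.00, 0.00, 0.00], [0.10, 0.00, 0.00],
                          [0.10, 0.10, 0.00], [0.00, 0.10, 0.00],
                          [0.05, 0.05, 0.02]], dtype=np.float64)
image_points = np.array([[320.0, 240.0], [400.0, 242.0], [398.0, 318.0],
                         [322.0, 316.0], [360.0, 280.0]], dtype=np.float64)
K = np.array([[800.0, 0.0, 320.0], [0.0, 800.0, 240.0], [0.0, 0.0, 1.0]])

# EPnP closed-form initialisation (the control-point scheme of formulas 17-23).
ok, rvec, tvec = cv2.solvePnP(object_points, image_points, K, None,
                              flags=cv2.SOLVEPNP_EPNP)

# Iterative least-squares refinement of the pose (Levenberg-Marquardt here,
# standing in for the Gauss-Newton step of the text).
rvec, tvec = cv2.solvePnPRefineLM(object_points, image_points, K, None,
                                  rvec, tvec)

R, _ = cv2.Rodrigues(rvec)  # the pose change matrix [R|t]
```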
Verification shows that the ORB-SLAM2 three-dimensional registration algorithm, improved with the ORB sharpening adjustment algorithm, the adaptive information entropy fusion algorithm, and the Gauss-Newton-optimized global pose solving, extracts maintenance scene features reliably, improves the precision of the pose solving results, and yields rich image textures. The system can therefore identify equipment, scenes, and parts to be maintained, position the virtual model, and provide visual maintenance guidance with the assistance of the expert experience library built into the system.
The local experience library is built because a manufacturing workshop contains many kinds of equipment with complex structures, and the experience and knowledge involved in maintenance are extensive, which puts great pressure on maintenance personnel. In the past, maintenance experience and knowledge were mostly passed on through exchanges between personnel or paper documents. Relying solely on personal exchange easily loses details and confuses the operation sequence; paper records are troublesome to search and cannot be used effectively with electronic devices such as augmented reality glasses and tablets. Establishing a local maintenance expert experience library is an effective means of raising the maintenance level: maintenance details and operation sequences are fixed as text or pictures to serve as the operation standard for maintenance, which reduces the possibility of misoperation and prevents maintenance experience from being lost to human factors. Moreover, the electronic local maintenance experience library can work with auxiliary devices such as augmented reality glasses, tablets, and mobile phones to guide maintenance personnel visually and improve maintenance efficiency. When a fault occurs in the manufacturing workshop, the maintenance assistance system identifies the equipment through three-dimensional registration and retrieves the corresponding maintenance guidance from the experience library according to the fault diagnosis result; if a matching record, or a record for similar equipment, exists it is retrieved, and if no relevant experience exists the maintenance personnel are prompted to request an expert's help. After maintenance is finished, maintenance personnel can add or adjust the corresponding records in the experience library according to their own experience; by continuously perfecting the library, the visual guidance of the maintenance assistance system is progressively standardized to cover workshop maintenance needs more comprehensively.
In the fault information virtual-real mapping part, the core is the maintenance task pushing mechanism. When the fault diagnosis model detects an equipment fault, personnel are notified to perform maintenance by transmitting a maintenance instruction to the human-computer interaction interface; after the maintenance task is completed, the maintenance result is fed back through the interface, which also requires transmitting an instruction. A maintenance task pushing mechanism is therefore designed to meet these transmission needs. Since the mechanism only needs to transmit byte streams but must guarantee a stable connection, the system uses the TCP protocol to transmit maintenance task instructions. The instruction format comprises the request type, the maintenance task number, the faulty device identity (Identity Document, ID), the fault type, the maintenance personnel ID, the maintenance process result, and the maintenance feedback. The request type identifies the kind of the data item; the maintenance task number numbers the maintenance record; the faulty device ID designates the faulty equipment; the fault type indicates the kind of equipment fault and retrieves the related fault cause explanation and fault maintenance guide; the maintenance process result and the maintenance feedback report on the maintenance performed.
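A minimal sketch of such an instruction push over TCP; the concrete field names and the length-prefixed JSON encoding are assumptions for illustration, since the text lists the fields but not a wire format:

```python
import json
import socket

# Hypothetical field names; the text lists the fields without an encoding.
instruction = {
    "request_type": "MAINTENANCE_TASK",   # identifies the kind of data item
    "task_number": "MT-2023-0042",        # numbers the maintenance record
    "device_id": "CNC-07",                # designates the faulty equipment
    "fault_type": "SPINDLE_OVERHEAT",     # selects cause explanation and guide
    "maintainer_id": "OP-113",
    "process_result": "",                 # filled in by the feedback message
    "feedback": "",
}

def push_instruction(host: str, port: int, payload: dict) -> None:
    """Send one instruction as a length-prefixed byte stream over TCP."""
    data = json.dumps(payload).encode("utf-8")
    with socket.create_connection((host, port)) as conn:
        # A 4-byte length prefix keeps instruction boundaries unambiguous
        # on the TCP byte stream.
        conn.sendall(len(data).to_bytes(4, "big") + data)

# push_instruction("192.168.1.50", 9000, instruction)
```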
The remote expert collaboration system is built on WebRTC because WebRTC is a standard and framework for real-time audio and video, covering client APIs, audio/video codec libraries, streaming media transport protocols, echo cancellation, secure transmission, and other technologies, and it provides low-latency video call capability. The video call function only requires the user side and the corresponding expert-side interface to join the same socket. The user side clicks to establish a connection: the side initiating the connection sends an offer (carrying its signal source information) to the expert side; after receiving the offer, the expert side sends back an answer (carrying its signal source information), which the offering side stores. Each side then holds its own information and the other side's. Once the localDescription and remoteDescription are set following the offer/answer exchange, onicecandidate is triggered, so both sides hold the localDescription, the remoteDescription, and the other side's candidate. With these three pieces of data, the connection's onaddstream callback fires; the stream is then written into the video tag via video.srcObject = e.stream, and the video tag shows the other side's picture.
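The offer/answer handshake can be sketched in Python with the aiortc WebRTC library; wiring two in-process peers together is an assumption that stands in for the signaling server which, in the real system, relays the descriptions between the glasses and the expert's browser page:

```python
import asyncio
from aiortc import RTCPeerConnection  # Python WebRTC implementation

async def handshake() -> None:
    caller, callee = RTCPeerConnection(), RTCPeerConnection()
    caller.createDataChannel("guidance")  # gives the offer something to negotiate

    offer = await caller.createOffer()       # user side creates the offer
    await caller.setLocalDescription(offer)  # ICE gathering starts here

    await callee.setRemoteDescription(caller.localDescription)  # expert receives offer
    answer = await callee.createAnswer()                        # ... and answers
    await callee.setLocalDescription(answer)

    await caller.setRemoteDescription(callee.localDescription)  # offerer stores answer
    # Both sides now hold local/remote descriptions and candidates; media or
    # data can flow, and the remote stream would be attached to a video
    # element on the expert's page.
    await caller.close()
    await callee.close()

asyncio.run(handshake())
```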
Finally, the software is built into the complete system. When creating the augmented reality application project in Unity 3D, first switch the release platform of the software to the UWP platform: add the scenes to be released in Scenes In Build, select Universal Windows Platform under Platform, and click Switch Platform. Then add development packages such as MRTK and Vuforia through Assets > Import Package > Custom Package and Window > Package Manager > Add package from disk, and set properties such as Package name, Capabilities, and Depth format in Edit > Project Settings. After project editing is complete, click File > Build Settings to generate the .sln file. Compile the project with Visual Studio and deploy it to the augmented reality glasses: set the Debug option to Release, set the solution platform to ARM64 or ARM, and select Device to run the project. Connect the notebook to the HoloLens via USB and pair them to complete the project deployment.
In summary, the application designs a visual maintenance assistance system whose functions comprise three parts: visual maintenance guidance, fault information virtual-real mapping, and remote expert collaboration. The visual maintenance guidance displays operation instructions such as the electronic manual and virtual-model positioning indications; the fault information virtual-real mapping displays fault information and maintenance tasks to remind personnel to handle faults promptly. In addition, the system connects to a signaling server on the web side and establishes communication with the augmented reality glasses, realizing communication between experts and on-site maintenance personnel; the remote expert interface provides maintenance contacts for different types of equipment, each labeled with the expert's area of specialty, which better helps maintenance personnel solve major faults of all kinds of equipment in the manufacturing workshop. Meanwhile, to fix maintenance details and operation sequences as text or pictures serving as the operation standard, reduce the possibility of misoperation, and prevent maintenance experience from being lost to human factors, the system also includes a local maintenance expert experience library, which can work with auxiliary devices such as augmented reality glasses, tablets, and mobile phones to guide maintenance personnel visually and improve maintenance efficiency.
The foregoing is merely a specific embodiment of the present invention, but the protection scope of the present invention is not limited thereto; those skilled in the art can make several modifications and adjustments within the technical scope disclosed herein. Therefore, the protection scope of the present invention shall be subject to the protection scope of the claims.

Claims (9)

1. A visual maintenance assistance system based on augmented reality, characterized by comprising a visual maintenance guidance function module, a fault information virtual-real mapping function module and a remote expert collaboration function module, wherein,
the visual maintenance guidance function module superimposes a virtual model on the real workshop scene based on a three-dimensional registration algorithm oriented to the maintenance scene, enabling rapid positioning and guidance of the parts to be maintained, and also displays an electronic manual and virtual-model positioning indications as operation guidance;
the fault information virtual-real mapping function module displays fault information and maintenance tasks through a maintenance task pushing mechanism and reminds personnel to handle faults promptly;
the remote expert collaboration function module is built on a WebRTC remote expert system and enables communication between experts and on-site maintenance personnel.
2. The augmented reality-based visual maintenance assistance system of claim 1, wherein the visual maintenance guidance function module comprises the following units:
an ORB feature extraction unit, which first performs sharpening adjustment based on the information entropy threshold and then extracts ORB features from the image;
a map coordinate system construction and conversion unit, which determines key frames according to preset rules and maintains the key frames and map points; after a key frame is inserted, the world coordinate system is mapped onto the maintenance scene to build the local map;
a global pose solving unit, which follows the least-squares solving process of the EPnP global pose solving method based on Gauss-Newton optimization to compute the pose of the mobile device camera from the three-dimensional coordinates of the target and its two-dimensional imaging in a key frame, then solves the pose change matrix from the matching relation between the camera pose and the world coordinate system, thereby computing the real-time pose of the wearable AR device on the workshop maintenance site and highlighting the information of the part to be maintained according to that pose for visual maintenance guidance.
3. The augmented reality-based visual maintenance assistance system of claim 1, wherein the ORB feature extraction unit obtains the ORB features of the image by:
acquiring an image of the equipment, determining the feature points of the image with the FAST algorithm, and describing the feature points with the BRIEF algorithm;
applying high-pass filtering in the spatial domain, convolving every pixel of the image with a Laplacian operator to increase the variance between pixels and sharpen the image;
performing sharpening adjustment based on the information entropy threshold.
4. The augmented reality-based visual maintenance assistance system of claim 3, wherein describing the feature points with the BRIEF algorithm is specifically:
first, the neighborhood of a feature point p is determined and n pairs of pixel points x and y are selected; the τ operation of formula 1 yields 0 or 1 for each pair, and the feature descriptor f(p) of the feature point p is then computed by formula 2; finally, a similarity threshold is input, and the similarity of two feature descriptors is computed through the exclusive-OR operation on the image feature descriptors, so as to judge whether the matching succeeds:

τ(p; x, y) = 1 if p(x) < p(y), otherwise 0    (formula 1)

f_n(p) = Σ_{i=1}^{n} 2^{i-1} τ(p; x_i, y_i)    (formula 2)

wherein p is the current feature point, p(x) is the gray value of p at point x, and p(y) is the gray value of p at point y;
the rotation angle is determined from the change of the centroid during rotation, so that the coordinate system of the rotated image is updated; the calculation is as follows:

m_{pq} = Σ_{x,y} x^p y^q I(x, y),  C = (m_10 / m_00, m_01 / m_00),  θ = arctan(m_01 / m_10)

where p and q are the orders of the image moments taken over the two-dimensional image, I(x, y) is the gray value at coordinates (x, y), and C is the centroid from which the coordinate system of the rotated image is updated.
5. The augmented reality-based visual maintenance assistance system of claim 3, wherein high-pass filtering is applied in the spatial domain and a Laplacian operator is convolved with every pixel of the image to increase the variance between pixels and sharpen the image, the specific process being:
let the variables of the convolution be the sequences x(n) and h(n); the result of the convolution is

y(n) = Σ_i x(i) h(n - i)    (formula 7)

each pixel gray value is multiplied by the corresponding value on the convolution kernel, and all the products are summed as the gray value of the image pixel corresponding to the middle of the kernel; the expression of the convolution function is as follows:

dst(x, y) = Σ_{(x', y')} kernel(x', y') src(x + x' - anchor.x, y + y' - anchor.y)    (formula 8)

in formula 8, anchor is the reference point of the kernel; kernel is the convolution kernel.
6. The augmented reality-based visual maintenance assistance system of claim 3, wherein the sharpening adjustment based on the information entropy threshold is specifically:
the information entropy is computed as

H = -Σ_{i=0}^{255} p(x_i) log2 p(x_i)    (formula 10)

in formula 10, p(x_i) is the probability of a pixel in the image having gray level i (i = 0...255);
if the amount of information contained in an image is expressed by information entropy, the entropy of an M×N image is defined as

H = -Σ_{i=1}^{M} Σ_{j=1}^{N} P_ij log2 P_ij

wherein P_ij is the result of normalizing f(i, j).
7. The augmented reality-based visual maintenance assistance system of claim 3, wherein the convolution template of the convolution function is the matrix form of a Laplacian variant operator, which uses the second-derivative information of the image and is isotropic; its specific expression is the 8-neighborhood template

[ 1  1  1
  1 -8  1
  1  1  1 ]
8. The augmented reality-based visual maintenance assistance system of claim 3, wherein the ORB feature extraction unit further includes a process of setting and adjusting the information entropy threshold, specifically

E_0 = δ · H(i)_ave    (formula 13)

in formula 13, E_0 is the information entropy threshold of the scene, H(i)_ave is the average information entropy of the scene, i is the number of frames of the video sequence, and δ is the correction factor.
9. The augmented reality-based visual maintenance assistance system of claim 8, wherein the effect is best when the correction factor δ is 0.5.
CN202311428648.4A (filed 2023-10-31, priority 2023-10-31) Visual maintenance auxiliary system based on augmented reality, published as CN117541219A (en), status: Pending

Priority Applications (1)

Application Number: CN202311428648.4A; Priority Date: 2023-10-31; Filing Date: 2023-10-31; Title: Visual maintenance auxiliary system based on augmented reality (CN117541219A, en)

Applications Claiming Priority (1)

Application Number: CN202311428648.4A; Priority Date: 2023-10-31; Filing Date: 2023-10-31; Title: Visual maintenance auxiliary system based on augmented reality (CN117541219A, en)

Publications (1)

Publication Number: CN117541219A; Publication Date: 2024-02-09

Family

ID=89790947

Family Applications (1)

Application Number: CN202311428648.4A; Priority Date: 2023-10-31; Filing Date: 2023-10-31; Title: Visual maintenance auxiliary system based on augmented reality (en)

Country Status (1)

Country: CN; Link: CN117541219A (en)


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination