CN112085223A - Guidance system and method for mechanical maintenance - Google Patents

Guidance system and method for mechanical maintenance

Info

Publication number
CN112085223A
CN112085223A (application CN202010775224.5A)
Authority
CN
China
Prior art keywords
image
information
maintenance
glasses
training
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202010775224.5A
Other languages
Chinese (zh)
Inventor
刘德生
王斌
罗亚军
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shenzhen New Brilliant Intelligent Technology Co ltd
Original Assignee
Shenzhen New Brilliant Intelligent Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shenzhen New Brilliant Intelligent Technology Co., Ltd.
Priority to CN202010775224.5A
Publication of CN112085223A
Legal status: Pending

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06QINFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q10/00Administration; Management
    • G06Q10/20Administration of product repair or maintenance
    • GPHYSICS
    • G02OPTICS
    • G02BOPTICAL ELEMENTS, SYSTEMS OR APPARATUS
    • G02B27/00Optical systems or apparatus not provided for by any of the groups G02B1/00 - G02B26/00, G02B30/00
    • G02B27/01Head-up displays
    • G02B27/017Head mounted
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/16Sound input; Sound output
    • G06F3/167Audio in a user interface, e.g. using voice commands for navigating, audio feedback
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/04Architecture, e.g. interconnection topology
    • G06N3/045Combinations of networks
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/08Learning methods
    • G06N3/084Backpropagation, e.g. using gradient descent
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T19/00Manipulating 3D models or images for computer graphics
    • G06T19/006Mixed reality
    • GPHYSICS
    • G02OPTICS
    • G02BOPTICAL ELEMENTS, SYSTEMS OR APPARATUS
    • G02B27/00Optical systems or apparatus not provided for by any of the groups G02B1/00 - G02B26/00, G02B30/00
    • G02B27/01Head-up displays
    • G02B27/017Head mounted
    • G02B2027/0178Eyeglass type


Abstract

The present application discloses a guidance system and method for mechanical maintenance. The method includes the following steps: receiving first image information, collected by AR glasses, of image samples that contain at least one to-be-maintained part related to a maintenance target in each direction, and labeling the image samples to obtain labeled image samples; inputting the labeled image samples into a preset deep learning model and training the model to obtain a training model; collecting second image information of the maintenance target during maintenance, determining position information of the at least one to-be-maintained part in the second image information according to the training model, and determining the difference between that position information and preset position information; and determining maintenance information or prompt information according to the difference and sending it to the AR glasses, so that the user can perform maintenance based on the maintenance information or prompt information displayed by the AR glasses. The embodiments of the present application solve the technical problem of poor mechanical-maintenance results in the prior art.

Description

Guidance system and method for mechanical maintenance
Technical Field
The present application relates to the technical field of sound and image processing, and in particular to a guidance system and method for mechanical maintenance.
Background
Mechanical maintenance is widely applied in many fields, such as aerospace, communication, and medical equipment. These fields involve many large, complex electromechanical systems with intricate structures and high component density, which place high demands on the technical skill of the personnel who operate or maintain them. Because of environmental and other constraints, maintenance is labor-intensive and technically difficult, which degrades the maintenance result; how to improve the effect of mechanical maintenance is therefore an urgent problem.
A Chinese patent (No. CN110333775A) discloses an augmented-reality maintenance guidance system and method for space equipment. That guidance system includes a graphics workstation, a transmission module, a scene camera, and a see-through helmet, and works as follows. First, a complete and accurate maintenance flow is produced through three-dimensional modeling and script writing; the flow is then projected onto the augmented-reality see-through helmet by the graphics workstation, and after maintenance personnel put on the helmet, the specific operation of each step is prompted by the guidance information. Although this improves the work efficiency and quality of maintenance, in order to achieve maintenance guidance the see-through helmet must also include a head-position tracking sensor, an eye camera, and so on: the head-position tracking sensor overlays the augmented information onto the exact position in the maintenance scene by obtaining the position of the scene relative to the wearer's head, and the eye camera tracks the eye movement of the maintenance personnel during maintenance.
Therefore, in the prior art, on the one hand the maintenance-guidance process must not only analyze the images captured during maintenance but also combine those images with the eye and head movement of the maintenance personnel, which increases the amount of computation; on the other hand, environmental and other interference introduces errors when the eye and head movement of the maintenance personnel are collected, which further degrades the maintenance result.
Disclosure of Invention
The technical problem solved by the present application is the relatively poor effect of mechanical maintenance in the prior art. In the scheme provided by the embodiments of the present application, the guidance system does not rely on depth information of the collected images during maintenance guidance; the collected image information is trained and recognized by a preset deep learning model, so the captured images do not need to be combined with the eye and head movement of the maintenance personnel, which reduces the amount of computation. The scheme also avoids the errors that environmental and other interference would introduce when collecting eye and head movement, and thus improves the maintenance result.
In a first aspect, an embodiment of the present application provides a guidance system for mechanical maintenance. The system includes AR glasses, an external input controller, and a server, wherein:
the AR glasses are configured to collect first image information of at least one to-be-maintained part related to a maintenance target, collect second image information of the maintenance target during maintenance, receive an auxiliary image fed back by the server based on the second image information, and display the auxiliary image so that the user can perform maintenance based on it;
the server is configured to receive the first image information and the second image information, perform deep-learning training according to the first image information to obtain a training model, compute the auxiliary image according to the training model and the second image information, and send the auxiliary image to the AR glasses;
the external input controller is configured to receive a control instruction input by a user based on the auxiliary image and send the control instruction to the AR glasses, so that the AR glasses send the first image information and the second image information to the server based on the control instruction.
Optionally, the external input controller includes a dedicated wired-control or voice-control module, configured to send the control instruction to the AR glasses by wire or by voice.
Optionally, the AR glasses include a monocular camera, and the external input controller includes an integrated microphone; the monocular camera and the integrated microphone are connected via the Internet, a local area network, or a USB interface.
In a second aspect, an embodiment of the present application provides a guidance method for mechanical maintenance, applied to the system of the first aspect. The method includes:
receiving first image information, collected by the AR glasses, of image samples that contain at least one to-be-maintained part related to a maintenance target in each direction, and labeling the image samples to obtain labeled image samples; inputting the labeled image samples into a preset deep learning model and training the model to obtain a training model;
acquiring second image information of the maintenance target in the maintenance process, determining position information of the at least one part to be maintained in the second image information according to the training model, and determining the difference between the position information and preset position information;
determining maintenance information or prompt information according to the difference and sending it to the AR glasses, so that the user performs maintenance based on the maintenance information or prompt information displayed by the AR glasses.
Optionally, the image samples comprise a training set and a test set, and the ratio of the number of image samples in the training set to the number in the test set is 9:1.
Optionally, the preset deep learning model is a Convolutional Neural Network (CNN) model;
training the preset deep learning model to obtain a training model, comprising:
preprocessing the labeled image samples by calculating the mean of the three RGB channels and subtracting that mean from each pixel value, and scaling the labeled image samples to different scales using multi-scale training;
randomly cropping the scaled image samples, applying horizontal flipping and random RGB color-difference adjustment to the cropped samples, and training on the adjusted samples to obtain the training model.
Optionally, after receiving the first image information, collected by the AR glasses, of image samples containing at least one to-be-maintained part related to the maintenance target in each direction, the method further includes: deploying the training set and the test set either privately or on a public cloud.
Optionally, the convolutional neural network model comprises convolutional layers, rectified linear layers, pooling layers, and fully connected layers, where each convolutional layer is composed of several convolutional units.
In a third aspect, an embodiment of the present application provides a guidance apparatus for mechanical maintenance, the apparatus including:
a training unit, configured to receive first image information, collected by the AR glasses, of image samples that contain at least one to-be-maintained part related to a maintenance target in each direction, and to label the image samples to obtain labeled image samples; and to input the labeled image samples into a preset deep learning model and train the model to obtain a training model;
a first determining unit, configured to collect second image information of the maintenance target during maintenance, determine position information of the at least one to-be-maintained part in the second image information according to the training model, and determine the difference between that position information and preset position information;
a second determining unit, configured to determine maintenance information or prompt information according to the difference and send it to the AR glasses, so that the user performs maintenance based on the maintenance information or prompt information displayed by the AR glasses.
Optionally, the image samples comprise a training set and a test set, and the ratio of the number of image samples in the training set to the number in the test set is 9:1.
Optionally, the preset deep learning model is a Convolutional Neural Network (CNN) model;
the training unit is specifically configured to:
preprocess the labeled image samples by calculating the mean of the three RGB channels and subtracting that mean from each pixel value, and scale the labeled image samples to different scales using multi-scale training;
randomly crop the scaled image samples, apply horizontal flipping and random RGB color-difference adjustment to the cropped samples, and train on the adjusted samples to obtain the training model.
Optionally, the training unit is further configured to deploy the training set and the test set either privately or on a public cloud.
Optionally, the convolutional neural network model comprises convolutional layers, rectified linear layers, pooling layers, and fully connected layers, where each convolutional layer is composed of several convolutional units.
In a fourth aspect, the present application provides a server, comprising:
a memory for storing instructions for execution by at least one processor;
a processor for executing instructions stored in the memory to perform the method of the second aspect.
In a fifth aspect, the present application provides a computer readable storage medium having stored thereon computer instructions which, when run on a computer, cause the computer to perform the method of the second aspect.
Compared with the prior art, the present application provides the following beneficial effects:
1. In the scheme provided by the embodiments of the present application, the guidance system does not rely on depth information of the collected images during maintenance guidance; the collected image information is trained and recognized by a preset deep learning model, so the captured images do not need to be combined with the eye and head movement of the maintenance personnel, which reduces the amount of computation.
2. The scheme avoids the errors that environmental and other interference would introduce when collecting the eye and head movement of the maintenance personnel, and thus improves the maintenance result.
Drawings
FIG. 1 is a schematic structural diagram of a guidance system for mechanical maintenance according to an embodiment of the present application;
FIG. 2 is a schematic flow chart of a guidance method for mechanical maintenance according to an embodiment of the present application;
FIG. 3 is a schematic structural diagram of a guidance apparatus for mechanical maintenance according to an embodiment of the present application;
fig. 4 is a schematic structural diagram of a server according to an embodiment of the present application.
Detailed Description
The described embodiments are only some, not all, of the embodiments of the present application. All other embodiments obtained by a person of ordinary skill in the art based on the embodiments given herein without creative effort shall fall within the protection scope of the present application.
To better explain the technical solutions, they are described in detail below with reference to the drawings and specific embodiments. It should be understood that the specific features in the embodiments and examples of the present application are detailed descriptions of the technical solutions, not limitations of them, and that the technical features in the embodiments and examples may be combined with one another where no conflict arises.
Referring to FIG. 1, an embodiment of the present application provides a guidance system for mechanical maintenance, including AR glasses 1, an external input controller 2, and a server 3, wherein:
the AR glasses 1 are configured to collect first image information of at least one to-be-maintained part related to a maintenance target, collect second image information of the maintenance target in a maintenance process, receive an auxiliary image fed back by the server 3 based on the second image information, and display the auxiliary image, so that a user performs maintenance based on the auxiliary image;
the server 3 is configured to receive the first image information and the second image information, perform deep-learning training according to the first image information to obtain a training model, compute the auxiliary image according to the training model and the second image information, and send the auxiliary image to the AR glasses 1;
the external input controller 2 is configured to receive a control instruction input by a user based on the auxiliary image and send the control instruction to the AR glasses 1, so that the AR glasses 1 send the first image information and the second image information to the server 3 based on the control instruction.
Optionally, the external input controller 2 includes a dedicated wired-control or voice-control module, configured to send the control instruction to the AR glasses by wire or by voice.
Optionally, the AR glasses 1 include a monocular camera, and the external input controller 2 includes an integrated microphone; the monocular camera and the integrated microphone are connected via the Internet, a local area network, or a USB interface.
The guidance method for mechanical maintenance provided by the embodiments of the present application is described in further detail below with reference to the drawings. The method is applied to the system shown in FIG. 1, and a specific implementation may include the following steps (the method flow is shown in FIG. 2):
step 201, receiving first image information of an image sample, which is acquired by AR glasses and contains at least one part to be maintained related to a maintenance target, in each direction, and labeling the image sample to obtain a post-labeling image sample; and inputting the marked image sample into a preset deep learning model, and training the preset deep learning model to obtain a training model.
In one possible implementation, the image samples comprise a training set and a test set, and the ratio of the number of image samples in the training set to the number in the test set is 9:1.
Specifically, the training set is the set of image samples that participate in training, and the test set is the set of image samples that do not participate in training and are used to evaluate the training effect. In the scheme provided by the embodiments of the present application, the training data needs to satisfy the following conditions:
1. The difference between the lighting conditions of the constructed environment and the actual maintenance environment (for example, light-source position and illumination intensity) must be smaller than a preset threshold; otherwise the accuracy of image recognition is affected.
2. The material and texture of the modeled article must not be too uniform; an article with too little texture cannot be accurately identified.
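As an illustrative sketch (not part of the patent text), the 9:1 split between training set and test set described above could be implemented as follows; the function name and the seeded shuffling strategy are assumptions:

```python
import random

def split_samples(samples, train_ratio=0.9, seed=42):
    """Shuffle labeled image samples and split them 9:1 into a
    training set (participates in training) and a test set (held
    out to evaluate the training effect)."""
    shuffled = list(samples)
    random.Random(seed).shuffle(shuffled)
    cut = int(len(shuffled) * train_ratio)
    return shuffled[:cut], shuffled[cut:]
```

With 100 labeled samples, this yields 90 training samples and 10 test samples.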
Further, in the scheme provided by the embodiments of the present application, many kinds of preset deep learning model are possible; a preferred one is described below as an example.
In a possible implementation manner, the preset deep learning model is a Convolutional Neural Network (CNN);
Training the preset deep learning model to obtain the training model includes: preprocessing the labeled image samples by calculating the mean of the three RGB channels and subtracting that mean from each pixel value, and scaling the labeled image samples to different scales using multi-scale training; then randomly cropping the scaled image samples, applying horizontal flipping and random RGB color-difference adjustment to the cropped samples, and training on the adjusted samples to obtain the training model.
Specifically, the default input of the convolutional neural network model is an image, which gives the model the following advantages:
1. Image-specific properties can be encoded into the network structure, making the feed-forward function more efficient and greatly reducing the number of parameters.
2. Because the input is an image, the neurons can be organized in three dimensions (width, height, depth). For example, if the input image size is 32 × 32 × 3 (RGB), the input layer also has dimensions 32 × 32 × 3. In this embodiment the network input is a 224 × 224 RGB image; during preprocessing the mean of the three channels is calculated and subtracted at each pixel, which reduces the number of subsequent iterations and speeds up convergence.
3. With multi-scale training, the original image is scaled to different sizes; 224 × 224 patches are then randomly cropped, horizontally flipped, and randomly adjusted in RGB color difference. This generates a large amount of additional data and is effective against model overfitting.
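A minimal NumPy sketch of the augmentation pipeline described above (mean subtraction, random crop, horizontal flip, RGB color-difference jitter); the function name, jitter range, and crop convention are illustrative assumptions, and the multi-scale resizing of the source image is assumed to happen before this step:

```python
import numpy as np

def preprocess(img, crop=224, rng=None):
    """Augment one H x W x 3 float image (H, W >= crop):
    subtract the per-channel RGB mean, randomly crop to crop x crop,
    randomly flip horizontally, and apply a random per-channel shift
    standing in for RGB color-difference adjustment."""
    rng = rng or np.random.default_rng(0)
    # Subtract the mean of each of the three RGB channels.
    img = img - img.mean(axis=(0, 1), keepdims=True)
    # Random crop of a crop x crop patch.
    h, w, _ = img.shape
    y = rng.integers(0, h - crop + 1)
    x = rng.integers(0, w - crop + 1)
    img = img[y:y + crop, x:x + crop]
    # Random horizontal flip.
    if rng.random() < 0.5:
        img = img[:, ::-1]
    # Random RGB color-difference adjustment (small per-channel shift).
    img = img + rng.uniform(-10, 10, size=(1, 1, 3))
    return img
```

Applying this to a 256 × 320 × 3 image yields a 224 × 224 × 3 training sample; repeated applications with different random states generate the additional data mentioned above.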
Further, in order to facilitate processing of the collected second image information, in an embodiment of the present application, after receiving the first image information, collected by the AR glasses, of image samples containing at least one to-be-maintained part related to the maintenance target in each direction, the method further includes: deploying the training set and the test set either privately or on a public cloud.
In the scheme provided by the embodiments of the present application, once the training set and test set are complete, the server may deploy them privately or on a public cloud; the choice may be made according to actual latency requirements and network conditions. In addition, the server needs to preset the correct position of each part involved in maintenance. For example, the position may be deemed correct when the overlap ratio of parts A and B reaches 50%, or when part A lies to the left of the center line of part B; if part A appears to the right of the center line of part B, the interface prompts the user to move part A to the left. Whether other parts are movable, and other possible operations, are likewise preset by the server.
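As a hedged illustration of the centerline rule described above (the function name, coordinate convention, and prompt string are assumptions, not taken from the patent):

```python
def check_position(part_a_x, part_b_center_x):
    """Preset position rule: part A must lie to the left of the
    center line of part B.  Returns None when the position is
    correct; otherwise returns a prompt to display on the AR
    glasses."""
    if part_a_x < part_b_center_x:
        return None  # correct position, no prompt needed
    return "move part A to the left"
```

The server would evaluate such rules against the part positions recognized in the second image information and forward any resulting prompt to the AR glasses.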
Further, in a possible implementation, the convolutional neural network model comprises convolutional layers, rectified linear layers, pooling layers, and fully connected layers, where each convolutional layer is composed of several convolutional units.
Specifically, in the scheme provided by the embodiments of the present application, the parameters of each convolutional unit are optimized by the backpropagation algorithm, and the purpose of the convolution operation is to extract features at different depths of the input image. The rectified linear layer applies the neural activation function f(x) = max(0, x) (Rectified Linear Unit, ReLU). The pooling layer divides the high-dimensional features produced by the convolutional layers into regions and takes the maximum or average of each region to obtain new lower-dimensional features, thereby reducing the feature map. The fully connected layer combines all the local features obtained from the pooling layers into a global feature used to compute the final score for each class.
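The ReLU and max-pooling operations described above can be sketched in a few lines of NumPy (illustrative helper functions, not part of the patent; the pooling helper assumes a single-channel 2-D feature map):

```python
import numpy as np

def relu(x):
    """Rectified Linear Unit used by the rectified linear layer:
    f(x) = max(0, x)."""
    return np.maximum(0, x)

def max_pool2d(feature, size=2):
    """Max pooling: cut the 2-D feature map into size x size regions
    and keep the maximum of each region, reducing the feature map."""
    h, w = feature.shape
    f = feature[:h - h % size, :w - w % size]  # drop ragged edges
    return f.reshape(h // size, size, w // size, size).max(axis=(1, 3))
```

For a 4 × 4 feature map containing 0..15 row by row, 2 × 2 max pooling yields the 2 × 2 map [[5, 7], [13, 15]].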
Step 202: collect second image information of the maintenance target during maintenance, determine position information of the at least one to-be-maintained part in the second image information according to the training model, and determine the difference between that position information and preset position information.
Step 203: determine maintenance information or prompt information according to the difference, and send it to the AR glasses so that the user can perform maintenance based on the maintenance information or prompt information displayed by the AR glasses.
Specifically, in the scheme provided by the embodiments of the present application, the correct post-maintenance position of the at least one to-be-maintained part, that is, the preset position information, is stored in the server's local database. The collected second image information is transmitted to the server by the AR glasses; the server then identifies the object types and positions in the second image information according to the training model, determines the difference between the position of each part and its preset position, and, by comparing the current position with the preset position, produces adjustment information, such as a movement suggestion or a prompt that a relative position is missing. The server then sends the adjustment information or prompt information to the AR glasses.
After receiving the adjustment information or prompt information, the AR glasses display it, and the user then gives feedback and interacts with the AR glasses through the external input controller according to the prompt. In the scheme provided by the embodiments of the present application, the external input controller interacts with the AR glasses through the voice-control or wired-control module. For example, the voice-control module only needs to recognize control commands against a template rather than perform full continuous speech recognition, which improves recognition accuracy and lowers the barrier to use.
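The template-based command recognition mentioned above can be sketched as simple substring matching against a fixed command set; the template strings below are illustrative assumptions, not taken from the patent:

```python
def match_command(utterance,
                  templates=("recognize image", "start maintenance", "next step")):
    """Match a spoken utterance against a fixed set of command
    templates instead of performing full speech recognition.
    Returns the matched template, or None if no command matches."""
    text = utterance.strip().lower()
    for template in templates:
        if template in text:
            return template
    return None
```

Restricting recognition to a small closed vocabulary is what allows higher accuracy than open-ended speech recognition.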
Further, in order to reduce the system workload, the AR glasses are configured to transmit images back to the server only when a control instruction is received, so the server does not need to process images continuously; this reduces latency and improves the user experience. The user can issue a control instruction through the voice-control module: for example, when the control instruction is an image-recognition instruction, the AR glasses are controlled to send the collected image information to the server, and the server processes the image information and returns the result to the AR glasses.
In the scheme provided by the embodiments of the present application, the guidance system does not rely on depth information of the collected images during maintenance guidance; the collected image information is trained and recognized by the preset deep learning model, so the captured images do not need to be combined with the eye and head movement of the maintenance personnel, which reduces the amount of computation. The scheme also avoids the errors that environmental and other interference would introduce when collecting eye and head movement, and thus improves the maintenance result.
Based on the same inventive concept as the method shown in fig. 2, the embodiment of the present application provides a device using the guidance system for machine maintenance, which comprises:
the training unit 301 is configured to receive first image information, acquired by the AR glasses, containing image samples of at least one to-be-maintained part of the maintenance target in each direction, and to label the image samples to obtain labeled image samples; and to input the labeled image samples into a preset deep learning model and train it to obtain a training model;
a first determining unit 302, configured to acquire second image information of the maintenance target in a maintenance process, determine, according to the training model, position information of the at least one to-be-maintained part in the second image information, and determine a difference between the position information and preset position information;
a second determining unit 303, configured to determine maintenance information or prompt information according to the difference, and send the maintenance information or the prompt information to the AR glasses, so that a user performs maintenance based on the maintenance information or the prompt information displayed by the AR glasses.
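The second determining unit's decision can be sketched as follows. The representation of "position information" (2-D part centres in pixels) and the tolerance threshold are assumptions for illustration; the application only states that a difference from preset position information is computed and turned into maintenance or prompt information.

```python
def position_difference(detected, preset):
    """Euclidean offset between a detected part centre and its preset position."""
    dx = detected[0] - preset[0]
    dy = detected[1] - preset[1]
    return (dx ** 2 + dy ** 2) ** 0.5

def make_prompt(detected, preset, tolerance=5.0):
    """Return prompt information when the part deviates beyond tolerance (pixels)."""
    d = position_difference(detected, preset)
    if d <= tolerance:
        return {"status": "ok"}
    return {"status": "adjust", "offset_px": round(d, 1)}
```

In the system, the resulting prompt information would be sent to the AR glasses for display rather than returned to a caller.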
Optionally, the image samples comprise a training set and a test set, and the ratio of the number of image samples in the training set to the number in the test set is 9:1.
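A minimal sketch of the 9:1 split, assuming a simple shuffled partition (the application does not specify how the samples are divided beyond the ratio):

```python
import random

def split_samples(samples, train_ratio=0.9, seed=0):
    """Shuffle labeled image samples and split them 9:1 into train/test sets."""
    rng = random.Random(seed)
    samples = list(samples)
    rng.shuffle(samples)
    cut = int(len(samples) * train_ratio)
    return samples[:cut], samples[cut:]
```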
Optionally, the preset deep learning model is a Convolutional Neural Network (CNN) model;
the training unit 301 is specifically configured to:
preprocessing the labeled image samples: calculating the mean of the three RGB channels, subtracting that mean from each pixel value, and scaling the labeled image samples to different scales using multi-scale training; and
randomly cropping the scaled image samples, applying horizontal flipping and random RGB color jitter to the cropped samples, and training on the adjusted samples to obtain the training model.
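The augmentation pipeline above can be sketched with NumPy. The concrete scales, crop size, jitter range, and the per-image (rather than dataset-wide) mean are assumptions for illustration; nearest-neighbour indexing stands in for whatever resizing the implementation uses.

```python
import numpy as np

def preprocess(img, rng, scales=(256, 384, 512), crop=224):
    """Mean-subtract, multi-scale resize, random crop, flip, and color-jitter
    one HxWx3 image. `rng` is a numpy random Generator."""
    img = img.astype(np.float32)
    img -= img.mean(axis=(0, 1), keepdims=True)     # subtract per-channel RGB mean

    s = scales[rng.integers(len(scales))]           # pick a training scale
    h, w = img.shape[:2]
    ys = np.arange(s) * h // s                      # nearest-neighbour resize to s x s
    xs = np.arange(s) * w // s
    img = img[ys][:, xs]

    y0 = rng.integers(s - crop + 1)                 # random crop to crop x crop
    x0 = rng.integers(s - crop + 1)
    img = img[y0:y0 + crop, x0:x0 + crop]

    if rng.random() < 0.5:                          # random horizontal flip
        img = img[:, ::-1]

    img += rng.uniform(-10, 10, size=3)             # random RGB color jitter
    return img
```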
Optionally, the training unit 301 is further configured to: deploy the training set and the test set as a private deployment on a public cloud.
Optionally, the convolutional neural network model comprises convolutional layers, linear rectification (ReLU) layers, pooling layers and fully connected layers, where each convolutional layer consists of several convolutional units.
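The shapes flowing through such a stack can be traced with a few lines of arithmetic. The kernel sizes and channel counts below are assumptions (the application names only the layer types), chosen in a VGG-like style for illustration:

```python
def conv_out(size, kernel, stride=1, pad=0):
    """Spatial output size after a convolution or pooling layer."""
    return (size + 2 * pad - kernel) // stride + 1

# Hypothetical three-block stack; the patent specifies no sizes.
size, channels = 224, 3
for out_ch in (64, 128, 256):
    size = conv_out(size, kernel=3, pad=1)       # 3x3 conv, padding preserves size
    # linear rectification (ReLU) leaves the spatial shape unchanged
    size = conv_out(size, kernel=2, stride=2)    # 2x2 max-pool halves the size
    channels = out_ch

fc_inputs = size * size * channels               # inputs to the fully connected layer
```

With these assumed sizes, a 224x224 input is reduced to 28x28x256 before the fully connected layer.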
Referring to fig. 4, the present application provides a server, comprising:
a memory 401 for storing instructions for execution by at least one processor;
a processor 402 for executing instructions stored in memory to perform the method described in fig. 2.
A computer-readable storage medium having stored thereon computer instructions which, when executed on a computer, cause the computer to perform the method of fig. 2.
As will be appreciated by one skilled in the art, embodiments of the present application may be provided as a method, system, or computer program product. Accordingly, the present application may take the form of an entirely hardware embodiment, an entirely software embodiment or an embodiment combining software and hardware aspects. Furthermore, the present application may take the form of a computer program product embodied on one or more computer-usable storage media (including, but not limited to, disk storage, optical storage, and the like) having computer-usable program code embodied therein.
The present application is described with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems), and computer program products according to embodiments of the application. It will be understood that each flow and/or block of the flow diagrams and/or block diagrams, and combinations of flows and/or blocks in the flow diagrams and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, embedded processor, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing apparatus to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the function specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be loaded onto a computer or other programmable data processing apparatus to cause a series of operational steps to be performed on the computer or other programmable apparatus to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide steps for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
It will be apparent to those skilled in the art that various changes and modifications may be made in the present application without departing from the spirit and scope of the application. Thus, if such modifications and variations of the present application fall within the scope of the claims of the present application and their equivalents, the present application is intended to include such modifications and variations as well.

Claims (8)

1. A guidance system for machine maintenance, comprising: AR glasses, an external input controller and a server; wherein:
the AR glasses are used for acquiring first image information of at least one part to be maintained related to a maintenance target, acquiring second image information of the maintenance target in a maintenance process, receiving an auxiliary image fed back by the server based on the second image information, and displaying the auxiliary image so that a user can maintain based on the auxiliary image;
the server is used for receiving the first image information and the second image information, performing deep learning training according to the first image information to obtain a training model, computing the auxiliary image according to the training model and the second image information, and sending the auxiliary image to the AR glasses;
the external input controller is configured to receive a control instruction input by a user based on the auxiliary image, and send the control instruction to the AR glasses, so that the AR glasses send the first image information and the second image information to the server based on the control instruction.
2. The system of claim 1, wherein the external input controller comprises a dedicated wire control or voice control module, configured to send the control instruction to the AR glasses by wire control or voice control.
3. The system of claim 1 or 2, wherein the AR glasses comprise a monocular camera and the external input controller comprises an integrated microphone; the monocular camera and the integrated microphone are connected through the Internet, a local area network or a USB interface.
4. A guidance method for mechanical maintenance, applied to the system according to any one of claims 1 to 3, comprising:
receiving first image information, acquired by the AR glasses, containing image samples of at least one to-be-maintained part of a maintenance target in each direction, and labeling the image samples to obtain labeled image samples; inputting the labeled image samples into a preset deep learning model and training it to obtain a training model;
acquiring second image information of the maintenance target in the maintenance process, determining position information of the at least one part to be maintained in the second image information according to the training model, and determining the difference between the position information and preset position information;
and determining maintenance information or prompt information according to the difference, and sending the maintenance information or the prompt information to the AR glasses, so that the user can perform maintenance based on the maintenance information or the prompt information displayed by the AR glasses.
5. The method of claim 4, wherein the image samples comprise a training set and a test set, and the ratio of the number of image samples in the training set to the number in the test set is 9:1.
6. The method of claim 5, wherein the preset deep learning model is a Convolutional Neural Network (CNN) model;
training the preset deep learning model to obtain a training model, comprising:
preprocessing the labeled image samples: calculating the mean of the three RGB channels, subtracting that mean from each pixel value, and scaling the labeled image samples to different scales using multi-scale training; and
randomly cropping the scaled image samples, applying horizontal flipping and random RGB color jitter to the cropped samples, and training on the adjusted samples to obtain the training model.
7. The method according to any one of claims 4 to 6, wherein after receiving the first image information acquired by the AR glasses and containing image samples of the at least one to-be-maintained part of the maintenance target in each direction, the method further comprises:
and privatizing and deploying the training set and the test set in a public cloud.
8. The method of any one of claims 4 to 6, wherein the convolutional neural network model comprises convolutional layers, linear rectification layers, pooling layers and fully connected layers, each convolutional layer consisting of several convolutional units.
CN202010775224.5A 2020-08-04 2020-08-04 Guidance system and method for mechanical maintenance Pending CN112085223A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010775224.5A CN112085223A (en) 2020-08-04 2020-08-04 Guidance system and method for mechanical maintenance


Publications (1)

Publication Number Publication Date
CN112085223A true CN112085223A (en) 2020-12-15

Family

ID=73735766

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010775224.5A Pending CN112085223A (en) 2020-08-04 2020-08-04 Guidance system and method for mechanical maintenance

Country Status (1)

Country Link
CN (1) CN112085223A (en)

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113052561A (en) * 2021-04-01 2021-06-29 苏州惟信易量智能科技有限公司 Flow control system and method based on wearable device
CN114792404A (en) * 2022-04-26 2022-07-26 北京大学 AR enhancement auxiliary repair control platform, method, medium and equipment
CN114792404B (en) * 2022-04-26 2022-11-15 北京大学 AR enhancement auxiliary repair control platform, method, medium and equipment

Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104834379A (en) * 2015-05-05 2015-08-12 江苏卡罗卡国际动漫城有限公司 Repair guide system based on AR (augmented reality) technology
CN109919331A (en) * 2019-02-15 2019-06-21 华南理工大学 A kind of airborne equipment intelligent maintaining auxiliary system and method
CN109961030A (en) * 2019-03-18 2019-07-02 北京邮电大学 Pavement patching information detecting method, device, equipment and storage medium
CN110187773A (en) * 2019-06-04 2019-08-30 北京悉见科技有限公司 Method, equipment and the computer storage medium of augmented reality glasses control
WO2019214313A1 (en) * 2018-05-08 2019-11-14 阿里巴巴集团控股有限公司 Interactive processing method, apparatus and processing device for vehicle loss assessment and client terminal
US20200026257A1 (en) * 2018-07-23 2020-01-23 Accenture Global Solutions Limited Augmented reality (ar) based fault detection and maintenance
CN210382798U (en) * 2019-06-03 2020-04-24 南方电网科学研究院有限责任公司 Hidden intelligent AR electric power overhauls helmet
KR102104326B1 (en) * 2019-06-28 2020-04-27 한화시스템 주식회사 Maintenance training system and method based on augmented reality


Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
LUO Youwen; WANG Wei; QU Jue: "Automatic interaction design for guided maintenance based on Faster R-CNN", Computer Engineering and Applications, no. 12 *


Similar Documents

Publication Publication Date Title
CN108369643B (en) Method and system for 3D hand skeleton tracking
EP4009231A1 (en) Video frame information labeling method, device and apparatus, and storage medium
CN111771231A (en) Matching mesh for avatars
CN111126272A (en) Posture acquisition method, and training method and device of key point coordinate positioning model
JP2021503662A (en) Neural network model training
WO2022095721A1 (en) Parameter estimation model training method and apparatus, and device and storage medium
US11945125B2 (en) Auxiliary photographing device for dyskinesia analysis, and control method and apparatus for auxiliary photographing device for dyskinesia analysis
CN112184705A (en) Human body acupuncture point identification, positioning and application system based on computer vision technology
CN104978764A (en) Three-dimensional face mesh model processing method and three-dimensional face mesh model processing equipment
CN112070782B (en) Method, device, computer readable medium and electronic equipment for identifying scene contour
JP7335370B2 (en) Computer-implemented method, data processing apparatus and computer program for generating 3D pose estimation data
CN112085223A (en) Guidance system and method for mechanical maintenance
CN111178170A (en) Gesture recognition method and electronic equipment
CN113591763A (en) Method and device for classifying and identifying face shape, storage medium and computer equipment
CN113419623A (en) Non-calibration eye movement interaction method and device
CN113643329B (en) Twin attention network-based online update target tracking method and system
WO2021021085A1 (en) Modification of projected structured light based on identified points within captured image
CN115023742A (en) Facial mesh deformation with detailed wrinkles
CN109509262B (en) Intelligent enhanced modeling method and device based on artificial intelligence
US20190377935A1 (en) Method and apparatus for tracking features
CN105718050B (en) Real-time human face interaction method and system
Will et al. An Optimized Marker Layout for 3D Facial Motion Capture.
US11954943B2 (en) Method for generating synthetic data
CN116524572B (en) Face accurate real-time positioning method based on self-adaptive Hope-Net
CN112667088B (en) Gesture application identification method and system based on VR walking platform

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination