CN113052561A - Flow control system and method based on wearable device

Flow control system and method based on wearable device

Info

Publication number
CN113052561A
Authority
CN
China
Prior art keywords
image
module
display
information
audio
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202110358518.2A
Other languages
Chinese (zh)
Inventor
Wang Sen (王森)
Li Zhichao (李志超)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Suzhou Weixin Yiliang Intelligent Technology Co., Ltd.
Original Assignee
Suzhou Weixin Yiliang Intelligent Technology Co., Ltd.
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Suzhou Weixin Yiliang Intelligent Technology Co., Ltd.
Priority to CN202110358518.2A
Publication of CN113052561A
Legal status: Pending

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06Q - INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q10/00 - Administration; Management
    • G06Q10/10 - Office automation; Time management
    • G06Q10/103 - Workflow collaboration or project management
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 - Pattern recognition
    • G06F18/20 - Analysing
    • G06F18/21 - Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F18/214 - Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06N - COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N20/00 - Machine learning
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06Q - INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q10/00 - Administration; Management
    • G06Q10/20 - Administration of product repair or maintenance
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06Q - INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q50/00 - Systems or methods specially adapted for specific business sectors, e.g. utilities or tourism
    • G06Q50/04 - Manufacturing
    • G - PHYSICS
    • G10 - MUSICAL INSTRUMENTS; ACOUSTICS
    • G10L - SPEECH ANALYSIS OR SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING; SPEECH OR AUDIO CODING OR DECODING
    • G10L15/00 - Speech recognition
    • G10L15/22 - Procedures used during a speech recognition process, e.g. man-machine dialogue
    • G - PHYSICS
    • G10 - MUSICAL INSTRUMENTS; ACOUSTICS
    • G10L - SPEECH ANALYSIS OR SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING; SPEECH OR AUDIO CODING OR DECODING
    • G10L15/00 - Speech recognition
    • G10L15/22 - Procedures used during a speech recognition process, e.g. man-machine dialogue
    • G10L2015/223 - Execution procedure of a spoken command
    • Y - GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02 - TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02P - CLIMATE CHANGE MITIGATION TECHNOLOGIES IN THE PRODUCTION OR PROCESSING OF GOODS
    • Y02P90/00 - Enabling technologies with a potential contribution to greenhouse gas [GHG] emissions mitigation
    • Y02P90/30 - Computing systems specially adapted for manufacturing

Abstract

The embodiments of the present application disclose a flow control system based on a wearable device. The wearable device includes an image acquisition device and a display device, and the flow control system includes at least a recording module, a monitoring module, and a prompt module. The recording module is configured to control the image acquisition device to record the manufacturing, assembly, and/or repair process of a first part. The monitoring module is configured to acquire a first image captured by the image acquisition device and to identify the current operation procedure based on the first image using an image recognition model. The prompt module is configured to control the display device to display corresponding guidance information based on the operation procedure.

Description

Flow control system and method based on wearable device
Technical Field
The present disclosure relates to the field of process control technologies, and in particular, to a process control system and method based on a wearable device.
Background
With the continuous development of science and technology, mechanical equipment has become increasingly automated in production, manufacturing, assembly, inspection, maintenance, and similar processes. However, many fields (e.g., military, aviation, aerospace) still require technicians to perform process work manually. To better assist technicians in completing related work in production, manufacturing, assembly, inspection, and maintenance, the embodiments of this specification provide a flow control system and method based on a wearable device.
Disclosure of Invention
One embodiment of the present application provides a process control system based on a wearable device, where the wearable device includes an image acquisition device and a display device, and the process control system includes at least a recording module, a monitoring module, and a prompt module. The recording module is configured to control the image acquisition device to record the manufacturing, assembly, and/or repair process of a first part. The monitoring module is configured to acquire a first image captured by the image acquisition device and to identify the current operation procedure based on the first image using an image recognition model. The prompt module is configured to control the display device to display corresponding guidance information based on the operation procedure.
In some embodiments, the wearable device further includes an audio acquisition device, and the process control system further includes a voice recognition module. The voice recognition module is configured to acquire a first audio captured by the audio acquisition device and to identify whether the first audio contains a voice instruction regarding image acquisition. The recording module is configured to control the image acquisition device to record the manufacturing, assembly, and/or repair process of the first part when the first audio is recognized to contain such a voice instruction.
In some embodiments, the image recognition model is a machine learning model, and its training process includes: obtaining a plurality of sample pairs, each comprising a sample image and an operation procedure label corresponding to the sample image; and training an initial image recognition model based on the plurality of sample pairs to obtain a trained image recognition model.
In some embodiments, the plurality of sample pairs includes at least two sample pairs that have the same operation procedure label but whose sample images were captured from different angles.
In some embodiments, the display device is an augmented reality display device, and the process control system further includes a control module configured to: acquire a control image sequence captured by the image acquisition device, where each image of the control image sequence contains a specific control gesture; determine a movement trajectory of the control gesture based on the control image sequence; and control the display position of the guidance information in the display device based on the movement trajectory.
In some embodiments, the display device is an augmented reality display device, the wearable device further includes an audio acquisition device, and the process control system further includes a control module configured to: acquire a second audio captured by the audio acquisition device; identify whether the second audio contains a voice control instruction related to the guidance information; and, when it does, control display parameters of the guidance information in the display device, where the display parameters include brightness, size, and/or color.
In some embodiments, the wearable device further includes an identification device for identifying identity information of a user, and the control module is further configured to: acquire a historical usage record of the user according to the user's identity information; and determine a display position and/or display parameter of the guidance information in the display device based on the historical usage record.
In some embodiments, the identification device includes a voiceprint recognition device and/or an iris recognition device.
One embodiment of the present application provides a process control method based on a wearable device, where the wearable device includes an image acquisition device and a display device. The process control method includes: controlling the image acquisition device to record the manufacturing, assembly, and/or repair process of a first part; acquiring a first image captured by the image acquisition device; identifying the current operation procedure based on the first image using an image recognition model; and controlling the display device to display corresponding guidance information based on the operation procedure.
One embodiment of the present application provides a wearable device including an image acquisition device, a display device, and a processor. The processor is configured to: control the image acquisition device to record the manufacturing, assembly, and/or repair process of a first part; acquire a first image captured by the image acquisition device; identify the current operation procedure based on the first image using an image recognition model; and control the display device to display corresponding guidance information based on the operation procedure.
Drawings
The present description will be further explained by way of exemplary embodiments, which will be described in detail by way of the accompanying drawings. These embodiments are not intended to be limiting, and in these embodiments like numerals are used to indicate like structures, wherein:
FIG. 1 is a schematic diagram of a wearable device based process control system according to some embodiments of the present application;
fig. 2 is a schematic diagram of a wearable device according to some embodiments of the present application;
FIG. 3 is a diagram of an exemplary application scenario for a wearable device based process control system according to some embodiments of the present application;
FIG. 4 is an exemplary flow chart of a wearable device based flow control method according to some embodiments of the present application;
FIG. 5 is a diagram of an exemplary application scenario of a wearable device based process control system according to yet another embodiment of the present application;
FIG. 6 is an exemplary flow chart of a wearable device based flow control method according to yet another embodiment of the present application;
FIG. 7 is a diagram of an exemplary application scenario of a wearable device based process control system according to yet another embodiment of the present application;
FIG. 8 is an exemplary flow chart of a wearable device based flow control method according to yet another embodiment of the present application;
FIG. 9 is a schematic diagram of a training process for an image recognition model according to some embodiments of the present application.
Detailed Description
In order to more clearly illustrate the technical solutions of the embodiments of the present disclosure, the drawings used in the description of the embodiments will be briefly described below. It is obvious that the drawings in the following description are only examples or embodiments of the present description, and that for a person skilled in the art, the present description can also be applied to other similar scenarios on the basis of these drawings without inventive effort. Unless otherwise apparent from the context, or otherwise indicated, like reference numbers in the figures refer to the same structure or operation.
It should be understood that "system," "apparatus," "unit," and/or "module" as used herein are ways of distinguishing different components, elements, parts, portions, or assemblies at different levels. However, these terms may be replaced by other expressions that accomplish the same purpose.
As used in this specification and the appended claims, the singular forms "a," "an," and "the" are intended to include the plural forms as well, unless the context clearly indicates otherwise. In general, the terms "comprise" and "include" merely indicate that the explicitly identified steps and elements are included; the steps and elements do not form an exclusive list, and a method or apparatus may also include other steps or elements.
Flow charts are used in this description to illustrate operations performed by a system according to embodiments of the present description. It should be understood that the operations are not necessarily performed in the exact order shown. Rather, the various steps may be processed in reverse order or simultaneously. Other operations may also be added to these processes, or one or more steps may be removed from them.
The embodiments of the present application relate to a flow control system and method based on a wearable device. The process control system and method can be applied to processes such as the production, manufacturing, assembly, inspection, and maintenance of mechanical equipment. In some embodiments, the process control system and method may be applied to mechanical equipment in fine-engineering fields (e.g., military, aviation, aerospace). In some embodiments, the process control system and method may be applied to the production and manufacturing of aircraft parts, the assembly of aircraft seats, the inspection and repair of aircraft casings, and the manufacturing, assembly, and repair of rocket parts, effectively addressing procedures that are complex and difficult to monitor and implement. In some embodiments, the process control system and method may be applied to other fields, such as production, manufacturing, assembly, inspection, or repair in the automobile, chip, jewelry, high-speed rail, shipbuilding, and engineering machinery fields. The process control system and method can provide one or more functions, such as recording part or all of the process, identifying unqualified parts in the process, guiding user operations, and warning of non-standard operation processes. The wearable-device-based flow control system and method can thereby achieve one or more beneficial effects, such as freeing the user's hands, improving operation efficiency and accuracy, and ensuring that parts in the process are qualified and that operations are standardized.
Fig. 1 is a schematic diagram of a wearable device based process control system according to some embodiments of the present application. As shown in fig. 1, the process control system 1000 may be implemented based on the wearable device 100. Wearable device 100 may include smart bracelets, smart footwear, smart glasses, smart helmets, smart watches, smart clothing, smart backpacks, smart accessories, and the like, or any combination thereof. In some embodiments, wearable device 100 may include smart glasses that a user may wear during manufacturing, assembly, and/or repair operations. In some embodiments, wearable device 100 may comprise a smart head-mounted device that may be worn on the head of a user. In some embodiments, wearable device 100 may include a smart head-mounted device and a smart watch. By implementing the flow control of the embodiments of the present application based on a wearable device, the user's hands are freed without hindering the user's operations.
In some embodiments, as shown in fig. 1, wearable device 100 may include one or more combinations of processor 110, image capture device 120, display device 130, audio capture device 140, identification device 150, voice playback device 160, communication device 170, and the like.
The image capture device 120 may be used to capture images during process control. In some embodiments, the image capture device may capture images of the first part during manufacturing, assembly, repair, and the like. In some embodiments, the first part may comprise a single component, a combination of multiple components, an assembly, or the like. In some embodiments, the first part may comprise a component/part in a fine-engineering field (e.g., military, aviation, aerospace) with high requirements for quality and processing. In some embodiments, the first part may include an aircraft ejection seat, an aircraft shell, a rocket screw nut, and the like. In some embodiments, image capture device 120 may include one or more cameras. In some embodiments, the image capture device 120 may include one or a combination of 2D cameras, 3D cameras, infrared cameras, and the like. The image capture device 120 may be used to capture two-dimensional or three-dimensional image data of an object, such as the first part.
Display device 130 may be used to display information and/or images during process control. In some embodiments, the display device 130 may display prompt information, guidance information, warning information, and the like. In some embodiments, display device 130 may display a whole and/or partial image of the first part. In some embodiments, display device 130 may include one or more display screens. In some embodiments, the display device 130 may include one or a combination of LCD display screens, LED display screens, high-definition display screens, transparent display screens, and the like. In some embodiments, display device 130 may display multiple items of content simultaneously on one or more display screens. In some embodiments, the left screen displays an image of the first part while the right screen displays a corresponding reminder.
In some embodiments, the front side of the first part is displayed full screen and the back side of the first part is displayed picture-in-picture. In some embodiments, display device 130 may be an Augmented Reality (AR) display device. An AR display device can effectively display relevant information (such as prompt information, guidance information, warning information, etc.) without obstructing the user's observation of the object.
In some embodiments, the display device may include an augmented reality display screen capable of displaying the video images captured (e.g., in real time) by the image capture device 120 and displaying information (e.g., prompt information, guidance information, warning information, etc.) superimposed on the video images. In some embodiments, the display device may be an eyeglass lens or a helmet visor, or the like. The user can observe real objects through the eyeglass lens or helmet visor, while display information (such as prompt information, guidance information, warning information, etc.) is superimposed on the lens or visor. In some embodiments, the display parameters of display device 130 may include brightness, size, and/or color, among others.
The audio capture device 140 may be used to capture audio during process control. In some embodiments, the audio capture device 140 may capture a first audio, a second audio, and so on. The first audio or the second audio may be sound information captured by the audio capture device 140 in real time or periodically. In some embodiments, audio capture device 140 may include one or more microphones.
In some embodiments, the audio capture device 140 may include one or a combination of moving-coil microphones, condenser microphones, electret microphones, and the like. In some embodiments, the audio capture device 140 may be focused on capturing the voice of the user of the wearable device 100. In some embodiments, the audio capture device 140 may be disposed near the user's mouth.
In some embodiments, the audio capture device 140 may include a noise reduction module that may reduce the capture of sounds other than speech information.
The identification device 150 may be used to identify the identity information of a user. In some embodiments, the identification device 150 may identify the identity of the user of the wearable device. In some embodiments, identification device 150 may identify the identity of a remote user. In some embodiments, the identification device 150 may include one or more of a voiceprint recognition device, an iris recognition device, a face recognition device, a fingerprint recognition device, and the like.
The voice playing device 160 may be used to play voice information during process control. In some embodiments, the voice playing device 160 may be used to play voice prompt information, voice guidance information, and the like. In some embodiments, the voice playing device 160 may play voice warning information when the quality of the first part is detected as not meeting the standard. In some embodiments, the voice playback device 160 may include one or a combination of speakers, headphones, and the like.
The communication device 170 may be used to enable communication between the wearable device 100 (e.g., the processor 110) and other devices (e.g., remote servers, remote user devices, other wearable devices, etc.). In some embodiments, the communication device 170 may include one or a combination of a Bluetooth communication device, a Wi-Fi communication device, a near-field communication device, and the like. In some embodiments, the processor 110 may send information and data from the process control process (e.g., the first image sequence, the operation process information, etc.) to a remote server or a remote user device through the communication device 170 and obtain corresponding feedback information. In some embodiments, through the communication device 170, the user of the wearable device 100 may interact (e.g., via video or voice) with a remote user or another wearable device user.
In some embodiments, the processor 110 may be used to process information and/or data in a process control procedure. In some embodiments, the processor 110 may include one or more of a recording module 111, a monitoring module 112, a prompting module 113, a voice recognition module 114, and a control module 115.
The recording module 111 may be used to record information/data during process control. In some embodiments, the recording module 111 may control the image capture device 120 to record the manufacturing, assembly, and/or repair process of the first part. In some embodiments, the recording module 111 may control the image capture device 120 to record video of the entire manufacturing, assembly, and/or repair process of the first part. In some embodiments, the recording module 111 may control the image capture device 120 to periodically (e.g., every 3 seconds, 10 seconds, etc.) photograph the manufacturing, assembly, and/or repair process of the first part. Recording the manufacturing, assembly, and/or repair process of the first part helps effectively control the manufacturing and assembly quality of the first part, and also facilitates tracing the source of any problem that arises.
In some embodiments, the recording module 111 may be configured to control the image capture device 120 to record the manufacturing, assembly, and/or repair process of the first part when the first audio is recognized to contain a voice instruction regarding image capture. The first audio may be sound information captured by the audio capture device 140 in real time or periodically. In some embodiments, identifying whether the first audio includes a voice instruction regarding image capture may be performed by the voice recognition module 114. A voice instruction may be a control command uttered by the user by voice. In some embodiments, the voice instructions regarding image capture may include "shoot," "take picture," "record," and other image-capture-related instructions. The recording module 111 may control the image capturing device 120 to record (e.g., take pictures, record video, etc.) the manufacturing, assembly, and/or repair process of the first part according to the voice instruction related to image capture. Recording the manufacturing, assembly, and/or repair process of the first part according to the user's voice instructions makes operation convenient, allows timely recording according to the user's needs, and reduces unnecessary recording. In some embodiments, the recording module 111 may control the image capturing device 120 to stop recording according to a voice instruction to stop image capture. In some embodiments, the voice instruction to stop image capture may include "stop shooting," "stop recording," and the like.
In some embodiments, the recording module 111 may be configured to control the image capture device 120 to perform a pre-shot to obtain a pre-shot image. In some embodiments, the pre-shooting process may be: a pre-shot image is taken at regular intervals (e.g., 3 seconds, 5 seconds, etc.). In some embodiments, the recording module 111 can determine whether at least a portion of the first part is included in the pre-shot image. In some embodiments, the recording module 111 can identify whether at least a portion of the first part is included in the pre-shot image using image matching, an image recognition model, or remote user identification. In some embodiments, the recording module 111 can determine whether the pre-shot image includes at least a portion of the first part by comparing the pre-shot image with an effect drawing, design drawing, or material drawing of the first part. In some embodiments, the recording module 111 can make this determination through a pre-designed image matching model. In some embodiments, the recording module 111 may transmit the pre-shot image to a remote user (e.g., a technical expert), who determines whether at least a portion of the first part is included in it. The recording module 111 may control the image capture device to record the manufacturing, assembly, and/or repair process of the first part when it is determined that at least a portion of the first part is included in the pre-shot image. Pre-shooting automatically monitors, before formal recording begins, whether the first part can be captured, and starts recording as soon as it can; this ensures the first part is recorded in time, prevents the user from forgetting, and also reduces unnecessary recording. In some embodiments, the resolution of pre-shot images may be lower than that of normally recorded images, which reduces storage space and increases processing speed. In some embodiments, the shooting parameters for recording can also be determined by pre-shooting. In some embodiments, pre-shooting may include steps such as trial shooting, anchoring, and calibration. In some embodiments, the pre-shot images may be multiple images taken with different shooting parameters, and the recording module 111 may automatically adjust the shooting parameters according to the pre-shot images. In some embodiments, the recording module 111 may use the shooting parameters corresponding to the most effective pre-shot image (as determined by machine or by a remote user) as the parameters for subsequent shooting/recording. The shooting parameters may include, but are not limited to, aperture, focal length, exposure, illumination, lens, angle, etc. Pre-shooting can avoid a wrong shooting range, determine suitable shooting conditions or parameters, and effectively improve the part-recording result. In some embodiments, the recording module 111 may analyze the images obtained during recording and stop recording when an image obtained during recording does not include the first part.
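By way of illustration only, the Python sketch below shows one way the pre-shooting loop described above could be structured. Every name in it (camera, capture_preview, part_detector, contains_part, suggest_params, start_recording) is a hypothetical stand-in; the patent does not define an API.

```python
import time

PREVIEW_INTERVAL_S = 3            # pre-shoot one frame every 3 seconds
PREVIEW_RESOLUTION = (640, 480)   # lower resolution than formal recording

def wait_for_part_and_record(camera, part_detector):
    """Pre-shoot periodically until the first part appears, then record."""
    while True:
        frame = camera.capture_preview(resolution=PREVIEW_RESOLUTION)
        # the check may be image matching, an image recognition model,
        # or a remote expert's judgment, per the embodiments above
        if part_detector.contains_part(frame):
            # optionally derive shooting parameters from the best preview
            params = part_detector.suggest_params(frame)
            camera.start_recording(**params)
            return
        time.sleep(PREVIEW_INTERVAL_S)
```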
The monitoring module 112 may be used to monitor the process control process. In some embodiments, the monitoring module 112 may be used to monitor whether parts in the process meet the standards, the current operation procedure conditions, whether the operation process is standard, and the like.
In some embodiments, the monitoring module 112 may monitor whether the first part meets a criterion.
In some embodiments, the monitoring module 112 may acquire a first image captured by the image acquisition device. In some embodiments, at least a portion of the first part may be included in the first image. In some embodiments, the recording module 111 can identify (or determine) whether an image captured by the image capture device contains the first part and mark images containing the first part with a corresponding label; the monitoring module 112 may then obtain a first image containing the first part according to the label. In some embodiments, the monitoring module 112 may identify (e.g., using an image recognition model) the images captured by the image capture device to obtain a first image containing at least a portion of the first part. Using an image recognition model to identify whether an image contains at least a portion of the first part allows the first image to be acquired quickly and accurately. In some embodiments, the monitoring module 112 may periodically (e.g., every 3 seconds, 10 seconds, etc.) acquire the first image captured by the image acquisition device. In some embodiments, the monitoring module 112 may acquire the first image in real time (e.g., every 0.02 second, 0.01 second, etc.).
In some embodiments, the monitoring module 112 may identify information related to the first part based on the first image using an image recognition model. In some embodiments, the monitoring module 112 may identify the information related to the first part using a trained image recognition model. The information related to the first part may include one or a combination of number, size, shape, model, material, color, surface condition, and the like. The image recognition model may be a machine learning model, which may include, but is not limited to, one or a combination of a neural network model, a support vector machine model, a k-nearest neighbor model, a decision tree model, and the like. The neural network model may include one or more of LeNet, GoogLeNet, ImageNet, AlexNet, VGG, ResNet, and the like. In some embodiments, as shown in fig. 9, the training process of the image recognition model may include:
(1) A plurality of sample pairs are obtained; each sample pair may include a sample image including at least a portion of a sample part and a sample image label including information about the corresponding sample part (e.g., number, size, model, material, surface condition, etc.). In some embodiments, the sample image may be an image acquired by the recording module 111 over a past period of time (e.g., a day, a week, a month, etc.). In some embodiments, the sample image may be an image taken specifically for each sample part. The sample image label corresponding to each sample image can be obtained by manual labeling or machine labeling. In some embodiments, an operator (e.g., an expert in the field) may review the sample images and label each sample image with information about the corresponding sample part (e.g., the number, size, model, material, surface condition, etc. of the sample part). In some embodiments, the plurality of sample pairs may include at least two sample pairs whose sample images contain the same sample part captured from different shooting angles. In some embodiments, the sample images of the two sample pairs may be front and side images, respectively, of the same sample part. Training with sample images of the same sample part captured from multiple angles gives the resulting image recognition model better robustness. (2) Based on the plurality of sample pairs, the initial image recognition model is trained to obtain a trained image recognition model. In some embodiments, the training method may include back propagation, gradient descent, and the like.
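For illustration only, the following is a minimal PyTorch sketch of steps (1) and (2), assuming the labeled sample images are organized into one folder per label (an assumption; the patent does not prescribe a data layout, framework, or hyperparameters):

```python
import torch
from torch import nn
from torch.utils.data import DataLoader
from torchvision import datasets, models, transforms

# (1) sample pairs: each image is paired with its label via the folder name
transform = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
])
train_set = datasets.ImageFolder("sample_pairs/", transform=transform)  # hypothetical path
loader = DataLoader(train_set, batch_size=32, shuffle=True)

# (2) train an initial recognition model (ResNet is one of the named options)
model = models.resnet18(num_classes=len(train_set.classes))
criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.SGD(model.parameters(), lr=1e-3, momentum=0.9)

model.train()
for epoch in range(10):
    for images, labels in loader:
        optimizer.zero_grad()
        loss = criterion(model(images), labels)
        loss.backward()    # back propagation
        optimizer.step()   # gradient descent update
```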
In some embodiments, the plurality of sample pairs may include a simulated sample pair, which includes a simulated sample image including at least a portion of a simulated sample part and a simulated sample image label including information about the simulated sample part (e.g., number, size, model, material, surface condition, etc.). In some embodiments, the simulated sample part may be a virtual simulated part modeled by simulation software (e.g., three-dimensional modeling software). By applying different simulation treatments to the simulated sample part in the simulation software, simulated forms of the part under different conditions can be obtained. In some embodiments, scratches, defects, etc. on the surface of the part may be simulated by simulation software. In some embodiments, parts made of different materials may be simulated by simulation software. In some embodiments, the simulated sample part may be a physical simulated part manufactured from at least some of the features (e.g., surface conditions) of a real part. In some embodiments, the structure, dimensions, materials, etc. of the physical simulated sample part and the real part may be the same or different. In some embodiments, the physical simulated sample part may be hollow or filled with an equal mass of another substance. In some embodiments, the physical simulated sample part may be an equal-scale enlarged version of the real part (e.g., an enlarged chip or piece of jewelry) or an equal-scale reduced version (e.g., a reduced aircraft housing). The physical simulated sample part can be made according to a real part that meets the standard (a positive sample) or a real part that does not meet the standard (a negative sample). In fine-engineering fields (such as military, aviation, and aerospace), real parts are costly and complex to produce, and real parts that do not meet the standard are scarce; simulated sample parts therefore provide a practical way to obtain sufficient training samples, especially negative samples. In some embodiments, the simulated sample image label may be a factory parameter of the simulated sample part. In some embodiments, the simulated sample image label may be labeled manually or by machine (e.g., by the simulation software).
In some embodiments, the monitoring module 112 can detect whether the first part meets the standard based on the information about the first part. In some embodiments, the monitoring module 112 may detect whether the number of first parts is correct, whether the model matches, whether the dimensions are within the standard range, whether the surface is smooth and flawless, and the like. Detecting whether the first part meets the standard makes it possible to find nonconforming parts (e.g., quality problems, model mismatches) in time, so that remedial measures can be taken as early as possible or the manufacturing and assembly process can be stopped, improving the quality and efficiency of the process.
In some embodiments, the monitoring module 112 can be configured to detect the location of a defect in the first part when the surface condition of the first part does not meet the standard. In some embodiments, the monitoring module 112 may compare the nonconforming surface of the first part with the surface of a standard part and determine the location of the difference between the two as the location of the defect. In some embodiments, the monitoring module 112 may detect whether the surface of the first part includes an irregular shape (e.g., a scratch on an aircraft surface, a crack on a part surface); if so, the location of the irregular shape is determined as the location of the defect. Because surface defects are generally irregular in shape while parts are generally regular in shape, detecting irregular shapes can quickly and effectively locate defects on the part surface.
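As a rough illustration of these two ideas, the OpenCV sketch below differences the part image against a standard-part image and keeps irregular (non-convex) difference regions as candidate defects. It assumes the two images are already aligned and equally sized; the threshold and area values are illustrative only.

```python
import cv2
import numpy as np

def locate_defects(part_img: np.ndarray, standard_img: np.ndarray):
    """Return bounding boxes of irregular regions where the part
    deviates from the standard part (images assumed aligned)."""
    gray_part = cv2.cvtColor(part_img, cv2.COLOR_BGR2GRAY)
    gray_std = cv2.cvtColor(standard_img, cv2.COLOR_BGR2GRAY)
    diff = cv2.absdiff(gray_part, gray_std)            # surface differences
    _, mask = cv2.threshold(diff, 40, 255, cv2.THRESH_BINARY)
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    boxes = []
    for c in contours:
        area = cv2.contourArea(c)
        if area < 10:                                  # ignore pixel noise
            continue
        hull_area = cv2.contourArea(cv2.convexHull(c))
        # scratches and cracks are irregular: far from their convex hull
        if hull_area > 0 and area / hull_area < 0.9:
            boxes.append(cv2.boundingRect(c))
    return boxes
```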
In some embodiments, the monitoring module 112 may monitor the current operating procedure conditions. An operation procedure may refer to a series of operations performed in a given order. In some embodiments, the operation procedures may include manufacturing procedures, assembly procedures, repair procedures, and the like. In some embodiments, the manufacturing procedures for casting the first part may include: quenching, tempering, shot blasting, polishing, and the like. In some embodiments, the assembly procedures for the first part may include: cleaning, balancing, welding, riveting, adjusting, checking, and the like.
In some embodiments, the monitoring module 112 may acquire a first image captured by the image acquisition device. In some embodiments, the first image may reflect the current operating procedure conditions. In some embodiments, the monitoring module 112 may periodically (e.g., every 3 seconds, 10 seconds, etc.) acquire the first image captured by the image acquisition device. In some embodiments, the monitoring module 112 may acquire the first image in real time (e.g., every 0.02 second, 0.01 second, etc.). In some embodiments, the monitoring module 112 may identify the current procedure using an image recognition model based on the first image. In some embodiments, the monitoring module 112 may input the first image to be identified into the image recognition model and obtain the current operation procedure corresponding to the first image after processing by the model. In some embodiments, the image recognition model may be a machine learning model, which may include, but is not limited to, a neural network model, a support vector machine model, a k-nearest neighbor model, a decision tree model, and/or the like. The neural network model may include one or more of LeNet, GoogLeNet, ImageNet, AlexNet, VGG, ResNet, and the like. In some embodiments, as shown in fig. 9, the training process of the image recognition model may include:
(1) A plurality of sample pairs are obtained; each sample pair includes a sample image and an operation procedure label corresponding to the sample image. In some embodiments, the sample image may be an image acquired by the recording module 111 over a past period of time (e.g., a day, a week, a month, etc.) that reflects a certain operation procedure. In some embodiments, the sample image may be an image taken specifically for each procedure. In some embodiments, the sample image may be an image taken during the quenching process. In some embodiments, the sample image may be an image taken during the polishing process. In some embodiments, the sample image may include at least a portion of the first part; alternatively, the first part may not be included in the sample image. The operation procedure label corresponding to each sample image can be obtained by manual labeling or machine labeling. In some embodiments, an operator (e.g., an expert in the field) may review the sample images and label each sample image with the corresponding operation procedure (e.g., a procedure number, a procedure name, etc.). In some embodiments, the plurality of sample pairs may include at least two sample pairs that have the same operation procedure label but whose sample images were captured from different angles. Training with sample images of the same operation procedure captured from multiple angles gives the resulting image recognition model better robustness. (2) Based on the plurality of sample pairs, the initial image recognition model is trained to obtain a trained image recognition model. In some embodiments, the training method may include back propagation, gradient descent, and the like.
In some embodiments, the monitoring module 112 may monitor whether the operation process is standard. The operation process may reflect the operations performed during manufacturing, assembly, and/or repair.
In some embodiments, the monitoring module 112 may acquire a first image sequence captured by the image acquisition device. The first image sequence may be a plurality of images acquired in succession. In some embodiments, the first image sequence may be a plurality of images acquired at 0.2-second intervals over 5 seconds. In some embodiments, the first image sequence may be a segment of video. In some embodiments, the monitoring module 112 may acquire a plurality of images continuously captured under the same operation procedure as the first image sequence, so as to monitor the operation process under that procedure. In some embodiments, the first image sequence may be a plurality of images acquired in succession during the quenching procedure. In some embodiments, the recording module 111 may divide the images captured by the image acquisition device into a plurality of image sequences according to the operation procedures and attach a corresponding procedure label to each image sequence (e.g., each image sequence corresponds to one or more operation procedures); the monitoring module 112 may then obtain one of the image sequences as the first image sequence to monitor the one or more operation procedures corresponding to it.
In some embodiments, the monitoring module 112 may identify the current procedure and operation process information based on the first image sequence. In some embodiments, the monitoring module 112 may identify the current procedure using an image recognition model based on any one of the images in the first image sequence. In some embodiments, the monitoring module 112 may extract any one image from the first image sequence and input it into the image recognition model; after processing by the model, the current operation procedure corresponding to that image, and hence to the first image sequence, is obtained. In some embodiments, the image recognition model may be a machine learning model, which may include, but is not limited to, a neural network model, a support vector machine model, a k-nearest neighbor model, a decision tree model, and/or the like. The neural network model may include one or more of LeNet, GoogLeNet, ImageNet, AlexNet, VGG, ResNet, and the like. In some embodiments, as shown in fig. 9, the training process of the image recognition model may include: (1) obtaining a plurality of sample pairs, each comprising a sample image and an operation procedure label corresponding to the sample image; and (2) training the initial image recognition model based on the plurality of sample pairs to obtain a trained image recognition model. In some embodiments, the plurality of sample pairs includes at least two sample pairs that have the same operation procedure label but whose sample images were captured from different angles. In some embodiments, the monitoring module 112 may directly determine the current operation procedure according to the procedure label corresponding to the first image sequence.
In some embodiments, the operation process information may include, but is not limited to, one or more of operation duration, number of operation steps, order of operation steps, and the like. In some embodiments, for each image in the first image sequence, the monitoring module 112 may identify its corresponding operation (e.g., based on an image recognition model). In some embodiments, the monitoring module 112 may determine the operation process information according to the one or more operations identified in the first image sequence and the corresponding number of images (or the duration of the image sequence). Identifying the operation corresponding to each image first and then determining the operation process information can effectively improve the accuracy of the operation process information. In some embodiments, when the first image sequence reflects that the quenching operation occurred within 12 consecutive seconds, the monitoring module 112 may identify the operation process information as: the duration of the quenching operation was 12 seconds. In some embodiments, the operation process information identified by the monitoring module 112 based on the first image sequence may be: part A was installed first (3 seconds), part B next (5 seconds), and part C last (2 seconds). In some embodiments, the monitoring module 112 may identify the current procedure and operation process information in the first image sequence using an image sequence recognition model. The image sequence recognition model may be a deep learning model capable of processing image sequences. In some embodiments, the image sequence recognition model may include an RNN, a CRNN, or the like. The current operation procedure and operation process information corresponding to the first image sequence can be obtained by inputting the first image sequence into the image sequence recognition model.
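A minimal sketch of this per-frame aggregation is given below, assuming a hypothetical classify_frame wrapper around the image recognition model and a fixed sampling interval:

```python
from itertools import groupby
from typing import Callable, List, Tuple

def summarize_operations(frames: List, frame_interval_s: float,
                         classify_frame: Callable[[object], str]
                         ) -> List[Tuple[str, float]]:
    """Collapse per-frame operation labels into (operation, duration) runs."""
    labels = [classify_frame(f) for f in frames]   # one label per image
    return [(op, sum(1 for _ in run) * frame_interval_s)
            for op, run in groupby(labels)]

# e.g., frames sampled every 0.2 s might yield:
# [("install part A", 3.0), ("install part B", 5.0), ("install part C", 2.0)]
```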
In some embodiments, the monitoring module 112 may determine whether the operation process information meets the operating specification corresponding to the operation procedure. In some embodiments, the monitoring module 112 may obtain the corresponding operating specification based on the operation procedure. In some embodiments, the monitoring module 112 may obtain the operating specification corresponding to the operation procedure from a storage device of the wearable device 100 or from a remote server. The operating specification may include, but is not limited to, one or a combination of standard operation duration, standard number of operation steps, standard order of operation steps, and the like. In some embodiments, the operating specification for the quenching procedure may be: quench for 13 seconds. In some embodiments, the operating specification for part installation may be: install parts A, B, and C in sequence. By comparing the operation process information with the corresponding operating specification, the monitoring module 112 can determine whether the operation process information conforms to it. In some embodiments, the monitoring module 112 may send the first image sequence to a remote user (e.g., a technical expert) and obtain the remote user's determination of whether the operation process information complies with the corresponding operating specification.
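A minimal sketch of such a specification check follows, assuming both the observed process and the specification are lists of (operation, duration) pairs; the data shapes and tolerance are assumptions for illustration:

```python
from typing import List, Tuple

def check_specification(process: List[Tuple[str, float]],
                        spec: List[Tuple[str, float]],
                        tolerance_s: float = 0.5) -> List[str]:
    """Return warnings; an empty list means the process conforms."""
    warnings = []
    if [op for op, _ in process] != [op for op, _ in spec]:
        warnings.append("operation steps missing or out of order")
    for (op, dur), (_, std_dur) in zip(process, spec):
        if dur < std_dur - tolerance_s:
            warnings.append(f"{op}: duration {dur}s below standard {std_dur}s")
    return warnings

# check_specification([("quenching", 12.0)], [("quenching", 13.0)])
# -> ["quenching: duration 12.0s below standard 13.0s"]
```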
In some embodiments, the monitoring module 112 may be used to monitor each of these separately: whether the part meets the standard, the operation procedure conditions, and whether the operation process is standard. In some embodiments, the monitoring module 112 can simultaneously monitor two or all of them, so as to assist the user in quickly and accurately completing the process operation from multiple aspects, such as part detection, process guidance, and operation-specification warnings. In some embodiments, the image recognition model may be used to identify the information related to the first part or the current operation procedure. In some embodiments, the image recognition model may be used to identify both the information related to the first part and the current operation procedure. In some embodiments, the image recognition model may also be used to identify specific operations.
The prompting module 113 may be used to prompt the user during flow control. In some embodiments, when the first part does not meet the standard, the prompting module 113 can control the display device to display corresponding prompt information. The prompt information may include, but is not limited to, text prompts, picture prompts, animated prompts, video prompts, and the like. In some embodiments, the prompting module 113 may control the display device to display prompt messages such as "the first part model does not match," "the size is too small," or "the surface is flawed." In some embodiments, when the surface condition of the first part does not meet the standard, the prompt information may indicate the location of the defect on the surface. In some embodiments, the prompting module 113 can control the display device to display an image of the first part and mark (e.g., flash, highlight, circle, etc.) the defect location of the first part. In some embodiments, the display device may be an AR display device, and the prompting module 113 may control the display device to mark the defect location of the first part in an AR display manner. Displaying the corresponding prompt information when a nonconforming first part is found reminds the user to take corresponding measures in time (such as replacing or cleaning the part), effectively safeguarding the process. In some embodiments, the prompting module 113 may further control the voice playing device 160 to play corresponding voice prompt information.
In some embodiments, the prompting module 113 may control the display device to display corresponding guidance information based on the operation procedure. The guidance information may be used to guide the user through current or subsequent operations. In some embodiments, the guidance information may include, but is not limited to, text guidance, picture guidance, animated guidance, video guidance, and the like. In some embodiments, for a quenching procedure, the guidance information may be the text "quench for 13 seconds." In some embodiments, for a part installation procedure, the guidance information may be an animated demonstration of the part installation. In some embodiments, the display device may be an AR display device, and the guidance information may be displayed in an AR manner. In some embodiments, the prompting module 113 may obtain the corresponding guidance information from a storage device of the wearable device 100 or a remote server based on the operation procedure and control the display device 130 to display it. Controlling the display device to display guidance information can guide the user to complete the corresponding operations accurately, avoiding omissions and mistakes. In some embodiments, the prompting module 113 may further control the voice playing device 160 to play corresponding voice guidance information.
In some embodiments, the prompting module 113 may be configured to control the display device to display corresponding warning information when the operation process information does not meet the operating specification. In some embodiments, when the operation duration does not conform, an operation step is omitted, or the operation order is not standard, the prompting module 113 may control the display device 130 to display the corresponding warning information. In some embodiments, the warning information may include, but is not limited to, one or a combination of text warnings, picture warnings, animated warnings, video warnings, and the like. In some embodiments, the warning information may include freezing and zooming in on the screen, displaying a conspicuous red cross, displaying an alarmed emoticon, and the like. For example only, when the operating specification is "quench for 13 seconds" and the operation process information shows quenching for 12 seconds, the prompting module 113 may control the display device 130 to display the warning message "quenching time insufficient." In some embodiments, the prompting module 113 may be configured to control the voice playing device 160 to play corresponding voice warning information when the operation process information does not meet the operating specification. The voice warning information may include, but is not limited to, a warning tone, a warning sentence, and the like. Displaying and/or playing warning information when the operation process does not conform to the specification reminds the user in time so that remedial measures can be taken promptly. In some embodiments, when the operation process information does not meet the operating specification, the prompting module 113 may control the display device to display corresponding guidance information. In some embodiments, the prompting module 113 may control the display device to display guidance information corresponding to the operation procedure. In some embodiments, the prompting module 113 may control the display device to display guidance information corresponding to the non-standard operation process (e.g., guidance for a remedial operation, guidance for repeating the current operation, etc.), so that the user knows how to remedy a non-standard situation, user errors are corrected in time, and the entire process remains stable and efficient.
The speech recognition module 114 may be used to recognize speech information and/or speech data during process control. In some embodiments, the speech recognition module 114 may obtain the first audio captured by the audio capture device. The first audio may be voice information obtained by the speech recognition module 114 in real time or periodically. In some embodiments, the first audio may include speech uttered by a user (e.g., the wearer of the wearable device, or a technician responsible for assembly/repair). In some embodiments, the voice recognition module 114 may identify whether the first audio contains a voice instruction regarding image capture. A voice instruction may be a control command uttered by the user by voice. In some embodiments, the voice instructions regarding image capture may include "shoot," "take picture," "record," and other image-capture-related instructions. In some embodiments, the voice instruction to stop image capture may include "stop shooting," "stop recording," and the like. In some embodiments, the speech recognition module 114 may recognize the voice instructions in the first audio based on an acoustic model and a language model. In some embodiments, the speech recognition module 114 may convert the first audio into text, determine whether the text contains a preset keyword, and if so, determine that the first audio contains a voice instruction related to image capture.
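The keyword-matching step could look like the sketch below, where transcribe stands in for the acoustic-model/language-model pipeline and the keyword lists are illustrative:

```python
from typing import Callable, Optional

CAPTURE_KEYWORDS = ("shoot", "take picture", "record")
STOP_KEYWORDS = ("stop shooting", "stop recording")

def parse_voice_command(audio: bytes,
                        transcribe: Callable[[bytes], str]) -> Optional[str]:
    """Return 'start', 'stop', or None for the given audio clip."""
    text = transcribe(audio).lower()           # speech-to-text (hypothetical)
    if any(k in text for k in STOP_KEYWORDS):  # check stop phrases first:
        return "stop"                          # they contain the word "record"
    if any(k in text for k in CAPTURE_KEYWORDS):
        return "start"
    return None
```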
Control module 115 may be used to control one or more components of wearable apparatus 100 (e.g., image capture device 120, display device 130, audio capture device 140, identification device 150, voice playback device 160, communication device 170, etc.). In some embodiments, the control module 115 may be used to control the display device 130 and the identification device 150 to be turned on and off. In some embodiments, the control module 115 may be used to control the voice playback device 160 to play voice. In some embodiments, control module 115 may control the display content and/or display form of display device 130.
In some embodiments, the control module 115 may obtain a sequence of control images captured by the image capture device, each image of the control image sequence including a particular control gesture. The control image sequence may be a plurality of successively captured images, each containing a particular control gesture. The control gesture may be a particular hand gesture used to control the display position of information (e.g., reminder information, guidance information, warning information, etc.) in a display device (e.g., an AR display device). In some embodiments, the control gestures may include one-handed gestures and two-handed gestures. In some embodiments, the control gestures may include a single-finger extension gesture, a fist-making gesture, a palm-opening gesture, a grasping gesture, and the like. In some embodiments, the control module 115 may recognize a control gesture based on an image recognition model; when a particular control gesture is recognized, the control module 115 may control the image capture device to begin capturing the control image sequence. In some embodiments, when a single-finger extension gesture is recognized, the control module may control the image capture device to start continuous shooting or video recording to obtain the control image sequence. In some embodiments, the control module 115 may complete the capture of the control image sequence when the particular control gesture disappears.
In some embodiments, the control module 115 may determine a movement trajectory of the control gesture based on the sequence of control images. In some embodiments, the control module 115 may identify the position of the hand (e.g., the position of the center of the hand, the position of a fingertip) in each image of the control image sequence and connect the hand positions across the control image sequence in time order to determine the movement trajectory of the control gesture. The control module 115 may control the display position of the guidance information in the display device based on the movement trajectory. For example, if the control image sequence includes a plurality of images collected over a duration of 3 seconds and the movement trajectory of the control gesture determined from the control image sequence moves from the lower left corner to the upper right corner, the control module 115 may move the display position of the guidance information from the lower left corner to the upper right corner accordingly. In some embodiments, the control module 115 may control the guidance information to be displayed at the end of the movement trajectory of the control gesture. By moving the guidance information using the control gesture, non-touch-screen operation can be achieved; meanwhile, visual occlusion of the operation by the guidance information (e.g., guidance information displayed in AR) can be avoided, and the guidance information can be moved to a position preferred by the user according to the user's intention.
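For illustration purposes only, determining the trajectory and anchoring the guidance information at its end point may be sketched as follows; the normalized coordinate convention and function name are assumptions, and detecting the hand position in each image is assumed to be done by an upstream recognizer:

from typing import List, Tuple

Point = Tuple[float, float]  # assumed normalized (x, y) screen coordinates

def guidance_anchor(hand_positions: List[Point]) -> Point:
    # Connect the per-frame hand positions in time order to form the
    # movement trajectory, then anchor the guidance information at the
    # trajectory's end point.
    trajectory = list(hand_positions)
    if not trajectory:
        raise ValueError("empty control image sequence")
    return trajectory[-1]

# A gesture moving from the lower left corner to the upper right corner:
frames = [(0.1, 0.1), (0.4, 0.4), (0.7, 0.7), (0.9, 0.9)]
print(guidance_anchor(frames))  # (0.9, 0.9)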
In some embodiments, the control module 115 (and/or the speech recognition module 114) may obtain second audio captured by the audio capture device. The second audio may be sound information captured by the audio capture device 140 in real time or periodically. In some embodiments, the second audio may include a voice uttered by a wearer of the wearable device (e.g., a technician on site responsible for assembly, servicing, etc.). The control module 115 (and/or the speech recognition module 114) may identify whether a voice control instruction regarding the guidance information is contained in the second audio. In some embodiments, the control module 115 may identify whether the second audio contains a voice instruction regarding the guidance information based on an acoustic model and a language model. In some embodiments, the control module 115 may convert the second audio into text and then determine whether the text contains a voice control instruction regarding the guidance information. A voice control instruction regarding the guidance information may be a voice command for controlling the display manner of the guidance information. In some embodiments, voice control instructions regarding the guidance information may include "shift left", "zoom in", "zoom out", "brightness plus", "red", "black", "float window", "close guidance", and the like. When recognizing that the second audio contains a voice control instruction regarding the guidance information, the control module 115 may control a display parameter of the guidance information in the display device. The display parameters may include, but are not limited to, one or a combination of brightness, size, color, transparency, and the like. Controlling the display parameters of the guidance information in the display device through voice control instructions allows the guidance information to be displayed conveniently according to the user's needs and preferences.
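For illustration purposes only, the mapping from recognized voice control instructions to display-parameter updates may be sketched as follows; the command strings follow the examples above, while the parameter names and step sizes are illustrative assumptions:

def apply_voice_command(params: dict, command: str) -> dict:
    # Update a copy of the display parameters according to the command.
    updated = dict(params)
    if command == "zoom in":
        updated["size"] = round(updated.get("size", 1.0) * 1.2, 2)
    elif command == "zoom out":
        updated["size"] = round(updated.get("size", 1.0) / 1.2, 2)
    elif command == "brightness plus":
        updated["brightness"] = min(1.0, updated.get("brightness", 0.5) + 0.1)
    elif command in ("red", "black"):
        updated["color"] = command
    elif command == "close guidance":
        updated["visible"] = False
    return updated

params = {"brightness": 0.5, "size": 1.0, "color": "white", "visible": True}
print(apply_voice_command(params, "brightness plus"))  # brightness becomes 0.6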
In some embodiments, the control module 115 may be configured to obtain a historical usage record of the user according to the user identity information recognized by the identification device 150. In some embodiments, the control module 115 may obtain the user's historical usage record from a storage device of the wearable device 100 or from a remote server based on the user identity information. The historical usage record may be a record of the user's use of the wearable device 100 over a past period of time (e.g., a month, a year, etc.). In some embodiments, the historical usage record may include information such as time of use, display position of the guidance information during use, display parameters, and the like. The display position may be the position of the guidance information within the image displayed by the display device. In some embodiments, the display position may include one or more of the upper left corner, lower left corner, center, upper right corner, lower right corner, upper edge, lower edge, left edge, right edge, and so on. The display parameters may include brightness, size, color, transparency, etc. In some embodiments, the control module 115 may determine a display position and/or display parameters of the guidance information in the display device based on the user's historical usage record. In some embodiments, the control module 115 may set the display position and/or display parameters of the guidance information in the display device to be consistent with the user's last use. In some embodiments, the control module 115 may use the display position or display parameter that appears most frequently in the user's last 10 uses as the current display position or display parameter. Determining the display position and/or display parameters of the guidance information from the historical usage record can quickly provide a default position and/or default parameters that match the user's habits, reduce the number of interactions, and improve the user's operating efficiency and experience.
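For illustration purposes only, selecting the display position that appears most frequently in the last 10 uses may be sketched as follows; the record layout, function name, and fallback value are assumptions:

from collections import Counter

def default_display_position(history, fallback="upper right corner"):
    # Pick the most frequent display position among the last 10 records.
    recent = [r["display_position"] for r in history[-10:]]
    if not recent:
        return fallback
    return Counter(recent).most_common(1)[0][0]

history = [{"display_position": "lower left corner"},
           {"display_position": "upper right corner"},
           {"display_position": "upper right corner"}]
print(default_display_position(history))  # upper right corner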
It should be understood that the processor and its modules shown in FIG. 1 may be implemented in a variety of ways. In some embodiments, the processor and its modules may be implemented in hardware, software, or a combination of software and hardware. The hardware portion may be implemented using dedicated logic; the software portion may be stored in a memory for execution by a suitable instruction execution system, such as a microprocessor or specially designed hardware. Those skilled in the art will appreciate that the methods and systems of the embodiments of the present specification may be implemented using computer-executable instructions and/or embodied in processor control code, provided, for example, on a carrier medium such as a diskette, CD- or DVD-ROM, a programmable memory such as read-only memory (firmware), or a data carrier such as an optical or electronic signal carrier. The processor and its modules of the present application may be implemented not only by hardware circuits such as very large scale integrated circuits or gate arrays, semiconductors such as logic chips or transistors, or programmable hardware devices such as field programmable gate arrays or programmable logic devices, but also by software executed by various types of processors, or by a combination of the above hardware circuits and software (e.g., firmware). The processor may be mounted on the wearable device 100 or may be a cloud server (in communication with components on the wearable device 100). The processor 110 and one or more components of the wearable device 100 (e.g., the image capture device 120, the display device 130, the audio capture device 140, the identification device 150, the voice playback device 160, the communication device 170, etc.) may have signal connections (e.g., electrical connections, wireless connections).
In some embodiments, wearable apparatus 100 may include some or all of processor 110, image capture device 120, display device 130, audio capture device 140, identification device 150, voice playback device 160, communication device 170, and/or the like. In some embodiments, wearable device 100 may also include other apparatus. In some embodiments, wearable device 100 may also include a storage device in which instructions executed by the processor, information/data in the process control process, and the like may be stored. In some embodiments, the wearable device 100 may include a power supply for powering one or more components of the wearable device 100.
It should be noted that the above description of the wearable device 100 and its modules is merely for convenience of description and is not intended to limit the scope of the present disclosure to the illustrated embodiments. It will be appreciated by those skilled in the art that, without departing from the principles of the system, the devices and modules may be combined in various ways or connected with other modules as subsystems. In some embodiments, the recording module 111, the monitoring module 112, the prompt module 113, the speech recognition module 114, and the control module 115 disclosed in FIG. 1 may be different modules in a system, or a single module may implement the functions of two or more of the above modules. In some embodiments, the modules may share one storage module, or each module may have its own storage module. In some embodiments, the image recognition functions in the plurality of modules described above may be implemented by a single image recognition module. In some embodiments, the monitoring module 112 may further include a part monitoring unit, an operation procedure monitoring unit, and an operation process monitoring unit to monitor the part, the operation procedure, and the operation process, respectively. Such variations are within the scope of the present application.
Fig. 2 is a schematic diagram of a wearable device according to some embodiments of the present application. As shown in fig. 2, the wearable device 100 may be helmet-shaped. The image capturing device 120 may be disposed right in front of the helmet such that the viewing angle of the image captured by the image capturing device 120 is close to the user viewing angle, thereby facilitating user operation and remote cooperation. Display device 130 may be disposed on a face shield of the smart helmet (e.g., display device 130 may be an augmented reality display device) that enables a user to view both actual objects through the face shield and information superimposed on the face shield. The audio capture device 140 may be positioned near the user's mouth to facilitate clearer capture of the user's voice. The identification device 150 (e.g., an iris recognition device) may be disposed in front of the eyes of the user, so as to effectively identify the user and enhance the user experience. The voice playing device 160 may be disposed at both sides of the helmet at a position corresponding to the ears of the user, so as to more clearly communicate voice information to the user. The communication device 170 may be disposed in an overhead location of the wearable device 100 to enhance the transceiving of signals. The processor 110 (e.g., a circuit board) may be disposed within the shell of the helmet to better protect the processor 110.
Fig. 3 is a diagram of an exemplary application scenario of a wearable-device-based process control system according to some embodiments of the present application. As shown in fig. 3, the wearable device 100 may include at least an image capture device 120 and a display device 130, and the process control system 1000 may include at least a recording module 111, a monitoring module 112, and a prompt module 113. The recording module 111 may be configured to control the image capture device to record the manufacturing, assembly, and/or repair process of the first part. The monitoring module 112 may be configured to: acquire a first image captured by the image capture device, wherein the first image includes at least a portion of the first part; identify relevant information of the first part using an image recognition model based on the first image; and detect whether the first part meets the standard based on the relevant information of the first part. The prompt module 113 may be configured to control the display device to display corresponding prompt information when the first part does not meet the standard. By detecting whether the first part meets the standard, non-compliant parts (e.g., with quality problems or mismatched models) can be found in time, so that the user can be prompted to take remedial measures or stop the manufacturing and assembly process as early as possible, improving the quality and efficiency of the process. For more details on the process control system 1000, reference may be made to the description associated with FIG. 1.
Fig. 4 is an exemplary flow chart of a wearable device based flow control method according to some embodiments of the present application. In some embodiments, the process control method 200 may be implemented by a process control system 1000 (e.g., the processor 110).
Step 210, acquiring a first audio collected by an audio collection device. Step 210 may be performed by processor 110 (e.g., speech recognition module 114). The first audio may be sound information captured by the audio capture device 140 in real time or periodically. In some embodiments, the first audio may include a voice uttered by a user (e.g., a wearer of the wearable device, a technician responsible for assembly/servicing).
Step 212, identify whether the first audio contains a voice instruction related to image capture. Step 212 may be performed by processor 110 (e.g., speech recognition module 114). The voice instruction may be a control command uttered by the user by voice. In some embodiments, the voice instructions regarding image capture may include "shoot", "take picture", "record", and other image capture related instructions. In some embodiments, the speech recognition module 114 may recognize the speech instructions in the first audio based on the acoustic model and the language model. In some embodiments, if the voice recognition module 114 recognizes a voice instruction regarding image capture, the processor 110 may proceed to step 230.
Step 220, controlling the image capture device to perform pre-shooting to obtain a pre-shot image. Step 220 may be performed by processor 110 (e.g., recording module 111). In some embodiments, the pre-shooting process may take one pre-shot image at regular intervals (e.g., every 3 seconds, 5 seconds, etc.). In some embodiments, the resolution of the pre-shot images may be lower than that of normally recorded images, which may reduce storage space and increase processing speed.
Step 222, determining that the pre-shot image includes at least a portion of the first part. Step 222 may be performed by processor 110 (e.g., recording module 111). In some embodiments, the recording module 111 may determine whether the pre-shot image includes at least a portion of the first part using image matching, an image recognition model, and/or remote user identification. If it is determined that the pre-shot image includes at least a portion of the first part, the processor 110 may proceed to step 230.
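For illustration purposes only, the pre-shooting loop of steps 220 and 222 may be sketched as follows; capture_low_res() and contains_part() are hypothetical stand-ins for the camera driver and the image-matching/recognition step:

import time

def pre_shoot_until_part_detected(capture_low_res, contains_part,
                                  interval_s=3.0, max_tries=20):
    # Take a low-resolution pre-shot at a fixed interval and report once
    # at least a portion of the first part appears in the frame.
    for _ in range(max_tries):
        if contains_part(capture_low_res()):
            return True  # caller may proceed to step 230 (start recording)
        time.sleep(interval_s)
    return False

# Demo with stand-in stubs:
frames = iter([None, None, "part"])
print(pre_shoot_until_part_detected(lambda: next(frames),
                                    lambda img: img == "part",
                                    interval_s=0.0))  # True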
Step 230, controlling the image capture device to record the manufacturing, assembly, and/or repair process of the first part. Step 230 may be performed by processor 110 (e.g., recording module 111). In some embodiments, the recording module 111 may control the image capture device 120 to record (e.g., video) the entire manufacturing, assembly, and/or repair process of the first part. In some embodiments, the recording module 111 may control the image capture device 120 to periodically (e.g., every 3 seconds, 10 seconds, etc.) photograph the manufacturing, assembly, and/or repair process of the first part. In some embodiments, the recording module 111 may be configured to control the image capture device 120 to record the manufacturing, assembly, and/or repair process of the first part when it is recognized that the first audio contains a voice instruction regarding image capture. In some embodiments, the recording module 111 may control the image capture device to record the manufacturing, assembly, and/or repair process of the first part when the pre-shot image is identified as including at least a portion of the first part. By recording the manufacturing, assembly, and/or repair process of the first part, the manufacturing and assembly quality of the first part can be effectively controlled, and the source can be conveniently traced when a problem occurs.
Step 240, acquiring a first image acquired by the image acquisition device. Step 240 may be performed by processor 110 (e.g., monitoring module 112). In some embodiments, at least a portion of the first part may be included in the first image. In some embodiments, the monitoring module 112 may periodically (e.g., every 3 seconds, 10 seconds, etc.) acquire the first image acquired by the image acquisition device. In some embodiments, the monitoring module 112 may acquire the first image acquired by the image acquisition device in real-time (e.g., every 0.02 second, 0.01 second, etc.).
Step 250, identifying relevant information of the first part using an image recognition model based on the first image. Step 250 may be performed by processor 110 (e.g., monitoring module 112). The relevant information of the first part may include one or a combination of quantity, size, shape, model, material, color, surface condition, and the like. The image recognition model may be a machine learning model, which may include, but is not limited to, one or a combination of a neural network model, a support vector machine model, a k-nearest neighbor model, a decision tree model, and the like.
Step 260, detecting whether the first part meets the standard based on the relevant information of the first part. Step 260 may be performed by processor 110 (e.g., monitoring module 112). In some embodiments, the monitoring module 112 may detect whether the number of first parts is correct, whether the model matches, whether the dimensions are within the standard range, whether the surface is smooth and flawless, and the like. By detecting whether the first part meets the standard, non-compliant parts (e.g., with quality problems or mismatched models) can be found in time, so that remedial measures can be taken or the manufacturing and assembly process can be stopped as early as possible, improving the quality and efficiency of the process.
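For illustration purposes only, the compliance checks of step 260 may be sketched as follows; the field names and the layout of the standard entry are illustrative assumptions:

def detect_violations(part, standard):
    # Compare the recognized part information with a standard entry and
    # return the violations found.
    problems = []
    if part["count"] != standard["count"]:
        problems.append("the number of parts is incorrect")
    if part["model"] != standard["model"]:
        problems.append("the part model does not match")
    lo, hi = standard["size_range_mm"]
    if not lo <= part["size_mm"] <= hi:
        problems.append("the size is out of the standard range")
    if part["surface"] != "smooth":
        problems.append("the surface is flawed")
    return problems

part = {"count": 4, "model": "A-102", "size_mm": 19.2, "surface": "smooth"}
standard = {"count": 4, "model": "A-102", "size_range_mm": (19.5, 20.5)}
print(detect_violations(part, standard))  # ['the size is out of the standard range']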
Step 270, controlling the display device to display corresponding prompt information when the first part does not meet the standard. Step 270 may be performed by processor 110 (e.g., prompt module 113). The prompt information may include, but is not limited to, text prompt information, picture prompt information, animation prompt information, video prompt information, and the like. In some embodiments, the prompt module 113 may control the display device to display a prompt message such as "the first part model does not match", "the size is too small", or "the surface is flawed".
In step 280, the defect location of the first part is detected when the surface condition of the first part does not meet the criteria. Step 280 may be performed by processor 110 (e.g., monitoring module 112). In some embodiments, the monitoring module 112 may compare the surface condition of the first part that does not meet the standard with the surface of the standard part and then determine the location of the difference between the two as the location of the defect.
In some embodiments, the monitoring module 112 may detect whether the first part surface includes an irregular shape (e.g., an aircraft surface scratch, a part surface crack, etc.), and if so, determine the location of the irregular shape as the location of the defect in the first part. In some embodiments, the cue information may indicate a location of the defect on the surface when the surface condition of the first part does not meet the criteria.
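For illustration purposes only, locating the defect as the region where the part surface differs from a standard part image may be sketched as follows; the threshold value is an assumption, and alignment (registration) of the two images is assumed to have been done upstream:

import numpy as np

def defect_bounding_box(part_img, standard_img, threshold=40):
    # Pixels whose grayscale difference exceeds the threshold are treated
    # as the defect region; return its bounding box for marking/circling.
    diff = np.abs(part_img.astype(int) - standard_img.astype(int))
    ys, xs = np.nonzero(diff > threshold)
    if xs.size == 0:
        return None  # no defect found
    return (int(xs.min()), int(ys.min()), int(xs.max()), int(ys.max()))

part = np.zeros((100, 100), dtype=np.uint8)
std = np.zeros((100, 100), dtype=np.uint8)
part[40:45, 60:70] = 200  # a synthetic scratch
print(defect_bounding_box(part, std))  # (60, 40, 69, 44)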
In some embodiments, the prompting module 113 can control the display device to display an image of the first part and mark (e.g., flash, highlight, circle, etc.) the defect location of the first part. In some embodiments, the display device may be an AR display device, and the prompt module 113 may control the display device to mark the defect position of the first part in an AR display manner.
It should be noted that the above description of the flow control method 200 is for purposes of example and illustration only and is not intended to limit the scope of applicability of the present application. Various modifications and alterations to the flow control method 200 will be apparent to those skilled in the art in light of this disclosure. However, such modifications and variations are intended to be within the scope of the present application. In some embodiments, steps 270 and 280 may be two steps performed independently, either sequentially or in parallel. In some embodiments, steps 220 and 222 may be omitted and system 1000 may not perform pre-capture.
Fig. 5 is a diagram of an exemplary application scenario of a wearable-device-based process control system according to still another embodiment of the present application. As shown in fig. 5, the wearable device 100 may include at least an image capture device 120 and a display device 130, and the process control system 1000 may include at least a recording module 111, a monitoring module 112, and a prompt module 113. The recording module 111 may be configured to control the image capture device to record the manufacturing, assembly, and/or repair process of the first part. The monitoring module 112 may be configured to: acquire a first image captured by the image capture device; and identify the current operation procedure using an image recognition model based on the first image. The prompt module may be configured to control the display device to display corresponding guidance information based on the operation procedure. By controlling the display device to display guidance information corresponding to the operation procedure, the user can be guided to complete the corresponding operation accurately, avoiding omissions or mistakes. For more details on the process control system 1000, reference may be made to the description associated with FIG. 1.
Fig. 6 is an exemplary flowchart of a wearable device-based flow control method according to yet another embodiment of the present application. In some embodiments, the process control method 300 may be implemented by a process control system 1000 (e.g., the processor 110).
In step 310, a first audio collected by an audio collection device is obtained. Step 310 may be performed by processor 110 (e.g., speech recognition module 114). The first audio may be sound information captured by the audio capture device 140 in real time or periodically. In some embodiments, the first audio may include a voice uttered by a user (e.g., a wearer of the wearable device, a technician responsible for assembly/servicing).
In step 320, it is identified whether the first audio contains a voice command related to image capture. Step 320 may be performed by processor 110 (e.g., speech recognition module 114). The voice instruction may be a control command uttered by the user by voice. In some embodiments, the voice instructions regarding image capture may include "shoot", "take picture", "record", and other image capture related instructions. In some embodiments, the speech recognition module 114 may recognize the speech instructions in the first audio based on the acoustic model and the language model.
Step 330, controlling the image capture device to record the manufacturing, assembly, and/or repair process of the first part. Step 330 may be performed by processor 110 (e.g., recording module 111). In some embodiments, the recording module 111 may control the image capture device 120 to record (e.g., video) the entire manufacturing, assembly, and/or repair process of the first part. In some embodiments, the recording module 111 may control the image capture device 120 to periodically (e.g., every 3 seconds, 10 seconds, etc.) photograph the manufacturing, assembly, and/or repair process of the first part. In some embodiments, the recording module 111 may be configured to control the image capture device 120 to record the manufacturing, assembly, and/or repair process of the first part when it is recognized that the first audio contains a voice instruction regarding image capture.
Step 340, acquiring a first image acquired by the image acquisition device. Step 340 may be performed by processor 110 (e.g., monitoring module 112). In some embodiments, the first image may reflect current operating procedure conditions. In some embodiments, the monitoring module 112 may periodically (e.g., every 3 seconds, 10 seconds, etc.) acquire the first image acquired by the image acquisition device. In some embodiments, the monitoring module 112 may acquire the first image acquired by the image acquisition device in real-time (e.g., every 0.02 second, 0.01 second, etc.).
Step 350, based on the first image, using the image recognition model to recognize the current operation procedure. Step 350 may be performed by processor 110 (e.g., monitoring module 112). In some embodiments, the monitoring module 112 may input the first image to be identified into the image identification model, and obtain the current operation procedure corresponding to the first image after the image identification model processing. In some embodiments, the image recognition model may be a machine learning model, which may include, but is not limited to, a neural network model, a support vector machine model, a k-nearest neighbor model, a decision tree model, and/or the like.
Step 360, controlling the display device to display corresponding guidance information based on the operation procedure. Step 360 may be performed by processor 110 (e.g., prompt module 113). The guidance information may be used to guide the user through the current or subsequent operations. In some embodiments, the guidance information may include, but is not limited to, text guidance information, picture guidance information, animation guidance information, video guidance information, and the like. In some embodiments, the display device may be an AR display device, and the guidance information may be displayed in the display device in an AR manner. In some embodiments, the prompt module 113 may obtain corresponding guidance information from a storage device of the wearable device 100 or a remote server based on the operation procedure, and control the display device 130 to display the guidance information. Controlling the display device to display the guidance information can guide the user to complete the corresponding operation accurately, avoiding omissions or mistakes.
In step 370, identity information of the user is identified. Step 370 may be performed by processor 110 (e.g., control module 115). In some embodiments, control module 115 may be used to control identification device 150 to identify identity information of a user.
Step 372, acquiring the historical usage record of the user according to the identity information of the user. Step 372 may be performed by processor 110 (e.g., control module 115). In some embodiments, control module 115 may obtain the user's historical usage record from a storage device of wearable device 100 or from a remote server based on the user identity information. The historical usage record may be a record of the user's use of the wearable device 100 over a past period of time (e.g., a month, a year, etc.). In some embodiments, the historical usage record may include information such as time of use, display position of the guidance information during use, display parameters, and the like.
At step 374, the display position or display parameters of the guidance information in the display device are determined based on the user's historical usage record. Step 374 may be performed by processor 110 (e.g., control module 115). In some embodiments, the control module 115 may determine the display position and/or display parameters of the guidance information in the display device to be consistent with the last use by the user. In some embodiments, the control module 115 may use the display location or display parameter that appears most when the user used the last 10 times as the current display location or display parameter. The display position and/or the display parameter of the guide information are determined through the historical use record, the default position and/or the default parameter which accord with the habit of the user can be quickly provided, the interaction times can be reduced, and the operation efficiency and the user experience of the user are improved.
Step 380, acquiring second audio captured by the audio capture device. Step 380 may be performed by processor 110 (e.g., control module 115). The second audio may be sound information captured by the audio capture device 140 in real time or periodically. In some embodiments, the second audio may include a voice uttered by a wearer of the wearable device (e.g., a technician on site responsible for assembly, servicing, etc.).
Step 382, identify whether the second audio contains a voice control command related to the guidance information. Step 382 may be performed by processor 110 (e.g., control module 115). In some embodiments, the control module 115 may identify whether the second audio contains voice instructions for guidance information based on the acoustic model and the language model. The voice control command regarding the guidance information may be a voice command for controlling a display manner of the guidance information. In some embodiments, the voice control instruction regarding the guidance information may include "shift left", "zoom in", "zoom out", "brightness plus", "red", "black", "float window", "close guidance", and the like. In some embodiments, if the control module 115 (or the speech recognition module 114) recognizes a speech control instruction regarding the guidance information, the processor may proceed to step 384.
Step 384, controlling display parameters of the guidance information in the display device. Step 384 may be performed by processor 110 (e.g., control module 115). The display parameters may include, but are not limited to, one or a combination of brightness, size, color, transparency, and the like. Controlling the display parameters of the guidance information in the display device through voice control instructions allows the guidance information to be displayed according to the user's needs and preferences.
Step 390, acquiring the control image sequence collected by the image collecting device. Step 390 may be performed by processor 110 (e.g., control module 115). In some embodiments, each image of the control image sequence includes a particular control gesture therein. The control image sequence may be a plurality of images captured in succession containing a particular control gesture. The control gesture may be a particular hand gesture used to control the display location of information (e.g., reminder information, guidance information, alert information, etc.) in a display device (e.g., an AR display device).
At step 392, a movement trajectory of the control gesture is determined based on the sequence of control images. Step 392 may be performed by processor 110 (e.g., control module 115). In some embodiments, the control module 115 may identify the position of the hand (e.g., the position of the center of the hand, the position of a fingertip) in each image of the control image sequence and connect the hand positions across the control image sequence in time order to determine the movement trajectory of the control gesture.
Step 394, the display position of the guidance information on the display device is controlled based on the movement trajectory. Step 394 may be performed by processor 110 (e.g., control module 115). By moving the guidance information using the control gesture, non-touch-screen operation can be achieved; meanwhile, visual occlusion of the operation by the guidance information (e.g., guidance information displayed in AR) can be avoided, and the guidance information can be moved to a position preferred by the user according to the user's intention.
It should be noted that the above description of the flow control method 300 is for purposes of example and illustration only and is not intended to limit the scope of applicability of the present application. Various modifications and alterations to the flow control method 300 will be apparent to those skilled in the art in light of this disclosure. However, such modifications and variations are intended to be within the scope of the present application. In some embodiments, steps 370, 380, and 390 may be three steps performed independently, and the three may be performed sequentially or in parallel. In some embodiments, steps 370, 372, and 374 may be omitted, and system 1000 may not perform identification. In some embodiments, after the system 1000 displays the guidance information for the last operation procedure, steps 330 through 350 may no longer be executed, and the system 1000 no longer performs process recording.
Fig. 7 is a diagram of an exemplary application scenario of a wearable-device-based process control system according to still another embodiment of the present application. As shown in fig. 7, the wearable device 100 may include at least an image capture device 120 and a display device 130, and the process control system 1000 may include at least a recording module 111, a monitoring module 112, and a prompt module 113. The recording module 111 may be configured to control the image capture device 120 to record the manufacturing, assembly, and/or repair process of the first part. The monitoring module 112 may be configured to: acquire a first image sequence captured by the image capture device; identify the current operation procedure and the operation process information based on the first image sequence; and determine whether the operation process information meets the operation specification corresponding to the operation procedure. The prompt module may be configured to control the display device to display corresponding warning information when the operation process information does not meet the operation specification. By displaying warning information when the operation process does not meet the specification, the user can be reminded in time and can take remedial measures promptly. For more details on the process control system 1000, reference may be made to the description associated with FIG. 1.
Fig. 8 is an exemplary flowchart of a wearable device-based flow control method according to still another embodiment of the present application. In some embodiments, the process control method 400 may be implemented by the process control system 1000 (e.g., the processor 110).
Step 410, a first audio collected by an audio collection device is obtained. Step 410 may be performed by processor 110 (e.g., speech recognition module 114). The first audio may be sound information captured by the audio capture device 140 in real time or periodically. In some embodiments, the first audio may include a voice uttered by a user (e.g., a wearer of the wearable device, a technician responsible for assembly/servicing).
In step 420, it is identified whether the first audio contains a voice instruction related to image capture. Step 420 may be performed by processor 110 (e.g., speech recognition module 114). The voice instruction may be a control command uttered by the user by voice. In some embodiments, the voice instructions regarding image capture may include "shoot", "take picture", "record", and other image capture related instructions. In some embodiments, the speech recognition module 114 may recognize the speech instructions in the first audio based on the acoustic model and the language model.
Step 430, controlling the image capture device to record the manufacturing, assembly, and/or repair process of the first part. Step 430 may be performed by processor 110 (e.g., recording module 111). In some embodiments, the recording module 111 may control the image capture device 120 to record (e.g., video) the entire manufacturing, assembly, and/or repair process of the first part. In some embodiments, the recording module 111 may control the image capture device 120 to periodically (e.g., every 3 seconds, 10 seconds, etc.) photograph the manufacturing, assembly, and/or repair process of the first part. In some embodiments, the recording module 111 may be configured to control the image capture device 120 to record the manufacturing, assembly, and/or repair process of the first part when it is recognized that the first audio contains a voice instruction regarding image capture.
Step 440, a first image sequence acquired by the image acquisition device is acquired. Step 440 may be performed by processor 110 (e.g., monitoring module 112). The first image sequence may be a plurality of images acquired in succession. In some embodiments, the first sequence of images may be a plurality of images acquired at 0.2 second intervals over 5 seconds. In some embodiments, the first sequence of images may be a segment of video. In some embodiments, the monitoring module 112 may acquire a plurality of images acquired in succession under the same procedure as the first sequence of images.
Step 450, identifying the current operation procedure and the operation process information based on the first image sequence. Step 450 may be performed by processor 110 (e.g., monitoring module 112). In some embodiments, the monitoring module 112 may identify the current operation procedure using an image recognition model based on any one of the images in the first image sequence. In some embodiments, the operation process information may include, but is not limited to, one or more of operation duration, number of operation steps, order of operation steps, and the like. In some embodiments, for each image in the first image sequence, the monitoring module 112 may identify its corresponding operation step (e.g., based on an image recognition model). Further, the monitoring module 112 may determine the operation duration from the duration of the image subsequence corresponding to a given operation step. In some embodiments, the monitoring module 112 may identify the current operation procedure and the operation process information in the first image sequence using an image sequence recognition model.
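For illustration purposes only, deriving per-step operation durations from per-frame recognition results may be sketched as follows; the step labels follow the quenching example, the 0.2-second frame interval follows the sampling example above, and the function name is an assumption:

from itertools import groupby

def step_durations(frame_labels, frame_interval_s=0.2):
    # Each operation step's duration is the length of its consecutive run
    # of per-frame labels times the frame interval; producing frame_labels
    # (per-frame recognition) is assumed to be done upstream.
    return [(step, sum(1 for _ in run) * frame_interval_s)
            for step, run in groupby(frame_labels)]

labels = ["heat"] * 10 + ["quench"] * 60 + ["cool"] * 15
print(step_durations(labels))  # [('heat', 2.0), ('quench', 12.0), ('cool', 3.0)]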
Step 460, determining whether the operation process information meets the operation specification corresponding to the operation procedure. Step 460 may be performed by processor 110 (e.g., monitoring module 112). In some embodiments, the monitoring module 112 may obtain the corresponding operation specification based on the operation procedure. In some embodiments, the monitoring module 112 may obtain the operation specification corresponding to the operation procedure from a storage device of the wearable device 100 or from a remote server. The operation specification may include, but is not limited to, one or a combination of standard operation duration, standard number of operation steps, standard order of operation steps, and the like. By comparing the operation process information with the corresponding operation specification, the monitoring module 112 can determine whether the operation process information conforms to the specification. In some embodiments, the monitoring module 112 may send the first image sequence to a remote user (e.g., a technical expert) and obtain the remote user's determination of whether the operation process information complies with the corresponding operation specification.
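For illustration purposes only, comparing the operation process information with the operation specification may be sketched as follows; the tolerance value and the (step, duration) representation are illustrative assumptions:

def check_against_spec(observed, spec, tolerance_s=0.5):
    # Check step count/order first, then per-step durations.
    issues = []
    observed_steps = [s for s, _ in observed]
    spec_steps = [s for s, _ in spec]
    if observed_steps != spec_steps:
        if set(spec_steps) - set(observed_steps):
            issues.append("operation steps are omitted")
        else:
            issues.append("operation sequence is not standard")
        return issues
    for (step, got), (_, want) in zip(observed, spec):
        if abs(got - want) > tolerance_s:
            issues.append(f"{step}: {got:.1f} s deviates from the standard {want:.1f} s")
    return issues

spec = [("heat", 2.0), ("quench", 13.0), ("cool", 3.0)]
observed = [("heat", 2.0), ("quench", 12.0), ("cool", 3.0)]
print(check_against_spec(observed, spec))  # quench duration deviates from the standard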
Step 470, when the operation process information does not conform to the operation specification, controlling the display device to display the corresponding warning information. Step 470 may be performed by processor 110 (e.g., prompt module 113). In some embodiments, when the operation duration does not match, an operation step is omitted, or the operation sequence is not standard, the prompt module 113 may control the display device 130 to display the corresponding warning information. In some embodiments, the warning information may include, but is not limited to, one or a combination of text warning information, picture warning information, animation warning information, video warning information, and the like.
Step 480, when the operation process information does not meet the operation specification, controlling the display device to display corresponding guidance information. Step 480 may be performed by processor 110 (e.g., prompt module 113). In some embodiments, the prompt module 113 may control the display device to display guidance information corresponding to the operation procedure. In some embodiments, the prompt module 113 may control the display device to display guidance information corresponding to the irregular operation process (e.g., guidance information for a remedial operation, guidance information for repeating the current operation, etc.).
Step 490, when the operation process information does not meet the operation specification, controlling the voice playing device to play the corresponding voice warning information. Step 490 may be performed by processor 110 (e.g., prompt module 113). The voice warning information may include, but is not limited to, a warning tone, a warning statement (e.g., "over-quenched"), and the like. By displaying and/or playing the warning information when the operation process does not meet the specification, the user can be reminded in time and can take remedial measures promptly.
It should be noted that the above description of the flow control method 400 is for purposes of example and illustration only and is not intended to limit the scope of applicability of the present application. Various modifications and alterations to the flow control method 400 will be apparent to those skilled in the art in light of the present application. However, such modifications and variations are intended to be within the scope of the present application. In some embodiments, steps 480 and 490 may be two steps performed independently, either sequentially or in parallel.
Some possible benefits of embodiments of the present application include, but are not limited to: (1) parts that do not meet the standard can be effectively detected during the process, improving the quality and efficiency of manufacturing, assembly, repair, and similar processes; (2) process guidance can be provided to field operators in a timely manner; (3) the operation process can be monitored, and irregular operations can be flagged to ensure the reliability of the process; (4) the first-person view of field personnel can be presented to a remote user, enabling remote guidance. It should be noted that different embodiments may produce different advantages; in different embodiments, any one or a combination of the above advantages, or any other advantage, may be obtained.
Having thus described the basic concept, it will be apparent to those skilled in the art that the foregoing detailed disclosure is to be regarded as illustrative only and not as limiting the present specification. Various modifications, improvements and adaptations to the present description may occur to those skilled in the art, although not explicitly described herein. Such modifications, improvements and adaptations are proposed in the present specification and thus fall within the spirit and scope of the exemplary embodiments of the present specification.
Also, the description uses specific words to describe embodiments of the description. Reference throughout this specification to "one embodiment," "an embodiment," and/or "some embodiments" means that a particular feature, structure, or characteristic described in connection with at least one embodiment of the specification is included. Therefore, it is emphasized and should be appreciated that two or more references to "an embodiment" or "one embodiment" or "an alternative embodiment" in various places throughout this specification are not necessarily all referring to the same embodiment. Furthermore, some features, structures, or characteristics of one or more embodiments of the specification may be combined as appropriate.
Additionally, the order in which the elements and sequences of the process are recited in the specification, the use of alphanumeric characters, or other designations, is not intended to limit the order in which the processes and methods of the specification occur, unless otherwise specified in the claims. While various presently contemplated embodiments of the invention have been discussed in the foregoing disclosure by way of example, it is to be understood that such detail is solely for that purpose and that the appended claims are not limited to the disclosed embodiments, but, on the contrary, are intended to cover all modifications and equivalent arrangements that are within the spirit and scope of the embodiments herein. For example, although the system components described above may be implemented by hardware devices, they may also be implemented by software-only solutions, such as installing the described system on an existing server or mobile device.
Similarly, it should be noted that in the foregoing description of embodiments of the present specification, various features are sometimes grouped together in a single embodiment, figure, or description thereof for the purpose of streamlining the disclosure and aiding in the understanding of one or more of the embodiments. This method of disclosure, however, is not to be interpreted as implying that the claimed subject matter requires more features than are expressly recited in each claim. Indeed, embodiments may be characterized by fewer than all of the features of a single embodiment disclosed above.
Finally, it should be understood that the embodiments described herein are merely illustrative of the principles of the embodiments of the present disclosure. Other variations are also possible within the scope of the present description. Thus, by way of example, and not limitation, alternative configurations of the embodiments of the specification can be considered consistent with the teachings of the specification. Accordingly, the embodiments of the present description are not limited to only those embodiments explicitly described and depicted herein.

Claims (10)

1. A flow control system based on wearable equipment is characterized in that the wearable equipment comprises an image acquisition device and a display device, and the flow control system at least comprises a recording module, a monitoring module and a prompting module;
the recording module is used for: controlling the image acquisition device to record the manufacturing, assembling and/or overhauling processes of the first part;
the monitoring module is used for:
acquiring a first image acquired by the image acquisition device;
identifying a current operation procedure by using an image identification model based on the first image;
the prompt module is used for: controlling the display device to display corresponding guide information based on the operation process.
2. The wearable device-based process control system of claim 1, wherein the wearable device further comprises an audio acquisition device, the process control system further comprising a voice recognition module;
the speech recognition module is configured to:
acquiring a first audio collected by the audio collecting device;
identifying whether a voice instruction about image acquisition is contained in the first audio;
the recording module is used for: when the first audio is recognized to contain a voice instruction about image acquisition, controlling the image acquisition device to record the manufacturing, assembling and/or overhauling process of the first part.
3. The wearable device-based process control system of claim 1, wherein the image recognition model is a machine learning model, and the training process of the image recognition model comprises:
obtaining a plurality of sample pairs, wherein each sample pair comprises a sample image and an operation procedure label corresponding to the sample image;
training an initial image recognition model based on the plurality of sample pairs to obtain a trained image recognition model.
4. The wearable device-based process control system of claim 3, wherein the plurality of sample pairs includes at least two sample pairs having the same operating procedure label but different angles from which the sample images of the two sample pairs were taken.
5. The wearable device-based process control system of claim 1, wherein the display device is an augmented reality display device, the process control system further comprising a control module to:
acquiring a control image sequence acquired by the image acquisition device, wherein each image of the control image sequence comprises a specific control gesture;
determining a movement trajectory of the control gesture based on the sequence of control images;
and controlling the display position of the guide information in the display device based on the movement track.
6. The wearable device-based process control system of claim 1, wherein the display device is an augmented reality display device, the wearable device further comprises an audio acquisition device, and the process control system further comprises a control module to:
acquiring a second audio collected by the audio collecting device;
and identifying whether a voice control instruction related to the guide information is contained in the second audio, and controlling display parameters of the guide information in the display device when the voice control instruction related to the guide information is identified to be contained in the second audio, wherein the display parameters comprise brightness, size and/or color.
7. The wearable device-based process control system of claim 5 or 6, wherein the wearable device further comprises an identification device for identifying identity information of a user; the control module is further configured to:
acquiring a historical use record of the user according to the identity information of the user;
determining a display position and/or a display parameter of the guide information in the display device based on the historical usage record of the user.
8. The wearable device-based process control system of claim 7, wherein the identity recognition device comprises a voiceprint recognition device and/or an iris recognition device.
9. A flow control method based on wearable equipment is characterized in that the wearable equipment comprises an image acquisition device and a display device, and the flow control method comprises the following steps:
controlling the image acquisition device to record the manufacturing, assembling and/or overhauling processes of the first part;
acquiring a first image acquired by the image acquisition device;
identifying a current operation procedure by using an image identification model based on the first image;
and controlling the display device to display corresponding guide information based on the operation process.
10. A wearable device comprising an image acquisition device, a display device, and a processor, the processor configured to:
controlling the image acquisition device to record the manufacturing, assembling and/or overhauling processes of the first part;
acquiring a first image acquired by the image acquisition device;
identifying a current operation procedure by using an image identification model based on the first image;
and controlling the display device to display corresponding guide information based on the operation process.
CN202110358518.2A 2021-04-01 2021-04-01 Flow control system and method based on wearable device Pending CN113052561A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110358518.2A CN113052561A (en) 2021-04-01 2021-04-01 Flow control system and method based on wearable device

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202110358518.2A CN113052561A (en) 2021-04-01 2021-04-01 Flow control system and method based on wearable device

Publications (1)

Publication Number Publication Date
CN113052561A true CN113052561A (en) 2021-06-29

Family

ID=76517237

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110358518.2A Pending CN113052561A (en) 2021-04-01 2021-04-01 Flow control system and method based on wearable device

Country Status (1)

Country Link
CN (1) CN113052561A (en)

Patent Citations (21)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103019559A (en) * 2012-11-27 2013-04-03 海信集团有限公司 Gesture control projection display device and control method thereof
CN104484037A (en) * 2014-12-12 2015-04-01 三星电子(中国)研发中心 Method for intelligent control by virtue of wearable device and wearable device
WO2017093439A1 (en) * 2015-12-02 2017-06-08 Michon Cédric Device for voice control of an image capture apparatus
CN107590252A (en) * 2017-09-19 2018-01-16 百度在线网络技术(北京)有限公司 Method and device for information exchange
CN109241986A (en) * 2018-05-30 2019-01-18 北京飞鸿云际科技有限公司 For rail traffic vehicles part diagram as the sample production method of identification model
DE102018214307A1 (en) * 2018-08-23 2020-02-27 Friedrich-Alexander-Universität Erlangen-Nürnberg System and method for quality inspection in the manufacture of individual parts
CN109542228A (en) * 2018-11-22 2019-03-29 青岛理工大学 A kind of intelligent information record system and method based on wearable device
CN109766872A (en) * 2019-01-31 2019-05-17 广州视源电子科技股份有限公司 Image-recognizing method and device
CN109919331A (en) * 2019-02-15 2019-06-21 华南理工大学 A kind of airborne equipment intelligent maintaining auxiliary system and method
CN111744170A (en) * 2019-03-27 2020-10-09 广东小天才科技有限公司 Game control method based on wearable device and wearable device
CN110751215A (en) * 2019-10-21 2020-02-04 腾讯科技(深圳)有限公司 Image identification method, device, equipment, system and medium
CN111353470A (en) * 2020-03-13 2020-06-30 北京字节跳动网络技术有限公司 Image processing method and device, readable medium and electronic equipment
CN111428373A (en) * 2020-03-30 2020-07-17 苏州惟信易量智能科技有限公司 Product assembly quality detection method, device, equipment and storage medium
CN111428374A (en) * 2020-03-30 2020-07-17 苏州惟信易量智能科技有限公司 Part defect detection method, device, equipment and storage medium
CN111476284A (en) * 2020-04-01 2020-07-31 网易(杭州)网络有限公司 Image recognition model training method, image recognition model training device, image recognition method, image recognition device and electronic equipment
CN111931835A (en) * 2020-07-31 2020-11-13 中国工商银行股份有限公司 Image identification method, device and system
CN112085223A (en) * 2020-08-04 2020-12-15 深圳市新辉煌智能科技有限责任公司 Guidance system and method for mechanical maintenance
CN112101181A (en) * 2020-09-10 2020-12-18 湖北烽火平安智能消防科技有限公司 Automatic hidden danger scene recognition method and system based on deep learning
CN112164396A (en) * 2020-09-28 2021-01-01 北京百度网讯科技有限公司 Voice control method and device, electronic equipment and storage medium
CN112198965A (en) * 2020-12-04 2021-01-08 宁波圻亿科技有限公司 AR (augmented reality) glasses eye protection automatic control method and device
CN112488218A (en) * 2020-12-04 2021-03-12 北京金山云网络技术有限公司 Image classification method, and training method and device of image classification model

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
National Enterprise Management Modernization Innovation Achievement Appraisal Committee; China Enterprise Confederation Management Modernization Work Committee (eds.): "National-Level Enterprise Management Innovation Achievements 2011, Part 1, 17th Session", vol. 1, Tianjin Science and Technology Press, pages: 677 - 51 *

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114385291A (en) * 2021-12-29 2022-04-22 南京财经大学 Standard workflow guiding method and device based on plug-in transparent display screen

Similar Documents

Publication Publication Date Title
US10074402B2 (en) Recording and providing for display images of events associated with power equipment
US20170156586A1 (en) Head-mounted display for performing ophthalmic examinations
JP6165362B1 (en) Display system, display device, display method, and program
US20210224752A1 (en) Work support system and work support method
CN108713223B (en) System and method for providing welding training
US20170199543A1 Glass-type terminal and method of controlling the same
JP6780767B2 (en) Inspection support device, inspection support method and program
JP6323202B2 (en) System, method and program for acquiring video
KR20160050755A (en) Electronic Device and Method for Recognizing Iris by the same
CN111783640A (en) Detection method, device, equipment and storage medium
CN113052561A (en) Flow control system and method based on wearable device
JP6319951B2 (en) Railway simulator, pointing motion detection method, and railway simulation method
CN110458108B (en) Manual operation real-time monitoring method, system, terminal equipment and storage medium
US11336866B2 (en) Aircraft inspection support device and aircraft inspection support method
CN113034113A (en) Flow control system and method based on wearable device
CN113034114A (en) Flow control system and method based on wearable device
CN111052127A (en) System and method for fatigue detection
JP6355146B1 (en) Medical safety system
TWI622901B (en) Gaze detection apparatus using reference frames in media and related method and computer readable storage medium
CN115562490A (en) Cross-screen eye movement interaction method and system for aircraft cockpit based on deep learning
US20220070365A1 (en) Mixed reality image capture and smart inspection
CN106821300A (en) Color vision detection method and system
KR102132294B1 (en) Method for analyzing virtual reality content information in virtual reality and evaluation terminal adopting the same
EP3432204B1 (en) Telepresence framework for region of interest marking using headmount devices
WO2019018577A1 (en) Systems and methods for analyzing behavior of a human subject

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
CB03 Change of inventor or designer information
Inventor after: Wang Sen
Inventor before: Wang Sen
Inventor before: Li Zhichao