CN115019394A - Process tracking method, device, equipment and storage medium based on equipment maintenance - Google Patents
Process tracking method, device, equipment and storage medium based on equipment maintenance
- Publication number
- CN115019394A (application CN202210652316.3A)
- Authority
- CN
- China
- Prior art keywords
- component
- maintenance
- personnel
- related data
- relative position
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/20—Movements or behaviour, e.g. gesture recognition
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06Q—INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
- G06Q10/00—Administration; Management
- G06Q10/20—Administration of product repair or maintenance
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/50—Depth or shape recovery
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/70—Determining position or orientation of objects or cameras
- G06T7/73—Determining position or orientation of objects or cameras using feature-based methods
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/40—Extraction of image or video features
- G06V10/46—Descriptors for shape, contour or point-related descriptors, e.g. scale invariant feature transform [SIFT] or bags of words [BoW]; Salient regional features
- G06V10/462—Salient features, e.g. scale invariant feature transforms [SIFT]
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/70—Arrangements for image or video recognition or understanding using pattern recognition or machine learning
- G06V10/764—Arrangements for image or video recognition or understanding using pattern recognition or machine learning using classification, e.g. of video objects
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/70—Arrangements for image or video recognition or understanding using pattern recognition or machine learning
- G06V10/77—Processing image or video features in feature spaces; using data integration or data reduction, e.g. principal component analysis [PCA] or independent component analysis [ICA] or self-organising maps [SOM]; Blind source separation
- G06V10/774—Generating sets of training patterns; Bootstrap methods, e.g. bagging or boosting
-
- Y—GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
- Y02—TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
- Y02P—CLIMATE CHANGE MITIGATION TECHNOLOGIES IN THE PRODUCTION OR PROCESSING OF GOODS
- Y02P90/00—Enabling technologies with a potential contribution to greenhouse gas [GHG] emissions mitigation
- Y02P90/80—Management or planning
Abstract
The application discloses a process tracking method, apparatus, device and storage medium based on equipment maintenance. The method comprises the following steps: acquiring personnel related data of a maintainer in a maintenance scene, wherein the personnel related data comprise the posture category of the maintainer and a first spatial relative position of the maintainer; acquiring component related data in the maintenance scene, wherein the component related data comprise a component category and a second spatial relative position of the component corresponding to the component category; determining local behavior information of the maintainer on the component according to the personnel related data and the component related data; and tracking the maintenance process of the maintenance scene based on the local behavior information. Whole-flow monitoring of the maintenance process is thereby realized, and the work supervision of maintenance personnel is improved, so that the safe operation of the equipment is better guaranteed.
Description
Technical Field
The present application relates to the field of data processing technologies, and in particular, to a process tracking method based on device maintenance, a process tracking apparatus based on device maintenance, an electronic device, and a computer-readable storage medium.
Background
Technical management measures performed according to a predetermined plan, or according to the technical conditions of a predetermined specification, in order to prevent deterioration of equipment performance or reduce the probability of equipment failure, are called equipment maintenance; equipment maintenance is a combination of equipment repair and upkeep. It is the basis of the normal operation of equipment and an effective measure for preventing accidents.
Currently, in a common maintenance mode, a maintenance company schedules maintenance personnel to perform maintenance or repair operations according to the maintenance period of each device or in response to a repair notice. However, the working process of the maintenance personnel lacks monitoring, so supervision problems such as incomplete maintenance, untimely repair and poorly completed maintenance procedures easily occur.
Disclosure of Invention
The application provides a process tracking method, apparatus, device and storage medium based on equipment maintenance, which are used to solve supervision problems such as incomplete maintenance, untimely emergency repair and poorly completed maintenance procedures that easily arise from the lack of monitoring in the existing equipment maintenance process.
According to a first aspect of the present application, there is provided a process tracking method based on device maintenance, the method comprising:
acquiring personnel related data of a maintainer in a maintenance scene, wherein the personnel related data comprise the posture category of the maintainer and a first spatial relative position of the maintainer;
acquiring related data of the components in the maintenance scene, wherein the related data of the components comprise component types and second space relative positions of the components corresponding to the component types;
determining local behavior information of the maintenance personnel on the component according to the personnel related data and the component related data;
and tracking the maintenance process of the maintenance scene based on the local behavior information.
According to a second aspect of the present application, there is provided a process tracking apparatus based on equipment maintenance, the apparatus comprising:
the personnel data acquisition module is used for acquiring personnel related data of maintenance personnel in a maintenance scene, wherein the personnel related data comprise the posture category of the maintenance personnel and the first spatial relative position of the maintenance personnel;
the component data acquisition module is used for acquiring component related data in the maintenance scene, wherein the component related data comprises a component type and a second spatial relative position where a component corresponding to the component type is located;
the behavior information determining module is used for determining the local behavior information of the maintenance personnel on the component according to the personnel related data and the component related data;
and the process tracking module is used for tracking the maintenance process of the maintenance scene based on the local behavior information.
According to a third aspect of the present application, there is provided an electronic device comprising:
at least one processor; and
a memory communicatively coupled to the at least one processor; wherein,
the memory stores a computer program executable by the at least one processor, the computer program being executable by the at least one processor to enable the at least one processor to perform the method of any of the embodiments of the application.
According to a fourth aspect of the present application, there is provided a computer-readable storage medium having stored thereon computer instructions for causing a processor to perform the method according to any of the embodiments of the present application when executed.
In this embodiment, the local behavior information of the maintenance personnel on the component is automatically determined by acquiring the personnel related data of the maintenance personnel and the component related data of the component to be maintained in the maintenance scene, and maintenance process tracking is then performed on the maintenance scene according to the local behavior information. The whole process realizes full-flow monitoring of the maintenance process through data continuous in the time and space domains, and improves the work supervision of maintenance personnel, thereby better guaranteeing the safe operation of the equipment.
It should be understood that the statements in this section are not intended to identify key or critical features of the embodiments of the present application, nor are they intended to limit the scope of the present application. Other features of the present application will become apparent from the following description.
Drawings
In order to more clearly illustrate the technical solutions in the embodiments of the present application, the drawings needed in the description of the embodiments are briefly introduced below. It is obvious that the drawings in the following description show only some embodiments of the present application, and that other drawings can be obtained by those skilled in the art based on these drawings without creative effort.
FIG. 1 is a flowchart of a process tracking method based on device maintenance according to an embodiment of the present disclosure;
FIG. 2 is a schematic diagram of a maintenance scenario provided in an embodiment of the present application;
FIG. 3 is a schematic structural diagram of a process tracking apparatus based on equipment maintenance according to a second embodiment of the present application;
fig. 4 is a schematic structural diagram of an electronic device according to a third embodiment of the present application.
Detailed Description
In order to make the technical solutions better understood by those skilled in the art, the technical solutions in the embodiments of the present application will be clearly and completely described below with reference to the drawings in the embodiments of the present application, and it is obvious that the described embodiments are only partial embodiments of the present application, but not all embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present application.
It should be noted that the terms "first," "second," and the like in the description and claims of this application and in the accompanying drawings are used for distinguishing between similar elements and not necessarily for describing a particular sequential or chronological order. It is to be understood that the data so used is interchangeable under appropriate circumstances such that the embodiments of the application described herein are capable of operation in sequences other than those illustrated or described herein. Furthermore, the terms "comprises," "comprising," and "having," and any variations thereof, are intended to cover a non-exclusive inclusion, such that a process, method, system, article, or apparatus that comprises a list of steps or elements is not necessarily limited to those steps or elements expressly listed, but may include other steps or elements not expressly listed or inherent to such process, method, article, or apparatus.
Example one
Fig. 1 is a flowchart of a process tracking method based on device maintenance according to an embodiment of the present disclosure, where this embodiment may be applied to a process tracking apparatus based on device maintenance, where the apparatus may be disposed in a server, a management background, or a local device provided with a processor, and the present embodiment does not limit a device in which the apparatus is disposed.
The embodiment can be applied to scenes in which the process of equipment maintenance performed by a maintainer is tracked and monitored. The maintenance scenario is not limited in this embodiment: any scenario in which maintenance personnel need to perform regular maintenance, inspection or repair on a device falls within the protection scope of this embodiment. For example, the maintenance scenario may be an elevator maintenance scenario, in which case the corresponding device is the elevator equipment and the corresponding components are those involved in the operation of the elevator. For another example, the maintenance scenario may be a fire fighting equipment maintenance scenario, in which case the corresponding equipment is the fire fighting equipment and the corresponding components are those comprised in the fire fighting equipment.
As shown in fig. 1, the method may include the steps of:
and step 110, acquiring personnel related data of maintenance personnel in a maintenance scene.
The method mainly comprises the step of obtaining personnel related data of maintenance personnel in a maintenance scene. The personnel related data may include data generated by maintenance personnel in a maintenance scenario. The personnel-related data are recorded and can be used for subsequent process tracking of the maintenance process. By way of example, the people-related data may include, but is not limited to: the method comprises the steps of classifying postures of maintenance personnel, locating a first space relative position of the maintenance personnel and collecting some related data of the maintenance personnel by a sensor arranged in a maintenance scene.
In one embodiment, if the person-related data includes a gesture category, step 110 may further include the steps of:
step 110-1, a first image data sequence comprising the maintenance personnel is acquired.
In implementation, the first image data sequence may be acquired by a first acquisition device disposed in the maintenance scene. The first acquisition device may be a movable device: when the maintenance personnel arrive at the maintenance site, they may find an erection position where the first acquisition device can be set up, and erect it there. The erection position should enable the first acquisition device to capture information on the whole maintenance scene, or at least the maintenance operations of the maintenance personnel. The number of maintenance personnel may be one or more.
In one implementation, the first capture device may include a variety of sensors therein, which illustratively may include an image sensor, a depth sensor, a three-axis sensor, an electronic compass, and the like. Each of the first image data in the first image data sequence may be image data acquired by an image sensor, and the first image data may include at least maintenance personnel, and in other examples, may also include some or all of the components that the maintenance personnel are maintaining.
Step 110-3, extracting a first area image where the maintenance personnel are located from each first image data.
In one implementation, a pre-trained human target detection network may be employed to detect a rectangular box around the maintenance personnel in each first image, the rectangular box being representable with position information (x1, y1, w1, h1), where (x1, y1) are the coordinates of the upper left corner of the rectangular box and (w1, h1) are its width and height, respectively. After the rectangular frame of each first image data is obtained, the region inside it may be cut out as the first area image representing the maintenance person. During cropping, the image can be cut along the rectangular frame boundary, or the boundary can first be expanded outwards by a preset pixel distance.
The present embodiment does not limit the target detection algorithm used by the human target detection network.
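By way of illustration only, the following Python sketch shows the cropping described above: a detected (x, y, w, h) rectangle is cut out with an optional outward padding clipped to the image bounds. The function and its signature are assumptions for illustration, not part of the embodiment, and the detection network producing the box is left open.

```python
import numpy as np

def crop_region(image: np.ndarray, box, pad: int = 0) -> np.ndarray:
    """Cut out an (x, y, w, h) rectangle, optionally expanded outwards
    by `pad` pixels and clipped to the image bounds."""
    x, y, w, h = box
    img_h, img_w = image.shape[:2]
    x0, y0 = max(x - pad, 0), max(y - pad, 0)
    x1, y1 = min(x + w + pad, img_w), min(y + h + pad, img_h)
    return image[y0:y1, x0:x1]
```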
Step 110-5, performing posture recognition on the continuous first area image sequence to determine the posture category of the maintenance personnel.
After the first area images corresponding to the first image data are obtained, the first area images of the continuous time sequence form a first area image sequence, and then posture recognition is carried out on the first area image sequence. In one implementation, a pre-generated human body posture estimation network may be used to perform posture recognition on the first region image sequence, so as to obtain a posture category of the maintenance personnel. Wherein the gesture categories may illustratively include, but are not limited to: standing, lying down, half lying, bending waist, stretching hands, pulling, etc.
In one implementation, the human body posture estimation network may be generated as follows. A human body posture data set under the current maintenance scene is collected to train an AlphaPose posture estimation network; through training on this data set, the AlphaPose network learns the heat-map distribution of the human joint points, obtains the numerical coordinates of the joint points within the human body frame by regression over the heat maps, and maps them back to the input image, finally yielding the coordinates of the human joint points in the pixel coordinate system of the input image. The joint points of the human body may include, for example: nose, neck, left shoulder, left elbow, left wrist, right shoulder, right elbow, right wrist, spine, left hip, left knee, left ankle, right hip, right knee and right ankle. A human body 3D posture reconstruction network is then used to reconstruct the 2D human postures of the first region image sequence into 3D numerical coordinates of the human joint points: a 3D reconstruction model (3D-baseline) is trained on the 3D human posture data set Human3.6M with 2D human postures as input, so that the estimated 2D coordinates of the 15 joint points are taken as the model input and their estimated 3D coordinates are output. Finally, a GCN network model is used to extract the spatio-temporal relationship of the human posture from the sequence of 3D joint coordinates, so as to recognize the human posture in the current maintenance environment; this posture estimation model is trained on the 3D human posture behavior data set Human3.6M with 3D human postures as input.
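Read as a pipeline, the above amounts to three stages. The following Python sketch only makes that structure explicit under stated assumptions: `alphapose_2d`, `baseline_2d_to_3d` and `gcn_classifier` are hypothetical stand-ins for the trained AlphaPose, 3D-baseline and GCN models, and their names and signatures are not specified by this embodiment.

```python
from typing import Callable, Sequence
import numpy as np

def recognize_posture(region_images: Sequence[np.ndarray],
                      alphapose_2d: Callable,
                      baseline_2d_to_3d: Callable,
                      gcn_classifier: Callable) -> str:
    # Stage 1: 2D joint estimation -- one (15, 2) array of pixel
    # coordinates per cropped frame (AlphaPose in the embodiment).
    joints_2d = [alphapose_2d(img) for img in region_images]
    # Stage 2: lift each frame to 3D -- one (15, 3) array per frame
    # (the 3D-baseline model in the embodiment).
    joints_3d = [baseline_2d_to_3d(j) for j in joints_2d]
    # Stage 3: the GCN consumes the (T, 15, 3) spatio-temporal sequence
    # and returns a posture category such as "standing" or "bending".
    return gcn_classifier(np.stack(joints_3d))
```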
In another embodiment, the personnel related data may include the distance and angle between the maintenance person and the first acquisition device; that is, the personnel related data include the first spatial relative position, namely the spatial relative position between the maintenance person and the first acquisition device. In implementation, a visual positioning algorithm may be used to calculate the first spatial relative position. Specifically, step 110 may further include the following steps:
and step 110-2, determining key points of the maintenance personnel from the first area image.
In one implementation, the first region image may be input into the human body posture estimation network, and key points of the maintenance personnel output by the network are obtained, and the key points may exemplarily include key points capable of reflecting the human body posture, such as head key points, limb key points, torso key points, and the like.
Step 110-4, acquiring a first depth image acquired by the first acquisition device, and performing pixel matching between the first area image and the first depth image to determine the three-dimensional coordinates of each key point in the camera coordinate system.
In one implementation, the depth information depth of each key point may be determined from the first depth image by pixel-point matching. Assuming that the pixel coordinates of a key point in the first area image are (u, v), the three-dimensional coordinates of the key point in the camera coordinate system can be calculated by combining the focal length f and the principal point coordinates (c_x, c_y) of the first acquisition device.

In one implementation, the following formulas may be used to calculate the three-dimensional coordinates (X_c, Y_c, Z_c) of each key point in the camera coordinate system:

X_c = (u - c_x) · depth / f

Y_c = (v - c_y) · depth / f

Z_c = depth
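A small self-contained sketch of this back-projection in Python (the standard pinhole model with a single focal length f, as the quantities above suggest):

```python
def backproject(u: float, v: float, depth: float,
                f: float, cx: float, cy: float):
    """Pinhole back-projection of pixel (u, v) with known depth into
    camera coordinates (X_c, Y_c, Z_c)."""
    x_c = (u - cx) * depth / f
    y_c = (v - cy) * depth / f
    z_c = depth
    return x_c, y_c, z_c
```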
Step 110-6, acquiring first angle information of the first acquisition device, and converting the three-dimensional coordinates of each key point in the camera coordinate system into three-dimensional coordinates in a world coordinate system according to the first angle information, wherein the world coordinate system is constructed by taking the position of the first acquisition device projected onto the ground as the origin.
Exemplarily, the first angle information may include at least an attitude angle and a direction angle. In implementation, the attitude angle of the first acquisition device can be acquired through a three-axis sensor on the device, and its direction angle through an electronic compass on the device.
In implementation, a coordinate transformation matrix can be constructed according to the first angle information, and the three-dimensional coordinates (X_c, Y_c, Z_c) of each key point in the camera coordinate system can be converted by this matrix into three-dimensional coordinates (X_w, Y_w, Z_w) in the world coordinate system.

The origin of the world coordinate system is the position of the first acquisition device projected onto the ground, the z-axis is perpendicular to the ground and points upward, and the x-axis and y-axis lie on the ground plane. Z_w then represents the height of the key point above the ground.
The present embodiment does not limit the manner of constructing the coordinate transformation matrix from the first angle information; any existing construction manner may be used.
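Since the construction is left open, the following Python sketch shows just one possible choice, assuming a Z-Y-X Euler convention (the direction angle as yaw about the vertical axis, the attitude angles as pitch and roll) and a known mounting height of the device above its ground projection; all of these conventions are assumptions.

```python
import numpy as np

def camera_to_world(p_cam: np.ndarray, yaw: float, pitch: float,
                    roll: float, device_height: float) -> np.ndarray:
    """Map a point from camera to world coordinates. The Z-Y-X Euler
    order, angle meanings and `device_height` are assumptions."""
    cy, sy = np.cos(yaw), np.sin(yaw)
    cp, sp = np.cos(pitch), np.sin(pitch)
    cr, sr = np.cos(roll), np.sin(roll)
    Rz = np.array([[cy, -sy, 0.0], [sy, cy, 0.0], [0.0, 0.0, 1.0]])  # direction angle
    Ry = np.array([[cp, 0.0, sp], [0.0, 1.0, 0.0], [-sp, 0.0, cp]])  # attitude: pitch
    Rx = np.array([[1.0, 0.0, 0.0], [0.0, cr, -sr], [0.0, sr, cr]])  # attitude: roll
    R = Rz @ Ry @ Rx
    t = np.array([0.0, 0.0, device_height])  # origin: device footprint on the ground
    return R @ p_cam + t
```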
Step 110-8, calculating the distance and angle between each key point and the first acquisition device according to the three-dimensional coordinates of each key point in the world coordinate system, to serve as the first spatial relative position.

After the three-dimensional coordinates of each key point in the world coordinate system are determined, the three-dimensional coordinates of a reference point of the first acquisition device (e.g., the intermediate position between the image sensor and the depth sensor) can also be determined in the world coordinate system. The distance between each key point of the maintenance person and the first acquisition device is then calculated through a distance calculation formula, and the angle between them through an angle calculation formula. The distance and angle are taken as the first spatial relative position.

In addition, the height information of each key point (its value on the z-axis of the world coordinate system) may also be included in the first spatial relative position.
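A compact sketch of these distance and angle computations; the embodiment does not fix the exact formulas, so the Euclidean distance and horizontal azimuth used here are assumptions.

```python
import numpy as np

def relative_position(keypoint_w: np.ndarray, device_w: np.ndarray):
    """Distance, horizontal azimuth and ground clearance of a key point
    relative to the acquisition device, both in world coordinates."""
    diff = keypoint_w - device_w
    distance = float(np.linalg.norm(diff))
    angle = float(np.degrees(np.arctan2(diff[1], diff[0])))  # azimuth in the x-y plane
    height = float(keypoint_w[2])  # Z_w, height above the ground
    return distance, angle, height
```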
Step 120, acquiring component related data of the components in the maintenance scene.

This step mainly obtains component related data for the components in the maintenance scene. The component related data may include data related to the parts of the devices in the maintenance scene; they are recorded and can be used for subsequent process tracking of the maintenance process. Illustratively, the component related data may include, but are not limited to: the component category, the second spatial relative position of the component corresponding to that category, and other related data of the component collected by sensors arranged in the maintenance scene.
In one embodiment, if the component-related data includes a component category, step 120 may further include the steps of:
step 120-1, a second sequence of image data is acquired that includes the component.
In implementation, the second image data sequence may be acquired by a second acquisition device disposed in the maintenance scene. The second acquisition device may be a wearable device, worn by the maintenance personnel so that images of the components can be acquired once they arrive at the maintenance site. The wearing position should enable the second acquisition device to capture complete information of the component. For example, as shown in fig. 2, the second acquisition device 20 may be worn on the head of the maintenance person, 10 being the first acquisition device and 30 being the component.

In one implementation, the second acquisition device may also include various sensors, which may illustratively include an image sensor, a depth sensor, a three-axis sensor, an electronic compass, and the like. Each second image data in the second image data sequence may be image data acquired by the image sensor, and may include at least the components to be maintained, preferably the one component currently being maintained.
Step 120-3, extracting a second area image where the part is located from each second image data.
In one implementation, a pre-trained component target detection network may be employed to detect a rectangular box around the component in each second image data, the box being representable with position information (x2, y2, w2, h2), where (x2, y2) are the coordinates of the upper left corner of the rectangular box and (w2, h2) are its width and height, respectively. After the one or more rectangular frames of each second image data are obtained, each rectangular frame may be cut out as a second area image representing a part. During cropping, the image can be cut along the rectangular frame boundary, or the boundary can first be expanded outwards by a preset pixel distance.
The present embodiment does not limit the target detection algorithm used by the component target detection network.
In actual processing, more than one component may appear in the second image; it is then necessary to determine which component is currently to be processed, so as to screen out the second area image corresponding to it. Various screening methods may be used, including but not limited to: filtering out incomplete parts according to a completeness detection of the parts; or acquiring the depth information of each part and taking the part in the foreground as the current part to be processed; or, according to the frequency of the components across the consecutive image frames, taking the component that appears most frequently as the current component to be processed.
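An illustrative combination of these three screening strategies is sketched below; the per-detection fields ("complete", "depth", "category") are hypothetical, chosen only to make the heuristics concrete.

```python
from collections import Counter

def pick_current_component(frames):
    """frames: per-frame lists of detections, each a dict with the
    hypothetical fields "complete", "depth" and "category"."""
    votes = Counter()
    for detections in frames:
        complete = [d for d in detections if d["complete"]]  # strategy 1: drop incomplete parts
        if not complete:
            continue
        nearest = min(complete, key=lambda d: d["depth"])    # strategy 2: foreground part
        votes[nearest["category"]] += 1
    # strategy 3: the category seen most often across consecutive frames
    return votes.most_common(1)[0][0] if votes else None
```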
Step 120-5, performing component identification on the sequence of consecutive second region images to determine the component category of the component.

After the second area images corresponding to the current part to be processed in each second image data are obtained, the second area images of the continuous time sequence form a second area image sequence, on which component category identification is then performed. In one implementation, a pre-generated component category identification network may be used to identify what type of component the current component is. For example, in an elevator maintenance scenario, component categories may illustratively include, but are not limited to: wire rope, band-type brake, overspeed governor, etc.
The present embodiment does not limit the training mode of the component category identification network. In one implementation, the sample set used to train the network may consist of the individual components present in the current maintenance scene: each component sample is labeled with its component category in advance, feature point information is extracted from each sample, and a preset classification algorithm is used for training, yielding the component category identification network.

In another embodiment, the component related data may also include a component state, which may illustratively include the working state, the structural integrity of the component, and the like. The working state describes whether the component is currently working or not working, or which working mode it is in. The structural integrity describes whether every sub-component of the current component is present, i.e., whether any sub-component is missing. Step 120 may then include the following step:
and detecting the component state of the second area image sequence to determine the component state of the component.
In one implementation, component state detection logic may be preset, and the component state is determined by performing state detection on the current component according to this logic. For example, for the working state, reference images of the current component in the working state and in the non-working state may be queried according to its component category; the currently acquired second area image is then compared with both reference images to determine whether the component in the second area image is working or not working. For another example, for the structural integrity of the component, a reference image of the structurally complete component may be queried according to the component category and compared with the currently acquired second area image; if the two are consistent, the current component is complete, otherwise it is incomplete.
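One way such reference-comparison logic could look in code is sketched below; the `embed` feature extractor and the similarity criterion are assumptions, since the embodiment only requires comparing against reference images and does not fix any particular similarity measure.

```python
import numpy as np

def detect_working_state(region: np.ndarray, ref_working: np.ndarray,
                         ref_idle: np.ndarray, embed) -> str:
    """Compare the current region against the two reference images;
    `embed` is a hypothetical feature extractor returning a unit vector."""
    e = embed(region)
    sim_working = float(np.dot(e, embed(ref_working)))
    sim_idle = float(np.dot(e, embed(ref_idle)))
    return "working" if sim_working >= sim_idle else "non-working"
```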
In yet another embodiment, the part-related data may further include part quality, including whether the part surface is defective. Step 120 may include the steps of: performing a component quality inspection on the second sequence of region images to determine a component quality of the component.
In one implementation, component quality detection logic may be preset, and the current component is quality-tested according to this logic to determine the component quality. For example, the component quality detection logic may be a component quality detection model: defects relatively likely to occur on the surface of each component category in the current maintenance scene are collected in advance as sample data to train the quality detection model. Each second area image in the second area image sequence is then input into the quality detection model, which detects whether corresponding defects exist in the images and outputs a quality result. The quality result may, for example, be the defect category.
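A minimal inference sketch over the image sequence, assuming a hypothetical `quality_model` that returns a defect category or None per frame; the majority vote over frames is likewise an assumption.

```python
from collections import Counter

def inspect_quality(region_images, quality_model):
    """`quality_model` (hypothetical) returns a defect category or None
    per frame; frames vote on the final quality result."""
    defects = [d for d in (quality_model(img) for img in region_images)
               if d is not None]
    return Counter(defects).most_common(1)[0][0] if defects else "no defect"
```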
In yet another embodiment, if the component-related data includes the second spatial relative position, step 120 may include the steps of:
step 120-2, determining key points of the component from the second region image.
Wherein, the key points of the component may include: pixel points representing shape or contour features of the part. In implementation, a component keypoint extraction model may be employed to determine keypoints of the component from the second region image.
Step 120-4, acquiring a second depth image acquired by the second acquisition device, and performing pixel matching between the second area image and the second depth image to determine the three-dimensional coordinates of each key point of the component in the camera coordinate system.

Step 120-6, acquiring second angle information of the second acquisition device, and converting the three-dimensional coordinates of each key point of the component in the camera coordinate system into three-dimensional coordinates in a world coordinate system according to the second angle information, wherein the world coordinate system is constructed by taking the position of the second acquisition device projected onto the ground as the origin.

Step 120-8, calculating the distance and angle between each key point and the second acquisition device, according to the three-dimensional coordinates of each key point of the component in the world coordinate system, to serve as the second spatial relative position.
The second spatial relative position refers to the relative position of the component and the second acquisition device in the scene. Its calculation is similar to that of the first spatial relative position, to which reference may be made; it is not described in detail again in this embodiment.
Step 130, determining local behavior information of the maintenance personnel on the component according to the personnel related data and the component related data.

The local behavior information may describe the behavior of a maintenance person performing maintenance on a certain component during a certain time period. For example, in an elevator maintenance scenario, the local behavior information may include, but is not limited to: measuring part size, screwing, pressing a button, measuring electrical properties, etc.
In one embodiment, step 130 may further include the steps of:
step 130-1, determining the relative position relationship between the maintenance personnel and the component according to the first space relative position and the second space relative position.
Specifically, since the second acquisition device is worn by the maintenance person, once the angle and distance between the component and the second acquisition device are determined from the second spatial relative position, they can be used as the angle and distance between the component and the maintenance person.

In addition, the first spatial relative position may include the ground clearance of the key points of the maintenance person, and the second spatial relative position the ground clearance of each key point of the component. Comparing the maximum ground clearance of the maintenance person with that of the current component, and the minimum ground clearance of the maintenance person with that of the current component, allows determining the vertical orientation of the component relative to the maintenance person, e.g. whether the component is above or below the maintenance person.
The distance, angle, vertical orientation and the like between the component and the maintenance person can together be referred to as the relative position relationship between the maintenance person and the component.

Step 130-2, generating information to be matched according to the posture category, the component category and the relative position relationship, and matching the information to be matched against a plurality of pre-generated local behavior templates, taking the successfully matched local behavior template as the local behavior information, wherein each local behavior template comprises a template posture category, a template component category and a template relative position relationship.
After information such as the posture category of the maintenance person, the component category of the component being operated on, and the relative position relationship between the maintenance person and the component has been obtained for a certain time period, this information can serve as the information to be matched, which is then quickly matched against the plurality of pre-generated local behavior templates, the successfully matched template being taken as the local behavior information for the current information to be matched. Specifically, each local behavior template includes a template posture category, a template component category and a template relative position relationship. When there is a local behavior template whose template posture category is the same as the current posture category, whose template component category is the same as the current component category, and whose template relative position relationship is the same as or similar to the current relative position relationship, that template can be judged to be the successfully matched local behavior template.
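A hedged sketch of this matching rule: the categories must agree exactly, while the relative position only needs to fall within per-template tolerances. The tolerance values and the dict-based template representation are assumptions for illustration.

```python
def match_local_behavior(posture, component, rel_pos, templates,
                         dist_tol=0.3, angle_tol=15.0):
    """rel_pos: dict with "distance" and "angle"; each template: dict with
    "posture", "component", "distance", "angle" and "behavior"."""
    for t in templates:
        if t["posture"] != posture or t["component"] != component:
            continue  # categories must match exactly
        if (abs(rel_pos["distance"] - t["distance"]) <= dist_tol and
                abs(rel_pos["angle"] - t["angle"]) <= angle_tol):
            return t["behavior"]  # e.g. "measuring part size"
    return None
```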
Step 140, tracking the maintenance process of the maintenance scene based on the local behavior information.
After the local behavior information is obtained, it may be recorded, so as to facilitate subsequent maintenance process tracking of the maintenance scene based on the local behavior information.
In one embodiment, step 140 may further include the steps of:
acquiring one or more pieces of local behavior information executed by the maintenance personnel in the maintenance scene, and organizing the one or more pieces of local behavior information into a local behavior list according to a time sequence; and matching the local behavior list with one or more process templates generated in advance, and taking the process template successfully matched as the process type corresponding to the local behavior list. For example, in an elevator maintenance scene, the process template may include an insulation resistance detection process, a brake holding detection process, a wire rope detection process, and the like.
In this step, a plurality of process templates, each comprising a plurality of process steps, may be set in advance. Each piece of local behavior information recorded in the current maintenance scene is then taken as a step, forming a local behavior list, i.e., a step list. The step list is compared with the step order of each process template. If there is a process template whose step sequence is identical or similar to that of the current local behavior list, the most similar or completely identical process template may be taken as the matched process template, and the process category corresponding to it as the process category of the current local behavior list.
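One possible realization of this step-list comparison uses a standard sequence similarity ratio; both the similarity measure and the acceptance threshold are assumptions, since the embodiment only requires an "identical or similar" step order.

```python
from difflib import SequenceMatcher

def match_process(behavior_list, process_templates, threshold=0.8):
    """behavior_list: time-ordered step names; process_templates: mapping
    of process category -> ordered list of template steps."""
    best, best_ratio = None, 0.0
    for category, steps in process_templates.items():
        ratio = SequenceMatcher(None, behavior_list, steps).ratio()
        if ratio > best_ratio:
            best, best_ratio = category, ratio
    return best if best_ratio >= threshold else None
```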
In one embodiment, after the process category is determined, an integrity check may be performed on the current process. Step 140 may further include the following steps:
judging, according to the local behavior list, whether all process steps in the successfully matched process template have been finished; if yes, the current process is judged to be completely finished; if not, the current process is judged to be unfinished. Prompt information is sent to the user according to the judgment result.
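A simple completeness check matching this description; the prompt mechanism is left open by the embodiment, so printing merely stands in for it here.

```python
def process_complete(behavior_list, template_steps):
    """True when every step of the matched process template appears in
    the recorded local behavior list."""
    missing = [s for s in template_steps if s not in behavior_list]
    if missing:
        print(f"Process incomplete, missing steps: {missing}")  # stand-in prompt
        return False
    return True
```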
In this embodiment, the local behavior information of the maintenance personnel on the component is automatically determined by acquiring the personnel related data of the maintenance personnel and the component related data of the component to be maintained in the maintenance scene, and maintenance process tracking is then performed on the maintenance scene according to the local behavior information. The whole process realizes full-flow monitoring of the maintenance process through data continuous in the time and space domains, and improves the work supervision of maintenance personnel, thereby better ensuring the safe operation of the equipment.
Example two
Fig. 3 is a schematic structural diagram of a process tracking apparatus based on equipment maintenance according to a second embodiment of the present application, which may include the following modules:
the personnel data acquisition module 210 is configured to acquire personnel related data of a maintenance worker in a maintenance scene, where the personnel related data includes a posture category of the maintenance worker and a first spatial relative position where the maintenance worker is located;
a component data obtaining module 220, configured to obtain component related data in the maintenance scene, where the component related data includes a component category and a second spatial relative position where a component corresponding to the component category is located;
a behavior information determining module 230, configured to determine local behavior information of the maintenance worker on the component according to the personnel related data and the component related data;
and a process tracking module 240, configured to perform maintenance process tracking on the maintenance scenario based on the local behavior information.
In an embodiment, the personnel data obtaining module 210 is specifically configured to:
acquiring a first image data sequence containing the maintenance personnel;
extracting a first area image where the maintenance personnel are located from each first image data;
and performing gesture recognition on the continuous first area image sequence to determine the gesture category of the maintenance personnel.
In another embodiment, the first image data is acquired by a first acquisition device disposed in the maintenance scene; the personnel data acquisition module 210 is further configured to:
determining key points of the maintenance personnel from the first area image;
acquiring a first depth image acquired by the first acquisition equipment, and performing pixel matching on the first area image and the first depth image to determine three-dimensional coordinates of each key point in a camera coordinate system;
acquiring first angle information of the first acquisition equipment, and converting three-dimensional coordinates of each key point in a camera coordinate system into three-dimensional coordinates in a world coordinate system according to the first angle information, wherein the world coordinate system is constructed by taking the position projected to the ground by the first acquisition equipment as an origin;
and calculating the distance and the angle between each key point and the first acquisition equipment according to the three-dimensional coordinates of each key point in the world coordinate system to serve as the relative position of the first space.
In one embodiment, the component data acquisition module 220 is specifically configured to:
acquiring a second sequence of image data comprising the component;
extracting a second area image where the part is located from each second image data;
component recognition is performed on the sequence of consecutive second region images to determine a component category of the component.
In another embodiment, the component-related data further includes a component status, and the component-data obtaining module 220 is further configured to:
and detecting the component state of the second area image sequence to determine the component state of the component, wherein the component state comprises an operating state.
In yet another embodiment, the component-related data further includes a component quality, and the component-data obtaining module 220 is further configured to:
performing a part quality inspection on the second sequence of region images to determine a part quality of the part, the part quality including whether a defect is present on the surface of the part.
In yet another embodiment, the second image data is acquired by a second acquisition device disposed in the maintenance scene; the component data acquisition module 220 is further configured to:
determining key points of the component from the second area image;
acquiring a second depth image acquired by second acquisition equipment, and performing pixel matching on the second area image and the second depth image to determine three-dimensional coordinates of each key point of the component in a camera coordinate system;
acquiring second angle information of the second acquisition equipment, and converting three-dimensional coordinates of each key point of the component in a camera coordinate system into three-dimensional coordinates in a world coordinate system according to the second angle information, wherein the world coordinate system is constructed by taking the position projected to the ground by the second acquisition equipment as an origin;
and calculating the distance and the angle between each key point and the second acquisition equipment according to the three-dimensional coordinates of each key point of the component in the world coordinate system to serve as the relative position of the second space.
In one embodiment, the behavior information determination module 230 may include the following sub-modules:
the relative position relation determining submodule is used for determining the relative position relation between the maintenance personnel and the component according to the first space relative position and the second space relative position;
and the local behavior template matching submodule is used for generating information to be matched according to the posture type, the component type and the relative position relationship, matching the information to be matched with a plurality of local behavior templates which are generated in advance, and taking the successfully matched local behavior template as the local behavior information of the information to be matched, wherein each local behavior template comprises a template posture type, a template component type and a template relative position relationship.
In one embodiment, the first spatial relative position comprises a distance and an angle between a first acquisition device disposed in a maintenance scenario and a maintenance person; the second spatial relative position comprises an angle and a distance between a second acquisition device arranged on the maintainer and the component;
the relative position relation determination submodule is specifically configured to:
and taking the distance and the angle between the component and the second acquisition equipment as the distance and the angle between the component and the maintenance personnel.
In one embodiment, the process tracking module 240 is specifically configured to:
acquiring one or more pieces of local behavior information executed by the maintenance personnel in the maintenance scene, and organizing the one or more pieces of local behavior information into a local behavior list according to a time sequence;
and matching the local behavior list with one or more process templates generated in advance, and taking the process template successfully matched as the process type corresponding to the local behavior list.
In another embodiment, the process tracking module 240 is further configured to:
judging whether all process steps in the successfully matched process template are finished or not according to the local behavior list;
if yes, judging that all the current process types are finished;
if not, the current process type is judged to be unfinished.
The process tracking device based on equipment maintenance provided by the embodiment of the application can execute the process tracking method based on equipment maintenance provided by any embodiment of the application, and has corresponding functional modules and beneficial effects of the execution method.
EXAMPLE III
Fig. 4 shows a schematic structural diagram of an electronic device 10 that may be used to implement method embodiments of the present application. Electronic devices are intended to represent various forms of digital computers, such as laptops, desktops, workstations, personal digital assistants, servers, blade servers, mainframes, and other appropriate computers. The electronic device may also represent various forms of mobile devices, such as personal digital assistants, cellular phones, smart phones, wearable devices (e.g., helmets, glasses, watches, etc.), and other similar computing devices. The components shown herein, their connections and relationships, and their functions, are meant to be examples only, and are not meant to limit implementations of the present application that are described and/or claimed herein.
As shown in fig. 4, the electronic device 10 includes at least one processor 11, and a memory communicatively connected to the at least one processor 11, such as a read-only memory (ROM) 12, a random access memory (RAM) 13, and the like. The memory stores a computer program executable by the at least one processor, and the processor 11 can perform various suitable actions and processes according to the computer program stored in the ROM 12 or loaded from the storage unit 18 into the RAM 13. In the RAM 13, various programs and data necessary for the operation of the electronic device 10 can also be stored. The processor 11, the ROM 12, and the RAM 13 are connected to each other via a bus 14. An input/output (I/O) interface 15 is also connected to the bus 14.
A number of components in the electronic device 10 are connected to the I/O interface 15, including: an input unit 16 such as a keyboard, a mouse, or the like; an output unit 17 such as various types of displays, speakers, and the like; a storage unit 18 such as a magnetic disk, an optical disk, or the like; and a communication unit 19 such as a network card, modem, wireless communication transceiver, etc. The communication unit 19 allows the electronic device 10 to exchange information/data with other devices via a computer network such as the internet and/or various telecommunication networks.
The processor 11 may be a variety of general and/or special purpose processing components having processing and computing capabilities. Some examples of processor 11 include, but are not limited to, a Central Processing Unit (CPU), a Graphics Processing Unit (GPU), various dedicated Artificial Intelligence (AI) computing chips, various processors running machine learning model algorithms, a Digital Signal Processor (DSP), and any suitable processor, controller, microcontroller, and so forth. The processor 11 performs the various methods and processes described above, such as the methods described in embodiment one or embodiment two.
In some embodiments, the method of embodiment one or embodiment two may be implemented as a computer program tangibly embodied in a computer-readable storage medium, such as storage unit 18. In some embodiments, part or all of the computer program may be loaded and/or installed onto the electronic device 10 via the ROM 12 and/or the communication unit 19. When the computer program is loaded into RAM 13 and executed by processor 11, one or more steps of the method described in embodiment one or embodiment two above may be performed. Alternatively, in other embodiments, the processor 11 may be configured by any other suitable means (e.g., by means of firmware) to perform the method described in embodiment one or embodiment two.
Various implementations of the systems and techniques described above may be realized in digital electronic circuitry, integrated circuitry, Field Programmable Gate Arrays (FPGAs), Application Specific Integrated Circuits (ASICs), Application Specific Standard Products (ASSPs), Systems on Chip (SOCs), Complex Programmable Logic Devices (CPLDs), computer hardware, firmware, software, and/or combinations thereof. These various embodiments may include implementation in one or more computer programs that are executable and/or interpretable on a programmable system including at least one programmable processor, which may be special-purpose or general-purpose, and which receives data and instructions from, and transmits data and instructions to, a storage system, at least one input device, and at least one output device.
A computer program for implementing the methods of the present application may be written in any combination of one or more programming languages. These computer programs may be provided to a processor of a general purpose computer, special purpose computer, or other programmable data processing apparatus, such that the computer programs, when executed by the processor, cause the functions/acts specified in the flowchart and/or block diagram block or blocks to be performed. A computer program can execute entirely on a machine; partly on a machine; as a stand-alone software package, partly on a machine and partly on a remote machine; or entirely on a remote machine or server.
In the context of this application, a computer readable storage medium may be a tangible medium that can contain, or store a computer program for use by or in connection with an instruction execution system, apparatus, or device. A computer readable storage medium may include, but is not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any suitable combination of the foregoing. Alternatively, the computer readable storage medium may be a machine readable signal medium. More specific examples of a machine-readable storage medium would include an electrical connection based on one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing.
To provide for interaction with a user, the systems and techniques described here can be implemented on an electronic device having: a display device (e.g., a CRT (cathode ray tube) or LCD (liquid crystal display) monitor) for displaying information to a user; and a keyboard and a pointing device (e.g., a mouse or a trackball) by which a user can provide input to the electronic device. Other kinds of devices may also be used to provide for interaction with a user; for example, feedback provided to the user can be any form of sensory feedback (e.g., visual feedback, auditory feedback, or tactile feedback); and input from the user may be received in any form, including acoustic, speech, or tactile input.
The systems and techniques described here can be implemented in a computing system that includes a back-end component (e.g., a data server), or that includes a middleware component (e.g., an application server), or that includes a front-end component (e.g., a user computer having a graphical user interface or a web browser through which a user can interact with an implementation of the systems and techniques described here), or any combination of such back-end, middleware, or front-end components. The components of the system can be interconnected by any form or medium of digital data communication (e.g., a communication network). Examples of communication networks include: Local Area Networks (LANs), Wide Area Networks (WANs), blockchain networks, and the internet.
The computing system may include clients and servers. A client and server are generally remote from each other and typically interact through a communication network. The relationship of client and server arises by virtue of computer programs running on the respective computers and having a client-server relationship to each other. The server can be a cloud server (also called a cloud computing server or cloud host), a host product in the cloud computing service system, which overcomes the drawbacks of difficult management and weak service scalability found in traditional physical hosts and VPS services.
It should be understood that the various forms of flows shown above may be used, with steps reordered, added, or deleted. For example, the steps described in the present application may be executed in parallel, sequentially, or in a different order; this is not limited here, as long as the desired results of the technical solutions of the present application can be achieved.
The above-described embodiments should not be construed as limiting the scope of the present application. It should be understood by those skilled in the art that various modifications, combinations, sub-combinations and substitutions may be made in accordance with design requirements and other factors. Any modification, equivalent replacement, and improvement made within the spirit and principle of the present application shall be included in the protection scope of the present application.
Claims (14)
1. A method for process tracking based on equipment maintenance, the method comprising:
acquiring personnel-related data of maintenance personnel in a maintenance scene, wherein the personnel-related data comprise a posture category of the maintenance personnel and a first spatial relative position of the maintenance personnel;
acquiring component-related data in the maintenance scene, wherein the component-related data comprise a component category and a second spatial relative position of the component corresponding to the component category;
determining local behavior information of the maintenance personnel on the component according to the personnel-related data and the component-related data;
and tracking the maintenance process of the maintenance scene based on the local behavior information.
2. The method of claim 1, wherein the acquiring of the personnel-related data of the maintenance personnel in the maintenance scene comprises:
acquiring a first image data sequence containing the maintenance personnel;
extracting a first area image where the maintenance personnel are located from each frame of the first image data;
and performing posture recognition on the continuous first area image sequence to determine the posture category of the maintenance personnel.
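As a non-authoritative sketch of the pipeline in claim 2: each frame is cropped to the detected person region, and the cropped sequence is passed to a posture classifier. The crop helper below is concrete; the classifier is deliberately a placeholder, because the claim fixes no model, and every name here is an assumption.

```python
import numpy as np

def crop_person(frame: np.ndarray, bbox: tuple[int, int, int, int]) -> np.ndarray:
    """Extract the first area image: the region of a frame where the
    maintenance personnel were detected (bbox given as x, y, w, h)."""
    x, y, w, h = bbox
    return frame[y:y + h, x:x + w]

def classify_posture(region_sequence: list[np.ndarray]) -> str:
    """Stand-in for the sequence-level posture classifier; any temporal
    model (e.g. a skeleton-based action classifier) could be plugged in."""
    raise NotImplementedError("model-specific; not specified by the claim")

def posture_from_frames(frames: list[np.ndarray],
                        bboxes: list[tuple[int, int, int, int]]) -> str:
    """Crop every frame to the person region, then classify the crop
    sequence into a posture category."""
    regions = [crop_person(f, b) for f, b in zip(frames, bboxes)]
    return classify_posture(regions)
```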
3. The method of claim 2, wherein the first image data are acquired by a first acquisition device disposed in the maintenance scene;
the acquiring of the personnel-related data of the maintenance personnel in the maintenance scene further comprises:
determining key points of the maintenance personnel from the first area image;
acquiring a first depth image acquired by the first acquisition device, and performing pixel matching on the first area image and the first depth image to determine three-dimensional coordinates of each key point in a camera coordinate system;
acquiring first angle information of the first acquisition device, and converting the three-dimensional coordinates of each key point in the camera coordinate system into three-dimensional coordinates in a world coordinate system according to the first angle information, wherein the world coordinate system is constructed with the projection of the first acquisition device onto the ground as the origin;
and calculating the distance and the angle between each key point and the first acquisition device according to the three-dimensional coordinates of each key point in the world coordinate system, as the first spatial relative position.
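To make the coordinate conversion in claim 3 concrete, the following sketch assumes the simplest setup: the first acquisition device is mounted at a known height above its ground projection (the world origin) and is tilted only in pitch. Function and parameter names are illustrative assumptions, not taken from the application.

```python
import numpy as np

def camera_to_world(points_cam: np.ndarray, pitch_rad: float,
                    cam_height: float) -> np.ndarray:
    """Map keypoints of shape (N, 3) from the camera coordinate system to a
    world frame whose origin is the camera's projection onto the ground
    (x right, y up, z forward)."""
    c, s = np.cos(pitch_rad), np.sin(pitch_rad)
    # Undo the camera's downward pitch with a rotation about the x-axis.
    rot_x = np.array([[1.0, 0.0, 0.0],
                      [0.0,   c,  -s],
                      [0.0,   s,   c]])
    points_world = points_cam @ rot_x.T
    points_world[:, 1] += cam_height  # raise by the camera's mounting height
    return points_world

def distance_and_angle(keypoint_world: np.ndarray,
                       cam_height: float) -> tuple[float, float]:
    """Distance between a key point and the acquisition device (sitting at
    (0, cam_height, 0)) plus the azimuth on the ground plane; together these
    form the 'first spatial relative position'."""
    offset = keypoint_world - np.array([0.0, cam_height, 0.0])
    return float(np.linalg.norm(offset)), float(np.arctan2(offset[0], offset[2]))
```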
4. The method of claim 1, wherein the acquiring of the component-related data in the maintenance scene comprises:
acquiring a second image data sequence containing the component;
extracting a second area image where the component is located from each frame of the second image data;
and performing component recognition on the continuous second area image sequence to determine the component category of the component.
5. The method of claim 4, wherein the component-related data further comprise a component state, and the acquiring of the component-related data in the maintenance scene further comprises:
performing component state detection on the second area image sequence to determine the component state of the component, the component state comprising an operating state.
6. The method of claim 4, wherein the component-related data further comprise a component quality, and the acquiring of the component-related data in the maintenance scene further comprises:
performing component quality inspection on the second area image sequence to determine the component quality of the component, the component quality including whether a defect exists on the surface of the component.
7. The method according to any one of claims 4-6, wherein the second image data are acquired by a second acquisition device disposed in the maintenance scene;
the acquiring of the component-related data in the maintenance scene further comprises:
determining key points of the component from the second area image;
acquiring a second depth image acquired by the second acquisition device, and performing pixel matching on the second area image and the second depth image to determine three-dimensional coordinates of each key point of the component in a camera coordinate system;
acquiring second angle information of the second acquisition device, and converting the three-dimensional coordinates of each key point of the component in the camera coordinate system into three-dimensional coordinates in a world coordinate system according to the second angle information, wherein the world coordinate system is constructed with the projection of the second acquisition device onto the ground as the origin;
and calculating the distance and the angle between each key point of the component and the second acquisition device according to the three-dimensional coordinates of each key point of the component in the world coordinate system, as the second spatial relative position.
8. The method according to any one of claims 1-6, wherein the determining of the local behavior information of the maintenance personnel on the component according to the personnel-related data and the component-related data comprises:
determining a relative position relationship between the maintenance personnel and the component according to the first spatial relative position and the second spatial relative position;
and generating information to be matched according to the posture category, the component category, and the relative position relationship, matching the information to be matched against a plurality of pre-generated local behavior templates, and taking the successfully matched local behavior template as the local behavior information, wherein each local behavior template comprises a template posture category, a template component category, and a template relative position relationship.
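A minimal sketch of the matching step in claim 8, under the assumption that a template's relative position relationship reduces to a maximum working distance; the dataclass fields and the comparison rule are illustrative only.

```python
from dataclasses import dataclass

@dataclass
class LocalBehaviorTemplate:
    name: str               # e.g. "inspecting_terminal_block" (made up)
    posture: str            # template posture category
    component: str          # template component category
    max_distance_m: float   # relative position relationship as a reach radius

def match_local_behavior(posture: str, component: str, distance_m: float,
                         templates: list[LocalBehaviorTemplate]):
    """Compare the information to be matched (posture category, component
    category, relative position) against each pre-generated template and
    return the first one whose three fields all match, or None."""
    for t in templates:
        if (t.posture == posture and t.component == component
                and distance_m <= t.max_distance_m):
            return t
    return None
```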
9. The method of claim 8, wherein the first spatial relative position comprises a distance and an angle between a first acquisition device disposed in the maintenance scene and the maintenance personnel, and the second spatial relative position comprises a distance and an angle between a second acquisition device disposed on the maintenance personnel and the component;
the determining of the relative position relationship between the maintenance personnel and the component according to the first spatial relative position and the second spatial relative position comprises:
taking the distance and the angle between the component and the second acquisition device as the distance and the angle between the component and the maintenance personnel.
10. The method of claim 1, wherein the tracking of the maintenance process of the maintenance scene based on the local behavior information comprises:
acquiring one or more pieces of local behavior information executed by the maintenance personnel in the maintenance scene, and organizing the one or more pieces of local behavior information into a local behavior list in time order;
and matching the local behavior list against one or more pre-generated process templates, and taking the successfully matched process template as the process category corresponding to the local behavior list.
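The list assembly and template matching of claim 10 could look like the sketch below, which reads "successfully matched" as the template with the longest ordered prefix observed in the behavior list; this reading keeps a matched template compatible with being incomplete, as claim 11 then checks. All names and the prefix rule are assumptions.

```python
def build_local_behavior_list(events: list[tuple[float, str]]) -> list[str]:
    """Organize observed (timestamp, behavior) pairs into the time-ordered
    local behavior list of the claim."""
    return [name for _, name in sorted(events)]

def matched_prefix_len(steps: list[str], behavior_list: list[str]) -> int:
    """Length of the longest prefix of a template's steps that appears,
    in order, in the behavior list."""
    i = 0
    for behavior in behavior_list:
        if i < len(steps) and behavior == steps[i]:
            i += 1
    return i

def match_process(behavior_list: list[str],
                  process_templates: dict[str, list[str]]):
    """Take the process category whose template has the longest matched
    ordered prefix as the 'successfully matched' template; a template can
    therefore match while still being incomplete."""
    best, best_len = None, 0
    for category, steps in process_templates.items():
        n = matched_prefix_len(steps, behavior_list)
        if n > best_len:
            best, best_len = category, n
    return best
```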
11. The method of claim 10, wherein the process template includes a plurality of process steps, and the tracking of the maintenance process of the maintenance scene based on the local behavior information further comprises:
determining, according to the local behavior list, whether all process steps in the successfully matched process template have been completed;
if so, determining that the current process category is complete;
and if not, determining that the current process category is incomplete.
12. An apparatus for process tracking based on equipment maintenance, the apparatus comprising:
the system comprises a personnel data acquisition module, a first spatial relative position acquisition module and a second spatial relative position acquisition module, wherein the personnel data acquisition module is used for acquiring personnel related data of maintenance personnel in a maintenance scene, and the personnel related data comprise the posture category of the maintenance personnel and the first spatial relative position of the maintenance personnel;
the component data acquisition module is used for acquiring component related data in the maintenance scene, wherein the component related data comprises a component type and a second spatial relative position where a component corresponding to the component type is located;
the behavior information determining module is used for determining the local behavior information of the maintenance personnel on the component according to the personnel related data and the component related data;
and the process tracking module is used for tracking the maintenance process of the maintenance scene based on the local behavior information.
13. An electronic device, characterized in that the electronic device comprises:
at least one processor; and
a memory communicatively coupled to the at least one processor; wherein,
the memory stores a computer program executable by the at least one processor to enable the at least one processor to perform the method of any one of claims 1-11.
14. A computer-readable storage medium storing computer instructions which, when executed, cause a processor to perform the method of any one of claims 1-11.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202210652316.3A | 2022-06-09 | 2022-06-09 | Process tracking method, device, equipment and storage medium based on equipment maintenance |
Publications (1)
Publication Number | Publication Date |
---|---|
CN115019394A (en) | 2022-09-06
Family
ID=83073892
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202210652316.3A (Pending) | Process tracking method, device, equipment and storage medium based on equipment maintenance | 2022-06-09 | 2022-06-09 |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN115019394A (en) |
Cited By (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN109993318A (en) | 2017-12-28 | 2019-07-09 | No. 711 Research Institute of China Shipbuilding Industry Corporation | Method and apparatus for troubleshooting a marine diesel engine |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN109508688B (en) | Skeleton-based behavior detection method, terminal equipment and computer storage medium | |
CN109085174A (en) | Display screen peripheral circuit detection method, device, electronic equipment and storage medium | |
CN113177468B (en) | Human behavior detection method and device, electronic equipment and storage medium | |
CN116092199B (en) | Employee working state identification method and identification system | |
CN112434669B (en) | Human body behavior detection method and system based on multi-information fusion | |
CN110135290B (en) | Safety helmet wearing detection method and system based on SSD and AlphaPose | |
CN108921840A (en) | Display screen peripheral circuit detection method, device, electronic equipment and storage medium | |
CN115019394A (en) | Process tracking method, device, equipment and storage medium based on equipment maintenance | |
CN115908988A (en) | Defect detection model generation method, device, equipment and storage medium | |
CN113989855A (en) | Human body posture recognition method, device, equipment and storage medium | |
CN115641607A (en) | Method, device, equipment and storage medium for detecting wearing behavior of power construction site operator | |
CN116363751A (en) | Climbing action recognition method, device and equipment for electric power tower climbing operation and storage medium | |
CN116523288A (en) | Base station constructor risk identification method and device, electronic equipment and storage medium | |
CN116311499A (en) | Wearing detection method and device for safety equipment | |
Fan et al. | Computer-vision based rapid entire body analysis (REBA) estimation | |
CN114821444A (en) | Unmanned overhead traveling crane operation area safety detection method based on visual perception | |
CN115457656A (en) | Method, device and equipment for determining operation duration and storage medium | |
CN115100495A (en) | Lightweight safety helmet detection method based on sub-feature fusion | |
CN115153632A (en) | Ultrasonic imaging positioning system, method, device, equipment and storage medium | |
CN112598059A (en) | Worker dressing detection method and device, storage medium and electronic equipment | |
CN116824167A (en) | Non-inductive non-contact security inspection method, device, system and storage medium | |
CN114494857A (en) | Indoor target object identification and distance measurement method based on machine vision | |
CN114120433A (en) | Image processing method, image processing apparatus, electronic device, and medium | |
CN113033515A (en) | Wearing detection method and device, electronic equipment and computer-readable storage medium | |
CN112528855A (en) | Electric power operation dressing standard identification method and device |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||