CN114758422A - Real-time intelligent identification method and device for actions of construction machinery equipment

Info

Publication number: CN114758422A (granted publication: CN114758422B)
Application number: CN202210672343.7A
Authority: CN (China)
Other languages: Chinese (zh)
Inventors: 安雪晖 (An Xuehui), 周力 (Zhou Li)
Original and current assignee: Tsinghua University
Application filed by: Tsinghua University
Prior art keywords: construction machinery, time, action, machinery equipment, motion
Legal status: Granted; Active

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 18/00 Pattern recognition
    • G06F 18/20 Analysing
    • G06F 18/24 Classification techniques


Abstract

The application provides a real-time intelligent identification method and device for actions of construction machinery equipment. The method includes: acquiring motion part information of construction machinery equipment to be identified at a plurality of moments within a preset time range; analyzing the motion part information at each moment based on a pre-trained motion recognition model to obtain a plurality of action recognition results of the construction machinery equipment to be identified at the plurality of moments, the motion recognition model being trained on a mechanical equipment action type data set; and performing a smoothing operation and result correction on the action recognition results by using time sequence information to obtain the actions of the construction machinery equipment to be identified. With the method and device, the action of the construction machinery equipment at a certain moment can be obtained based only on the motion part information at that moment, without considering the time sequence of the data, which reduces the data calculation amount and resource waste, improves the action recognition efficiency and recognition precision, and reduces the size of the motion recognition model.

Description

Real-time intelligent identification method and device for actions of construction machinery equipment
Technical Field
The application relates to the technical field of intelligent construction sites, in particular to a method and a device for intelligently identifying actions of construction machinery equipment in real time.
Background
Over the past 20 years, advances in emerging technologies have enabled researchers and practitioners to study construction activities at different levels in the field of construction element management using a variety of technical means, driving the industry towards intelligent, real-time and dynamic management and producing considerable progress. Recent developments in sensor-based technology (from both hardware and software perspectives) have helped construction managers interact efficiently with other parties on site and have improved productivity and safety performance. Depending on the type of sensor implemented, methods for automatic detection and identification of the activity of construction elements can be divided into four broad categories: (1) kinematics-based methods; (2) computer vision-based methods; (3) audio-based methods; and (4) methods based on other physical sensors.
At present, no matter which type of sensor-based action recognition method is used, the judgment has to be made on the basis of a plurality of data within one time interval, that is, the time sequence must be considered, and technically the time-sequence context information within that interval is obtained through a time-domain sliding window. This requires extracting features from a large amount of time-sequence context information and feeding that large amount of temporal information into the model framework as a single sample. Such an implementation has a number of problems: 1) the starting point of an action is difficult to judge, so many invalid sliding windows appear; 2) a single sliding window must contain a large amount of redundant time-sequence information, all of which the computer still has to process, wasting a large amount of computing resources; 3) the length of the sliding window is difficult to select and strongly affects the result, and in many cases several sliding windows of different lengths have to be run in parallel, which further wastes resources and reduces efficiency and precision; 4) adding the time dimension increases the dimensionality of the data, consuming enormous computing power and storage resources; 5) because the input data volume is large, the corresponding action recognition model is also large, which is unfavorable for front-end deployment.
Disclosure of Invention
In order to solve the problems in the prior art, in a first aspect, the application provides a method for intelligently identifying actions of construction machinery equipment in real time, which includes:
acquiring motion part information of construction machinery equipment to be identified at a plurality of moments within a preset time range;
respectively analyzing the motion part information at each moment based on a pre-trained motion recognition model to obtain a plurality of motion recognition results of the construction machinery equipment to be recognized at a plurality of moments; the motion recognition model is obtained by training according to a mechanical equipment motion type data set.
In one embodiment, the method for real-time intelligent identification of the actions of the construction machinery equipment further comprises the following steps:
and performing smoothing operation and result correction on the action recognition result at each moment by using the time sequence information.
In one embodiment, the motion part information is image information of a motion part of the construction machinery equipment to be identified;
the acquiring of the motion part information of the construction machinery equipment to be identified at a plurality of moments within a preset time range includes the following steps:
acquiring an image sequence of the construction machinery equipment to be identified within a preset time range;
and analyzing and processing each image in the image sequence based on an image difference method and an instance segmentation method to obtain the motion part information of the construction machinery equipment to be identified at the corresponding moment.
In an embodiment, the analyzing and processing of each image in the image sequence based on an image difference method and an instance segmentation method to obtain the motion part information of the construction machinery equipment to be identified at the corresponding moment includes:
acquiring a first pixel set corresponding to a moving target in the image by a background difference method;
carrying out instance segmentation of the construction machinery equipment to be identified on the image to obtain a second pixel set;
and acquiring the intersection of the first pixel set and the second pixel set to obtain the motion part information of the construction machinery equipment to be identified.
In one embodiment, the step of training the motion recognition model comprises:
acquiring a training data set comprising a plurality of training data, wherein each training data includes an image of construction machinery equipment and a semantic label corresponding to the image; the semantic label includes the type and action category of the construction machinery equipment;
training an initial model by using the training data set to obtain the action recognition model; the initial model comprises one of a machine learning model, a deep learning model, a mathematical statistics comparison model and a characteristic value and threshold value comparison model based on simple mathematical operation.
In one embodiment, the step of establishing the training data set comprises:
generating a mechanical equipment action instruction by adopting mechanical equipment simulation software, and applying the mechanical equipment action instruction to preset digital mechanical equipment;
acquiring a sequence of images of the digital mechanical device at a plurality of camera poses using a virtual camera;
aligning the image sequence time of each camera pose to standard time;
performing action semantic standard analysis on an image sequence of any camera pose to obtain a corresponding semantic tag;
giving the semantic labels to corresponding images in the image sequence of each camera pose;
and establishing the training data set according to each image and the corresponding semantic label.
In one embodiment, the method for real-time intelligent identification of the actions of the construction machinery equipment further comprises the following steps:
collecting an environment image of an engineering site;
and carrying out instance segmentation and semantic analysis on the environment image to obtain a plurality of construction elements.
In one embodiment, the method for real-time intelligent identification of the actions of the construction machinery equipment further comprises the following steps:
acquiring the action time consumption of each operation link of the construction mechanical equipment to be identified under a plurality of operation working conditions; the action time is the time for the construction machinery equipment to be identified to execute a certain action;
respectively determining the consumed time of a working cycle of the construction machinery equipment to be identified under the corresponding working condition based on the action consumed time; the time consumed by one working cycle is the time for the construction mechanical equipment to be identified to sequentially execute all actions;
and determining the working efficiency information of the construction machinery equipment to be identified under each working condition according to the action consumed time and/or the working cycle consumed time.
In one embodiment, the method for real-time intelligent identification of the actions of the construction machinery equipment further comprises the following steps:
acquiring operation instructions corresponding to different operation conditions;
and establishing a corresponding relation among the working conditions, the working efficiency information and the operation instructions so as to determine the working conditions with the highest working efficiency and the corresponding operation instructions according to the corresponding relation.
In one embodiment, the method for real-time intelligent identification of the actions of the construction machinery equipment further comprises the following steps:
determining the interactive activity of the corresponding construction machinery equipment, the safety information of the single construction machinery equipment and the safety information of the interactive activity according to the action recognition result and the construction elements;
acquiring an operation instruction when the construction mechanical equipment executes the action recognition result;
and establishing a corresponding relation between the operation instruction and the safety information of the single construction machinery equipment and the safety information of the interactive activities.
In one embodiment, the method for real-time intelligent identification of the actions of the construction machinery equipment further comprises the following steps:
and generating a working log of the construction machinery equipment according to the action recognition result, the working efficiency information, the interactive activity of the construction machinery equipment, the safety information of the single construction machinery equipment and the safety information of the interactive activity.
In one embodiment, the method for real-time intelligent recognition of the motion of the construction machinery further comprises:
sending the action recognition result to a data display platform for display; the data display platform can achieve near real-time digital twinning of the action recognition result through a holographic projection three-dimensional display mode, or achieve image simulation twinning and/or character description information twinning of the action recognition result through a two-dimensional display mode.
In one embodiment, the construction machinery equipment comprises one or more of an excavator, a loader, a truck, a crane, a truck crane, a climbing vehicle, a road roller, a bulldozer, a road paver, a concrete mixing truck, a boom pump truck, a pile driver, a rotary drilling rig, a trenching machine.
In a second aspect, the present application provides a real-time intelligent recognition device for the actions of construction machinery, comprising:
the information acquisition module is used for acquiring the motion part information of the construction machinery equipment to be identified at a plurality of moments within a preset time range;
the motion recognition module is used for analyzing the motion part information at each moment respectively based on a pre-trained motion recognition model to obtain a plurality of motion recognition results of the construction machinery equipment to be recognized at a plurality of moments; the motion recognition model is obtained by training according to a mechanical equipment motion type data set.
In an embodiment, the real-time intelligent recognition device for the actions of the construction machinery equipment further includes a result correction module, which is configured to perform smoothing operation and result correction on the action recognition result at each time by using the time sequence information.
In one embodiment, the motion part information is image information of a motion part of the construction machinery to be identified;
the information acquisition module comprises:
the image sequence acquisition unit is used for acquiring an image sequence of the construction machinery equipment to be identified within a preset time range;
and the image analysis unit is used for analyzing and processing each image in the image sequence based on an image difference method and an instance segmentation method to obtain the motion part information of the construction machinery equipment to be identified at the corresponding moment.
In an embodiment, the image analysis unit is specifically configured to:
acquiring a first pixel set corresponding to a moving target in the image by a background difference method;
carrying out instance segmentation on the construction machinery equipment to be identified on the image to obtain a second pixel set;
and acquiring the intersection of the first pixel set and the second pixel set to obtain the motion part information of the construction machinery equipment to be identified.
In one embodiment, the real-time intelligent recognition device for the actions of the construction machinery equipment further comprises:
the training data acquisition module is used for acquiring a training data set containing a plurality of training data, and each training data comprises an image of construction machinery equipment and a semantic label corresponding to the image; the semantic tag includes a type and an action category of the construction machine;
the model training module is used for training an initial model by using the training data set to obtain the action recognition model; the initial model comprises one of a machine learning model, a deep learning model, a mathematical statistics comparison model and a characteristic value and threshold value comparison model based on simple mathematical operation.
In an embodiment, the training data obtaining module is further configured to:
generating a mechanical equipment action instruction by adopting mechanical equipment simulation software, and applying the mechanical equipment action instruction to preset digital mechanical equipment;
acquiring a sequence of images of the digital mechanical device at a plurality of camera poses using a virtual camera;
aligning the image sequence time of each camera pose to standard time;
performing action semantic standard analysis on an image sequence of any camera pose to obtain a corresponding semantic tag;
giving the semantic labels to corresponding images in the image sequence of each camera pose;
and establishing the training data set according to each image and the corresponding semantic label.
In one embodiment, the real-time intelligent recognition device for the actions of the construction machinery equipment further comprises a construction element acquisition module, which is configured to:
collecting an environment image of an engineering site;
and carrying out instance segmentation and semantic analysis on the environment image to obtain a plurality of construction elements.
In one embodiment, the real-time intelligent recognition device for the actions of the construction machinery equipment further comprises a work efficiency generation module, which is configured to:
acquiring the action time consumption of each operation link of the construction machinery equipment to be identified under a plurality of operation conditions; the action time is the time for the construction machinery equipment to be identified to execute a certain action;
respectively determining the consumed time of a working cycle of the construction machinery equipment to be identified under the corresponding working condition based on the action consumed time; the time consumed by one working cycle is the time for the construction mechanical equipment to be identified to sequentially execute all actions;
and determining the working efficiency information of the construction machinery equipment to be identified under each working condition according to the action time consumption and/or the working cycle time consumption.
In a third aspect, the present application further provides an electronic device, including:
a central processing unit, a memory and a communication module, wherein a computer program is stored in the memory and can be called by the central processing unit, and when executing the computer program, the central processing unit implements any one of the methods for real-time intelligent identification of the actions of construction machinery equipment provided in the present application.
In a fourth aspect, the present application provides a computer-readable storage medium for storing a computer program, where the computer program is executed by a processor to implement any one of the methods for real-time intelligent recognition of an action of construction machinery provided in the present application.
According to the construction machinery equipment action real-time intelligent identification method and device, the action of the construction machinery equipment at a certain moment can be identified and obtained only based on the motion part information of the construction machinery equipment at the moment, and compared with the existing technical scheme that action judgment needs to be carried out based on a plurality of data in a certain time period, the method and device do not need to consider the time sequence of the data, do not need to use a time domain sliding window to obtain the time sequence context information in the certain time period, further reduce the data calculation amount and resource waste, and improve the action identification efficiency and identification precision; meanwhile, the size of the motion recognition model is reduced to a certain degree.
Drawings
In order to more clearly illustrate the embodiments of the present application or the technical solutions in the prior art, the drawings used in the description of the embodiments or the prior art will be briefly described below, it is obvious that the drawings in the following description are only some embodiments of the present application, and for those skilled in the art, other drawings can be obtained according to the drawings without creative efforts.
Fig. 1 is a schematic diagram of a construction machinery equipment action real-time intelligent identification method provided by the present application.
Fig. 2 is another schematic diagram of the construction machinery equipment action real-time intelligent identification method provided by the present application.
Fig. 3 is a schematic diagram of a step of acquiring motion part information provided by the present application.
Fig. 4 is a schematic diagram of a step of obtaining motion part information based on an image difference method and an instance segmentation method provided in the present application.
Fig. 5 is a schematic diagram illustrating steps of training a motion recognition model according to the present application.
FIG. 6 is a schematic diagram illustrating the steps provided in the present application for creating a training data set for training a motion recognition model.
Fig. 7A to 7D are partial data sets corresponding to loading, rotation-reloading, unloading, rotation-unloading operations of the excavator according to the present disclosure.
Fig. 8 is a schematic diagram of a device for real-time intelligent recognition of actions of construction machinery equipment according to the present application.
Fig. 9 is another schematic diagram of the construction machine motion real-time intelligent recognition device provided in the present application.
Fig. 10 is another schematic view of the construction machine motion real-time intelligent recognition apparatus provided in the present application.
Fig. 11 is another schematic view of the construction machine motion real-time intelligent recognition apparatus provided in the present application.
Fig. 12 is another schematic view of the construction machine motion real-time intelligent recognition apparatus provided in the present application.
Fig. 13 is a schematic diagram of an electronic device provided in the present application.
Detailed Description
The technical solutions in the embodiments of the present application will be clearly and completely described below with reference to the drawings in the embodiments of the present application, and it is obvious that the described embodiments are only a part of the embodiments of the present application, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present application.
In a first aspect, the present application provides a method for intelligently identifying actions of construction machinery, as shown in fig. 1, the method includes the following steps S101 to S102:
step S101, collecting the motion part information of the construction machinery equipment to be identified at a plurality of moments within a preset time range. The motion part information may be acquired in real time or may be acquired in advance.
Specifically, the construction machinery equipment to be identified in the present application includes, but is not limited to, excavators (backhoe, face shovel, cable shovel, crawler, wheeled, etc.), loaders, trucks, wheel cranes, crawler cranes, tower cranes, truck cranes, climbing cars, road rollers, bulldozers, road pavers, concrete mixing and transporting vehicles, boom pump trucks, pile drivers, rotary drilling rigs, trenching machines, and the like. The construction machinery equipment to be identified may be one of the above types of equipment, or a plurality of pieces of equipment in an interactive state, such as an excavator and a truck in an interactive state. Here, the interactive state refers to a state in which two or more kinds of construction machinery equipment operate in cooperation with each other, for example, the operation state in which an excavator is loading material onto a truck.
The motion part information in the present application specifically refers to state information of a motion part (a part at which displacement occurs) of the construction machinery equipment to be identified. Examples of motion parts include: the boom and hook of a crane; the bucket and body of a loader; the turntable, boom, stick, bucket and tracks of an excavator; and so on.
The state information includes, but is not limited to, one or more of displacement (including angular displacement) information, acoustic information, spatial pose information of the moving part. For the excavator, the motion part information comprises but is not limited to one or more of joint space data, driving mechanism space data, pose space data and detection mechanism space parameters.
The present application mainly uses displacement (including angular displacement) information as an example for description. The displacement (including angular displacement) information can be obtained in various ways, for example, the displacement (including angular displacement) information can be obtained by taking an image and performing image instance segmentation, feature matching and other computer vision-based methods. For another example, the displacement (including angular displacement) information may be obtained by a kinematics-based method, such as reading an operation command, reading an operation lever angle corresponding to each moving portion, and reading data of a displacement (angular displacement) sensor mounted on each moving portion. For the moving parts driven by the oil cylinder and the electric cylinder, the displacement information can be indirectly obtained by obtaining the stroke, work, power, displacement and the like of the driving part. In addition, parameters which have geometric and motion relations with the motion part can be obtained, and then the parameters are converted into displacement information of the motion part. The displacement information of the moving part can be obtained through the combination of the modes.
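As an illustration of the last point of the preceding paragraph, the sketch below converts a hydraulic cylinder stroke into the angular displacement of the joint it actuates via the law of cosines; this is only one plausible geometric conversion, and the mounting distances, variable names and example values are assumptions rather than parameters of any specific machine.

```python
import math

def joint_angle_from_cylinder(stroke_m: float,
                              retracted_len_m: float,
                              mount_a_m: float,
                              mount_b_m: float) -> float:
    """Estimate the joint angle (radians) spanned by a hydraulic cylinder.

    The cylinder of current length L connects two mounting points located at
    distances a and b from the joint pivot, so by the law of cosines:
        L^2 = a^2 + b^2 - 2*a*b*cos(theta)
        theta = acos((a^2 + b^2 - L^2) / (2*a*b))
    All geometric parameters here are illustrative assumptions.
    """
    cyl_len = retracted_len_m + stroke_m          # current cylinder length L
    cos_theta = (mount_a_m ** 2 + mount_b_m ** 2 - cyl_len ** 2) / (2 * mount_a_m * mount_b_m)
    cos_theta = max(-1.0, min(1.0, cos_theta))    # guard against rounding noise
    return math.acos(cos_theta)

# Example with made-up geometry: mounting distances 1.2 m and 0.9 m,
# retracted cylinder length 1.1 m, measured stroke 0.35 m.
angle_rad = joint_angle_from_cylinder(0.35, 1.1, 1.2, 0.9)
print(f"joint angle is about {math.degrees(angle_rad):.1f} degrees")
```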
Step S102, analyzing the motion part information at each moment respectively based on a pre-trained motion recognition model to obtain a plurality of motion recognition results of the construction machinery to be recognized at a plurality of moments; the motion recognition model is obtained by training according to a mechanical equipment motion type data set.
Specifically, the motion recognition model in this step analyzes the motion part information obtained in step S101 to obtain the action recognition result of the equipment, so the motion recognition model corresponds to the type of motion part information. For example, when the motion part information is image information obtained by shooting, the mechanical equipment action type data set contains image data of various types of mechanical equipment in different action states for training; when the motion part information is driving information such as the operating lever angle or the stroke, work, power or displacement of a driving part, the mechanical equipment action type data set contains driving data of various types of mechanical equipment in different action states for training. That is, the motion recognition model in this step may be a model for analyzing image information, a model for analyzing driving information, or a combination of models capable of analyzing both image information and driving information. In practical applications, different motion recognition models can be obtained by training according to the different data types selected in step S101.
It should be emphasized that, when the motion recognition model of this step performs motion recognition, motion recognition results corresponding to respective times are obtained by analyzing motion part information at the respective times. That is, unlike the conventional technique of determining an action based on a plurality of data in one time period, the present application does not need to consider time series information, but classifies an action at a certain time based on motion part information at that time. This is a difference between the present application and the existing action classification method. Because time sequence information does not need to be considered, the motion part information at a certain moment can be acquired, and then the relevant processing of motion recognition can be carried out without waiting for the completion of motion part information acquisition at other moments, so that the efficiency of motion recognition based on single-time point/moment motion part information is greatly improved compared with the prior art, real-time or even front-end real-time recognition of construction mechanical equipment is realized, and the method has great significance for the safety of a construction site and the stable promotion of construction operation; meanwhile, the motion recognition result can be obtained only by processing the motion part information at a certain moment, so that the data volume processed by the method is greatly reduced, and the method has the advantages of data dimension reduction and low required calculation power compared with the existing technical scheme of classifying the motion based on the working states of a plurality of devices at a certain moment in a period of time.
Therefore, the method for intelligently identifying the action of the construction machinery equipment in real time can identify and obtain the action of the construction machinery equipment at a certain moment only based on the information of the motion part of the construction machinery equipment at the moment, and compared with the prior technical scheme that action judgment needs to be carried out based on a plurality of data in a period, the method does not need to consider the time sequence of the data, does not need to use a time domain sliding window to obtain the time sequence context information in the period, further reduces the data calculation amount and resource waste, and improves the action identification efficiency and identification precision; meanwhile, the size of the motion recognition model is reduced to a certain degree.
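A minimal sketch of this per-moment classification is given below, assuming placeholder names for the motion part extraction step and the trained model; it is meant only to illustrate that each moment is processed independently, without any time-domain sliding window, not to reproduce the exact implementation.

```python
from typing import List, Tuple

def recognize_actions(frames, extract_motion_part, action_model) -> List[Tuple[float, str]]:
    """Classify every sampled moment independently.

    frames              : iterable of (timestamp, image) pairs
    extract_motion_part : callable returning motion part information for one image
    action_model        : pre-trained classifier with a predict() method
    Both callables are assumptions standing in for the concrete components.
    """
    results = []
    for timestamp, image in frames:
        motion_part = extract_motion_part(image)    # single-moment motion part info
        action = action_model.predict(motion_part)  # no temporal context required
        results.append((timestamp, action))         # result is available immediately
    return results
```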
It should be noted that, in the present application, the actions of one or more moving parts of the construction machinery device to be recognized may be recognized at the same time, and each moving part may be processed according to the above steps. For convenience of description, the following description will only take the motion recognition of a certain motion part as an example, and it should be understood by those skilled in the art that this is not a limitation of the present application.
In an embodiment, as shown in fig. 2, the method for real-time intelligent recognition of the actions of the construction machinery equipment further includes:
step S103, smoothing operation and result correction are carried out on the action recognition result at each moment by using the time sequence information.
Specifically, the obtained action recognition result at each moment can be smoothed by using the time sequence information, and misjudgments of the motion recognition model can be corrected; the principle is that back-and-forth switching between two actions cannot occur within a short continuous period of time. For example, the action recognition results are sorted in chronological order for the consecutive times t-5, t-4, t-3, t-2, t-1, t, t+1, t+2, t+3, t+4 and t+5. If the action recognition result is action one at every time except time t, where the result is action two, then, since action one appears continuously many times both before and after time t, the action recognition result at time t is considered to be wrong and can be corrected to action one. If, instead, the action recognition result is action one up to time t-1 and action two at time t and at a plurality of consecutive times after it, then, since the result at time t differs from that of the previous time and the results at the consecutive times after time t are all action two, it can be determined that action two has indeed occurred; in this case, time t can be determined as the action start time of action two (the moment at which the action first appears), and likewise time t can be taken as the action end time of action one. In this way, the action start time and action end time can be determined quickly and conveniently, which solves a major difficulty in the prior art.
That is, when the motion recognition result is corrected, if a certain motion appears in a continuous period of time, it is determined that the motion appears, otherwise, it should be determined that the motion is the original motion. The continuous period of time may be set to 0.1s to 5s, and may specifically be determined according to different actions of the construction machinery, or may be determined by multiple actions of the construction machinery together, for example, for a cyclic operation with a cyclic execution action one, an action two, an action three, and an action four, a start time of the cyclic operation may be determined according to a start action (action one) and an end action (action four), that is, a time when the start action occurs for the first time after the end action. The "continuous period of time" is relatively short for a fast-changing motion, and the "continuous fixed period of time" is relatively long for a slow-changing motion. The "continuous period of time" may be converted according to the sampling amount of the moving part information at a single time point/a single moment, that is, determined according to the relationship among the time domain sampling frequency, the sampling duration, and the sampling amount, for example, when the moving part information is the image information obtained by shooting, determined according to the relationship among the frame rate, the duration, and the number of frames.
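One possible realization of this smoothing and correction is sketched below as a majority-vote filter over a short neighbourhood of moments, plus a simple boundary extraction; the window size and the boundary rule are assumptions consistent with the description above, not the only admissible choice.

```python
from collections import Counter
from typing import List

def smooth_actions(actions: List[str], half_window: int = 5) -> List[str]:
    """Majority-vote smoothing of per-moment action labels.

    An isolated label that disagrees with its neighbours (e.g. a single
    "action two" surrounded by "action one") is treated as a misjudgment and
    replaced; a change that persists over the neighbourhood is kept, so the
    first moment of the new label can serve as the action start time.
    The window size is an assumption (cf. the 0.1 s to 5 s range above).
    """
    smoothed = []
    for i in range(len(actions)):
        lo, hi = max(0, i - half_window), min(len(actions), i + half_window + 1)
        neighbourhood = actions[lo:hi]
        smoothed.append(Counter(neighbourhood).most_common(1)[0][0])
    return smoothed

def action_boundaries(actions: List[str]):
    """Yield (start_index, action) each time the smoothed label changes."""
    previous = None
    for i, action in enumerate(actions):
        if action != previous:
            yield i, action
            previous = action
```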
Table 1 shows the accuracy of the motion recognition model obtained by training on the data set corresponding to the excavator, and the accuracy of the action recognition after time smoothing. The data in Table 1 were obtained with 150 loading training samples, 460 loading test samples, 150 rotation-reloading training samples, 460 rotation-reloading test samples, 150 unloading training samples, 460 unloading test samples, 150 rotation-unloading training samples and 460 rotation-unloading test samples. Partial data sets for loading, rotation-reloading, unloading and rotation-unloading are shown schematically in figs. 7A to 7D.
Table 1
Action               Recognition accuracy    Accuracy after time smoothing
Loading              98.70%                  99.57%
Rotation-reloading   98.48%                  99.13%
Unloading            99.78%                  100.00%
Rotation-unloading   97.83%                  99.35%
As can be seen from the table above, the accuracy of the trained motion recognition model in recognizing the loading action of the excavator is 98.70%, which improves to 99.57% after time smoothing; the accuracy in recognizing the rotation-reloading action is 98.48%, improving to 99.13% after time smoothing; the accuracy in recognizing the unloading action is 99.78%, improving to 100.00% after time smoothing; and the accuracy in recognizing the rotation-unloading action is 97.83%, improving to 99.35% after time smoothing. Therefore, the smoothing operation can further improve the accuracy of action recognition for the construction machinery equipment to be identified.
In one embodiment, as shown in fig. 3, the motion part information is image information of a motion part of the construction machine to be identified;
step S101, collecting the motion part information of the construction machinery equipment to be identified at a plurality of moments within a preset time range, including:
in step S1011, an image sequence of the construction machinery to be identified within a preset time range is obtained.
The sequencing of the image sequence is based on the time sequence; the frame rate of the image sequence may be fixed, or may be set to different values according to the moving portion of the construction machine. The image sequence can be a monitoring video (frame extraction is not performed) of construction machinery equipment acquired by an image acquisition device (such as a monitoring camera) in real time or in advance, and can also be an image sequence formed by video frames after the frames of the video are extracted.
The images in the image sequence may be one or more of a binary image, a grayscale image, an RGB image, an HSV image.
Step S1012, analyzing each image in the image sequence to obtain the motion part information of the construction machinery equipment to be identified at the corresponding moment.
The method for acquiring the motion part information may employ one or a combination of the image difference methods (frame difference method, background difference method), the optical flow method, the feature matching method, the instance segmentation method, the instance-segmented contour comparison method (similar to the image difference method, except that the image difference method differences the pixel values of the target, whereas the instance-segmented contour comparison method differences the change in position of the target's contour in the image), and the edge detection method.
In practice, before step S1012 is executed, each image in the acquired image sequence further needs to be pre-processed based on computer vision technology: the area or position of the construction machinery equipment in each image is determined, the construction machinery equipment to be identified is located in the whole image (detection or localization), and the target area is then cropped and resized (downsampled or upsampled).
In this embodiment, an image is used as the motion part information at each moment, and the image can be acquired by image acquisition equipment (such as a monitoring camera). This embodiment utilizes image acquisition equipment as a non-contact sensing means to achieve rapid monitoring of the engineering site. Compared with contact sensing equipment such as sensors, image acquisition equipment requires no interface adaptation with the construction machinery equipment and has no influence on construction operations, which further improves the applicability and convenience of the technique.
In an embodiment, the present application preferably uses a combination of an image difference method and an instance segmentation method to analyze and process each image in the image sequence to obtain the motion part information of the construction machinery equipment to be identified at the corresponding moment. As shown in fig. 4, the step of obtaining the motion part information based on the image difference method and the instance segmentation method includes:
step S10121, a first pixel set corresponding to the moving object in the image is obtained through a background difference method. The image difference method of the step specifically adopts a background difference method, wherein the background adopts a background updated in real time.
Specifically, the image sequence acquired in step S1011 includes a plurality of images, each of which needs to be processed separately. Taking one of the images, P1, as an example, the pixel set S_background difference of all moving targets in the image P1 is obtained by the background difference method. The number of moving targets in the image P1 may be more than one, and the moving targets here refer to construction machinery equipment, workers and other movable objects.
And step S10122, carrying out instance segmentation of the construction machinery equipment to be identified on the image to obtain a second pixel set.
Instance segmentation of the construction machinery equipment to be identified is carried out on the image P1 to obtain the pixel set corresponding to the construction machinery equipment to be identified. There may be one or more pieces of construction machinery equipment to be identified, so the second pixel set in this step can be represented as S_instance segmentation = (equipment_1, equipment_2, ..., equipment_i, ..., equipment_N), where equipment_i denotes the pixel subset obtained after instance segmentation of the i-th construction machinery equipment to be identified.
Step S10123, acquiring an intersection of the first pixel set and the second pixel set to obtain the motion part information of the construction machinery equipment to be identified.
Specifically, the intersection of the pixel set S_background difference of all moving targets in the image P1 and the pixel subset equipment_i of the pixel set S_instance segmentation is solved to obtain the image information P_equipment,i of the i-th construction machinery equipment to be identified in the image P1, i.e., its motion part information. Namely:
P_equipment,i = S_background difference ∩ equipment_i
In the above manner, the image information of the other construction machinery equipment to be identified in the image P1 can also be obtained.
These are the steps for obtaining the motion part information of the construction machinery equipment to be identified in the image P1. The other images in the image sequence can be processed in the same way to obtain the motion part information of the construction machinery equipment to be identified in each image.
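A minimal OpenCV-based sketch of steps S10121 to S10123 follows; the background model and mask intersection are standard operations, while the instance segmentation result is assumed to be supplied by some segmentation model, since the application does not prescribe a particular one.

```python
import cv2
import numpy as np

# Background model updated in real time, as described in step S10121.
bg_subtractor = cv2.createBackgroundSubtractorMOG2(detectShadows=False)

def motion_part_mask(frame_bgr: np.ndarray, equipment_mask: np.ndarray) -> np.ndarray:
    """Return the motion part pixels of one machine in one frame.

    frame_bgr      : current image P1
    equipment_mask : boolean mask equipment_i from instance segmentation
                     (produced by any segmentation model; assumed to be given here)
    The result corresponds to P_equipment,i = S_background difference ∩ equipment_i.
    """
    fg = bg_subtractor.apply(frame_bgr)          # S_background difference as a 0/255 mask
    moving = fg > 0
    return np.logical_and(moving, equipment_mask)

# Usage sketch: for each frame and each segmented machine i,
# motion_info = motion_part_mask(frame, masks[i]); the masked pixels are then fed
# to the motion recognition model as the single-moment motion part information.
```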
In this embodiment, the acquired images are processed based on the image difference method and the instance segmentation method, and the resulting image information (motion part information) has strong feature expression capability. The image information obtained in this embodiment is similar to the image data shown in figs. 7A to 7D; as can be seen from figs. 7A to 7D, the action characteristics of the construction machinery equipment in the figures are sufficiently obvious that the corresponding action can be recognized accurately even by manual judgment. Therefore, the accuracy of action recognition on such images by the trained motion recognition model is very high.
In one embodiment, as shown in fig. 5, the step of training the motion recognition model includes:
step S201, a training data set containing a plurality of training data is obtained, and each training data comprises an image of construction machinery equipment and a semantic label corresponding to the image; the semantic tag includes a type and an action category of the construction machine.
In particular, the training data set may be pre-stored in a database. The training data set includes images (motion part information) of various types of construction machinery equipment and the action types indicated by the images. The types of construction machinery equipment may be those given in step S101. The action categories also differ between different pieces of construction machinery equipment. For example, for the crawler-type backhoe excavator, the action categories include loading (loading material into the bucket), heavy-load rotation, unloading (unloading material from the bucket), no-load rotation, walking, idling, site leveling, slope trimming, deep digging, and the like; for the loader, the action categories include loading, heavy-load transportation, no-load driving, high-lift unloading, normal unloading, idling, and the like; for the crawler crane, the action categories include preparation, hoisting, heavy-load driving, installation (descending), no-load driving, and the like; for the rotary drilling rig, the action categories include rotary drilling, lifting, unloading, descending, temporary transfer, transition, idling, and the like; for the boom pump truck, the action categories include boom-folded idling, boom-unfolded idling, driving, distributing (concrete placing), boom folding, boom unfolding, and the like. In the actual action recognition process, after an image at a certain moment is acquired and the contour of the construction machinery equipment in the image is obtained through instance segmentation, if the detection result shows that no moving part can be recognized within the contour, the action recognition result corresponding to the image at that moment is idling.
And S202, training an initial model by using the training data set to obtain the motion recognition model.
The initial model comprises one of a machine learning model, a deep learning model, a mathematical statistics comparison model and a characteristic value and threshold value comparison model based on simple mathematical operation.
The motion recognition model in this embodiment only needs to recognize the image information at a single moment; the calculation amount of the model is small, and the complexity and training difficulty of the model are greatly reduced. When the applicant actually trained the motion recognition model, the training configuration was a Huawei MateBook X Pro 14 (2019) with an Intel Core i7-8565U CPU and no GPU acceleration; the training time of the model was less than 3 minutes, and the size of the trained model was only 3.25 MB. In the prior art, a model for action recognition has to take time sequence information into account, the complexity is high, the data calculation amount is large, and model training takes two days or even longer. Compared with the prior art, the motion recognition model of the present application has low complexity, short training time and high efficiency, and is quick and convenient to use.
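To make the training step concrete, a hedged sketch is given below: it trains a deliberately small convolutional classifier on single-moment motion part images organized in per-action folders. The folder layout, network size and hyperparameters are assumptions intended to reflect the lightweight, CPU-trainable model described above, not the applicant's actual configuration.

```python
import torch
import torch.nn as nn
from torch.utils.data import DataLoader
from torchvision import datasets, transforms

# Hypothetical folder layout: one sub-directory per action category,
# each containing motion part images (cf. figs. 7A to 7D).
train_set = datasets.ImageFolder(
    "dataset/train",
    transform=transforms.Compose([transforms.Resize((64, 64)), transforms.ToTensor()]))
loader = DataLoader(train_set, batch_size=32, shuffle=True)

# A deliberately small CNN so that CPU-only training stays in the minutes range.
model = nn.Sequential(
    nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
    nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
    nn.Flatten(),
    nn.Linear(32 * 16 * 16, len(train_set.classes)))

optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
criterion = nn.CrossEntropyLoss()

for epoch in range(5):                      # a few epochs suffice for a small data set
    for images, labels in loader:
        optimizer.zero_grad()
        loss = criterion(model(images), labels)
        loss.backward()
        optimizer.step()

torch.save(model.state_dict(), "action_recognition_model.pt")
```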
In one embodiment, the present application provides a method for creating a low-cost and high-efficiency training data set, as shown in fig. 6, the step of creating the training data set using the method includes:
step S301, generating a mechanical equipment action instruction by adopting mechanical equipment simulation software, and applying the mechanical equipment action instruction to preset digital mechanical equipment.
The digital mechanical equipment is a digital model corresponding to real construction mechanical equipment. The mechanical equipment action command is applied to the digital mechanical equipment, so that the action of real construction mechanical equipment can be simulated.
Step S302, a virtual camera is used for acquiring an image sequence of the digital mechanical equipment in a plurality of camera poses.
Specifically, the virtual camera here may be a virtual camera in existing graphics processing software or processing toolkits. Shooting parameters of the virtual camera (camera pose, shooting frequency and the like) are set so as to acquire image sequences of the digital mechanical equipment at different angles.
Step S303, aligning the image sequence time of each camera pose to standard time.
That is, the image sequences of the respective camera poses are aligned in time so that the images representing the digital mechanical equipment from different angles at the same moment correspond to one another across the image sequences. For example, after time alignment, the image sequence captured in the first camera pose is: image A1 at time t, image A2 at time t+1, image A3 at time t+2, ...; the image sequence captured in the second camera pose is: image B1 at time t, image B2 at time t+1, image B3 at time t+2, .... After the image sequences are time-aligned, image A1 and image B1 at time t are views of the digital mechanical equipment from different angles in the same action state; image A1 and image B1 correspond to the same action, so the semantic labels corresponding to image A1 and image B1 are the same.
And step S304, performing action semantic standard analysis on the image sequence of any camera pose to obtain a corresponding semantic label.
Specifically, the semantic standard analysis is respectively carried out on each image in the image sequence of a certain camera pose to obtain the semantic label corresponding to each image. The semantic standard analysis can be realized by adopting a pre-trained semantic model. The semantic model may be trained using a large number of images of the work machine that have been assigned action semantic labels. The trained semantic model can extract the image features to be analyzed and generate action semantic labels corresponding to the images based on the image features.
And S305, endowing the semantic tags to corresponding images in the image sequence of each camera pose.
Specifically, assume that the image sequence captured in the first camera pose is: image A1 at time t, image A2 at time t+1, image A3 at time t+2, ...; the image sequence captured in the second camera pose is: image B1 at time t, image B2 at time t+1, image B3 at time t+2, ...; and the image sequence captured in the third camera pose is: image C1 at time t, image C2 at time t+1, image C3 at time t+2, .... If the action semantic standard analysis of step S304 is performed on the image sequence of the first camera pose, and the semantic label of image A1 is action one, the semantic label of image A2 is action one, and the semantic label of image A3 is action two, then the semantic label of image A1 is assigned to image B1 and image C1 captured at the same moment (time t) in the image sequences of the other camera poses, the semantic label of image A2 is assigned to image B2 and image C2 captured at the same moment (time t+1), and the semantic label of image A3 is assigned to image B3 and image C3 captured at the same moment (time t+2), and so on, because the images captured at the same moment from different camera poses depict the same action of the digital mechanical equipment and their corresponding semantic labels are naturally the same.
And S306, establishing the training data set according to each image and the semantic label corresponding to the image. The training data set includes a correspondence relationship between each image (motion region information) and the motion type. Fig. 7A to 7D show partial data sets corresponding to four actions of loading, rotating-reloading, unloading and rotating-unloading of the excavator respectively.
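A hedged sketch of steps S303 to S306 is given below: semantic labels produced for one reference camera pose are propagated to the time-aligned images of every other pose to build the training data set. The data structures and the analyse_semantics callable are illustrative placeholders, not components defined by the application.

```python
from typing import Dict, List, Tuple

def build_training_set(image_sequences: Dict[str, List[Tuple[float, object]]],
                       reference_pose: str,
                       analyse_semantics) -> List[Tuple[object, str]]:
    """Propagate action semantic labels from one camera pose to all others.

    image_sequences[pose_id] is a list of (standard_time, image) pairs that are
    already aligned to standard time (step S303). analyse_semantics(image) -> label
    is assumed to implement the action semantic standard analysis of step S304
    for the reference pose only.
    """
    # Step S304: label the reference pose once.
    labels_by_time = {t: analyse_semantics(img) for t, img in image_sequences[reference_pose]}

    # Steps S305 and S306: give every time-aligned image the same label and collect the data set.
    training_data = []
    for pose_id, sequence in image_sequences.items():
        for t, img in sequence:
            if t in labels_by_time:          # images at the same standard time share one action
                training_data.append((img, labels_by_time[t]))
    return training_data
```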
Through the above steps, the training data set can be expanded rapidly. In practice, the acquisition of image sequences at different camera poses in steps S301 and S302 can also be realized by the following steps:
and simulating real construction mechanical equipment by using a pre-manufactured RC model, and placing the construction mechanical equipment in a shooting background. And shooting the motion state information of the construction mechanical equipment by image acquisition equipment fixedly arranged at different angles of the RC model. In this process, a reduced shot background may be sampled to obtain a more efficient data set, such as selecting a shot background that contrasts sharply with the color of the work machine equipment. And after each image sequence is obtained, processing the images in the image sequence according to the method in the step S303 to the step S306.
In practical application, a fixing component can be designed to fix the image acquisition equipment at different angles around the construction machinery equipment. The fixing component comprises a plurality of sliding rails arranged along the meridian direction; the sliding rails may be arc-shaped or circular. Each sliding rail is provided with a sliding assembly which can move along the sliding rail and be fixed at any position on it, and the sliding assembly can be connected with a support of the image acquisition equipment. When the sliding assembly moves on the sliding rail, the image acquisition equipment moves with it, so that the pose of the image acquisition equipment relative to the target construction machinery equipment can be adjusted flexibly.
In an embodiment, the method for intelligently identifying the action of the construction machinery equipment in real time further comprises the step of determining construction elements of a project site, and specifically comprises the following steps:
(1) collecting an environment image of an engineering site;
(2) carrying out instance segmentation and semantic analysis on the environment image to obtain a plurality of construction elements. The construction elements include the loaded material, the loading capacity, the unloading position and the number of cycle operations.
Specifically, taking an excavator as an example, the construction elements may include: the material being dug by the excavator is soil, loess, sandy soil, and the like; the unloading position of the material in the excavator bucket is on open ground, on a truck, and the like; the loading capacity of the excavator bucket is full, half full, empty, and the like; and the time consumed by the loading step of the loading, heavy-load rotation, unloading and no-load rotation cycle performed by the excavator, and the like. In this embodiment, the instance segmentation and semantic analysis steps can be realized by pre-trained models; the instance segmentation model and the semantic model can be obtained by training on a large number of environment images assigned the corresponding semantic labels. The trained instance segmentation model and semantic model can extract the features of an environment image to be analyzed and generate a plurality of semantic labels corresponding to the environment image based on those features. The training data sets of the motion recognition model, the instance segmentation model and the semantic model can be derived from the public MOCS Dataset (reference: Dataset and benchmark for detecting moving objects in construction sites); the images in the MOCS Dataset, further annotated with semantic labels, can be used as training data for the models. Depending on the trained model, the number of categories, the fineness and so on of the semantic labels differ to a certain extent.
For example, suppose that certain image information of the excavator has been recognized and the action recognition result of the excavator is loading. At this moment, the construction elements in the image obtained through instance segmentation and semantic analysis are: the material loaded in the bucket is sandy soil, and the unloading position is truck No. X; through a counter and further semantic analysis, it is determined that the truck is filled after 6 cycles of loading, heavy-load rotation, unloading and no-load rotation, among which the bucket is half full 2 times and full 4 times.
The construction elements obtained in this embodiment can be used as supplementary information for identifying the actions of the construction machinery equipment to be identified.
In an embodiment, the method for intelligently identifying the action of the construction machinery equipment in real time further comprises the step of determining the working efficiency of the construction machinery equipment under different working conditions, and specifically comprises the following steps:
(1) acquiring the action time consumption of each operation link of the construction machinery equipment to be identified under a plurality of working conditions; the action time consumption is the time taken by the construction machinery equipment to be identified to perform a certain action, and can be derived from the period over which that action appears continuously in the recognition results.
For example, for an excavator, the action recognition results are sorted by image acquisition time. Suppose there are consecutive times t-5, t-4, t-3, t-2, t-1, t, t+1, t+2, t+3, t+4 and t+5, and the recognition results of the images acquired at these times are: loading at time t-5, heavy-load rotation from t-4 to t+4, and unloading at t+5. Then t-4 can be determined as the action start time of heavy-load rotation (the time at which the action first appears) and t+4 as its action end time (the time at which the action last appears); the difference between the end time and the start time is the action time consumption of heavy-load rotation.
(2) determining, based on the action time consumption, the time consumed by one working cycle of the construction machinery equipment to be identified under the corresponding working condition; the time consumed by one working cycle is the time taken by the construction machinery equipment to be identified to execute all actions in sequence. For example, for an excavator, one working cycle is the time taken to perform the four actions "loading, heavy-load rotation, unloading, no-load rotation" in succession.
(3) determining the working efficiency information of the construction machinery equipment to be identified under each working condition according to the action time consumption and/or the working-cycle time consumption, as sketched below.
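A minimal Python sketch of steps (1)-(3), assuming the recognition results are available as a time-sorted list of (timestamp, action) pairs sampled at a fixed interval; the action names, the one-second frame interval and the cycle definition are illustrative assumptions.

```python
# Minimal sketch: deriving per-action durations and the working-cycle time
# from time-stamped action recognition results.

from itertools import groupby


def action_durations(timestamped_results, frame_interval=1.0):
    """timestamped_results: list of (t, action) sorted by t.
    Returns a list of (action, start_t, end_t, duration)."""
    runs = []
    for action, group in groupby(timestamped_results, key=lambda x: x[1]):
        group = list(group)
        start, end = group[0][0], group[-1][0]
        runs.append((action, start, end, end - start + frame_interval))
    return runs


def cycle_time(runs, cycle=("loading", "heavy_load_rotation",
                            "unloading", "no_load_rotation")):
    """Sum the durations of one full pass through the action cycle;
    returns None if no complete cycle is present."""
    actions = [r[0] for r in runs]
    for i in range(len(runs) - len(cycle) + 1):
        if tuple(actions[i:i + len(cycle)]) == cycle:
            return sum(r[3] for r in runs[i:i + len(cycle)])
    return None


if __name__ == "__main__":
    results = ([(t, "loading") for t in range(0, 8)]
               + [(t, "heavy_load_rotation") for t in range(8, 14)]
               + [(t, "unloading") for t in range(14, 18)]
               + [(t, "no_load_rotation") for t in range(18, 23)])
    runs = action_durations(results)
    print(runs)
    print("one working cycle takes:", cycle_time(runs), "s")
```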
A specific example is provided here for illustration. Taking a crawler-type backhoe excavator as an example, the action categories include loading (loading material into the bucket), heavy-load rotation, unloading (unloading the material in the bucket), no-load rotation, walking, idling, leveling a field, trimming a side slope, deep digging and the like. By analyzing the working efficiency of the crawler-type backhoe excavator under different working conditions, the optimal working strategy under each condition can be determined.
For the most common earthwork truck-loading operation, the common variables include: the earthwork type (the type of material to be loaded), the height relationship between the material to be loaded and the excavator, the height relationship between the truck to be loaded and the excavator, and the rotation angle of the excavator in one working cycle (i.e. the rotation angle required for the excavator to complete one "loading - heavy-load rotation - unloading - no-load rotation" cycle). The earthwork types include: hardened undisturbed soil (high strength), not loosened / loosened; fertile soil (low strength), not loosened / loosened; hard rock, not loosened / loosened; soft rock, not loosened / loosened; undisturbed / disturbed cobblestone deposits; silt; and the like. The height relationship between the material to be loaded and the excavator includes: the material is lower than the center of gravity of the excavator, roughly level with the center of gravity, or higher than the center of gravity. The height relationship between the truck to be loaded and the excavator includes: the plane of the truck is lower than that of the excavator by about one / about half a vehicle-body height, the truck and the excavator are on the same plane, or the plane of the truck is higher than that of the excavator. The rotation angle of the excavator in one working cycle includes: 30-90 degrees, 90-150 degrees, 150-210 degrees and 210-360 degrees.
In general, the type of material to be loaded cannot be chosen, and under certain conditions at least one of the other variables cannot be chosen either (it is essentially determined by site conditions). In such a situation, how the remaining selectable variables are chosen is therefore very important for ensuring that the loading efficiency of the excavator is the highest.
Table 2 shows the time consumed by the actions of loading, heavy-load rotation, unloading and no-load rotation, and the time consumed by one working cycle, of the crawler-type backhoe excavator under five working conditions.
The five working conditions are described below in terms of material state - height relationship between the excavator and the material - height relationship between the excavator and the truck - rotation angle of the excavator in one working cycle:
Working condition I: tiled loose soil - ground-level material - level truck - rear loading: the material is loose soil, not in an aggregated state, spread flat on the base layer; the material and the lower crawler of the excavator are on the same plane (i.e. the material is roughly level with, or slightly below, the center of gravity); the crawler bottom of the excavator and the truck are on the same plane; assuming that the unloading (material-dumping) direction of the excavator is straight ahead, the angle between the digging direction and the unloading direction is about 150-180 degrees, i.e. the total rotation required for "digging, heavy-load rotation, unloading, no-load rotation" is close to 360 degrees.
Working conditions are as follows: gathering in-situ soil, medium step material, low vehicle and rear loading: the material is in-situ soil, and is soft and aggregated; the material is slightly higher than the gravity center of the excavator; the bottom of a crawler belt of the excavator is higher than half of the truck height; assuming that the direction of loading (unloading materials) of the excavator is right ahead, the angle between the direction of loading and unloading materials and the unloading direction is about 150-180 degrees, namely the angle of rotation required by 'digging, rotating-overloading, unloading, rotating-idling' is close to 360 degrees.
Working conditions are as follows: in-situ weathered rock-middle step material-lower vehicle-side loading: the material is in-situ weathered rock which is in an aggregated state; the material is slightly higher than the gravity center of the excavator; the crawler bottom of the excavator is close to half the truck height; assuming that the direction of loading (unloading materials) of the excavator is right ahead, the angle between the direction of loading and unloading materials and the unloading direction is about 60-90 degrees, namely the angle of rotation required by 'digging, rotating-overloading, unloading, rotating-idling' is close to 180 degrees.
Working conditions are as follows: in-situ ancient riverbed pebble-low step material-low vehicle-front loading: the material is an in-situ ancient river bed, cobblestones are mainly used, the material is in an aggregation shape, and the strength among granular materials is not large; the material is lower than the crawler belt of the excavator and far lower than the gravity center of the excavator; the crawler belt bottom of the excavator is higher than half of the truck height; assuming that the direction of loading (unloading materials) of the excavator is right ahead, the angle between the direction of loading and unloading materials and the unloading direction is about 15-30 degrees, namely the angle of rotation required by 'digging, rotating-overloading, unloading, rotating-idling' is close to 60 degrees.
Working condition five: gathering loose soil, low step material, flat car and side rear loading: the material is loose soil and is in an aggregation state; the material is lower than the crawler belt of the excavator and far lower than the gravity center of the excavator; the crawler belt bottom of the excavator and the truck are on the same plane; if the direction of loading (unloading materials) of the excavator is right ahead, the angle between the material loading and unloading direction and the unloading direction is about 90-130 degrees, namely the angle of rotation needed by 'digging, rotating-overloading, unloading, rotating-idling' is close to 240 degrees.
Table 2: time consumption of each link of loading operation of excavator(s)
By comparing the time consumed by one working cycle under each working condition, it can be seen that the crawler-type backhoe excavator is most efficient under working condition IV: in-situ ancient riverbed cobblestones - low-step material - low truck - front loading. Table 2 above only shows some working conditions and their corresponding time consumption, and the operation objects of the five working conditions are all different. In practical application, for the same material to be loaded, the selectable engineering factors are taken as variables to obtain multiple operation modes, and the time consumption of each action under each operation mode is counted separately, so that the optimal operation mode for that material is obtained and construction efficiency is improved. The non-selectable engineering factors include the material to be loaded; the selectable engineering factors include the positional relationship between the construction machinery equipment and the material (ground-level material, low-step material, middle-step material and the like) and the height relationship between the construction machinery equipment and the truck (level truck, low truck, slightly low truck and the like).
Working efficiency information of the construction machinery equipment under the various working conditions/operation modes is obtained by the above method; by comparing the efficiency information of the different actions under different working conditions, the optimal operation mode/working condition can be selected, thereby improving working efficiency.
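As an illustration of this comparison, the following sketch ranks working conditions by one-cycle time consumption; the condition names and numbers are placeholders (the actual values of Table 2 are only available as an image in the original publication), not measured results.

```python
# Minimal sketch: pick the most efficient working condition by comparing the
# time consumed by one working cycle. All values are illustrative placeholders.

cycle_times = {
    "condition_1": 27.0,   # seconds per working cycle (placeholder)
    "condition_2": 31.5,
    "condition_3": 22.0,
    "condition_4": 18.5,
    "condition_5": 25.0,
}

best = min(cycle_times, key=cycle_times.get)
print(f"most efficient working condition: {best} "
      f"({cycle_times[best]} s per cycle)")
```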
Further, the construction machinery equipment action real-time intelligent identification method further comprises the following steps:
acquiring operation instructions corresponding to different operation conditions/operation modes;
and establishing a corresponding relation among the working conditions/working modes, the working efficiency information and the operation instructions so as to determine the working conditions/working modes with the highest working efficiency and the corresponding operation instructions according to the corresponding relation.
Specifically, after the work efficiency information corresponding to each operation mode/working condition of the construction machinery equipment is obtained, the operation instruction corresponding to each operation mode/working condition can also be obtained, and the correspondence among operation mode/working condition, work efficiency information and operation instruction is then established. Reasonable use of this correspondence improves the instruction efficiency of autonomous driving of the construction machinery equipment.
In a specific example, non-selectable construction elements, including the material to be loaded (such as tiled loose soil, in-situ strongly weathered rock, loose soil, in-situ ancient riverbed, in-situ soil and the like), are acquired and analyzed through image acquisition equipment such as a monitoring camera. The non-selectable construction elements are used as the matching basis to match at least one operation mode/working condition; when more than one is matched, the work efficiency information of each matched operation mode/working condition is further obtained according to the correspondence; the optimal operation mode/working condition, i.e. the one with the highest efficiency, is determined from the work efficiency information; and finally the operation instruction corresponding to the optimal operation mode/working condition is obtained according to the correspondence. The operation instruction corresponding to the optimal operation mode is sent to the constructor driving the construction machinery equipment for reference, or sent to construction machinery equipment capable of autonomous driving so that it operates according to the instruction.
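A minimal sketch of this correspondence lookup, assuming the condition/efficiency/instruction triples have already been established; all entries and instruction strings are hypothetical placeholders, not actual operation instructions.

```python
# Minimal sketch: match candidate working conditions by the non-selectable
# element (material to be loaded), pick the one with the lowest cycle time,
# and return its operation instruction.

CORRESPONDENCE = [
    # (working condition, non-selectable material, cycle time [s], instruction)
    ("condition_2", "in_situ_soil", 31.5, "rear loading, swing about 360 deg"),
    ("condition_4", "ancient_riverbed_cobble", 18.5, "front loading, swing about 60 deg"),
    ("condition_5", "loose_soil", 25.0, "side-rear loading, swing about 240 deg"),
]


def best_instruction(material):
    """Return (condition, instruction) with the lowest cycle time for the
    given material, or None if no condition matches."""
    candidates = [c for c in CORRESPONDENCE if c[1] == material]
    if not candidates:
        return None
    cond = min(candidates, key=lambda c: c[2])
    return cond[0], cond[3]


if __name__ == "__main__":
    print(best_instruction("loose_soil"))
```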
According to the embodiment, the corresponding relation among the operation mode/operation working condition, the working efficiency information and the operation instruction is established, so that the optimal operation mode/operation working condition of a construction site can be determined quickly, and the construction efficiency is improved.
Further, the method for intelligently identifying the action of the construction machinery equipment in real time further comprises the following steps:
determining the interactive activity of the corresponding construction machinery equipment, the safety information of the single construction machinery equipment and the safety information of the interactive activity according to the action recognition result and the construction elements;
acquiring an operation instruction when the construction mechanical equipment executes the action recognition result;
and establishing a corresponding relation between the operation instruction and the safety information of the single construction machinery equipment and the safety information of the interactive activities.
Specifically, after the real-time intelligent recognition results of the actions of the construction machinery equipment are obtained, the following can be derived by analyzing the actions of each piece of equipment and the construction site elements: information on the interactive activity between pieces of construction machinery equipment (for example, whether an excavator is loading material onto a truck), safety information of a single piece of construction machinery equipment (for example, whether its action meets the operation specification), safety information of the interaction of multiple pieces of construction machinery equipment (for example, whether a collision may occur during their interaction), and the operation instruction corresponding to each action; the correspondence between the safety information and the operation instructions is then established. Here, the safety information of a single piece of construction machinery equipment and of the interaction of multiple pieces refers to illegal actions of the construction machinery equipment and actions that may cause a safety hazard, for example: whether a collision occurs between construction machinery equipment, whether a collision with other objects occurs, whether overload occurs, whether overspeed occurs, and the like. For example: 1) whether machinery collides with other machinery can be judged from the pose relationship of the machinery involved; 2) whether machinery collides with other objects can be judged from the spatial relationship between the pose of the machinery and other construction elements; 3) whether overload occurs can be obtained from physical sensor information; 4) whether overspeed occurs can be obtained by combining the motion part information with the original dimensions of the machinery (or its parts): after the motion part is obtained from the image information by the difference method, its motion speed can be restored by analysis in combination with the original dimensions of the machinery (or its parts), such as the speed of a truck, a muck truck, or a crane hook. Reasonable use of the correspondence between the safety information and the operation instructions improves the safety of construction equipment during autonomous driving.
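A minimal sketch of the overspeed estimate in item 4), assuming the motion part has already been localized as a bounding box in two frames and that one real-world dimension of the machine (or part) is known; the speed threshold and the example numbers are illustrative assumptions.

```python
# Minimal sketch: restore a motion speed from image information by using a
# known physical dimension of the machine (or part) as a scale reference.

def estimate_speed(bbox_t0, bbox_t1, dt, real_length_m):
    """bbox_*: (x, y, w, h) of the motion part in pixels at two times;
    dt: time between the frames in seconds;
    real_length_m: known physical length corresponding to the box width."""
    scale = real_length_m / bbox_t0[2]                      # metres per pixel
    cx0, cy0 = bbox_t0[0] + bbox_t0[2] / 2, bbox_t0[1] + bbox_t0[3] / 2
    cx1, cy1 = bbox_t1[0] + bbox_t1[2] / 2, bbox_t1[1] + bbox_t1[3] / 2
    displacement_px = ((cx1 - cx0) ** 2 + (cy1 - cy0) ** 2) ** 0.5
    return displacement_px * scale / dt                     # m/s


if __name__ == "__main__":
    # e.g. a truck about 8 m long whose box moved 40 px between frames 0.5 s apart
    v = estimate_speed((100, 200, 160, 60), (140, 200, 160, 60), 0.5, 8.0)
    print(f"estimated speed: {v:.1f} m/s, overspeed: {v > 5.0}")
```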
Further, the construction machinery equipment action real-time intelligent identification method further comprises the following steps:
and generating a working log of the construction machinery equipment according to the action recognition result, the working efficiency information, the interactive activity of the construction machinery equipment, the safety information of the single construction machinery equipment and the safety information of the interactive activity.
Specifically, after the real-time intelligent recognition results of the actions of the construction machinery equipment, the work efficiency information corresponding to each action, the equipment interaction information, and the safety information of individual equipment and of equipment interaction activities are obtained, a work log of the construction machinery equipment can further be generated and stored on this basis, together with the action timing information (the order in which the actions are executed) and the position information of the construction machinery equipment, so that the log can be retrieved and checked by workers during subsequent maintenance of the construction machinery equipment.
By generating the work log automatically in this way, the errors that easily occur when work logs are recorded manually are avoided to a certain extent, and the accuracy and reliability of the information in the work log, as well as the efficiency of recording it, are improved.
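The following sketch shows one possible way to assemble and store such a log entry; the field names and the JSON Lines output format are illustrative choices rather than part of the described method.

```python
# Minimal sketch: assembling a work-log entry from the recognition result and
# the associated efficiency, interaction, safety and position information.

import json
import time


def make_log_entry(action, efficiency, interaction, safety_single,
                   safety_interaction, position):
    return {
        "timestamp": time.strftime("%Y-%m-%d %H:%M:%S"),
        "action": action,                        # action recognition result
        "efficiency": efficiency,                # e.g. cycle time for this action
        "interaction": interaction,              # e.g. "loading onto truck 03"
        "safety_single": safety_single,          # e.g. "no violation"
        "safety_interaction": safety_interaction,
        "position": position,                    # equipment position information
    }


def append_log(entry, path="work_log.jsonl"):
    with open(path, "a", encoding="utf-8") as f:
        f.write(json.dumps(entry, ensure_ascii=False) + "\n")


if __name__ == "__main__":
    entry = make_log_entry("loading", {"cycle_time_s": 22.0},
                           "loading onto truck 03", "no violation",
                           "no collision risk", {"x": 12.4, "y": 3.1})
    append_log(entry)
    print(entry)
```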
Further, the method for intelligently identifying the action of the construction machinery equipment in real time further comprises the following steps:
sending the action recognition result to a data display platform for display; the data display platform can achieve near real-time digital twinning of the action recognition result through a holographic projection three-dimensional display mode, or achieve image simulation twinning and/or character description information twinning of the action recognition result through a two-dimensional display mode.
Specifically, the action recognition result of the present application can be sent to the data display platform using communication technologies such as 5G and 6G, and the data display platform displays it. The display mode can be three-dimensional, i.e. near real-time digital twinning of the action recognition result through holographic projection; or two-dimensional, i.e. image-simulation twinning and/or text-description twinning of the action recognition result on a display screen. This embodiment helps background staff check the working state of the construction machinery equipment.
Based on the same inventive concept, the embodiment of the application also provides a real-time intelligent identification device for the action of the construction machinery equipment, which can be used for realizing the method described in the embodiment, as described in the following embodiment. The principle of solving the problems of the construction machinery equipment action real-time intelligent recognition device is similar to that of the construction machinery equipment action real-time intelligent recognition method, so the implementation of the construction machinery equipment action real-time intelligent recognition device can refer to the implementation of the construction machinery equipment action real-time intelligent recognition method, and repeated parts are not repeated. As used hereinafter, the term "unit" or "module" may be a combination of software and/or hardware that implements a predetermined function. While the system described in the embodiments below is preferably implemented in software, implementations in hardware, or a combination of software and hardware are also possible and contemplated.
As shown in fig. 8, the present application provides a real-time intelligent recognition device for a construction machine, including:
the information acquisition module 801 is used for acquiring motion part information of the construction machinery equipment to be identified at multiple moments within a preset time range;
the action recognition module 802 is configured to analyze the motion part information at each time based on a pre-trained action recognition model, and obtain a plurality of action recognition results of the to-be-recognized construction machinery at a plurality of times; the motion recognition model is obtained by training according to a mechanical equipment motion type data set.
In an embodiment, as shown in fig. 9, the device for intelligently recognizing the action of the construction machinery equipment in real time further includes a result correction module 803, configured to perform smoothing operation and result correction on the action recognition result at each time by using timing information.
In one embodiment, as shown in fig. 10, the motion portion information is image information of a motion portion of the construction machine to be identified;
the information acquisition module 801 includes:
the image sequence acquiring unit 8011 is configured to acquire an image sequence of the to-be-identified construction machinery within a preset time range;
the image analyzing unit 8012 is configured to analyze and process each image in the image sequence based on an image difference method and an instance segmentation method, so as to obtain the motion part information of the construction machinery equipment to be identified at the corresponding time.
In an embodiment, the image analysis unit 8012 is specifically configured to:
acquiring a first pixel set corresponding to a moving target in the image by a background difference method;
carrying out instance segmentation on the construction machinery equipment to be identified on the image to obtain a second pixel set;
and acquiring the intersection of the first pixel set and the second pixel set to obtain the motion part information of the construction machinery equipment to be identified (see the sketch below).
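A minimal sketch of the intersection step performed by the image analysis unit, with NumPy boolean masks standing in for the two pixel sets; the background model and the instance segmentation network are assumed to be provided elsewhere, and the difference threshold is an illustrative value.

```python
# Minimal sketch: intersect the moving-pixel set from background subtraction
# with the instance-segmentation mask of the machine to obtain the motion part.

import numpy as np


def motion_part_mask(frame, background, instance_mask, diff_threshold=25):
    """frame, background: HxW grayscale arrays; instance_mask: HxW bool array
    marking the construction machine to be identified."""
    # first pixel set: moving pixels obtained by the background difference method
    moving = np.abs(frame.astype(np.int16) - background.astype(np.int16)) > diff_threshold
    # second pixel set: pixels belonging to the machine instance;
    # the motion part is the intersection of the two sets
    return moving & instance_mask


if __name__ == "__main__":
    h, w = 4, 6
    background = np.zeros((h, w), dtype=np.uint8)
    frame = background.copy()
    frame[1:3, 2:5] = 200                       # something moved in this region
    instance_mask = np.zeros((h, w), dtype=bool)
    instance_mask[0:3, 3:6] = True              # the machine occupies this region
    print(motion_part_mask(frame, background, instance_mask).astype(int))
```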
In an embodiment, as shown in fig. 11, the real-time intelligent recognition device for the motion of the construction machinery equipment further includes:
a training data obtaining module 901, configured to obtain a training data set including a plurality of training data, where each training data includes an image of a construction machine and a semantic label corresponding to the image; the semantic tag comprises the type and action category of the construction machinery equipment;
a model training module 902, configured to train an initial model using the training data set to obtain the motion recognition model; the initial model comprises one of a machine learning model, a deep learning model, a mathematical statistics comparison model and a characteristic value and threshold value comparison model based on simple mathematical operation.
In an embodiment, the training data obtaining module 901 is further configured to:
generating a mechanical equipment action instruction by adopting mechanical equipment simulation software, and applying the mechanical equipment action instruction to preset digital mechanical equipment;
acquiring a sequence of images of the digital mechanical device at a plurality of camera poses using a virtual camera;
aligning the image sequence time of each camera pose to standard time;
performing action semantic standard analysis on an image sequence of any camera pose to obtain a corresponding semantic tag;
giving the semantic labels to corresponding images in the image sequence of each camera pose;
and establishing the training data set according to each image and the corresponding semantic label (see the sketch below).
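A minimal sketch of this synthetic data pipeline, with the simulation software and virtual cameras replaced by stub functions (render_frame, analyze_semantics); the stubs are hypothetical stand-ins, and only the data flow of the steps above is illustrated.

```python
# Minimal sketch: build a synthetic training set from simulated machine
# commands, multiple virtual camera poses, and shared semantic labels.

def render_frame(command, pose, t):
    """Stand-in for the simulation software + virtual camera: returns a fake
    image identifier for the digital machine executing `command` at time t."""
    return f"img_{command}_{pose}_{t:04d}"


def analyze_semantics(command):
    """Stand-in for action-semantic analysis of one reference pose's sequence:
    maps a machine command to its semantic label."""
    return {"type": "excavator", "action": command}


def build_training_set(commands, poses, frames_per_command=3):
    dataset, t = [], 0
    for command in commands:                      # generated action commands
        label = analyze_semantics(command)        # semantic label from one pose
        for _ in range(frames_per_command):       # frames aligned to standard time t
            for pose in poses:                    # multiple camera poses
                image = render_frame(command, pose, t)
                dataset.append({"image": image, "label": label})
            t += 1
    return dataset


if __name__ == "__main__":
    data = build_training_set(["loading", "heavy_load_rotation"], ["front", "side"])
    print(len(data), data[0])
```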
In an embodiment, as shown in fig. 12, the device for real-time intelligent recognition of the operation of the construction machinery equipment further includes a construction element obtaining module 804, configured to:
collecting an environment image of an engineering site;
and carrying out instance segmentation and semantic analysis on the environment image to obtain a plurality of construction elements.
In an embodiment, please refer to fig. 12, the real-time intelligent recognition apparatus for motion of construction machinery further includes a work efficiency generating module 805 configured to:
acquiring the action time consumption of each operation link of the construction mechanical equipment to be identified under a plurality of operation working conditions; the action time is the time for executing a certain action by the construction machinery equipment to be identified;
respectively determining the consumed time of a working cycle of the construction machinery equipment to be identified under the corresponding working condition based on the action consumed time; the time consumed by one working cycle is the time for the construction machinery equipment to be identified to sequentially execute all actions;
and determining the working efficiency information of the construction machinery equipment to be identified under each working condition according to the action consumed time and/or the working cycle consumed time.
The real-time intelligent recognition device for actions of construction machinery equipment can obtain the action of the construction machinery equipment at a certain moment based only on the motion part information of the equipment at that moment. Compared with existing technical schemes in which the action must be judged from a plurality of data within a time period, the time sequence of the data does not need to be considered and no time-domain sliding window is needed to obtain time-sequence context information, so the data computation amount and resource waste are reduced and the action recognition efficiency and accuracy are improved; at the same time, the size of the action recognition model is reduced to a certain degree.
In a third aspect, the present application further provides an electronic device, and referring to fig. 13, the electronic device 100 specifically includes:
a central processing unit (processor)110, a memory (memory)120, a communication module (Communications)130, an input unit 140, an output unit 150, and a power supply 160.
The memory (memory)120, the communication module (Communications)130, the input unit 140, the output unit 150 and the power supply 160 are respectively connected to the central processing unit (processor) 110. The memory 120 stores therein a computer program that can be called by the cpu 110, and the cpu 110 implements all steps of any one of the methods for real-time intelligent recognition of the operation of the construction machine in the above embodiments when executing the computer program.
In a fourth aspect, the present application also provides a computer storage medium for storing a computer program, the computer program being executable by a processor. When being executed by a processor, the computer program realizes the real-time intelligent identification method for the action of any construction machinery equipment provided by the application.
As will be appreciated by one skilled in the art, embodiments of the present description may be provided as a method, system, or computer program product. Accordingly, embodiments of the present description may take the form of an entirely hardware embodiment, an entirely software embodiment or an embodiment combining software and hardware aspects. Furthermore, embodiments of the present description may take the form of a computer program product embodied on one or more computer-usable storage media (including, but not limited to, disk storage, CD-ROM, optical storage, and so forth) having computer-usable program code embodied therein. The embodiments in the present specification are described in a progressive manner, and the same and similar parts among the embodiments are referred to each other, and each embodiment focuses on the differences from the other embodiments. In particular, for the system embodiment, since it is substantially similar to the method embodiment, the description is simple, and for the relevant points, reference may be made to the partial description of the method embodiment. In the description herein, references to the description of the term "one embodiment," "some embodiments," "an example," "a specific example," or "some examples," etc., mean that a particular feature, structure, material, or characteristic described in connection with the embodiment or example is included in at least one embodiment or example of an embodiment of the specification.
In this specification, the schematic representations of the terms used above are not necessarily intended to refer to the same embodiment or example. Furthermore, various embodiments or examples and features of different embodiments or examples described in this specification can be combined and combined by one skilled in the art without contradiction. The above description is only an example of the embodiments of the present disclosure, and is not intended to limit the embodiments of the present disclosure. Various modifications and alterations to the embodiments described herein will be apparent to those skilled in the art. Any modification, equivalent replacement, improvement, etc. made within the spirit and principle of the embodiments of the present specification should be included in the scope of the claims of the embodiments of the present specification.

Claims (17)

1. A real-time intelligent identification method for actions of construction machinery equipment is characterized by comprising the following steps:
acquiring motion part information of construction machinery equipment to be identified at a plurality of moments within a preset time range;
respectively analyzing the motion part information at each moment based on a pre-trained motion recognition model to obtain a plurality of motion recognition results of the construction machinery equipment to be recognized at a plurality of moments; the motion recognition model is obtained by training according to a mechanical equipment motion type data set.
2. The method for intelligently identifying the motion of construction machinery equipment in real time according to claim 1, further comprising: and performing smoothing operation and result correction on the action recognition result at each moment by using the time sequence information.
3. The method for real-time intelligent recognition of the operation of construction machinery according to claim 1, wherein the motion part information is image information of a motion part of construction machinery to be recognized;
the method for acquiring the motion part information of the construction machinery equipment to be identified at a plurality of moments within a preset time range comprises the following steps:
acquiring an image sequence of the construction machinery equipment to be identified within a preset time range;
and analyzing and processing each image in the image sequence based on an image difference method and an instance segmentation method to obtain the motion part information of the construction machinery equipment to be identified at the corresponding moment.
4. The method for intelligently identifying the action of the construction machinery equipment in real time according to claim 3, wherein the analyzing and processing of each image in the image sequence based on an image difference method and an instance segmentation method to obtain the motion part information of the construction machinery equipment to be identified at the corresponding moment comprises the following steps:
acquiring a first pixel set corresponding to a moving target in the image by a background difference method;
carrying out instance segmentation on the construction machinery equipment to be identified on the image to obtain a second pixel set;
and acquiring the intersection of the first pixel set and the second pixel set to obtain the motion part information of the construction machinery equipment to be identified.
5. The method of claim 3, wherein the step of training the motion recognition model includes:
acquiring a training data set containing a plurality of training data, wherein each training data comprises an image of construction machinery equipment and a semantic label corresponding to the image; the semantic tag includes a type and an action category of the construction machine;
training an initial model by using the training data set to obtain the action recognition model; the initial model comprises one of a machine learning model, a deep learning model, a mathematical statistics comparison model and a characteristic value and threshold value comparison model based on simple mathematical operation.
6. The method of intelligently recognizing motion of construction machinery according to claim 5, wherein the step of establishing the training data set includes:
generating a mechanical equipment action instruction by adopting mechanical equipment simulation software, and applying the mechanical equipment action instruction to preset digital mechanical equipment;
acquiring a sequence of images of the digital mechanical device at a plurality of camera poses using a virtual camera;
aligning the image sequence time of each camera pose to standard time;
performing action semantic standard analysis on an image sequence of any camera pose to obtain a corresponding semantic tag;
giving the semantic labels to corresponding images in the image sequence of each camera pose;
and establishing the training data set according to each image and the corresponding semantic label.
7. The method for intelligently identifying the motion of construction machinery equipment in real time according to any one of claims 1 to 6, further comprising:
collecting an environment image of an engineering site;
and carrying out instance segmentation and semantic analysis on the environment image to obtain a plurality of construction elements.
8. The method for intelligently identifying the motion of construction machinery according to claim 7, further comprising:
acquiring the action time consumption of each operation link of the construction machinery equipment to be identified under a plurality of operation conditions; the action time is the time for the construction machinery equipment to be identified to execute a certain action;
respectively determining the consumed time of a working cycle of the construction machinery equipment to be identified under the corresponding working condition based on the action consumed time; the time consumed by one working cycle is the time for the construction mechanical equipment to be identified to sequentially execute all actions;
and determining the working efficiency information of the construction machinery equipment to be identified under each working condition according to the action consumed time and/or the working cycle consumed time.
9. The method for intelligently identifying the motion of the construction machinery according to claim 8, further comprising:
acquiring operation instructions corresponding to different operation conditions;
and establishing a corresponding relation among the working conditions, the working efficiency information and the operation instructions so as to determine the working conditions with the highest working efficiency and the corresponding operation instructions according to the corresponding relation.
10. The method for intelligently identifying the motion of construction machinery according to claim 9, further comprising:
determining the interactive activity of the corresponding construction machinery equipment, the safety information of the single construction machinery equipment and the safety information of the interactive activity according to the action recognition result and the construction element;
acquiring an operation instruction when the construction mechanical equipment executes the action recognition result;
and establishing a corresponding relation between the operation instruction and the safety information of the single construction machinery equipment and the safety information of the interactive activities.
11. The method for intelligently recognizing the motion of construction machinery according to claim 10, further comprising:
and generating a working log of the construction machinery equipment according to the action recognition result, the working efficiency information, the interactive activity of the construction machinery equipment, the safety information of the single construction machinery equipment and the safety information of the interactive activity.
12. The method for intelligently identifying the motion of construction machinery equipment in real time according to any one of claims 1 to 6, further comprising:
sending the action recognition result to a data display platform for display; the data display platform can achieve near real-time digital twinning of the action recognition result through a holographic projection three-dimensional display mode, or achieve image simulation twinning and/or character description information twinning of the action recognition result through a two-dimensional display mode.
13. The method for real-time intelligent recognition of the actions of construction machinery equipment according to any one of claims 1 to 6, wherein the construction machinery equipment comprises one or more of an excavator, a loader, a truck, a crane, a truck crane, a climbing vehicle, a road roller, a bulldozer, a road paver, a concrete mixing truck, a boom pump truck, a pile driver, a rotary drilling rig and a trenching machine, in an interactive state.
14. The utility model provides a real-time intelligent recognition device of construction machinery equipment action which characterized in that includes:
the information acquisition module is used for acquiring the motion part information of the construction machinery equipment to be identified at a plurality of moments within a preset time range;
the motion recognition module is used for analyzing the motion part information at each moment respectively based on a pre-trained motion recognition model to obtain a plurality of motion recognition results of the construction machinery equipment to be recognized at a plurality of moments; the motion recognition model is obtained by training according to a mechanical equipment motion type data set;
and the result correction module is used for performing smooth operation and result correction on the action recognition result by utilizing the time sequence information to obtain the action of the construction machinery equipment to be recognized.
15. The real-time intelligent recognition device for construction machinery action of claim 14, further comprising:
the training data acquisition module is used for acquiring a training data set containing a plurality of training data, and each training data comprises an image of construction machinery equipment and a semantic label corresponding to the image; the semantic tag includes a type and an action category of the construction machine;
the model training module is used for training an initial model by using the training data set to obtain the action recognition model; the initial model comprises one of a machine learning model, a deep learning model, a mathematical statistics comparison model and a characteristic value and threshold value comparison model based on simple mathematical operation.
16. An electronic device, comprising:
a central processing unit, a memory and a communication module, wherein a computer program which can be called by the central processing unit is stored in the memory, and when the central processing unit executes the computer program, the method for real-time intelligent identification of the actions of construction machinery equipment according to any one of claims 1 to 13 is implemented.
17. A computer-readable storage medium storing a computer program, wherein the computer program, when executed by a processor, implements the method for real-time intelligent recognition of work machine device action according to any one of claims 1 to 13.
CN202210672343.7A 2022-06-15 2022-06-15 Real-time intelligent identification method and device for actions of construction machinery equipment Active CN114758422B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210672343.7A CN114758422B (en) 2022-06-15 2022-06-15 Real-time intelligent identification method and device for actions of construction machinery equipment

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202210672343.7A CN114758422B (en) 2022-06-15 2022-06-15 Real-time intelligent identification method and device for actions of construction machinery equipment

Publications (2)

Publication Number Publication Date
CN114758422A true CN114758422A (en) 2022-07-15
CN114758422B CN114758422B (en) 2022-08-30

Family

ID=82336681

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210672343.7A Active CN114758422B (en) 2022-06-15 2022-06-15 Real-time intelligent identification method and device for actions of construction machinery equipment

Country Status (1)

Country Link
CN (1) CN114758422B (en)



Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2021019948A1 (en) * 2019-07-29 2021-02-04 コベルコ建機株式会社 Position identification system for construction machinery
CN111598951A (en) * 2020-05-18 2020-08-28 清华大学 Method, device and storage medium for identifying space target
CN113989367A (en) * 2021-10-12 2022-01-28 三一重机有限公司 Method and device for estimating attitude of working machine, and working machine
CN114155294A (en) * 2021-10-25 2022-03-08 东北大学 Engineering machinery working device pose estimation method based on deep learning

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
BAAO XIE ET AL.: "Movement and Gesture Recognition Using Deep Learning and Wearable-sensor Technology", 《PROCEEDINGS OF THE 2018 INTERNATIONAL CONFERENCE ON ARTIFICIAL INTELLIGENCE AND PATTERN RECOGNITION》 *
DING ZHONGCONG ET AL.: "Analysis of the workability of self-compacting concrete based on biaxial images", Journal of Tsinghua University (Science and Technology) *

Cited By (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115562191A (en) * 2022-09-26 2023-01-03 北京能科瑞元数字技术有限公司 Productivity intermediate station intelligent conjecture analysis method based on industrial digital twin
CN115562191B (en) * 2022-09-26 2024-02-27 北京能科瑞元数字技术有限公司 Industrial digital twin-based intelligent presumption analysis method for productivity center
CN115760819A (en) * 2022-11-28 2023-03-07 北京中环高科环境治理有限公司 Volatile organic compound measuring method, calculating equipment and storage medium
CN115760819B (en) * 2022-11-28 2023-11-24 北京中环高科环境治理有限公司 Volatile organic compound measuring method, computing equipment and storage medium
CN115875008A (en) * 2023-01-06 2023-03-31 四川省川建勘察设计院有限公司 Intelligent drilling data acquisition method and system for geological drilling machine and storage medium
CN116430739A (en) * 2023-06-14 2023-07-14 河北工业大学 Whole-process intelligent compaction system based on digital twin technology and control method
CN116430739B (en) * 2023-06-14 2023-08-22 河北工业大学 Whole-process intelligent compaction system based on digital twin technology and control method

Also Published As

Publication number Publication date
CN114758422B (en) 2022-08-30


Legal Events

Code Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant