CN113469063A - Building worker strain early warning analysis method and system based on computer vision - Google Patents


Info

Publication number
CN113469063A
Authority
CN
China
Prior art keywords
action
construction
video image
worker
information
Legal status
Pending
Application number
CN202110756425.5A
Other languages
Chinese (zh)
Inventor
邓逸川
邓晖
苏成
欧智斌
Current Assignee
Sino Singapore International Joint Research Institute
Original Assignee
Sino Singapore International Joint Research Institute
Priority date
2021-07-05
Filing date
2021-07-05
Publication date
2021-10-01
Application filed by Sino Singapore International Joint Research Institute
Priority to CN202110756425.5A
Publication of CN113469063A

Classifications

    • G08B21/18 Status alarms (alarms responsive to a single specified undesired or abnormal condition and not otherwise provided for)
    • G08B25/008 Alarm setting and unsetting, i.e. arming or disarming of the security system
    • G08B7/06 Signalling systems according to more than one of groups G08B3/00 - G08B6/00 using electric transmission, e.g. involving audible and visible signalling through the use of sound and light sources


Abstract

The invention discloses a computer-vision-based construction worker strain early warning analysis method and system. The method comprises the following steps: collecting standard construction action information corresponding to different construction work types, entering the standard construction actions, and establishing an action training library; acquiring, by a video image acquisition device, video image information of site construction workers at work in real time and transmitting it through an information transmission line to a workstation of the video processing module; extracting human skeleton information from the video image information by a video processing program on the workstation; realizing intelligent identification of the labor state based on a dynamic time warping (DTW) algorithm and Gesture Builder; and feeding the recognition result back to the monitoring module in real time. The method and system can intelligently identify the specific labor state of construction workers and provide reliable and accurate strain early warning reference suggestions for site managers, making construction site management more intelligent and refined and safeguarding the physical health of construction personnel to a certain degree.

Description

Building worker strain early warning analysis method and system based on computer vision
Technical Field
The invention relates to the technical field of intelligent management of construction sites, in particular to a computer vision-based construction worker strain early warning analysis method and system.
Background
On construction sites at the present stage, the number of front-line workers is large and construction work is complex and variable, so it is difficult for site managers to achieve real-time, comprehensive and efficient personnel management, and supervision remains largely extensive. To strengthen the supervision and management of site personnel, enterprises often adopt a long-cycle, high-density, high-investment supervision mode; such a mode is costly and inefficient, and does not help raise the level of modern construction management. Moreover, owing to long-term repetitive physical labor, the major joints of construction workers often suffer irreversible strain without the workers noticing, leaving them with lasting health problems in later life.
At present, in order to improve the management of site personnel, construction units generally use video monitoring systems for on-site management. However, existing video monitoring systems are not intelligent enough: they mainly rely on control-room supervisors manually watching and analyzing the video data streamed back from site cameras. Although this mode improves personnel management to some extent, its degree of intelligence is low and it still essentially belongs to manual supervision. On-site video monitoring equipment captures a large amount of usable data, but manual supervision cannot exploit these data effectively, an efficient, intelligent and comprehensive management effect cannot be achieved, and the input-output ratio of this mode is very low. In addition, a manual approach to managing the strain of construction personnel is difficult for many stakeholders to accept.
Current systems for intelligent construction-site management focus on information management, access management and the like, and leave a large gap in the intelligent monitoring of workers' specific labor states and strain conditions.
Disclosure of Invention
The invention aims to overcome the above defects in the prior art and provides a computer-vision-based construction worker strain early warning analysis method and system that can intelligently identify the specific labor state of a construction worker, record the labor state information, analyze the worker's likelihood of strain, and provide reliable and accurate strain early warning reference suggestions for site managers, thereby making construction site management more intelligent and refined, improving on-site personnel management, and safeguarding the physical health of construction personnel to a certain extent.
In order to achieve the above aim, the invention provides a computer-vision-based construction worker strain early warning analysis method, which comprises the following steps:
step S1, collecting standard construction action information corresponding to different construction work types, inputting standard construction actions, and establishing an action training library;
step S2, the video image collecting device collects the video image information of the on-site construction worker working in real time and transmits the information to the workstation of the video processing module through the information transmission line;
step S3, extracting human skeleton information according to the video image information by the video processing program on the workstation;
step S4, realizing intelligent identification of the labor state based on a dynamic time warping algorithm and Gesture Builder;
and step S5, feeding back the recognition result to the monitoring module in real time.
Preferably, the step S1 further includes the following steps:
step S11, collecting the labor actions of various work types, removing the influence of height and weight, and summarizing the standard construction actions of different work types;
step S12, performing action decomposition on the standard construction actions, wherein a construction action can be regarded as the motion track of the limbs in space over a time span, and the core characteristics of the standard construction action are extracted by dividing it into action segments and analyzing how the segments change over the course of the action;
and step S13, storing the core characteristics of the different standard construction actions in a database to form the action training library, as the action information used by the analysis program for matching.
Preferably, the step S4 further includes the following steps:
step S41, identifying the labor state of a single person by the dynamic time warping algorithm;
and step S42, intelligently recognizing the labor states of multiple persons based on the Gesture Builder.
Preferably, the step S41 further includes the following steps:
step S411, giving two video-segment sequences, namely a sample sequence X = (x1, ..., xn) and a test sequence Y = (y1, ..., ym), corresponding respectively to the action information in the training library and the data collected on site, and calculating the lengths of the two sequences, i.e. the numbers of video frames, n and m;
step S412, selecting the value of each point in the sequence, i.e. determining the action characteristic of each frame in the video sequence, the characteristic value being an action vector constructed for each video frame; according to the standard construction actions extracted in step S1, corresponding joint points are selected to construct the action vectors; considering that different workers differ in height and body type, and in order to eliminate this influence, the cosine of the included angle between the action vectors is adopted as the value of the standard construction action sequence:
cosθ = (a · b)/(|a||b|), where a and b denote the action vectors;
step S413, selecting a point-to-point distance function d(i, j) = f(xi, yj) ≥ 0 for the sequences, i.e. the smaller the distance between a point in sequence X and a point in sequence Y, the higher their similarity, and using the squared Euclidean distance, namely:
d(i, j) = (xi − yj)²
step S414, solving the warping path Warp Path W = w1 + w2 + w3 + … + wk;
wherein wk has the form (i, j), i denoting a coordinate in X and j a coordinate in Y; max(n, m) ≤ k ≤ n + m; the warping path W must start at w1 = (1, 1) and end at wk = (n, m), to ensure that every coordinate of X and Y appears in W; in W, the i and j of w(i, j) must be monotonically increasing, that is: for wk = (i, j) and wk+1 = (i′, j′), there are i ≤ i′ ≤ i + 1 and j ≤ j′ ≤ j + 1;
in step S415, the warping path with the shortest cumulative distance, i.e. the optimal path, is solved:
D(i, j) = Dist(i, j) + min[D(i−1, j), D(i, j−1), D(i−1, j−1)],
the optimal path is the path that minimizes the cumulative distance along the path, which can be obtained by a dynamic programming algorithm.
Preferably, the step S42 further includes the following steps:
step S421, recording standard construction action clips: the standard construction actions are recorded with Kinect Studio, data are acquired in Record mode, the Nui Raw IR 11-bit data source is adopted to ensure that the acquired data are real and valid, and the acquired data are converted into a data format that Gesture Builder can recognize;
step S422, establishing a Gesture Builder solution: according to the characteristics of the standard construction action, a corresponding solution file is created, and corresponding analysis items are established in the solution according to the action decomposition of the standard action to be detected;
step S423, action clip entry; the clips containing corresponding actions are respectively imported into the three analysis items, and the more the clips are imported, the more accurate the identification result is;
step S424, calibrating the action clipping logic; the action clip calibration is to mark whether the action accords with the action to be detected in the action clip, and the action can be divided into a discrete type and a continuous type according to whether the action is continuous, wherein the logic calibration of the discrete action is as follows: 1 is true and 0 is false; the logic calibration of continuous actions needs to specify the completion degree of actions at each moment, the start is 0, the end is 1, and interpolation is carried out according to the completion degree in the middle;
step S425, generating an action recognition library: after the action clips have been imported and the logic calibration is finished, Gesture Builder generates the action recognition library as the basis for judging the action to be detected;
in step S426, the action recognition library is called as follows: first, a vgb data source is established and defined as the recognition library of the action to be detected; then a reader for vgb frames is established to receive frame data, and the action to be detected is set; the detection result of a discrete action is a confidence value, and that of a continuous action is a progress value.
Preferably, the step S5 further includes the following steps:
step S51, based on the action detection feedback of DTW, the monitoring interface displays whether the worker is working and gives the name of the action the worker is performing;
step S52, detecting feedback based on Gesture Builder action;
and step S53, based on the continuous working time obtained by recognition and analysis: when a worker's repetitive work has continued for longer than a set time, the system recognizes the worker's continuous working duration; when the analysis shows that the threshold is exceeded, the system gives a timely warning through the LED display screen and the sound equipment; after the worker has rested for a period of time, the warning information is cleared and the worker may resume work.
The invention also provides a building worker strain early warning analysis system based on computer vision, which comprises:
the video image acquisition device is used for acquiring the video image information of the work of site construction workers in real time;
the workstation comprises a video image information processing module and an action training library;
the action training library is used for recording standard construction actions;
the video image information processing module is connected with the video image acquisition device through an information transmission line and is used for processing the video image information and comparing and identifying the video image information with standard construction actions in an action training library;
and the monitoring module is electrically connected with the video image information processing module and is used for displaying or outputting the comparison and identification results.
Preferably, the action training library further comprises a development module, and the development module is used for adding action training data and data information storage;
the video image acquisition device comprises a motion sensing camera and a power adapter; the motion sensing camera is connected through the information transmission line to the video image information processing module in the workstation, and the power adapter supplies power to the motion sensing camera.
Preferably, the motion sensing camera is arranged above a working area of a worker and above a main traffic intersection, and the angle of view of the motion sensing camera covers all directions of the range of motion of the worker.
Preferably, the monitoring module comprises an LED display screen and an audio device, and the LED display screen and the audio device are electrically connected with the video image information processing module.
Compared with the prior art, the invention has the beneficial effects that:
the video image information processing module in the workstation processes the video image information of the work of the site construction workers acquired by the video image acquisition device in real time to obtain the skeleton information of a human body, realizes the intelligent identification of the work state based on a dynamic time regression algorithm and a Gesture Builder, can intelligently identify a certain work done by the specific construction workers at a certain moment, transmits the work information to the monitoring module to display and store the work information of the workers, analyzes the strain possibility of the constructors by identifying whether the work action of the construction workers is standard or not, provides reliable and accurate strain early warning reference suggestions for site managers, analyzes the possibility that the strain may be caused by the repetitive work of the workers by combining the continuous working time length of the identified workers, timely sends out early warning when the strain condition occurs, and realizes the intellectualization and refinement of the construction site management, improve the personnel management level on site, and ensure the physical health of constructors to a certain degree.
Drawings
In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the drawings used in the description of the embodiments or the prior art will be briefly introduced below, and it is obvious that the drawings in the following description are some embodiments of the present invention, and for those skilled in the art, other drawings can be obtained according to these drawings without creative efforts.
Fig. 1 is a schematic step diagram of a method for early warning and analyzing strain of a construction worker based on computer vision according to an embodiment of the present invention;
fig. 2 is a development flowchart of a computer vision-based construction worker strain early warning analysis method according to an embodiment of the present invention;
fig. 3 is a schematic structural diagram of a computer vision-based construction worker strain early warning analysis system according to a second embodiment of the present invention;
fig. 4 is a schematic field installation diagram of a computer vision-based construction worker strain early warning analysis system according to a second embodiment of the present invention;
fig. 5 is a schematic display diagram of an LED display screen of a computer vision-based early warning and analyzing system for strain of construction workers according to a second embodiment of the present invention.
The figure includes:
the system comprises a video image acquisition device, a 5-workstation, a 7-video image information processing module, an 8-action training library, a 4-information transmission line, a 9-monitoring module, a 81-development module, an 11-somatosensory camera, a 12-power adapter, a 91-LED display screen, 92-sound equipment, a 93-staff action information recording and storing device and a 3-labor area.
Detailed Description
The technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the accompanying drawings in the embodiments of the present invention, and it is apparent that the described embodiments are one embodiment of the present invention, and not all embodiments of the present invention. All other embodiments obtained by a person skilled in the art based on the embodiments of the present invention without any creative work belong to the protection scope of the present invention.
Example one
Referring to fig. 1 and fig. 2, an embodiment of the present invention provides a computer vision-based construction worker strain early warning analysis method, including the following steps:
and step S1, collecting standard construction action information corresponding to different construction work types, inputting standard construction actions, and establishing an action training library.
Specifically, the step S1 further includes the following steps:
and step S11, collecting the labor actions of various work types, removing the influence of height and weight, and summarizing the standard construction actions of different work types.
And step S12, performing action decomposition on the standard construction actions, wherein a construction action can be regarded as the motion track of the limbs in space over a time span, and the core characteristics of the standard construction action are extracted by dividing it into action segments and analyzing how the segments change over the course of the action.
And step S13, storing the core characteristics of the different standard construction actions in a database to form the action training library, as the action information used by the analysis program for matching.
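By way of illustration only, the sketch below shows one possible layout for a single entry in such an action training library; the field names, the choice of joint pairs, and the use of JSON are assumptions made for this example and are not prescribed by the invention.

```python
# Illustrative sketch: one possible record layout for an action training library entry.
# Field names, joint indices and the JSON serialization are assumptions for illustration.
import json

def make_library_entry(work_type, joint_pairs, feature_sequence):
    """Bundle the core characteristics of one standard construction action."""
    features = list(feature_sequence)      # per-frame cosine features of the standard clip
    return {
        "work_type": work_type,            # e.g. "shoveling" (assumed label)
        "joint_pairs": list(joint_pairs),  # joint index pairs used to build action vectors
        "features": features,
        "num_frames": len(features),
    }

if __name__ == "__main__":
    entry = make_library_entry("shoveling", [(4, 5), (5, 6)], [0.98, 0.95, 0.91])
    # A real system would persist this in its database; JSON here is only for illustration.
    print(json.dumps(entry, indent=2))
```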
And step S2, the video image collecting device collects the video image information of the work of the site construction worker in real time and transmits the video image information to the workstation of the video processing module through the information transmission line.
And step S3, the video processing program on the workstation extracts the human skeleton information from the video image information. The video image information can provide the positions and angles of particular joint points and bones of the human body over a period of time, from which the human skeleton data are obtained.
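As a minimal illustration of what the extracted skeleton data might look like in memory, the sketch below defines a simple per-frame structure; the joint names and the (x, y, z) coordinate convention are assumptions made for this example rather than the patent's specification.

```python
# Illustrative sketch: one possible in-memory representation of extracted skeleton data.
# Joint names and the (x, y, z) camera-space convention are assumptions for illustration.
from dataclasses import dataclass, field
from typing import Dict, List, Tuple

@dataclass
class SkeletonFrame:
    timestamp: float                                        # seconds since the clip started
    joints: Dict[str, Tuple[float, float, float]] = field(default_factory=dict)

@dataclass
class SkeletonSequence:
    worker_id: str
    frames: List[SkeletonFrame] = field(default_factory=list)   # ordered per-frame skeletons

    def add_frame(self, frame: SkeletonFrame) -> None:
        self.frames.append(frame)

# Example: two frames tracking an elbow and a wrist joint.
seq = SkeletonSequence(worker_id="worker-01")
seq.add_frame(SkeletonFrame(0.00, {"elbow_right": (0.40, 1.10, 2.00), "wrist_right": (0.55, 0.95, 1.98)}))
seq.add_frame(SkeletonFrame(0.03, {"elbow_right": (0.41, 1.08, 2.00), "wrist_right": (0.60, 0.90, 1.97)}))
print(len(seq.frames), "frames captured")
```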
And step S4, realizing intelligent identification of the labor state based on the dynamic time warping algorithm and Gesture Builder. The essence of human motion is an ordered spatio-temporal sequence of skeletal joints, so the key to action recognition is the tracking and processing of human skeleton data. The architecture of the analysis software developed by the invention is shown in fig. 2: the program extracts the human skeleton data from the acquired video image information and matches it against the standard construction action information in the action training library, thereby identifying the corresponding construction work type, as detailed below.
Further, the step S4 further includes the following steps:
step S41, the dynamic time warping algorithm identifies the labor status of the single person. The dynamic time warping algorithm distorts time series with different lengths from time dimension to nonlinearity to measure nonlinear similarity in time dimension, and measures the similarity of the two time series by calculating the shortest distance of all similar points. The labor is a sequence of actions in time, a standard action sequence can be used as a template sequence by means of DTW, and the similarity is calculated by collecting the action sequence of a field constructor and the template sequence to judge the labor state of a worker.
The specific implementation process is as follows: the step S41 further includes the following steps:
in step S411, two sequences (video clips) are given, i.e., a sample sequence X (X1.,. xn) and a test sequence Y (Y1.,. yn), and the motion information of the training library and the data information collected in the field are sequentially operated, so that the lengths (the number of video frames) of the two sequences are n and m, respectively.
Step S412, selecting the value of each point in the sequence, i.e. determining the action characteristic of each frame in the video sequence, the characteristic value being an action vector constructed for each video frame. According to the standard construction actions extracted in step S1, corresponding joint points are selected to construct the action vectors; considering that different workers differ in height and body type, and in order to eliminate this influence, the cosine of the included angle between the action vectors is adopted as the value of the standard construction action sequence:
cosθ = (a · b)/(|a||b|), where a and b denote the action vectors.
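The formula above is the standard cosine of the included angle between two vectors, which is independent of limb length and therefore of the worker's height and body type. The sketch below computes this per-frame feature from joint positions; the particular shoulder-elbow-wrist pairing is an illustrative assumption, since the joint selection depends on the work type.

```python
# Illustrative sketch of the per-frame feature described above: the cosine of the
# included angle between two action vectors, each built from a pair of joint positions.
# Which joints to pair is work-type-specific and is an assumption here.
import math

def action_vector(joint_a, joint_b):
    """Vector pointing from joint_a to joint_b (3-D camera-space coordinates)."""
    return tuple(b - a for a, b in zip(joint_a, joint_b))

def cosine_of_angle(u, v):
    """cos(theta) = (u . v) / (|u| * |v|); scale-free, so body size drops out."""
    dot = sum(ui * vi for ui, vi in zip(u, v))
    norm = math.sqrt(sum(ui * ui for ui in u)) * math.sqrt(sum(vi * vi for vi in v))
    return dot / norm if norm else 0.0

# Example: upper-arm vector vs. forearm vector for one frame.
shoulder, elbow, wrist = (0.30, 1.30, 2.0), (0.40, 1.10, 2.0), (0.55, 0.95, 2.0)
upper_arm = action_vector(shoulder, elbow)
forearm = action_vector(elbow, wrist)
print(round(cosine_of_angle(upper_arm, forearm), 3))
```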
Step S413, selecting a point-to-point distance function d(i, j) = f(xi, yj) ≥ 0 for the sequences, i.e. the smaller the distance between a point in sequence X and a point in sequence Y, the higher their similarity; the invention adopts the squared Euclidean distance (in other embodiments, other functions may also be used), namely:
d(i, j) = (xi − yj)²
Step S414, solving the warping path Warp Path W = w1 + w2 + w3 + … + wk;
wherein wk has the form (i, j), i denoting a coordinate in X and j a coordinate in Y; max(n, m) ≤ k ≤ n + m; the warping path W must start at w1 = (1, 1) and end at wk = (n, m), to ensure that every coordinate of X and Y appears in W; in W, the i and j of w(i, j) must be monotonically increasing, that is: for wk = (i, j) and wk+1 = (i′, j′), there are i ≤ i′ ≤ i + 1 and j ≤ j′ ≤ j + 1.
In step S415, the warping path with the shortest cumulative distance, i.e. the optimal path, is solved:
D(i, j) = Dist(i, j) + min[D(i−1, j), D(i, j−1), D(i−1, j−1)];
the optimal path is the path that minimizes the cumulative distance along the path, which can be obtained by a dynamic programming algorithm.
And step S42, intelligently recognizing the labor states of multiple persons based on Gesture Builder. The gesture trainer in the Gesture Builder tool for the motion sensing device takes the action clips recorded by Kinect Studio and, through manually marking the actions in the clips as True or False, guides the Kinect to perform machine learning and process the data intelligently. The specific implementation process is as follows; the step S42 further includes the following steps:
Step S421, recording standard construction action clips: the standard construction actions are recorded with Kinect Studio, data are acquired in Record mode, the Nui Raw IR 11-bit data source is adopted to ensure that the acquired data are real and valid, and the acquired data are converted into a data format that Gesture Builder can recognize.
Step S422, establishing a Gesture Builder solution: according to the characteristics of the standard construction action (for example, whether to distinguish the left and right hands, whether to ignore the lower limbs, and whether the action is discrete or continuous), a corresponding solution file is created, and corresponding analysis items are established in the solution according to the action decomposition of the standard action to be detected.
Step S423, action clip entry; the clips containing corresponding actions are respectively imported into the three analysis items, and the more the clips are imported, the more accurate the recognition result is.
Step S424, calibrating the action clipping logic; the action clip calibration is to mark whether the action accords with the action to be detected in the action clip, and the action can be divided into a discrete type and a continuous type according to whether the action is continuous, wherein the logic calibration of the discrete action is as follows: 1 is true and 0 is false; the logic calibration of continuous action needs to specify the completion degree of the action at each moment, the start is 0, the end is 1, and interpolation is carried out according to the completion degree in the middle.
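The sketch below models only the labeling scheme just described, not Gesture Builder itself: discrete clips receive a per-frame true/false (1/0) tag, and continuous clips receive a progress value interpolated linearly from 0 at the start of the action to 1 at its end. The frame indices are made-up example values.

```python
# Illustrative sketch of the clip-calibration logic described above (not Gesture Builder itself):
# discrete tagging marks each frame 1 (matches the target action) or 0 (does not);
# continuous tagging assigns a progress value from 0 at the action's start to 1 at its end,
# linearly interpolated in between. Frame indices here are made-up example values.

def discrete_labels(num_frames, action_frames):
    """1 for frames inside the target action, 0 elsewhere."""
    return [1 if i in action_frames else 0 for i in range(num_frames)]

def continuous_labels(num_frames, start, end):
    """Progress 0.0 at `start`, 1.0 at `end`, linear in between, clamped outside."""
    labels = []
    for i in range(num_frames):
        if i <= start:
            labels.append(0.0)
        elif i >= end:
            labels.append(1.0)
        else:
            labels.append((i - start) / (end - start))
    return labels

print(discrete_labels(8, action_frames={3, 4, 5}))
print([round(p, 2) for p in continuous_labels(8, start=2, end=6)])
```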
Step S425, generating an action recognition library: after the action clips have been imported and the logic calibration is finished, Gesture Builder generates the action recognition library as the basis for judging the action to be detected.
In step S426, the action recognition library is called as follows: first, a vgb data source is established and defined as the recognition library of the action to be detected; then a reader for vgb frames is established to receive frame data, and the action to be detected is set; the detection result of a discrete action is a confidence value, and that of a continuous action is a progress value.
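The calls in step S426 are made against the Kinect Visual Gesture Builder runtime; the sketch below does not reproduce that API but models the resulting data flow with simplified stand-in classes, so that the distinction between a discrete result (a confidence) and a continuous result (a progress value) is explicit. The confidence threshold is an assumed value, not one specified by the patent.

```python
# Illustrative sketch of the detection flow described above. These classes are simplified
# stand-ins, NOT the Kinect Visual Gesture Builder API: they only model the idea that a
# discrete gesture yields a confidence and a continuous gesture yields a progress value.
from dataclasses import dataclass
from typing import Iterable, Iterator, Tuple

@dataclass
class GestureFrame:
    discrete_confidence: float   # confidence that the discrete "start" gesture is present
    continuous_progress: float   # completion of the continuous gesture, 0.0 to 1.0

CONFIDENCE_THRESHOLD = 0.7       # assumed threshold; the patent does not specify one

def interpret(frames: Iterable[GestureFrame]) -> Iterator[Tuple[bool, float]]:
    """Turn raw per-frame detection results into (is_working, progress) pairs."""
    for f in frames:
        yield f.discrete_confidence >= CONFIDENCE_THRESHOLD, f.continuous_progress

# Example: three frames as the worker begins and carries out the action.
sample = [GestureFrame(0.2, 0.0), GestureFrame(0.85, 0.3), GestureFrame(0.9, 0.7)]
for working, progress in interpret(sample):
    print(working, progress)
```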
And step S5, feeding back the recognition result to the monitoring module in real time. After the analysis is finished, the video processing module transmits the human labor state information to the monitoring module in real time; if the system recognizes that a worker's continuous working time is too long and strain may occur, it gives an alarm. In the monitoring module, the LED display screen shows whether workers are working, the type of work being done, the duration of the work, the skeleton map recognized from the images, and so on; when the continuous working time is too long, the sound equipment emits an audible alert for managers to use as a scheduling reference; meanwhile, the monitoring module records the labor state information in real time.
Further, the step S5 further includes the following steps:
step S51, based on the action detection feedback of the DTW, the monitoring interface displays the judgment of whether to work or not, and gives the name of the action the worker is doing. If the construction worker works, displaying the logical value on the monitoring interface as 'yes', and marking the wrist joints of the two hands as green; if the construction worker is not working, the logical value shows "no" and the wrist joints of both hands are marked as wine red. And the monitoring interface simultaneously displays the labor duration of the construction worker, and if the construction worker stops working, the timing is stopped.
Step S52, detection feedback based on Gesture Builder: the Gesture Builder feedback interface is divided into a left side and a right side; the right side simultaneously displays the skeleton maps of 6 workers, while the left side simultaneously displays and tracks the labor states of the 6 workers. The labor state is displayed in three cases.
(1) If no worker is within the acquisition range of the video image acquisition device, 'Not Tracked' is displayed, 'Shovel Start' displays 'False' (the action has not started), and 'Progress' displays 0 (the action progress is 0).
(2) If a worker is within the acquisition range of the video image acquisition device but is not performing construction work, 'Not Shoveling' is displayed, 'Shovel Start' displays 'False' (the action has not started), and 'Progress' displays 0 (the action progress is 0).
(3) If workers are within the acquisition range of the video image acquisition device and are performing construction work, 'Shoveling' is displayed, 'Shovel Start' displays 'True' (the action has started), and the progress value representing the action progress is displayed.
And step S53, based on the continuous working time obtained by recognition and analysis: when a worker's repetitive work has continued for longer than a set time, the system recognizes the worker's continuous working duration; when the analysis shows that the threshold is exceeded, the system gives a timely warning through the LED display screen and the sound equipment; after the worker has rested for a period of time, the warning information is cleared and the worker may resume work.
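A minimal sketch of this step-S53 logic is given below: continuous working time is accumulated from the recognition results, a warning is raised once it exceeds a threshold, and the warning is cleared after a sufficient rest. Both thresholds and the update interval are illustrative assumptions; the patent does not specify their values.

```python
# Illustrative sketch of the step-S53 logic: accumulate continuous working time,
# warn when it exceeds a threshold, and clear the warning after enough rest.
# The thresholds and the frame interval are assumed example values.

class StrainWarningMonitor:
    def __init__(self, work_threshold_s=1800.0, rest_threshold_s=300.0):
        self.work_threshold_s = work_threshold_s   # e.g. 30 min of continuous repetitive work
        self.rest_threshold_s = rest_threshold_s   # e.g. 5 min of rest clears the warning
        self.work_time = 0.0
        self.rest_time = 0.0
        self.warning = False

    def update(self, is_working: bool, dt: float) -> bool:
        """Feed one recognition result covering dt seconds; return the warning state."""
        if is_working:
            self.work_time += dt
            self.rest_time = 0.0
            if self.work_time >= self.work_threshold_s:
                self.warning = True       # would drive the LED screen and sound device
        else:
            self.rest_time += dt
            if self.rest_time >= self.rest_threshold_s:
                self.work_time = 0.0
                self.warning = False      # warning cleared; the worker may resume work
        return self.warning

monitor = StrainWarningMonitor(work_threshold_s=5.0, rest_threshold_s=2.0)
for t in range(10):
    print(t, monitor.update(is_working=(t < 7), dt=1.0))
```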
Example two
Referring to fig. 3 to 5, a second embodiment of the present invention provides a computer vision-based construction worker strain early warning analysis system, including:
the video image acquisition device 1 is used for acquiring video image information of the work of site construction workers in real time;
the system comprises a workstation 5, a video image information processing module 7 and an action training library 8, wherein the workstation 5 comprises the video image information processing module 7 and the action training library 8;
the action training library 8 is used for recording standard construction actions;
the video image information processing module 7 is connected with the video image acquisition device 1 through an information transmission line 4 and is used for processing video image information and comparing and identifying the video image information with standard construction actions in an action training library 8;
and the monitoring module 9 is electrically connected with the video image information processing module 7 and is used for displaying or outputting the comparison and identification results.
As shown in fig. 3, the action training library 8 further includes a development module 81, and the development module 81 is used for adding action training data and data information storage.
As shown in fig. 3 and 4, the video image acquisition device 1 includes a motion sensing camera 11 and a power adapter 12; the motion sensing camera 11 is connected through the information transmission line 4 to the video image information processing module 7 in the workstation 5, and the power adapter 12 supplies power to the motion sensing camera 11.
Further, as shown in fig. 4, the motion sensing camera 11 is installed above a working area of a worker and above a main traffic intersection, and a field angle of the motion sensing camera 11 covers each direction of a range of motion of the worker.
Specifically, the angle of view of the motion sensing camera 11 is aligned with the worker working area 3.
More specifically, the model of the motion sensing camera 11 is Xbox Kinect 2.0, the power adapter 12 is the adapter matched with the motion sensing camera 11, and the power adapter 12 is connected with a PC/ONE S.
The video image information processing module 7 is included in the workstation 5, the workstation 5 is a high-performance computer, the workstation 5 further comprises an action training library 8, and the workstation 5 is connected with the monitoring module 9.
The monitoring module 9 comprises an LED display screen 91, a sound device 92 and a staff action information recording and storing device 93, wherein the LED display screen 91 and the sound device 92 are electrically connected with the video image information processing module 7.
The LED display screen 91 is of model CSD-P6-SMD3535, with dual backup power supply lines and a resolution of at least 720p.
As shown in fig. 4 and 5, the second embodiment of the present invention includes a motion sensing camera 11, two LED display screens 91 and a sound device 92. The LED display screens 91 respectively display, in real time, the video image information acquired by the video image acquisition device 1 and the worker labor state information processed by the video image information processing module 7; if the system predicts that strain may occur, alarm information is issued through the LED display screens 91 and the sound device 92.
The above description is only for the purpose of illustrating the preferred embodiments of the present invention and is not to be construed as limiting the invention, and any modifications, equivalents and improvements made within the spirit and principle of the present invention are intended to be included within the scope of the present invention.

Claims (10)

1. A building worker strain early warning analysis method based on computer vision is characterized by comprising the following steps:
step S1, collecting standard construction action information corresponding to different construction work types, inputting standard construction actions, and establishing an action training library;
step S2, the video image collecting device collects the video image information of the on-site construction worker working in real time and transmits the information to the workstation of the video processing module through the information transmission line;
step S3, extracting human skeleton information according to the video image information by the video processing program on the workstation;
step S4, realizing intelligent identification of the labor state based on a dynamic time warping algorithm and Gesture Builder;
and step S5, feeding back the recognition result to the monitoring module in real time.
2. The computer vision based construction worker strain early warning analysis method as claimed in claim 1, wherein the step S1 further comprises the following steps:
step S11, collecting the labor actions of various work types, removing the influence of height and weight, and summarizing the standard construction actions of different work types;
step S12, performing action decomposition on the standard construction actions, wherein a construction action can be regarded as the motion track of the limbs in space over a time span, and the core characteristics of the standard construction action are extracted by dividing it into action segments and analyzing how the segments change over the course of the action;
and step S13, storing the core characteristics of the different standard construction actions in a database to form the action training library, as the action information used by the analysis program for matching.
3. The computer vision based construction worker strain early warning analysis method as claimed in claim 2, wherein the step S4 further comprises the following steps:
step S41, identifying the labor state of a single person by the dynamic time warping algorithm;
and step S42, intelligently recognizing the labor states of multiple persons based on Gesture Builder.
4. The computer vision-based construction worker strain warning analysis method as claimed in claim 3, wherein the step S41 further comprises the following steps:
step S411, giving two video-segment sequences, namely a sample sequence X = (x1, ..., xn) and a test sequence Y = (y1, ..., ym), corresponding respectively to the action information in the training library and the data collected on site, and calculating the lengths of the two sequences, i.e. the numbers of video frames, n and m;
step S412, selecting the value of each point in the sequence, i.e. determining the action characteristic of each frame in the video sequence, the characteristic value being an action vector constructed for each video frame; according to the standard construction actions extracted in step S1, corresponding joint points are selected to construct the action vectors; considering that different workers differ in height and body type, and in order to eliminate this influence, the cosine of the included angle between the action vectors is adopted as the value of the standard construction action sequence:
cosθ = (a · b)/(|a||b|), where a and b denote the action vectors;
step S413, selecting a point-to-point distance function d(i, j) = f(xi, yj) ≥ 0 for the sequences, i.e. the smaller the distance between a point in sequence X and a point in sequence Y, the higher their similarity, and using the squared Euclidean distance, namely:
d(i, j) = (xi − yj)²
step S414, solving the warping path Warp Path W = w1 + w2 + w3 + … + wk;
wherein wk has the form (i, j), i denoting a coordinate in X and j a coordinate in Y; max(n, m) ≤ k ≤ n + m; the warping path W must start at w1 = (1, 1) and end at wk = (n, m), to ensure that every coordinate of X and Y appears in W; in W, the i and j of w(i, j) must be monotonically increasing, that is: for wk = (i, j) and wk+1 = (i′, j′), there are i ≤ i′ ≤ i + 1 and j ≤ j′ ≤ j + 1;
in step S415, the warping path with the shortest cumulative distance, i.e. the optimal path, is solved:
D(i, j) = Dist(i, j) + min[D(i−1, j), D(i, j−1), D(i−1, j−1)],
the optimal path is the path that minimizes the cumulative distance along the path, which can be obtained by a dynamic programming algorithm.
5. The computer vision-based construction worker strain warning analysis method as claimed in claim 3, wherein the step S42 further comprises the following steps:
step S421, recording standard construction action clips: the standard construction actions are recorded with Kinect Studio, data are acquired in Record mode, the Nui Raw IR 11-bit data source is adopted to ensure that the acquired data are real and valid, and the acquired data are converted into a data format that Gesture Builder can recognize;
step S422, establishing a Gesture Builder solution: according to the characteristics of the standard construction action, a corresponding solution file is created, and corresponding analysis items are established in the solution according to the action decomposition of the standard action to be detected;
step S423, action clip entry; the clips containing corresponding actions are respectively imported into the three analysis items, and the more the clips are imported, the more accurate the identification result is;
step S424, calibrating the action clipping logic; the action clip calibration is to mark whether the action accords with the action to be detected in the action clip, and the action can be divided into a discrete type and a continuous type according to whether the action is continuous, wherein the logic calibration of the discrete action is as follows: 1 is true and 0 is false; the logic calibration of continuous actions needs to specify the completion degree of actions at each moment, the start is 0, the end is 1, and interpolation is carried out according to the completion degree in the middle;
step S425, generating an action recognition library: after the action clips have been imported and the logic calibration is finished, Gesture Builder generates the action recognition library as the basis for judging the action to be detected;
in step S426, the action recognition library is called as follows: first, a vgb data source is established and defined as the recognition library of the action to be detected; then a reader for vgb frames is established to receive frame data, and the action to be detected is set; the detection result of a discrete action is a confidence value, and that of a continuous action is a progress value.
6. The computer vision based construction worker strain early warning analysis method as claimed in claim 1, wherein the step S5 further comprises the following steps:
step S51, based on the action detection feedback of DTW, the monitoring interface displays the judgment whether the work is done or not, and gives the name of the action being done by the worker;
step S52, detecting feedback based on Gesture Builder action;
and step S53, based on the continuous working time obtained by recognition and analysis: when a worker's repetitive work has continued for longer than a set time, the system recognizes the worker's continuous working duration; when the analysis shows that the threshold is exceeded, the system gives a timely warning through the LED display screen and the sound equipment; after the worker has rested for a period of time, the warning information is cleared and the worker may resume work.
7. A construction worker strain early warning analysis system based on computer vision is characterized by comprising:
the video image acquisition device (1) is used for acquiring video image information of the work of site construction workers in real time;
a workstation (5), said workstation (5) comprising a video image information processing module (7) and an action training library (8);
the action training library (8) is used for recording standard construction actions;
the video image information processing module (7) is connected with the video image acquisition device (1) through an information transmission line (4) and is used for processing video image information and comparing and identifying the video image information with standard construction actions in an action training library (8);
and the monitoring module (9) is electrically connected with the video image information processing module (7) and is used for displaying or outputting the comparison and identification results.
8. The computer vision based construction worker strain warning analysis system as claimed in claim 7, wherein the action training library (8) further comprises a development module (81), the development module (81) is used for adding action training data and data information storage;
the video image acquisition device (1) comprises a motion sensing camera (11) and a power adapter (12); the motion sensing camera (11) is connected through the information transmission line (4) to the video image information processing module (7) in the workstation (5), and the power adapter (12) supplies power to the motion sensing camera (11).
9. The computer vision-based construction worker strain early warning and analyzing system as claimed in claim 8, wherein the motion sensing camera (11) is installed above a working area of a worker and above a main traffic intersection, and the angle of view of the motion sensing camera (11) covers all directions of the range of motion of the worker.
10. The computer vision-based construction worker strain early warning and analyzing system as claimed in claim 7, wherein the monitoring module (9) comprises an LED display screen (91), an audio device (92) and a staff action information recording and storing device (93), and the LED display screen (91) and the audio device (92) are electrically connected with the video image information processing module (7).
CN202110756425.5A 2021-07-05 2021-07-05 Building worker strain early warning analysis method and system based on computer vision Pending CN113469063A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110756425.5A CN113469063A (en) 2021-07-05 2021-07-05 Building worker strain early warning analysis method and system based on computer vision


Publications (1)

Publication Number Publication Date
CN113469063A 2021-10-01

Family

ID=77878098


Country Status (1)

Country Link
CN (1) CN113469063A (en)


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination