WO2024066039A1 - Construction risk assessment method and device based on multi-source data fusion - Google Patents


Info

Publication number
WO2024066039A1
WO2024066039A1 PCT/CN2022/137051 CN2022137051W WO2024066039A1 WO 2024066039 A1 WO2024066039 A1 WO 2024066039A1 CN 2022137051 W CN2022137051 W CN 2022137051W WO 2024066039 A1 WO2024066039 A1 WO 2024066039A1
Authority
WO
WIPO (PCT)
Prior art keywords
construction
worker
relationship
information
construction site
Application number
PCT/CN2022/137051
Other languages
English (en)
French (fr)
Inventor
杨之乐
吴承科
郭媛君
刘祥飞
王尧
冯伟
Original Assignee
深圳先进技术研究院
Application filed by 深圳先进技术研究院
Publication of WO2024066039A1


Classifications

    All G06* entries below fall under G PHYSICS, G06 COMPUTING; CALCULATING OR COUNTING; the parent classes are listed once here rather than repeated per entry.

    • G06N3/04 Neural networks: architecture, e.g. interconnection topology
    • G06N3/08 Neural networks: learning methods
    • G06Q10/06 Resources, workflows, human or project management; enterprise or organisation planning; enterprise or organisation modelling
    • G06V10/26 Segmentation of patterns in the image field; cutting or merging of image elements to establish the pattern region, e.g. clustering-based techniques; detection of occlusion
    • G06V10/764 Image or video recognition using pattern recognition or machine learning: classification, e.g. of video objects
    • G06V10/80 Fusion, i.e. combining data from various sources at the sensor level, preprocessing level, feature extraction level or classification level
    • G06V10/82 Image or video recognition using neural networks
    • G06V20/00 Scenes; scene-specific elements
    • G06V20/52 Surveillance or monitoring of activities, e.g. for recognising suspicious objects
    • Y02P90/30 Computing systems specially adapted for manufacturing (Y02P: climate change mitigation technologies in the production or processing of goods)

Definitions

  • the present invention relates to the technical field of construction risk assessment, and in particular to a construction risk assessment method and device based on multi-source data fusion.
  • existing vision-based construction risk behavior recognition focuses only on the individual behavior of workers (such as smoking, not wearing a safety helmet, or not wearing a safety belt), fails to fully consider scene characteristics, and therefore cannot accurately assess construction risks.
  • the technical problem to be solved by the present invention is that, in view of the above-mentioned defects of the prior art, a construction risk assessment method and device based on multi-source data fusion are provided, aiming to solve the problem that the prior art fails to fully consider scene characteristics and therefore cannot accurately assess construction risks.
  • the present invention provides a construction risk assessment method based on multi-source data fusion, wherein the method comprises: obtaining a construction scene image and, based on it, determining the objects in the image and their category information; determining the worker behavior information and the association relationship between the worker and the construction site objects; and, based on the worker behavior information and the association relationship, determining the construction risk level information.
  • determining, based on the construction scene image, an object corresponding to the construction scene image and category information corresponding to the object includes:
  • the construction scene image is segmented based on the image segmentation model, and objects in the construction scene image and the category information are identified, wherein the image segmentation model is trained based on the YOLOv5 model.
  • the training process of the image segmentation model includes:
  • scene sample images are collected, wherein the scene sample images include the positional relationships and distance relationships between workers and construction site objects in different scenes;
  • the scene sample images of the marked workers, construction site objects and category information corresponding to the construction site objects are input into the YOLOv5 model for training to obtain the image segmentation model.
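The labeled training data described above can be sketched as a simple annotation structure. The class and field names below are illustrative assumptions, not part of the patent:

```python
from dataclasses import dataclass, field
from typing import List

# Hypothetical annotation records for the scene sample images: each sample
# stores worker bounding boxes, construction-site-object bounding boxes,
# and the category label attached to each site object.

@dataclass
class Box:
    x1: float  # top-left corner, pixel coordinates
    y1: float
    x2: float  # bottom-right corner
    y2: float

@dataclass
class SiteObject:
    box: Box
    category: str  # e.g. "protective equipment", "ground construction equipment"

@dataclass
class SceneSample:
    image_path: str
    workers: List[Box] = field(default_factory=list)
    site_objects: List[SiteObject] = field(default_factory=list)

# One labeled sample: a worker wearing a safety helmet.
sample = SceneSample(
    image_path="scene_001.jpg",
    workers=[Box(100, 80, 180, 300)],
    site_objects=[SiteObject(Box(110, 60, 170, 100), "protective equipment")],
)
```

Samples of this shape would then be converted to the YOLOv5 label format for training; that conversion is omitted here.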
  • determining the worker behavior information and the association relationship between the worker and the object at the construction site based on the object and the category information includes:
  • based on the category information and in combination with the position relationship and the distance relationship, the impact information of the construction site object on the worker is determined, wherein the impact information includes the positive impact and negative impact of the construction site object on the worker;
  • an association relationship between the worker and the object at the construction site is determined.
  • the determining, based on the object and the category information, the worker behavior information and the association relationship between the worker and the object at the construction site further includes:
  • a pre-trained behavior recognition model is obtained; the construction scene image, after the object and the category information corresponding to the object have been determined, is input into the behavior recognition model, and the worker behavior information is output.
  • the training process of the behavior recognition model includes:
  • the mapping relationship is trained based on a residual convolutional neural network to obtain the behavior recognition model.
  • determining the construction risk level information based on the worker behavior information and the association relationship includes:
  • if the worker behavior information is a safe behavior and the association relationship is a protection relationship, the construction risk level information is determined to be a low risk level;
  • if the worker behavior information is a dangerous behavior and the association relationship is a protection relationship, the construction risk level information is determined to be the second lowest risk level;
  • if the worker behavior information is a safe behavior and the association relationship is a potential harm relationship, the construction risk level information is determined to be a higher risk level;
  • if the worker behavior information is a dangerous behavior and the association relationship is a potential harm relationship, the construction risk level information is determined to be a high risk level.
  • an embodiment of the present invention further provides a construction risk assessment device based on multi-source data fusion, characterized in that the device comprises:
  • a scene image analysis module used to obtain a construction scene image, and based on the construction scene image, determine an object corresponding to the construction scene image and category information corresponding to the object, wherein the object includes workers and objects on the construction site;
  • an association relationship analysis module used to determine worker behavior information and an association relationship between the worker and the object at the construction site based on the object and the category information, wherein the association relationship is used to reflect the protection relationship or potential harm relationship caused by the object at the construction site to the worker;
  • the construction risk assessment module is used to determine the construction risk level information based on the worker behavior information and the association relationship.
  • an embodiment of the present invention further provides a terminal device, wherein the terminal device is a commercial display terminal or a projection terminal, and the terminal device includes a memory, a processor, and a construction risk assessment program based on multi-source data fusion stored in the memory and executable on the processor.
  • the processor executes the construction risk assessment program based on multi-source data fusion, the steps of the construction risk assessment method based on multi-source data fusion of any one of the above-mentioned schemes are implemented.
  • an embodiment of the present invention further provides a computer-readable storage medium, wherein a construction risk assessment program based on multi-source data fusion is stored on the computer-readable storage medium, and when the construction risk assessment program based on multi-source data fusion is executed by a processor, the steps of the construction risk assessment method based on multi-source data fusion described in any one of the above-mentioned schemes are implemented.
  • the present invention provides a construction risk assessment method based on multi-source data fusion.
  • the present invention first obtains a construction scene image, and based on the construction scene image, determines the object corresponding to the construction scene image and the category information corresponding to the object, wherein the object includes workers and objects on the construction site; based on the object and the category information, determines the worker behavior information and the association relationship between the worker and the construction site object, wherein the association relationship is used to reflect the protection relationship or potential injury relationship caused by the construction site object to the worker; based on the worker behavior information and the association relationship, determines the construction risk level information.
  • the present invention can realize a refined assessment of construction risks, determine the construction risk level, and fully consider the characteristics of the construction scene, which is conducive to accurately assessing construction risks.
  • FIG1 is a flowchart of a specific implementation of a construction risk assessment method based on multi-source data fusion provided by an embodiment of the present invention.
  • FIG2 is a functional schematic diagram of a construction risk assessment device based on multi-source data fusion provided in an embodiment of the present invention.
  • FIG3 is a functional block diagram of a terminal device provided in an embodiment of the present invention.
  • the present embodiment provides a construction risk assessment method based on multi-source data fusion. Based on the method of the present embodiment, a refined assessment of construction risks can be achieved, the construction risk level can be determined, and the construction scene characteristics are fully considered, which is conducive to accurately assessing the construction risks.
  • the present embodiment first obtains a construction scene image, and based on the construction scene image, determines the object corresponding to the construction scene image and the category information corresponding to the object, wherein the object includes workers and construction site objects. Then, based on the object and the category information, the worker behavior information and the association relationship between the worker and the construction site object are determined, wherein the association relationship is used to reflect the protection relationship or potential injury relationship caused by the construction site object to the worker.
  • finally, based on the worker behavior information and the association relationship, the construction risk level information is determined. It can be seen that the present embodiment determines the association relationship between the worker and the construction site object based on the construction scene image, and this relationship reflects the protection relationship or potential injury relationship of the construction site object to the worker. In addition, the present embodiment can also identify the worker's behavior information, comprehensively consider the behavior information and the association relationship, and realize fusion analysis of multi-source data to determine the construction risk level information, thereby realizing a refined assessment of construction risk.
  • the construction risk assessment method based on multi-source data fusion of this embodiment is applied to a terminal device, and the terminal device includes an intelligent product terminal such as a computer. Specifically, as shown in FIG1 , the construction risk assessment method based on multi-source data fusion of this embodiment includes the following steps:
  • Step S100 Acquire a construction scene image, and based on the construction scene image, determine an object corresponding to the construction scene image and category information corresponding to the object, wherein the object includes workers and objects on the construction site.
  • This embodiment first obtains a construction scene image, which reflects people or objects at the construction site.
  • the construction scene image is an image directly taken at the construction site, so the construction scene image includes objects such as workers and objects at the construction site.
  • the functions of different objects at the construction site, and their effects on workers, also differ. Therefore, this embodiment determines the category information of these objects (including workers and objects at the construction site) so that subsequent steps can determine the impact of construction site objects on workers, analyze whether those objects protect or harm the workers, and thereby analyze construction risks.
  • when determining the category information, this embodiment includes the following steps:
  • Step S101 obtaining a pre-trained image segmentation model, and inputting the construction scene image into the image segmentation model;
  • Step S102 performing segmentation processing on the construction scene image based on the image segmentation model, and identifying objects in the construction scene image and the category information, wherein the image segmentation model is obtained by training based on the YOLOv5 model.
  • an image segmentation model is pre-set, and the image segmentation model is used to segment the construction scene image, and determine the workers and construction site objects in the construction scene image.
  • the present embodiment collects scene sample images, which include the positional relationships and distance relationships between workers and construction site objects in different scenes; these sample images therefore contain both workers and construction site objects.
  • based on an image recognition method, the present embodiment identifies the workers and construction site objects in each scene sample image, marks them in the scene sample images, and marks the category information of the construction site objects, so that the category information in each scene sample image is labeled.
  • the scene sample images of the marked workers, construction site objects and the category information corresponding to the construction site objects are input into the YOLOv5 model for training to obtain the image segmentation model.
  • the image segmentation model can automatically identify the workers and construction site objects in the construction scene image, and also automatically output the category information corresponding to the construction site objects. For example, if the object at the construction site is a safety helmet, the corresponding category information is protective equipment; if the object at the construction site is an excavator, the corresponding category information is ground construction equipment.
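The helmet and excavator examples suggest a simple lookup from a recognized construction-site object to its category information. The table below is an illustrative assumption, not the patent's taxonomy:

```python
# Maps a recognized construction-site object to its category information,
# mirroring the helmet/excavator examples above; entries are assumptions.
SITE_OBJECT_CATEGORIES = {
    "safety helmet": "protective equipment",
    "safety belt": "protective equipment",
    "excavator": "ground construction equipment",
    "high-rise building": "dangerous building",
}

def category_of(site_object: str) -> str:
    # Unknown objects fall back to a neutral label rather than failing.
    return SITE_OBJECT_CATEGORIES.get(site_object, "uncategorized")
```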
  • alternatively, the present embodiment can identify workers and construction site objects from the construction scene image based on image recognition, and then directly compare the identified construction site objects with a preset construction site library. The library stores images of various construction site equipment at various angles, classified according to the category information of the construction site objects. After the comparison, it is possible to determine exactly what each construction site object is, and hence its category information.
  • Step S200 Based on the object and the category information, determine the worker behavior information and the association relationship between the worker and the object at the construction site, wherein the association relationship is used to reflect the protection relationship or potential harm relationship caused by the object at the construction site to the worker.
  • this embodiment can determine the worker behavior information based on the objects (workers/construction site objects) and the category information, and the worker behavior information reflects what work the workers are performing in the construction scene.
  • this embodiment can also determine the association relationship between the workers and the construction site objects, wherein the association relationship is used to reflect the protection relationship or potential harm relationship caused by the construction site objects to the workers.
  • step S200 of this embodiment specifically includes the following steps:
  • Step S201 obtaining the position relationship and distance relationship between the worker and the object at the construction site, and determining the worker behavior information according to the position relationship and the distance relationship;
  • Step S202 determining the impact information of the construction site object on the worker according to the category information and in combination with the position relationship and the distance relationship, wherein the impact information includes the positive impact and negative impact of the construction site object on the worker;
  • Step S203 Determine the association relationship between the worker and the object at the construction site based on the influence information.
  • this embodiment first obtains the position relationship and distance relationship between the worker and the construction site object.
  • the position relationship and distance relationship can reflect whether the worker is operating the construction site object (such as whether the worker is operating an excavator) or whether the worker is wearing the construction site object (such as whether the worker is wearing a safety helmet). Therefore, this embodiment can determine the worker behavior information based on the position relationship and distance relationship; the worker behavior information describes what work the worker is doing.
  • this embodiment can identify the construction scene image based on image recognition technology, identify the worker and the construction site object, and determine the position relationship between the worker and the construction site object.
  • the worker behavior information includes: safe behavior and dangerous behavior.
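As a minimal sketch of how the position and distance relationships might be computed from detected bounding boxes, the heuristics and thresholds below are assumptions, not taken from the patent:

```python
import math

def box_center(b):
    # b = (x1, y1, x2, y2) in pixel coordinates
    return ((b[0] + b[2]) / 2, (b[1] + b[3]) / 2)

def center_distance(worker_box, object_box):
    # Distance relationship: Euclidean distance between box centers.
    (wx, wy), (ox, oy) = box_center(worker_box), box_center(object_box)
    return math.hypot(wx - ox, wy - oy)

def wearing(worker_box, object_box):
    # Position-relationship heuristic for the "wearing a safety helmet"
    # case above: the object's center falls inside the worker's box,
    # within the top quarter of the worker's height.
    ox, oy = box_center(object_box)
    x1, y1, x2, y2 = worker_box
    return x1 <= ox <= x2 and y1 <= oy <= y1 + 0.25 * (y2 - y1)
```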
  • after determining the object and its corresponding category information, the construction scene image can also be input into a pre-set behavior recognition model, which then outputs the worker behavior information. Specifically, in the present embodiment, workers and construction site objects in a number of scene sample images are pre-marked, and the category information of the construction site objects is determined. Then, based on image recognition, the worker behavior information in each scene sample image is analyzed; the behavior information can be recognized by identifying the movement posture of the worker's limbs in each scene sample image. Finally, the behavior information, the worker, and the category information are bound to obtain a mapping relationship.
  • the behavior recognition model can determine the category information of the worker, the construction site object and the behavior information corresponding to the worker from the construction scene image. Therefore, when the construction scene image after determining the object and the category information corresponding to the object is input into the behavior recognition model, the worker behavior information can be output based on the behavior recognition model.
  • this embodiment can determine the impact information of the construction site object on the worker according to the category information of the construction site object, combined with the position relationship and the distance relationship, and the impact information includes the positive impact and negative impact of the construction site object on the worker. For example, based on the position relationship and the distance relationship, it can be determined that the safety helmet (i.e., the construction site object) is located on the worker's head. At this time, the worker is wearing a safety helmet, and the category information of the safety helmet is a protective device. Therefore, at this time, it can be determined that the impact information of the safety helmet on the worker is a positive impact.
  • for another example, if the worker is located on a high-rise building (i.e., the construction site object), the worker's behavior information is high-altitude work, and the category information of the high-rise building is a dangerous building. Therefore, at this time, it can be determined that the impact information of the high-rise building on the worker is a negative impact.
  • this embodiment can determine the association relationship between the worker and the construction site object according to the impact information.
  • when the impact information is a positive impact, it can be determined that the construction site object protects the worker (for example, the safety helmet plays a protective role for the worker), so the association relationship between the construction site object and the worker is a protection relationship.
  • when the impact information is a negative impact, it can be determined that the construction site object causes potential harm to the worker (for example, working on a high building exposes the worker to potential harm), so the association relationship between the construction site object and the worker is a potential harm relationship.
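The rule mapping impact information to the association relationship can be written directly; the string labels are illustrative assumptions:

```python
def association_relationship(impact: str) -> str:
    # Positive impact -> the site object protects the worker;
    # negative impact -> the site object poses potential harm.
    if impact == "positive":
        return "protection relationship"
    if impact == "negative":
        return "potential harm relationship"
    raise ValueError(f"unknown impact information: {impact!r}")
```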
  • Step S300 Determine construction risk level information based on the worker behavior information and the association relationship.
  • the worker behavior information of this embodiment includes safe behavior and dangerous behavior, and the association relationship includes protection relationship and potential harm relationship. Therefore, after determining the worker behavior information and the association relationship, this embodiment can comprehensively consider the worker behavior information and the association relationship to determine the construction risk level information.
  • if the worker behavior information is a safe behavior and the association relationship is a protection relationship, the construction risk level information is determined to be a low risk level. If the worker behavior information is a dangerous behavior and the association relationship is a protection relationship, the construction risk level information is determined to be the second lowest risk level. If the worker behavior information is a safe behavior and the association relationship is a potential injury relationship, the construction risk level information is determined to be a higher risk level. If the worker behavior information is a dangerous behavior and the association relationship is a potential injury relationship, the construction risk level information is determined to be a high risk level. Risk ordering in this embodiment: low risk level < second lowest risk level < higher risk level < high risk level.
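The four-way rule above can be expressed as a lookup table; the string keys and labels are illustrative, not the patent's wording:

```python
# (worker behavior, association relationship) -> construction risk level.
# Risk ordering: low < second lowest < higher < high.
RISK_LEVELS = {
    ("safe", "protection"): "low risk level",
    ("dangerous", "protection"): "second lowest risk level",
    ("safe", "potential harm"): "higher risk level",
    ("dangerous", "potential harm"): "high risk level",
}

def construction_risk_level(behavior: str, relationship: str) -> str:
    return RISK_LEVELS[(behavior, relationship)]
```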
  • this embodiment can also train a risk assessment model.
  • This embodiment can construct a correspondence between the category information of workers, personal behavior, objects on the construction site, and construction risk level information, and use the category information of workers, personal behavior, and objects on the construction site as independent variables and the construction risk level information as a dependent variable for training to obtain a risk assessment model.
  • the risk assessment model can automatically output the construction risk level information after directly identifying workers and objects on the construction site from the construction scene image, thereby realizing automatic assessment of construction risks.
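Preparing training pairs for such a risk assessment model, with category information and behavior as independent variables and the risk level as the dependent variable, might look like the following sketch; the integer encodings and example row are assumptions:

```python
# Illustrative label vocabularies; a real system would derive these
# from the annotated scene sample images.
BEHAVIORS = ["safe", "dangerous"]
CATEGORIES = ["protective equipment", "ground construction equipment", "dangerous building"]
LEVELS = ["low", "second lowest", "higher", "high"]

def encode_example(behavior: str, category: str, level: str):
    # Independent variables x, dependent variable y, as integer codes.
    x = [BEHAVIORS.index(behavior), CATEGORIES.index(category)]
    y = LEVELS.index(level)
    return x, y

x, y = encode_example("dangerous", "dangerous building", "high")
```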
  • this embodiment first obtains a construction scene image, and based on the construction scene image, determines the object corresponding to the construction scene image and the category information corresponding to the object, wherein the object includes workers and objects at the construction site; based on the object and the category information, determines the worker behavior information and the association relationship between the worker and the construction site object, wherein the association relationship is used to reflect the protection relationship or potential harm relationship caused by the construction site object to the worker; based on the worker behavior information and the association relationship, determines the construction risk level information.
  • This embodiment can realize a refined assessment of construction risks, determine the construction risk level, and fully consider the characteristics of the construction scene, which is conducive to accurately assessing construction risks.
  • the present invention also provides a construction risk assessment device based on multi-source data fusion.
  • the device of this embodiment includes: a scene image analysis module 10, an association analysis module 20, and a construction risk assessment module 30.
  • the scene image analysis module 10 in this embodiment is used to obtain a construction scene image, and based on the construction scene image, determine the object corresponding to the construction scene image and the category information corresponding to the object, wherein the object includes workers and construction site objects.
  • the association analysis module 20 is used to determine the worker behavior information and the association relationship between the worker and the construction site object based on the object and the category information, wherein the association relationship is used to reflect the protection relationship or potential injury relationship caused by the construction site object to the worker.
  • the construction risk assessment module 30 is used to determine the construction risk level information based on the worker behavior information and the association relationship.
  • the scene image analysis module 10 includes:
  • an image input unit used to obtain a pre-trained image segmentation model and input the construction scene image into the image segmentation model;
  • An image processing unit is used to perform segmentation processing on the construction scene image based on the image segmentation model, and identify objects in the construction scene image and the category information, wherein the image segmentation model is obtained by training based on the YOLOv5 model.
  • the apparatus includes an image segmentation model training module, and the image segmentation model training module includes:
  • An image acquisition unit used to acquire scene sample images, wherein the scene sample images include positional relationships and distance relationships between workers and objects on the construction site in different scenes;
  • an information labeling unit used to label the worker and the construction site object in the scene sample image, and to label the category information of the construction site object;
  • the model training unit is used to input the scene sample images of the marked workers, construction site objects and category information corresponding to the construction site objects into the YOLOv5 model for training to obtain the image segmentation model.
  • the association relationship analysis module 20 includes:
  • a behavior analysis unit used to obtain the position relationship and distance relationship between the worker and the object at the construction site, and determine the worker behavior information according to the position relationship and the distance relationship;
  • An impact analysis unit configured to determine, based on the category information and in combination with the position relationship and the distance relationship, the impact information of the construction site object on the worker, wherein the impact information includes positive impact and negative impact of the construction site object on the worker;
  • a relationship determination unit is used to determine the association relationship between the worker and the object at the construction site based on the influence information.
  • the association relationship analysis module 20 further includes:
  • a model acquisition unit used to acquire a pre-trained behavior recognition model, and input the construction scene image after determining the object and the category information corresponding to the object into the behavior recognition model;
  • a behavior recognition unit is used to output the worker behavior information based on the behavior recognition model.
  • the device further includes a behavior recognition model training module, and the behavior recognition model training module includes:
  • an information labeling processing unit, used to label workers and construction site objects in a number of scene sample images in advance, and to determine the category information of the construction site objects;
  • a mapping relationship establishing unit, used to analyze the behavior information of the worker in each scene sample image based on image recognition, and to bind the behavior information, the worker, and the category information to obtain a mapping relationship;
  • a behavior recognition model training unit, used to train on the mapping relationship based on a residual convolutional neural network to obtain the behavior recognition model.
  • the construction risk assessment module 30 includes:
  • a first risk level determination unit configured to determine that the construction risk level information is a low risk level if the worker behavior information is a safe behavior and the association relationship is a protection relationship;
  • a second risk level determination unit configured to determine that the construction risk level information is a second lowest risk level if the worker behavior information is a dangerous behavior and the association relationship is a protection relationship;
  • a third risk level determination unit configured to determine that the construction risk level information is a higher risk level if the worker behavior information is a safe behavior and the association relationship is a potential harm relationship;
  • the fourth risk level determination unit is used to determine that the construction risk level information is a high risk level if the worker behavior information is a dangerous behavior and the association relationship is a potential harm relationship.
  • the present invention further provides a terminal device, the principle block diagram of which is shown in Figure 3; the terminal device is the host computer in the above embodiments, such as a computer device.
  • the terminal device may include one or more processors 100 (only one is shown in Figure 3), a memory 101, and a computer program 102 stored in the memory 101 and executable on the one or more processors 100, for example, a program for construction risk assessment based on multi-source data fusion.
  • when the one or more processors 100 execute the computer program 102, each step in the method embodiment of construction risk assessment based on multi-source data fusion can be implemented; alternatively, the functions of each module/unit in the device embodiment of construction risk assessment based on multi-source data fusion can be implemented, which is not limited here.
  • the processor 100 may be a central processing unit (CPU), or other general-purpose processors, digital signal processors (DSP), application-specific integrated circuits (ASIC), field-programmable gate arrays (FPGA), or other programmable logic devices, discrete gate or transistor logic devices, discrete hardware components, etc.
  • a general-purpose processor may be a microprocessor or any conventional processor, etc.
  • the memory 101 may be an internal storage unit of an electronic device, such as a hard disk or memory of the electronic device.
  • the memory 101 may also be an external storage device of the electronic device, such as a plug-in hard disk, a smart media card (SMC), a secure digital (SD) card, or a flash card equipped on the electronic device.
  • the memory 101 may also include both an internal storage unit of the electronic device and an external storage device.
  • the memory 101 is used to store computer programs and other programs and data required by the terminal device.
  • the memory 101 may also be used to temporarily store data that has been output or is to be output.
  • Figure 3 is only a block diagram of a partial structure related to the solution of the present invention, and does not constitute a limitation on the terminal device to which the solution of the present invention is applied.
  • the specific terminal device may include more or fewer components than those shown in the figure, or combine certain components, or have a different arrangement of components.
  • Non-volatile memory can include read-only memory (ROM), programmable ROM (PROM), electrically programmable ROM (EPROM), electrically erasable programmable ROM (EEPROM) or flash memory.
  • Volatile memory can include random access memory (RAM) or external cache memory.
  • RAM is available in many forms, such as static RAM (SRAM), dynamic RAM (DRAM), synchronous DRAM (SDRAM), double data rate SDRAM (DDR SDRAM), enhanced SDRAM (ESDRAM), synchronous link (Synchlink) DRAM (SLDRAM), Rambus direct RAM (RDRAM), direct Rambus dynamic RAM (DRDRAM), and Rambus dynamic RAM (RDRAM).
  • the present invention discloses a construction risk assessment method and device based on multi-source data fusion, the method comprising: acquiring a construction scene image, and based on the construction scene image, determining the object corresponding to the construction scene image and the category information corresponding to the object, wherein the object includes workers and objects at the construction site; based on the object and the category information, determining the worker behavior information and the association relationship between the worker and the construction site object, wherein the association relationship is used to reflect the protection relationship or potential injury relationship caused by the construction site object to the worker; based on the worker behavior information and the association relationship, determining the construction risk level information.
  • the present invention can realize a refined assessment of construction risks, determine the construction risk level, and fully consider the characteristics of the construction scene, which is conducive to accurately assessing construction risks.

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Evolutionary Computation (AREA)
  • Multimedia (AREA)
  • Software Systems (AREA)
  • General Health & Medical Sciences (AREA)
  • Computing Systems (AREA)
  • Artificial Intelligence (AREA)
  • Health & Medical Sciences (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Medical Informatics (AREA)
  • Databases & Information Systems (AREA)
  • Business, Economics & Management (AREA)
  • Data Mining & Analysis (AREA)
  • Entrepreneurship & Innovation (AREA)
  • Computational Linguistics (AREA)
  • Biomedical Technology (AREA)
  • Molecular Biology (AREA)
  • General Engineering & Computer Science (AREA)
  • Mathematical Physics (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Strategic Management (AREA)
  • Economics (AREA)
  • Human Resources & Organizations (AREA)
  • Biophysics (AREA)
  • Game Theory and Decision Science (AREA)
  • Educational Administration (AREA)
  • Development Economics (AREA)
  • Marketing (AREA)
  • Operations Research (AREA)
  • Quality & Reliability (AREA)
  • Tourism & Hospitality (AREA)
  • General Business, Economics & Management (AREA)
  • Management, Administration, Business Operations System, And Electronic Commerce (AREA)

Abstract

The present invention discloses a construction risk assessment method and device based on multi-source data fusion. The method includes: acquiring a construction scene image, and determining, based on the construction scene image, the objects corresponding to the construction scene image and the category information corresponding to the objects, wherein the objects include workers and construction site objects; determining, based on the objects and the category information, worker behavior information and the association relationship between the worker and the construction site objects, wherein the association relationship reflects the protection relationship or potential harm relationship of the construction site objects toward the worker; and determining construction risk level information based on the worker behavior information and the association relationship. The present invention enables a refined assessment of construction risk, determines the construction risk level, and fully accounts for the characteristics of the construction scene, which is conducive to accurately assessing construction risk.

Description

Construction risk assessment method and device based on multi-source data fusion — Technical field
The present invention relates to the technical field of construction risk assessment, and in particular to a construction risk assessment method and device based on multi-source data fusion.
Background art
As an important force driving China's economic and social development, the construction industry employs more than seventy million construction workers. Because construction environments and construction sites are relatively complex, safety risks to construction personnel can easily arise.
Current image-based recognition of risky construction behavior focuses mainly on the behavior of individual workers (such as smoking, not wearing a safety helmet, or not wearing a safety harness) but fails to fully consider scene characteristics, and therefore cannot accurately assess construction risk.
The prior art therefore still needs improvement.
Technical problem
The technical problem to be solved by the present invention is, in view of the above defects of the prior art, to provide a construction risk assessment method and device based on multi-source data fusion, aiming to solve the problem that the prior art fails to fully consider scene characteristics and cannot accurately assess construction risk.
Technical solution
In a first aspect, the present invention provides a construction risk assessment method based on multi-source data fusion, wherein the method includes:
acquiring a construction scene image, and determining, based on the construction scene image, the objects corresponding to the construction scene image and the category information corresponding to the objects, wherein the objects include workers and construction site objects;
determining, based on the objects and the category information, worker behavior information and the association relationship between the worker and the construction site objects, wherein the association relationship reflects the protection relationship or potential harm relationship of the construction site objects toward the worker;
determining construction risk level information based on the worker behavior information and the association relationship.
In one implementation, determining, based on the construction scene image, the objects corresponding to the construction scene image and the category information corresponding to the objects includes:
acquiring a pre-trained image segmentation model, and inputting the construction scene image into the image segmentation model;
segmenting the construction scene image based on the image segmentation model, and identifying the objects in the construction scene image and the category information, wherein the image segmentation model is obtained by training a YOLOv5 model.
In one implementation, the training process of the image segmentation model includes:
collecting scene sample images, wherein the scene sample images include the positional relationships and distance relationships between workers and construction site objects in different scenes;
labeling the workers and the construction site objects in the scene sample images, and labeling the category information of the construction site objects;
inputting the scene sample images with the labeled workers, construction site objects, and the category information corresponding to the construction site objects into the YOLOv5 model for training to obtain the image segmentation model.
In one implementation, determining, based on the objects and the category information, worker behavior information and the association relationship between the worker and the construction site objects includes:
acquiring the positional relationship and distance relationship between the worker and the construction site objects, and determining the worker behavior information according to the positional relationship and the distance relationship;
determining, according to the category information in combination with the positional relationship and distance relationship, the influence information of the construction site objects on the worker, wherein the influence information includes the positive influence and negative influence of the construction site objects on the worker;
determining the association relationship between the worker and the construction site objects based on the influence information.
In one implementation, determining, based on the objects and the category information, worker behavior information and the association relationship between the worker and the construction site objects further includes:
acquiring a pre-trained behavior recognition model, and inputting the construction scene image, after the objects and their corresponding category information have been determined, into the behavior recognition model;
outputting the worker behavior information based on the behavior recognition model.
In one implementation, the training process of the behavior recognition model includes:
labeling the workers and construction site objects in a number of scene sample images in advance, and determining the category information of the construction site objects;
analyzing the behavior information of the worker in each scene sample image based on image recognition, and binding the behavior information, the worker, and the category information to obtain a mapping relationship;
training on the mapping relationship based on a residual convolutional neural network to obtain the behavior recognition model.
In one implementation, determining construction risk level information based on the worker behavior information and the association relationship includes:
if the worker behavior information is safe behavior and the association relationship is a protection relationship, determining that the construction risk level information is a low risk level;
if the worker behavior information is dangerous behavior and the association relationship is a protection relationship, determining that the construction risk level information is a second-lowest risk level;
if the worker behavior information is safe behavior and the association relationship is a potential harm relationship, determining that the construction risk level information is a higher risk level;
if the worker behavior information is dangerous behavior and the association relationship is a potential harm relationship, determining that the construction risk level information is a high risk level.
In a second aspect, an embodiment of the present invention further provides a construction risk assessment device based on multi-source data fusion, wherein the device includes:
a scene image analysis module, configured to acquire a construction scene image and determine, based on the construction scene image, the objects corresponding to the construction scene image and the category information corresponding to the objects, wherein the objects include workers and construction site objects;
an association relationship analysis module, configured to determine, based on the objects and the category information, worker behavior information and the association relationship between the worker and the construction site objects, wherein the association relationship reflects the protection relationship or potential harm relationship of the construction site objects toward the worker;
a construction risk assessment module, configured to determine construction risk level information based on the worker behavior information and the association relationship.
In a third aspect, an embodiment of the present invention further provides a terminal device, wherein the terminal device is a commercial display terminal or a screen-casting terminal; the terminal device includes a memory, a processor, and a construction risk assessment program based on multi-source data fusion stored in the memory and executable on the processor, and when the processor executes the program, the steps of the construction risk assessment method based on multi-source data fusion in any of the above solutions are implemented.
In a fourth aspect, an embodiment of the present invention further provides a computer-readable storage medium, on which a construction risk assessment program based on multi-source data fusion is stored; when the program is executed by a processor, the steps of the construction risk assessment method based on multi-source data fusion in any of the above solutions are implemented.
Beneficial effects
Beneficial effects: compared with the prior art, the present invention provides a construction risk assessment method based on multi-source data fusion. The invention first acquires a construction scene image and determines, based on the construction scene image, the objects corresponding to the construction scene image and the category information corresponding to the objects, wherein the objects include workers and construction site objects; determines, based on the objects and the category information, worker behavior information and the association relationship between the worker and the construction site objects, wherein the association relationship reflects the protection relationship or potential harm relationship of the construction site objects toward the worker; and determines construction risk level information based on the worker behavior information and the association relationship. The present invention enables a refined assessment of construction risk, determines the construction risk level, and fully accounts for the characteristics of the construction scene, which is conducive to accurately assessing construction risk.
Brief description of the drawings
Figure 1 is a flowchart of a specific implementation of the construction risk assessment method based on multi-source data fusion provided by an embodiment of the present invention.
Figure 2 is a functional schematic diagram of the construction risk assessment device based on multi-source data fusion provided by an embodiment of the present invention.
Figure 3 is a principle block diagram of the terminal device provided by an embodiment of the present invention.
Embodiments of the present invention
To make the objectives, technical solutions, and effects of the present invention clearer and more explicit, the present invention is described in further detail below with reference to the accompanying drawings and embodiments. It should be understood that the specific embodiments described here are only intended to explain the present invention and are not intended to limit it.
This embodiment provides a construction risk assessment method based on multi-source data fusion. Based on the method of this embodiment, a refined assessment of construction risk can be achieved, the construction risk level can be determined, and the characteristics of the construction scene are fully considered, which is conducive to accurately assessing construction risk. In a specific implementation, this embodiment first acquires a construction scene image and determines, based on the construction scene image, the objects corresponding to the construction scene image and the category information corresponding to the objects, wherein the objects include workers and construction site objects. Then, based on the objects and the category information, worker behavior information and the association relationship between the worker and the construction site objects are determined, wherein the association relationship reflects the protection relationship or potential harm relationship of the construction site objects toward the worker. Finally, construction risk level information is determined based on the worker behavior information and the association relationship. It can thus be seen that this embodiment determines the association relationship between the worker and the construction site objects from the construction scene image, where the association relationship reflects the protection relationship or potential harm relationship that the construction site objects impose on the worker; moreover, this embodiment also recognizes the worker's behavior information, and by jointly considering the worker behavior information and the association relationship, a fused analysis of multi-source data is achieved, so that the construction risk level information can be determined and a refined assessment of construction risk is realized.
Exemplary method
The construction risk assessment method based on multi-source data fusion of this embodiment is applied to a terminal device, where the terminal device includes intelligent product terminals such as computers. Specifically, as shown in Figure 1, the method of this embodiment includes the following steps:
Step S100: acquire a construction scene image, and determine, based on the construction scene image, the objects corresponding to the construction scene image and the category information corresponding to the objects, wherein the objects include workers and construction site objects.
This embodiment first acquires a construction scene image, which reflects the people or things at the construction site. The construction scene image is captured directly of the construction site and therefore contains objects such as workers and construction site objects. Because different construction site objects differ in function and in their influence on workers, this embodiment needs to determine the category information of these objects (including workers and construction site objects), so that the influence information of the construction site objects on the worker can be determined in subsequent steps, making it possible to analyze whether a construction site object protects or harms the worker and thus to analyze the construction risk.
In one implementation, this embodiment determines the category information through the following steps:
Step S101: acquire a pre-trained image segmentation model, and input the construction scene image into the image segmentation model;
Step S102: segment the construction scene image based on the image segmentation model, and identify the objects in the construction scene image and the category information, wherein the image segmentation model is obtained by training a YOLOv5 model.
Specifically, an image segmentation model is set up in advance in this embodiment to segment the construction scene image and determine the workers and construction site objects in it. First, this embodiment collects scene sample images, which include the positional relationships and distance relationships between workers and construction site objects in different scenes, so these scene sample images contain workers and construction site objects. Next, based on image recognition, the workers and construction site objects in each scene sample image are identified and labeled, and the category information of the construction site objects is labeled; at this point the category information of the construction site objects in every scene sample image has been annotated. This embodiment then inputs the scene sample images with the labeled workers, construction site objects, and the corresponding category information into the YOLOv5 model for training to obtain the image segmentation model. When a construction scene image is input into the image segmentation model, the model automatically recognizes the workers and construction site objects in the image and also automatically outputs the category information corresponding to the construction site objects. For example, if a construction site object is a safety helmet, the corresponding category information is protective equipment; if the construction site object is an excavator, the corresponding category information is ground construction equipment.
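As a concrete illustration of the labeling step above (a minimal sketch, not the patented implementation): YOLOv5 expects one annotation line per labeled object, containing a class id and a bounding box normalized to the image size. The class map and box values below are assumptions chosen for illustration only.

```python
# Hypothetical class map for the labeled objects; the patent does not fix
# concrete class ids, so these are illustrative stand-ins.
CLASS_IDS = {"worker": 0, "helmet": 1, "excavator": 2}

def to_yolo_line(label, box, img_w, img_h):
    """Convert a pixel bounding box (x_min, y_min, x_max, y_max) into the
    'class cx cy w h' line format used by YOLOv5 label files, with all
    coordinates normalized to [0, 1]."""
    x_min, y_min, x_max, y_max = box
    cx = (x_min + x_max) / 2 / img_w   # box center, normalized
    cy = (y_min + y_max) / 2 / img_h
    w = (x_max - x_min) / img_w        # box size, normalized
    h = (y_max - y_min) / img_h
    return f"{CLASS_IDS[label]} {cx:.6f} {cy:.6f} {w:.6f} {h:.6f}"

# One label line for a helmet box in a 640x480 sample image.
line = to_yolo_line("helmet", (100, 40, 180, 100), img_w=640, img_h=480)
```

Each sample image would get one such text file (one line per labeled worker or construction site object) before training.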
In another implementation, this embodiment can also identify the workers and construction site objects from the construction scene image based on image recognition, and then directly compare the identified construction site objects against a preset construction site image library. The library stores images of various construction site devices taken from multiple angles, and the images in the library are organized according to the category information of the construction site objects. Therefore, after a construction site object is compared against the preset library, what the object actually is can be determined, along with its category information.
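The library-comparison alternative above can be sketched as a nearest-match search over reference images. The tiny grayscale "images", category names, and similarity measure below are illustrative stand-ins, not contents of any actual construction site image library.

```python
# Hypothetical reference library: category -> list of tiny grayscale grids.
LIBRARY = {
    "protective equipment": [[[200, 200], [200, 200]]],  # e.g. a helmet
    "ground machinery":     [[[30, 30], [30, 30]]],      # e.g. an excavator
}

def mean_sq_diff(a, b):
    """Mean squared pixel difference between two equally sized grids."""
    flat_a = [v for row in a for v in row]
    flat_b = [v for row in b for v in row]
    return sum((x - y) ** 2 for x, y in zip(flat_a, flat_b)) / len(flat_a)

def classify_by_library(crop):
    """Return the category whose reference image is closest to the crop."""
    best = min(
        ((cat, mean_sq_diff(crop, ref))
         for cat, refs in LIBRARY.items() for ref in refs),
        key=lambda pair: pair[1],
    )
    return best[0]
```

A production system would of course use robust descriptors rather than raw pixel differences; the sketch only shows the compare-against-a-categorized-library idea.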
Step S200: based on the objects and the category information, determine worker behavior information and the association relationship between the worker and the construction site objects, wherein the association relationship reflects the protection relationship or potential harm relationship of the construction site objects toward the worker.
After the workers and construction site objects in the construction scene image and the category information corresponding to the construction site objects have been determined, this embodiment can determine the worker behavior information based on the objects (workers/construction site objects) and the category information; the worker behavior information reflects what work the worker is currently performing in the construction scene. In addition, this embodiment can also determine the association relationship between the worker and the construction site objects, wherein the association relationship reflects the protection relationship or potential harm relationship of the construction site objects toward the worker.
In one implementation, step S200 of this embodiment specifically includes the following steps:
Step S201: acquire the positional relationship and distance relationship between the worker and the construction site objects, and determine the worker behavior information according to the positional relationship and the distance relationship;
Step S202: determine, according to the category information in combination with the positional relationship and distance relationship, the influence information of the construction site objects on the worker, wherein the influence information includes the positive influence and negative influence of the construction site objects on the worker;
Step S203: determine the association relationship between the worker and the construction site objects based on the influence information.
Specifically, this embodiment first acquires the positional relationship and distance relationship between the worker and the construction site objects. These relationships can reveal whether the worker is currently operating a construction site object (for example, whether the worker is operating an excavator) or wearing a construction site object (for example, whether the worker is wearing a safety helmet). Therefore, based on the positional relationship and distance relationship, this embodiment can determine the worker behavior information, i.e., what work the worker is currently doing. When determining the positional relationship and distance relationship between the worker and the construction site objects, this embodiment can apply image recognition to the construction scene image to identify the worker and the construction site objects and determine the positional relationship between them; the distance between the worker and the construction site objects in the image is then converted through a scale factor to obtain their distance relationship, and the worker behavior information is determined from the positional relationship and distance relationship. The worker behavior information includes safe behavior and dangerous behavior.
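The pixel-distance-to-physical-distance conversion described above can be sketched as follows. The 2.0 m safety threshold and the metres-per-pixel scale factor are illustrative assumptions; the patent does not specify concrete values.

```python
import math

def pixel_distance(p1, p2):
    """Euclidean distance between two (x, y) pixel coordinates."""
    return math.hypot(p1[0] - p2[0], p1[1] - p2[1])

def behavior_from_distance(worker_xy, object_xy, metres_per_pixel,
                           safe_threshold_m=2.0):
    """Convert the pixel distance to metres via a per-image scale factor,
    then classify: 'safe behavior' when the worker keeps at least the
    threshold distance from the object, 'dangerous behavior' otherwise."""
    d_metres = pixel_distance(worker_xy, object_xy) * metres_per_pixel
    return "safe behavior" if d_metres >= safe_threshold_m else "dangerous behavior"
```

In practice the scale factor would come from camera calibration or a known reference length in the scene; a fixed per-image value is the simplifying assumption here.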
In another implementation, when recognizing the worker behavior information, this embodiment can also input the construction scene image, after the objects and their corresponding category information have been determined, into a preset behavior recognition model, and then output the worker behavior information based on the behavior recognition model. Specifically, in this embodiment the workers and construction site objects in a number of scene sample images are labeled in advance, and the category information of the construction site objects is determined. The behavior information of the worker in each scene sample image is then analyzed based on image recognition; this behavior information can be determined by recognizing the motion posture of the worker's limbs in each scene sample image. The behavior information, the worker, and the category information are then bound together to obtain a mapping relationship. Next, training is performed on the mapping relationship based on a residual convolutional neural network to obtain the behavior recognition model. This behavior recognition model can determine, from a construction scene image, the worker, the category information of the construction site objects, and the behavior information corresponding to the worker. Therefore, after the construction scene image with the determined objects and category information is input into the behavior recognition model, the worker behavior information can be output based on the model.
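The building block of the residual convolutional neural network mentioned above can be illustrated with a toy forward pass. The weights and vector sizes below are arbitrary stand-ins; a real behavior recognition model would use convolutions over images rather than these tiny dense layers.

```python
def relu(v):
    """Element-wise rectified linear activation."""
    return [max(0.0, x) for x in v]

def matvec(w, v):
    """Multiply a weight matrix (list of rows) by a vector."""
    return [sum(wi * vi for wi, vi in zip(row, v)) for row in w]

def residual_block(x, w1, w2):
    """y = relu(x + W2 . relu(W1 . x)): the skip connection adds the input
    back onto the transformed path, which is what lets deep residual
    networks train stably."""
    inner = relu(matvec(w1, x))
    transformed = matvec(w2, inner)
    return relu([xi + ti for xi, ti in zip(x, transformed)])
```

Stacking many such blocks (with convolutional weights) and ending in a classification head over behavior labels is the usual shape of a residual-network classifier.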
Next, this embodiment can determine, according to the category information of the construction site objects in combination with the positional relationship and distance relationship, the influence information of the construction site objects on the worker, where the influence information includes the positive influence and negative influence of the construction site objects on the worker. For example, based on the positional relationship and distance relationship, it can be determined that a safety helmet (a construction site object) is located on the worker's head, so the worker is wearing the helmet; since the category information of a safety helmet is protective equipment, the influence information of the helmet on the worker is determined to be a positive influence. As another example, based on the positional relationship and distance relationship, it can be determined that the worker is on top of a tall building (a construction site object); the worker's behavior information is then working at height, and since the category information of the tall building is dangerous structure, the influence information of the tall building on the worker is determined to be a negative influence. Once the influence information is determined, this embodiment can determine the association relationship between the worker and the construction site objects according to it. In this embodiment, when the influence is positive, the construction site object is determined to impose a protection relationship on the worker (for example, a safety helmet protects the worker), so the association relationship between the construction site object and the worker is a protection relationship; when the influence is negative, the construction site object is determined to impose a potential harm relationship on the worker (for example, working on a tall building poses a potential harm to the worker), so the association relationship between the construction site object and the worker is a potential harm relationship.
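The mapping from category information plus distance to influence information, and from influence to association relationship, can be sketched as a small rule set. The category names and the 1.0 m "worn or held" threshold are assumptions for illustration, not values fixed by the patent.

```python
# Hypothetical category semantics (assumed for this sketch).
PROTECTIVE = {"protective equipment"}
HAZARDOUS = {"dangerous structure", "ground machinery"}

def influence(category, distance_m, near_m=1.0):
    """Positive influence: protective gear actually on/near the worker.
    Negative influence: a hazardous category regardless of distance."""
    if category in PROTECTIVE and distance_m <= near_m:
        return "positive"   # e.g. a helmet worn on the head
    if category in HAZARDOUS:
        return "negative"   # e.g. working on top of a tall structure
    return "neutral"

def association(category, distance_m):
    """Influence decides the association relationship (steps S202-S203)."""
    inf = influence(category, distance_m)
    if inf == "positive":
        return "protection relationship"
    if inf == "negative":
        return "potential harm relationship"
    return "no relationship"
```

For instance, a helmet 0.1 m from the worker yields a protection relationship, while a dangerous structure yields a potential harm relationship at any distance under these assumed rules.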
Step S300: determine construction risk level information based on the worker behavior information and the association relationship.
In this embodiment, the worker behavior information includes safe behavior and dangerous behavior, and the association relationship includes the protection relationship and the potential harm relationship. Therefore, after determining the worker behavior information and the association relationship, this embodiment can consider them jointly to determine the construction risk level information.
Specifically, if the worker behavior information is safe behavior and the association relationship is a protection relationship, the construction risk level information is determined to be a low risk level. If the worker behavior information is dangerous behavior and the association relationship is a protection relationship, the construction risk level information is determined to be a second-lowest risk level. If the worker behavior information is safe behavior and the association relationship is a potential harm relationship, the construction risk level information is determined to be a higher risk level. If the worker behavior information is dangerous behavior and the association relationship is a potential harm relationship, the construction risk level information is determined to be a high risk level. The risk index in this embodiment is ordered as: low risk level < second-lowest risk level < higher risk level < high risk level.
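The four-way rule in this paragraph amounts to a fixed lookup table over (behavior, relationship) pairs; a minimal sketch, with level names abbreviated:

```python
# Direct transcription of the four rules as a lookup table.
RISK_MATRIX = {
    ("safe behavior", "protection relationship"):         "low risk",
    ("dangerous behavior", "protection relationship"):    "second-lowest risk",
    ("safe behavior", "potential harm relationship"):     "higher risk",
    ("dangerous behavior", "potential harm relationship"): "high risk",
}

# The stated ordering of the risk index, from lowest to highest.
RISK_ORDER = ["low risk", "second-lowest risk", "higher risk", "high risk"]

def risk_level(behavior, relationship):
    """Map worker behavior information and the association relationship
    to the construction risk level information."""
    return RISK_MATRIX[(behavior, relationship)]
```

The table makes the asymmetry explicit: under these rules a potential harm relationship dominates, so even safe behavior near a hazard outranks dangerous behavior under protection.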
In addition, this embodiment can also train a risk assessment model. This embodiment can construct the correspondence among the worker, the individual behavior, the category information of the construction site objects, and the construction risk level information, and train with the worker, the individual behavior, and the category information of the construction site objects as independent variables and the construction risk level information as the dependent variable, obtaining a risk assessment model. The risk assessment model can automatically output the construction risk level information directly after the workers and construction site objects are recognized in the construction scene image, thereby achieving automatic assessment of construction risk.
In summary, this embodiment first acquires a construction scene image and determines, based on the construction scene image, the objects corresponding to the construction scene image and the category information corresponding to the objects, wherein the objects include workers and construction site objects; determines, based on the objects and the category information, worker behavior information and the association relationship between the worker and the construction site objects, wherein the association relationship reflects the protection relationship or potential harm relationship of the construction site objects toward the worker; and determines construction risk level information based on the worker behavior information and the association relationship. This embodiment enables a refined assessment of construction risk, determines the construction risk level, and fully accounts for the characteristics of the construction scene, which is conducive to accurately assessing construction risk.
Exemplary device
Based on the above embodiments, the present invention further provides a construction risk assessment device based on multi-source data fusion. As shown in Figure 2, the device of this embodiment includes: a scene image analysis module 10, an association relationship analysis module 20, and a construction risk assessment module 30. Specifically, the scene image analysis module 10 of this embodiment is configured to acquire a construction scene image and determine, based on the construction scene image, the objects corresponding to the construction scene image and the category information corresponding to the objects, wherein the objects include workers and construction site objects. The association relationship analysis module 20 is configured to determine, based on the objects and the category information, worker behavior information and the association relationship between the worker and the construction site objects, wherein the association relationship reflects the protection relationship or potential harm relationship of the construction site objects toward the worker. The construction risk assessment module 30 is configured to determine construction risk level information based on the worker behavior information and the association relationship.
In one implementation, the scene image analysis module 10 includes:
an image input unit, configured to acquire a pre-trained image segmentation model and input the construction scene image into the image segmentation model;
an image processing unit, configured to segment the construction scene image based on the image segmentation model and identify the objects in the construction scene image and the category information, wherein the image segmentation model is obtained by training a YOLOv5 model.
In one implementation, the device includes an image segmentation model training module, and the image segmentation model training module includes:
an image acquisition unit, configured to collect scene sample images, wherein the scene sample images include the positional relationships and distance relationships between workers and construction site objects in different scenes;
an information labeling unit, configured to label the workers and the construction site objects in the scene sample images and to label the category information of the construction site objects;
a model training unit, configured to input the scene sample images with the labeled workers, construction site objects, and corresponding category information into the YOLOv5 model for training to obtain the image segmentation model.
In one implementation, the association relationship analysis module 20 includes:
a behavior analysis unit, configured to acquire the positional relationship and distance relationship between the worker and the construction site objects and determine the worker behavior information according to the positional relationship and the distance relationship;
an influence analysis unit, configured to determine, according to the category information in combination with the positional relationship and distance relationship, the influence information of the construction site objects on the worker, wherein the influence information includes the positive influence and negative influence of the construction site objects on the worker;
a relationship determination unit, configured to determine the association relationship between the worker and the construction site objects based on the influence information.
In one implementation, the association relationship analysis module 20 further includes:
a model acquisition unit, configured to acquire a pre-trained behavior recognition model and input the construction scene image, after the objects and their corresponding category information have been determined, into the behavior recognition model;
a behavior recognition unit, configured to output the worker behavior information based on the behavior recognition model.
In one implementation, the device further includes a behavior recognition model training module, and the behavior recognition model training module includes:
an information labeling processing unit, configured to label the workers and construction site objects in a number of scene sample images in advance and determine the category information of the construction site objects;
a mapping relationship establishing unit, configured to analyze the behavior information of the worker in each scene sample image based on image recognition and bind the behavior information, the worker, and the category information to obtain a mapping relationship;
a behavior recognition model training unit, configured to train on the mapping relationship based on a residual convolutional neural network to obtain the behavior recognition model.
In one implementation, the construction risk assessment module 30 includes:
a first risk level determination unit, configured to determine that the construction risk level information is a low risk level if the worker behavior information is safe behavior and the association relationship is a protection relationship;
a second risk level determination unit, configured to determine that the construction risk level information is a second-lowest risk level if the worker behavior information is dangerous behavior and the association relationship is a protection relationship;
a third risk level determination unit, configured to determine that the construction risk level information is a higher risk level if the worker behavior information is safe behavior and the association relationship is a potential harm relationship;
a fourth risk level determination unit, configured to determine that the construction risk level information is a high risk level if the worker behavior information is dangerous behavior and the association relationship is a potential harm relationship.
The working principle of each module in the construction risk assessment device based on multi-source data fusion of this embodiment is the same as that of the corresponding steps in the above method embodiment, and will not be repeated here.
Based on the above embodiments, the present invention further provides a terminal device, the principle block diagram of which is shown in Figure 3; the terminal device is the host computer in the above embodiments, such as a computer device. The terminal device may include one or more processors 100 (only one is shown in Figure 3), a memory 101, and a computer program 102 stored in the memory 101 and executable on the one or more processors 100, for example, a program for construction risk assessment based on multi-source data fusion. When the one or more processors 100 execute the computer program 102, each step in the method embodiment of construction risk assessment based on multi-source data fusion can be implemented; alternatively, the functions of each module/unit in the device embodiment of construction risk assessment based on multi-source data fusion can be implemented, which is not limited here.
In one embodiment, the processor 100 may be a central processing unit (CPU), or another general-purpose processor, a digital signal processor (DSP), an application-specific integrated circuit (ASIC), a field-programmable gate array (FPGA) or another programmable logic device, a discrete gate or transistor logic device, a discrete hardware component, etc. A general-purpose processor may be a microprocessor, or the processor may be any conventional processor, etc.
In one embodiment, the memory 101 may be an internal storage unit of the electronic device, such as a hard disk or memory of the electronic device. The memory 101 may also be an external storage device of the electronic device, such as a plug-in hard disk, a smart media card (SMC), a secure digital (SD) card, or a flash card equipped on the electronic device. Further, the memory 101 may also include both an internal storage unit of the electronic device and an external storage device. The memory 101 is used to store the computer program and other programs and data required by the terminal device. The memory 101 may also be used to temporarily store data that has been output or is to be output.
Those skilled in the art can understand that the principle block diagram shown in Figure 3 is only a block diagram of part of the structure related to the solution of the present invention and does not constitute a limitation on the terminal device to which the solution of the present invention is applied; the specific terminal device may include more or fewer components than shown in the figure, or combine certain components, or have a different arrangement of components.
Those of ordinary skill in the art can understand that all or part of the processes in the methods of the above embodiments can be completed by instructing the relevant hardware through a computer program; the computer program can be stored in a non-volatile computer-readable storage medium, and when executed, the computer program may include the processes of the embodiments of the above methods. Any reference to memory, storage, database, or other media used in the embodiments provided by the present invention may include non-volatile and/or volatile memory. Non-volatile memory may include read-only memory (ROM), programmable ROM (PROM), electrically programmable ROM (EPROM), electrically erasable programmable ROM (EEPROM), or flash memory. Volatile memory may include random access memory (RAM) or external cache memory. By way of illustration and not limitation, RAM is available in many forms, such as static RAM (SRAM), dynamic RAM (DRAM), synchronous DRAM (SDRAM), double data rate SDRAM (DDR SDRAM), enhanced SDRAM (ESDRAM), synchronous link (Synchlink) DRAM (SLDRAM), Rambus direct RAM (RDRAM), direct Rambus dynamic RAM (DRDRAM), and Rambus dynamic RAM (RDRAM).
In summary, the present invention discloses a construction risk assessment method and device based on multi-source data fusion. The method includes: acquiring a construction scene image, and determining, based on the construction scene image, the objects corresponding to the construction scene image and the category information corresponding to the objects, wherein the objects include workers and construction site objects; determining, based on the objects and the category information, worker behavior information and the association relationship between the worker and the construction site objects, wherein the association relationship reflects the protection relationship or potential harm relationship of the construction site objects toward the worker; and determining construction risk level information based on the worker behavior information and the association relationship. The present invention enables a refined assessment of construction risk, determines the construction risk level, and fully accounts for the characteristics of the construction scene, which is conducive to accurately assessing construction risk.
Finally, it should be noted that the above embodiments are only used to illustrate the technical solutions of the present invention and not to limit them; although the present invention has been described in detail with reference to the foregoing embodiments, those of ordinary skill in the art should understand that they can still modify the technical solutions recorded in the foregoing embodiments or make equivalent substitutions for some of the technical features therein, and such modifications or substitutions do not cause the essence of the corresponding technical solutions to depart from the spirit and scope of the technical solutions of the embodiments of the present invention.

Claims (10)

  1. A construction risk assessment method based on multi-source data fusion, characterized in that the method comprises:
    acquiring a construction scene image, and determining, based on the construction scene image, the objects corresponding to the construction scene image and the category information corresponding to the objects, wherein the objects include workers and construction site objects;
    determining, based on the objects and the category information, worker behavior information and the association relationship between the worker and the construction site objects, wherein the association relationship reflects the protection relationship or potential harm relationship of the construction site objects toward the worker;
    determining construction risk level information based on the worker behavior information and the association relationship.
  2. The construction risk assessment method based on multi-source data fusion according to claim 1, characterized in that determining, based on the construction scene image, the objects corresponding to the construction scene image and the category information corresponding to the objects comprises:
    acquiring a pre-trained image segmentation model, and inputting the construction scene image into the image segmentation model;
    segmenting the construction scene image based on the image segmentation model, and identifying the objects in the construction scene image and the category information, wherein the image segmentation model is obtained by training a YOLOv5 model.
  3. The construction risk assessment method based on multi-source data fusion according to claim 2, characterized in that the training process of the image segmentation model comprises:
    collecting scene sample images, wherein the scene sample images include the positional relationships and distance relationships between workers and construction site objects in different scenes;
    labeling the workers and the construction site objects in the scene sample images, and labeling the category information of the construction site objects;
    inputting the scene sample images with the labeled workers, construction site objects, and the category information corresponding to the construction site objects into the YOLOv5 model for training to obtain the image segmentation model.
  4. The construction risk assessment method based on multi-source data fusion according to claim 1, characterized in that determining, based on the objects and the category information, worker behavior information and the association relationship between the worker and the construction site objects comprises:
    acquiring the positional relationship and distance relationship between the worker and the construction site objects, and determining the worker behavior information according to the positional relationship and the distance relationship;
    determining, according to the category information in combination with the positional relationship and distance relationship, the influence information of the construction site objects on the worker, wherein the influence information includes the positive influence and negative influence of the construction site objects on the worker;
    determining the association relationship between the worker and the construction site objects based on the influence information.
  5. The construction risk assessment method based on multi-source data fusion according to claim 4, characterized in that determining, based on the objects and the category information, worker behavior information and the association relationship between the worker and the construction site objects further comprises:
    acquiring a pre-trained behavior recognition model, and inputting the construction scene image, after the objects and their corresponding category information have been determined, into the behavior recognition model;
    outputting the worker behavior information based on the behavior recognition model.
  6. The construction risk assessment method based on multi-source data fusion according to claim 5, characterized in that the training process of the behavior recognition model comprises:
    labeling the workers and construction site objects in a number of scene sample images in advance, and determining the category information of the construction site objects;
    analyzing the behavior information of the worker in each scene sample image based on image recognition, and binding the behavior information, the worker, and the category information to obtain a mapping relationship;
    training on the mapping relationship based on a residual convolutional neural network to obtain the behavior recognition model.
  7. The construction risk assessment method based on multi-source data fusion according to claim 1, characterized in that determining construction risk level information based on the worker behavior information and the association relationship comprises:
    if the worker behavior information is safe behavior and the association relationship is a protection relationship, determining that the construction risk level information is a low risk level;
    if the worker behavior information is dangerous behavior and the association relationship is a protection relationship, determining that the construction risk level information is a second-lowest risk level;
    if the worker behavior information is safe behavior and the association relationship is a potential harm relationship, determining that the construction risk level information is a higher risk level;
    if the worker behavior information is dangerous behavior and the association relationship is a potential harm relationship, determining that the construction risk level information is a high risk level.
  8. A construction risk assessment device based on multi-source data fusion, characterized in that the device comprises:
    a scene image analysis module, configured to acquire a construction scene image and determine, based on the construction scene image, the objects corresponding to the construction scene image and the category information corresponding to the objects, wherein the objects include workers and construction site objects;
    an association relationship analysis module, configured to determine, based on the objects and the category information, worker behavior information and the association relationship between the worker and the construction site objects, wherein the association relationship reflects the protection relationship or potential harm relationship of the construction site objects toward the worker;
    a construction risk assessment module, configured to determine construction risk level information based on the worker behavior information and the association relationship.
  9. A terminal device, characterized in that the terminal device comprises a memory, a processor, and a construction risk assessment program based on multi-source data fusion stored in the memory and executable on the processor; when the processor executes the construction risk assessment program based on multi-source data fusion, the steps of the construction risk assessment method based on multi-source data fusion according to any one of claims 1-7 are implemented.
  10. A computer-readable storage medium, characterized in that a construction risk assessment program based on multi-source data fusion is stored on the computer-readable storage medium; when the construction risk assessment program based on multi-source data fusion is executed by a processor, the steps of the construction risk assessment method based on multi-source data fusion according to any one of claims 1-7 are implemented.
PCT/CN2022/137051 2022-09-27 2022-12-06 Construction risk assessment method and device based on multi-source data fusion WO2024066039A1 (zh)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202211182019.3A CN115660247A (zh) 2022-09-27 2022-09-27 Construction risk assessment method and device based on multi-source data fusion
CN202211182019.3 2022-09-27

Publications (1)

Publication Number Publication Date
WO2024066039A1 true WO2024066039A1 (zh) 2024-04-04

Family

ID=84985359

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2022/137051 WO2024066039A1 (zh) 2022-09-27 2022-12-06 Construction risk assessment method and device based on multi-source data fusion

Country Status (2)

Country Link
CN (1) CN115660247A (zh)
WO (1) WO2024066039A1 (zh)

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20160148132A1 (en) * 2014-11-21 2016-05-26 International Business Machines Corporation Ergonomic risk assessment
CN111931706A * 2020-09-16 2020-11-13 清华大学 Human-machine collision early-warning method and system for construction sites
US20210109497A1 (en) * 2018-01-29 2021-04-15 indus.ai Inc. Identifying and monitoring productivity, health, and safety risks in industrial sites
CN113191699A * 2021-06-11 2021-07-30 广东电网有限责任公司 Safety supervision method for power distribution construction sites
CN113722503A * 2021-08-17 2021-11-30 中国海洋大学 Fine-grained construction site risk identification method and system
CN114493375A * 2022-04-02 2022-05-13 清华大学 Macro-level construction safety assessment system and method


Also Published As

Publication number Publication date
CN115660247A (zh) 2023-01-31

Similar Documents

Publication Publication Date Title
TWI726364B (zh) Computer-executed vehicle damage assessment method and device
WO2019218699A1 (zh) Fraudulent transaction determination method and apparatus, computer device, and storage medium
US8423960B2 (en) Evaluation of software based on review history
JP7111887B2 (ja) Video quality inspection method, apparatus, computer device, and storage medium
US20180181834A1 (en) Method and apparatus for security inspection
CN109472213B (zh) Palmprint recognition method and apparatus, computer device, and storage medium
JP2019527434A (ja) Modeling method and apparatus for an evaluation model
CN115828112B (zh) Fault event response method and apparatus, electronic device, and storage medium
US20110271252A1 (en) Determining functional design/requirements coverage of a computer code
CN109509087A (zh) Intelligent loan review method, apparatus, device, and medium
CN110610127A (zh) Face recognition method and apparatus, storage medium, and electronic device
CN112017056A (zh) Intelligent dual-recording method and system
WO2022142319A1 (zh) False insurance claim processing method and apparatus, computer device, and storage medium
CN112256849B (zh) Model training method, text detection method and apparatus, device, and storage medium
WO2020063347A1 (zh) Question correction method and apparatus for mental arithmetic problems, electronic device, and storage medium
CN110717449A (zh) Behavior detection method and apparatus for vehicle annual inspection personnel, and computer device
CN115082861A (zh) Personnel identity and safety violation recognition method and system
CN109145752A (zh) Method, apparatus, device, and medium for evaluating object detection and tracking algorithms
CN114022738A (zh) Training sample acquisition method and apparatus, computer device, and readable storage medium
CN114022264A (zh) Method, apparatus, device, and storage medium for generating a credential
WO2024066039A1 (zh) Construction risk assessment method and device based on multi-source data fusion
CN113707279A (zh) Auxiliary analysis method and apparatus for medical images, computer device, and medium
CN114898155B (zh) Vehicle damage assessment method, apparatus, device, and storage medium
CN111192150A (zh) Processing method, apparatus, device, and storage medium for vehicle insurance claim agency services
CN113269190B (zh) Artificial intelligence-based data classification method and apparatus, computer device, and medium

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 22960634

Country of ref document: EP

Kind code of ref document: A1