CN113536935A - Safety monitoring method and device for a construction site - Google Patents
Safety monitoring method and device for a construction site
- Publication number
- CN113536935A CN113536935A CN202110671419.XA CN202110671419A CN113536935A CN 113536935 A CN113536935 A CN 113536935A CN 202110671419 A CN202110671419 A CN 202110671419A CN 113536935 A CN113536935 A CN 113536935A
- Authority
- CN
- China
- Prior art keywords
- target
- image
- collision
- speed
- worker
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
- G—PHYSICS
  - G06—COMPUTING; CALCULATING OR COUNTING
    - G06F—ELECTRIC DIGITAL DATA PROCESSING
      - G06F18/00—Pattern recognition
        - G06F18/20—Analysing
          - G06F18/21—Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
            - G06F18/214—Generating training patterns; Bootstrap methods, e.g. bagging or boosting
- G—PHYSICS
  - G06—COMPUTING; CALCULATING OR COUNTING
    - G06Q—INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
      - G06Q50/00—Information and communication technology [ICT] specially adapted for implementation of business processes of specific business sectors, e.g. utilities or tourism
        - G06Q50/10—Services
          - G06Q50/26—Government or public services
            - G06Q50/265—Personal security, identity or safety
Abstract
The present application relates to the field of computer technology and provides a safety monitoring method and device for a construction site. The method includes: recognizing a target overhead image to obtain the region type to which each pixel in the target overhead image belongs; recognizing a target ground image to obtain the target objects in it and their position information; calculating a movement risk index between the target objects; determining a regional risk index; calculating a risk coefficient of each target object from the movement risk index and the regional risk index; and performing safety monitoring of the construction site according to the risk coefficient. By treating the whole construction site as a dynamically running system, the scheme gathers multi-dimensional information about the site, combines the movement risk index and the regional risk index into a risk coefficient, and uses that coefficient to give more comprehensive safety warnings and more comprehensive monitoring of site safety.
Description
Technical Field
The present application belongs to the field of computer technology, and in particular relates to a safety monitoring method and device for a construction site.
Background
A construction site is a place where safety accidents occur frequently, so its safety needs to be monitored. At present, safety monitoring of construction sites uses computer vision or deep neural network methods to detect whether workers are wearing safety helmets and to warn those who are not. However, the detection rate of this scheme is not high and cannot guarantee sufficient safety; it only serves as a preliminary reminder, the safety warning it provides is very limited, and it cannot monitor the safety of the construction site comprehensively.
Summary of the Invention
The embodiments of the present application provide a safety monitoring method and device for a construction site, which can solve the above problems.
In a first aspect, an embodiment of the present application provides a safety monitoring method for a construction site, including:
acquiring a target overhead image and a target ground image of the construction site;
recognizing the target overhead image to obtain the region type to which each pixel in the target overhead image belongs;
recognizing the target ground image to obtain the target objects in the target ground image and their position information, where there are at least two target objects;
calculating a movement risk index between the target objects according to the target objects and their position information;
determining the target region to which each target object belongs according to the region type of each pixel, and determining a regional risk index according to the target region;
calculating a risk coefficient of the target object according to the movement risk index and the regional risk index;
performing safety monitoring of the construction site according to the risk coefficient.
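For orientation only, the sketch below chains these steps into a single function. Every callable it takes (region recognition, object detection, the two risk indices) is a placeholder supplied by the caller, and the multiplication used to combine the two indices into a risk coefficient is an assumption; the patent does not prescribe a particular combination rule.

```python
from itertools import combinations

def monitor_site(overhead_image, ground_image, *,
                 recognize_regions, detect_objects,
                 movement_risk, regional_risk,
                 alert_threshold=0.8):
    """Hypothetical end-to-end flow of the first-aspect method; all callables are supplied by the caller."""
    region_map = recognize_regions(overhead_image)       # per-pixel region type of the overhead image
    objects = detect_objects(ground_image)               # at least two target objects are expected

    alerts = []
    for a, b in combinations(objects, 2):
        m = movement_risk(a, b)                          # movement risk index of the pair
        r = max(regional_risk(region_map, a),
                regional_risk(region_map, b))            # regional risk index of the regions occupied
        coefficient = m * r                              # assumed combination rule, not from the patent
        if coefficient > alert_threshold:
            alerts.append((a, b, coefficient))
    return alerts
```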
Further, the target object includes a target worker and a target vehicle; the position information of the target worker includes a first center coordinate and a first size of a first bounding box; the position information of the target vehicle includes a second center coordinate and a second size of a second bounding box.
Calculating the movement risk index between the target objects according to the target objects and their position information includes:
calculating an estimated collision time of a target collision event between the target worker and the target vehicle according to the first center coordinate, the first size, the second center coordinate and the second size;
if the target collision event is a real collision event, calculating the movement risk index between the target worker and the target vehicle according to a preset movement risk index calculation rule.
Further, the target ground image includes multiple image frame groups collected by the same image acquisition device.
Calculating the estimated collision time of the target collision event between the target worker and the target vehicle according to the first center coordinate, the first size, the second center coordinate and the second size includes:
calculating an initial collision time of the target collision event between the target worker and the target vehicle according to the first center coordinate, the first size, the second center coordinate and the second size of each image frame group;
calculating the average of all the initial collision times to obtain the estimated collision time of the target collision event between the target worker and the target vehicle.
Further, each image frame group includes at least three consecutive frames.
Calculating the initial collision time of the target collision event between the target worker and the target vehicle according to the first center coordinate, the first size, the second center coordinate and the second size of each image frame group includes:
calculating a first acceleration and a first speed of the target worker according to the first center coordinates of the three consecutive frames and a first preset calculation rule, and calculating a second speed according to the first acceleration and the first speed, where the first speed is the speed of the target worker in the first of the three consecutive frames and the second speed is the speed of the target worker in the third of the three consecutive frames;
calculating a second acceleration and a third speed of the target vehicle according to the second center coordinates of the three consecutive frames and a second preset calculation rule, and calculating a fourth speed according to the second acceleration and the third speed, where the third speed is the speed of the target vehicle in the first of the three consecutive frames and the fourth speed is the speed of the target vehicle in the third of the three consecutive frames;
calculating a target distance between the target worker and the target vehicle in the third of the three consecutive frames according to the first size, the second size, the second speed and the fourth speed;
if it is determined from the target distance that there is a collision risk between the target worker and the target vehicle, calculating the initial collision time of the target collision event according to the first acceleration, the second acceleration, the first size, the second size, the second speed and the fourth speed.
Further, after calculating the target distance between the target worker and the target vehicle in the third of the three consecutive frames according to the first size, the second size, the second speed and the fourth speed, the method further includes:
if it is determined from the target distance that there is no collision risk between the target worker and the target vehicle, setting the estimated collision time of the target collision event to infinity.
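The first and second "preset calculation rules" and the distance test are not spelled out at this point in the text. Purely as an illustration, the sketch below estimates an object's speed and acceleration from its bounding-box centers in three consecutive frames by finite differences, and derives an initial collision time under a head-on, constant-acceleration assumption, returning infinity when the objects never meet. None of the formulas here should be read as the patented rules.

```python
import math

def speeds_and_acceleration(c1, c2, c3, dt):
    """Finite-difference estimates from three consecutive frame centers c1, c2, c3.

    Each center is an (x, y) tuple in image coordinates; dt is the frame interval.
    Returns (speed_in_first_frame, speed_in_third_frame, acceleration).
    This is an assumed rule, not the patent's 'preset calculation rule'.
    """
    v12 = math.dist(c2, c1) / dt          # average speed between frames 1 and 2
    v23 = math.dist(c3, c2) / dt          # average speed between frames 2 and 3
    a = (v23 - v12) / dt                  # constant-acceleration estimate
    return v12, v23, a

def initial_collision_time(gap, v_worker, v_vehicle, a_worker, a_vehicle):
    """Time for the gap between the two objects to close, assuming they approach
    head-on with constant accelerations. Returns math.inf when they never meet."""
    v = v_worker + v_vehicle              # closing speed
    a = a_worker + a_vehicle              # closing acceleration
    if abs(a) < 1e-9:
        return gap / v if v > 0 else math.inf
    disc = v * v + 2.0 * a * gap          # discriminant of 0.5*a*t^2 + v*t - gap = 0
    if disc < 0:
        return math.inf                   # closing motion stops before contact
    t = (-v + math.sqrt(disc)) / a
    return t if t > 0 else math.inf
```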
Further, if the target collision event is a real collision event, calculating the movement risk index between the target worker and the target vehicle according to the preset movement risk index calculation rule includes:
if the estimated collision time is less than a first preset warning time threshold, acquiring a backup collision time corresponding to the estimated collision time;
if the backup collision time is greater than a second preset warning time threshold and less than the first preset warning time threshold, calculating the movement risk index between the target worker and the target vehicle according to a preset coefficient and the estimated collision time.
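A minimal sketch, assuming illustrative threshold values, of how the two warning-time thresholds and the preset coefficient could be applied; the source of the backup collision time and the exact scaling rule are not given at this point and are assumed here.

```python
import math

def movement_risk_index(t_collision, t_backup,
                        first_threshold=5.0,    # assumed first warning time threshold (s)
                        second_threshold=1.0,   # assumed second warning time threshold (s)
                        coefficient=1.0):       # assumed preset coefficient
    """Returns a larger value the more imminent the collision, and 0.0 when no warning applies."""
    if math.isinf(t_collision) or t_collision >= first_threshold:
        return 0.0                               # no real collision risk
    if second_threshold < t_backup < first_threshold:
        # Assumed rule: risk grows as the estimated collision time shrinks.
        return coefficient / t_collision
    return 0.0
```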
Further, recognizing the target overhead image to obtain the region type to which each pixel in the target overhead image belongs includes:
inputting the target overhead image into a trained region recognition model for recognition to obtain the region type to which each pixel in the target overhead image belongs.
Further, before inputting the target overhead image into the trained region recognition model for recognition to obtain the region type to which each pixel in the target overhead image belongs, the method further includes:
acquiring a sample training set, where the sample training set includes sample overhead images and the sample region type to which each of their pixels belongs;
training an initial recognition model with the sample training set to obtain the trained region recognition model.
Further, acquiring the sample training set includes:
acquiring initial overhead images and processing them according to a preset image processing strategy to obtain sample overhead images, where the image processing strategy includes one or more of a brightness adjustment strategy, a hue adjustment strategy, a saturation adjustment strategy, a contrast adjustment strategy, a noise adjustment strategy, an edge enhancement strategy, an image mirroring strategy, an image scaling strategy, an image removal strategy and an image mixing strategy;
acquiring the sample region type to which each pixel of the sample overhead images belongs, and determining the sample training set from the sample overhead images and the sample region types of their pixels.
In a second aspect, an embodiment of the present application provides a safety monitoring apparatus for a construction site, including:
a first acquisition unit, configured to acquire a target overhead image and a target ground image of the construction site;
a first recognition unit, configured to recognize the target overhead image to obtain the region type to which each pixel in the target overhead image belongs;
a second recognition unit, configured to recognize the target ground image to obtain the target objects in the target ground image and their position information, where there are at least two target objects;
a first calculation unit, configured to calculate a movement risk index between the target objects according to the target objects and their position information;
a first processing unit, configured to determine the target region to which each target object belongs according to the region type of each pixel, and to determine a regional risk index according to the target region;
a second calculation unit, configured to calculate a risk coefficient of the target object according to the movement risk index and the regional risk index;
a second processing unit, configured to perform safety monitoring of the construction site according to the risk coefficient.
Further, the target object includes a target worker and a target vehicle; the position information of the target worker includes a first center coordinate and a first size of a first bounding box; the position information of the target vehicle includes a second center coordinate and a second size of a second bounding box.
The first calculation unit is specifically configured to:
calculate an estimated collision time of a target collision event between the target worker and the target vehicle according to the first center coordinate, the first size, the second center coordinate and the second size;
if the target collision event is a real collision event, calculate the movement risk index between the target worker and the target vehicle according to a preset movement risk index calculation rule.
Further, the target ground image includes multiple image frame groups collected by the same image acquisition device.
The first calculation unit is specifically configured to:
calculate an initial collision time of the target collision event between the target worker and the target vehicle according to the first center coordinate, the first size, the second center coordinate and the second size of each image frame group;
calculate the average of all the initial collision times to obtain the estimated collision time of the target collision event between the target worker and the target vehicle.
Further, each image frame group includes at least three consecutive frames.
The first calculation unit is specifically configured to:
calculate a first acceleration and a first speed of the target worker according to the first center coordinates of the three consecutive frames and a first preset calculation rule, and calculate a second speed according to the first acceleration and the first speed, where the first speed is the speed of the target worker in the first of the three consecutive frames and the second speed is the speed of the target worker in the third of the three consecutive frames;
calculate a second acceleration and a third speed of the target vehicle according to the second center coordinates of the three consecutive frames and a second preset calculation rule, and calculate a fourth speed according to the second acceleration and the third speed, where the third speed is the speed of the target vehicle in the first of the three consecutive frames and the fourth speed is the speed of the target vehicle in the third of the three consecutive frames;
calculate a target distance between the target worker and the target vehicle in the third of the three consecutive frames according to the first size, the second size, the second speed and the fourth speed;
if it is determined from the target distance that there is a collision risk between the target worker and the target vehicle, calculate the initial collision time of the target collision event according to the first acceleration, the second acceleration, the first size, the second size, the second speed and the fourth speed.
Further, the first calculation unit is further specifically configured to:
if it is determined from the target distance that there is no collision risk between the target worker and the target vehicle, set the estimated collision time of the target collision event to infinity.
Further, the first calculation unit is specifically configured to:
if the estimated collision time is less than a first preset warning time threshold, acquire a backup collision time corresponding to the estimated collision time;
if the backup collision time is greater than a second preset warning time threshold and less than the first preset warning time threshold, calculate the movement risk index between the target worker and the target vehicle according to a preset coefficient and the estimated collision time.
Further, the first recognition unit is specifically configured to:
input the target overhead image into a trained region recognition model for recognition to obtain the region type to which each pixel in the target overhead image belongs.
Further, the first recognition unit is further specifically configured to:
acquire a sample training set, where the sample training set includes sample overhead images and the sample region type to which each of their pixels belongs;
train an initial recognition model with the sample training set to obtain the trained region recognition model.
Further, the first recognition unit is further specifically configured to:
acquire initial overhead images and process them according to a preset image processing strategy to obtain sample overhead images, where the image processing strategy includes one or more of a brightness adjustment strategy, a hue adjustment strategy, a saturation adjustment strategy, a contrast adjustment strategy, a noise adjustment strategy, an edge enhancement strategy, an image mirroring strategy, an image scaling strategy, an image removal strategy and an image mixing strategy;
acquire the sample region type to which each pixel of the sample overhead images belongs, and determine the sample training set from the sample overhead images and the sample region types of their pixels.
In a third aspect, an embodiment of the present application provides a safety monitoring device for a construction site, including a memory, a processor, and a computer program stored in the memory and executable on the processor, where the processor, when executing the computer program, implements the safety monitoring method for a construction site described in the first aspect.
In a fourth aspect, an embodiment of the present application provides a computer-readable storage medium storing a computer program which, when executed by a processor, implements the safety monitoring method for a construction site described in the first aspect.
In the embodiments of the present application, a target overhead image and a target ground image of the construction site are acquired; the target overhead image is recognized to obtain the region type to which each of its pixels belongs; the target ground image is recognized to obtain the target objects in it and their position information; a movement risk index between the target objects is calculated from the target objects and their position information; the target region to which each target object belongs is determined from the region type of each pixel, and a regional risk index is determined from the target region; a risk coefficient of the target object is calculated from the movement risk index and the regional risk index; and safety monitoring of the construction site is performed according to the risk coefficient. Treating the whole construction site as a dynamically running system, the scheme gathers multi-dimensional information about the site, combines the movement risk index and the regional risk index into a risk coefficient for each target object, and uses this coefficient to give more comprehensive safety warnings and more comprehensive monitoring of site safety.
Brief Description of the Drawings
In order to explain the technical solutions in the embodiments of the present application more clearly, the drawings needed in the description of the embodiments or the prior art are briefly introduced below. Obviously, the drawings in the following description are only some embodiments of the present application, and those of ordinary skill in the art can obtain other drawings from them without creative effort.
Fig. 1 is a schematic flowchart of a safety monitoring method for a construction site provided by the first embodiment of the present application;
Fig. 2 is a schematic diagram of the target overhead view in the safety monitoring method provided by the first embodiment of the present application;
Fig. 3 is a schematic diagram of the region type to which each pixel in the target overhead image belongs in the safety monitoring method provided by the first embodiment of the present application;
Fig. 4 is a schematic diagram of the effect of the random scaling method among the image processing strategies in the safety monitoring method provided by the first embodiment of the present application;
Fig. 5 is a schematic diagram of the effects of several image processing strategies in the safety monitoring method provided by the first embodiment of the present application;
Fig. 6 is a schematic diagram of the effects of several further image processing strategies in the safety monitoring method provided by the first embodiment of the present application;
Fig. 7 is a schematic diagram of the target objects and their position information in the target ground image in the safety monitoring method provided by the first embodiment of the present application;
Fig. 8 is a schematic diagram of the arrangement of ground cameras in the safety monitoring method provided by the first embodiment of the present application;
Fig. 9 is a schematic diagram of the safety monitoring apparatus for a construction site provided by the second embodiment of the present application;
Fig. 10 is a schematic diagram of the safety monitoring device for a construction site provided by the third embodiment of the present application.
Detailed Description
In the following description, for the purpose of illustration rather than limitation, specific details such as particular system structures and technologies are set forth in order to provide a thorough understanding of the embodiments of the present application. However, it will be apparent to those skilled in the art that the present application may also be practiced in other embodiments without these specific details. In other instances, detailed descriptions of well-known systems, apparatuses, circuits and methods are omitted so that unnecessary detail does not obscure the description of the present application.
It should be understood that, when used in the specification and the appended claims of the present application, the term "comprising" indicates the presence of the described features, integers, steps, operations, elements and/or components, but does not exclude the presence or addition of one or more other features, integers, steps, operations, elements, components and/or combinations thereof.
It should also be understood that the term "and/or" used in the specification and the appended claims of the present application refers to any combination and all possible combinations of one or more of the associated listed items, and includes these combinations.
As used in the specification and the appended claims of the present application, the term "if" may be interpreted, depending on the context, as "when", "once", "in response to determining" or "in response to detecting". Similarly, the phrase "if it is determined" or "if [the described condition or event] is detected" may be interpreted, depending on the context, as "once it is determined", "in response to determining", "once [the described condition or event] is detected" or "in response to detecting [the described condition or event]".
In addition, in the description of the specification and the appended claims of the present application, the terms "first", "second", "third" and so on are used only to distinguish the descriptions and are not to be understood as indicating or implying relative importance.
References in this specification to "one embodiment", "some embodiments" and the like mean that a particular feature, structure or characteristic described in connection with the embodiment is included in one or more embodiments of the present application. Thus, appearances of the phrases "in one embodiment", "in some embodiments", "in other embodiments", "in still other embodiments" and the like in various places in this specification do not necessarily all refer to the same embodiment, but mean "one or more but not all embodiments" unless specifically emphasized otherwise. The terms "including", "comprising", "having" and their variants all mean "including but not limited to" unless specifically emphasized otherwise.
Please refer to Fig. 1, which is a schematic flowchart of a safety monitoring method for a construction site provided by the first embodiment of the present application. In this embodiment, the method is executed by a device with a safety monitoring function for a construction site, for example a server or a desktop computer. As shown in Fig. 1, the safety monitoring method may include the following steps.
S101: Acquire a target overhead image and a target ground image of the construction site.
In this embodiment, two kinds of image acquisition devices are installed on the construction site: one collecting images from a high-altitude overhead view, the other placed on the ground. The target overhead image is an image of the construction site collected from the high-altitude overhead view, for example by a drone or a tower-crane camera. As shown in Fig. 2, which is a schematic diagram of the target overhead view, the target overhead view should cover all areas of the whole construction site.
The target ground image is collected by the image acquisition device placed on the ground. Specifically, this device can be placed around the construction site, for example a camera mounted 2 meters above the ground; the height of 2 meters here is only an example, not a limitation.
In this embodiment, the number of target overhead images and target ground images collected at the construction site is not limited; there may be one or several of each. After the image acquisition devices on the site collect the target overhead image and the target ground image, they send them to the local device, which thus acquires the target overhead image and the target ground image of the construction site.
S102: Recognize the target overhead image to obtain the region type to which each pixel in the target overhead image belongs.
After acquiring the target overhead image, the device recognizes it with a preset image recognition method and obtains the region type to which each pixel in the target overhead image belongs. In this embodiment, the preset image recognition method is not limited, as long as it can identify the region type of each pixel in the target overhead image.
As shown in Fig. 3, which is a schematic diagram of the region type to which each pixel in the target overhead image belongs, the region types of a construction site generally include, but are not limited to, building areas, facility areas, road areas, water surface areas, office areas and construction areas.
In one implementation, in order to accurately identify the region type of each pixel in the target overhead image, the target overhead image can be recognized by a neural network. A neural network model can process the target overhead image quickly and accurately and output the region type to which each pixel belongs.
The device inputs the target overhead image into a trained region recognition model for recognition and obtains the region type to which each pixel in the target overhead image belongs. The trained region recognition model can be preset in the device or called from another device. The trained region recognition model may include an input layer, hidden layers and an output layer (loss function layer). The input layer includes an input node that receives the target overhead image from outside. The hidden layers process the target overhead image and extract the region type to which each pixel belongs. The output layer outputs the region type of each pixel in the target overhead image.
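The text does not commit to a particular network architecture for the region recognition model. As one hedged example, a small fully convolutional network in PyTorch maps an overhead image to per-pixel class logits and takes an argmax to obtain the region type of each pixel; the layer sizes and the class list are illustrative assumptions, not taken from the patent.

```python
import torch
import torch.nn as nn

REGION_CLASSES = ["building", "facility", "road", "water", "office", "construction"]  # assumed labels

class RegionRecognitionModel(nn.Module):
    """Tiny fully convolutional network: input layer, hidden conv layers, per-pixel output layer."""
    def __init__(self, num_classes=len(REGION_CLASSES)):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 32, kernel_size=3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(32, 64, kernel_size=3, padding=1), nn.ReLU(inplace=True),
        )
        self.classifier = nn.Conv2d(64, num_classes, kernel_size=1)   # output layer

    def forward(self, x):                          # x: (N, 3, H, W) overhead image batch
        return self.classifier(self.features(x))   # (N, num_classes, H, W) logits

def region_type_per_pixel(model, overhead_image):
    """overhead_image: float tensor of shape (3, H, W); returns an (H, W) map of region class indices."""
    model.eval()
    with torch.no_grad():
        logits = model(overhead_image.unsqueeze(0))
    return logits.argmax(dim=1).squeeze(0)
```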
In one possible implementation, the region recognition model is trained in advance by the local device. The training method of the region recognition model can be as follows.
The device acquires a sample training set, where the sample training set includes sample overhead images and the sample region type to which each of their pixels belongs, and trains an initial recognition model with the sample training set to obtain the trained region recognition model. During training, the sample overhead images and the sample region types of their pixels are used as training data and fed into the initial recognition model, and the model is refined continuously by adjusting the loss function of the initial recognition model until the final region recognition model is obtained.
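Continuing the PyTorch assumption above, a per-pixel cross-entropy loss is one common way to "adjust the loss function" of the initial recognition model during training; the optimizer, learning rate and data format below are illustrative choices, not requirements of the patent.

```python
import torch
import torch.nn as nn

def train_region_model(model, train_loader, epochs=10, lr=1e-3, device="cpu"):
    """train_loader yields (image, label_map) pairs:
    image: (N, 3, H, W) float tensor; label_map: (N, H, W) long tensor of region class indices."""
    model.to(device)
    criterion = nn.CrossEntropyLoss()                       # per-pixel classification loss
    optimizer = torch.optim.Adam(model.parameters(), lr=lr)
    for _ in range(epochs):
        model.train()
        for images, labels in train_loader:
            images, labels = images.to(device), labels.to(device)
            optimizer.zero_grad()
            loss = criterion(model(images), labels)         # refine the model through the loss
            loss.backward()
            optimizer.step()
    return model
```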
Model training needs a large amount of sample data to obtain an accurate model; if the sample data set is too small, the accuracy of the model is likely to be insufficient, while collecting a large amount of rich sample data consumes a lot of resources. To avoid this problem, in this embodiment the limited samples can be augmented, which improves the model's ability to recognize construction machinery, workers, materials and other objects under complex and changeable site conditions, as well as its anti-interference ability and robustness. The basic idea of this augmentation is to fully account for the situations that may occur on a construction site, so that those situations appear in the training set, thereby improving the recognition accuracy.
The device acquires initial overhead images and processes them according to a preset image processing strategy to obtain sample overhead images. The initial overhead images are the limited samples, and the device performs data augmentation on them with the preset image processing strategy. The device then acquires the sample region type of each pixel of the sample overhead images and determines the sample training set from the sample overhead images and the sample region types of their pixels.
The image processing strategy includes one or more of a brightness adjustment strategy, a hue adjustment strategy, a saturation adjustment strategy, a contrast adjustment strategy, a noise adjustment strategy, an edge enhancement strategy, an image mirroring strategy, an image scaling strategy, an image removal strategy and an image mixing strategy. The image processing strategies are described in detail below.
From the perspective of enhancing the anti-interference ability of the model: at a construction site, images captured at different times and under different weather conditions differ significantly in brightness, saturation and contrast; images captured by different camera models also differ in hue; and noise may be introduced while images are generated and transmitted. To keep these factors from interfering with the recognition ability of the model, the following methods can be used in this embodiment to augment a limited number of samples (a hedged code sketch of these augmentations follows the list).
1. Random brightness method: in the HSV color space, a random value within a specific threshold range is added to the brightness component of every pixel in the image, so that the brightness of the image is adjusted randomly to simulate the different illumination levels found on site.
v_i′ = v_i + δ, δ ∈ [-Δ, Δ], Δ ∈ [0, 0.5]
where P is the set of all pixels in the initial overhead image, v_i is the brightness component of a pixel, v_i′ is the brightness component of that pixel after processing, δ is the random brightness increment, and Δ is the threshold of the brightness increment.
2. Random saturation method: in the HSV color space, a random value within a specific threshold range is added to the saturation component of every pixel in the image, so that the saturation of the image is adjusted randomly to simulate different lighting environments on site.
s_i′ = s_i + γ, γ ∈ [-Γ, Γ], Γ ∈ [0, 0.5]
where P is the set of all pixels in the initial overhead image, s_i is the saturation component of a pixel, s_i′ is the saturation component of that pixel after processing, γ is the random saturation increment, and Γ is the threshold of the saturation increment.
3. Random hue method: in the HSV color space, a random value within a specific threshold range is added to the hue component of every pixel in the image, so that the hue of the image is adjusted randomly to simulate the hue differences produced by different cameras and lighting conditions.
h_i′ = h_i + η, η ∈ [-H, H], H ∈ [0°, 180°]
where P is the set of all pixels in the image, h_i is the hue component of a pixel, h_i′ is the hue component of that pixel after processing, η is the random hue increment, and H is the threshold of the hue increment.
4. Random contrast method: in the RGB color space, the red, green and blue components are multiplied by a factor within a specific threshold range, so that the contrast of the image is randomly strengthened or weakened, again simulating the contrast differences produced by different cameras and lighting conditions.
r_i′ = r_i × α, g_i′ = g_i × α, b_i′ = b_i × α, α > 0
where r, g, b are the red, green and blue components of a pixel and α is the contrast factor.
5. Gaussian noise method: the original image is augmented with noise drawn from a zero-mean two-dimensional Gaussian distribution, which effectively strengthens the anti-interference ability of the model. In practice, by choosing a suitable variance, the corresponding Gaussian noise matrix can be obtained and used as the operator of a convolution; convolving the original image with this operator yields the image with Gaussian noise added. The zero-mean two-dimensional Gaussian density is
f(w, h) = 1 / (2π·σ_W·σ_H·√(1 - ρ²)) · exp(-(w²/σ_W² - 2ρ·w·h/(σ_W·σ_H) + h²/σ_H²) / (2(1 - ρ²)))
where σ_W and σ_H are the variances in the horizontal and vertical directions and ρ is the correlation coefficient between W and H.
6. Salt-and-pepper noise method: also called impulse noise, it randomly sets some pixels in the image to pure black or pure white. This noise simulates the errors produced by cameras and transmission devices under sudden strong interference, for example a failed sensor producing the minimum pixel value (a black dot) or a saturated sensor producing the maximum pixel value (a white dot).
7. Edge enhancement method: since convolutional neural networks are good at learning the texture features of objects, this method uses the Sobel operator to enhance the edge features in the image, helping the model learn the effective features of objects faster and better and thus improving its recognition ability.
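The following NumPy/OpenCV sketch illustrates the anti-interference augmentations above (random brightness, saturation and hue in HSV, random contrast, Gaussian and salt-and-pepper noise, and Sobel edge enhancement). The threshold ranges from the text are respected where given; the clipping and OpenCV conventions, and the simplification of the Gaussian-noise step to independent per-pixel noise, are assumptions.

```python
import numpy as np
import cv2  # OpenCV, assumed available

rng = np.random.default_rng()

def random_hsv_jitter(img_bgr, dv=0.2, dg=0.2, dh=15.0):
    """Methods 1-3: add random offsets to the V, S and H channels (uint8 BGR input).
    dv, dg are fractions of the full value/saturation range; dh is in OpenCV hue units (0-179)."""
    hsv = cv2.cvtColor(img_bgr, cv2.COLOR_BGR2HSV).astype(np.float32)
    hsv[..., 2] += rng.uniform(-dv, dv) * 255.0                   # brightness: v' = v + delta
    hsv[..., 1] += rng.uniform(-dg, dg) * 255.0                   # saturation: s' = s + gamma
    hsv[..., 0] = (hsv[..., 0] + rng.uniform(-dh, dh)) % 180.0    # hue: h' = h + eta (wraps around)
    hsv[..., 1:] = np.clip(hsv[..., 1:], 0, 255)
    return cv2.cvtColor(hsv.astype(np.uint8), cv2.COLOR_HSV2BGR)

def random_contrast(img_bgr, lo=0.7, hi=1.3):
    """Method 4: multiply the R, G and B components by a single factor alpha > 0."""
    alpha = rng.uniform(lo, hi)
    return np.clip(img_bgr.astype(np.float32) * alpha, 0, 255).astype(np.uint8)

def gaussian_noise(img_bgr, sigma=10.0):
    """Method 5, simplified: add zero-mean Gaussian noise per pixel instead of the
    correlated two-dimensional Gaussian convolution described in the text."""
    noise = rng.normal(0.0, sigma, size=img_bgr.shape)
    return np.clip(img_bgr.astype(np.float32) + noise, 0, 255).astype(np.uint8)

def salt_and_pepper(img_bgr, amount=0.01):
    """Method 6: set a random fraction of pixels to pure black or pure white."""
    out = img_bgr.copy()
    h, w = out.shape[:2]
    n = int(amount * h * w)
    ys, xs = rng.integers(0, h, n), rng.integers(0, w, n)
    out[ys[: n // 2], xs[: n // 2]] = 0        # pepper (black dots)
    out[ys[n // 2:], xs[n // 2:]] = 255        # salt (white dots)
    return out

def edge_enhance(img_bgr, weight=0.5):
    """Method 7: add a Sobel gradient-magnitude image back onto the original."""
    gray = cv2.cvtColor(img_bgr, cv2.COLOR_BGR2GRAY)
    gx = cv2.Sobel(gray, cv2.CV_32F, 1, 0, ksize=3)
    gy = cv2.Sobel(gray, cv2.CV_32F, 0, 1, ksize=3)
    edges = np.clip(cv2.magnitude(gx, gy), 0, 255)[..., None]     # broadcast over color channels
    return np.clip(img_bgr.astype(np.float32) + weight * edges, 0, 255).astype(np.uint8)
```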
From the perspective of enhancing the generalization ability of the model: with a limited number of samples, in order for the model to recognize entirely new samples, that is, to truly abstract the essential features of objects, the following data augmentation methods can be used in this embodiment (a hedged code sketch of a few of them follows the list).
1. Random scaling method: so that the model can recognize objects of different sizes, the original image can be randomly scaled within a certain range, for example by a factor in [0.5, 1.5], to simulate objects at different distances and of different sizes, thereby improving the generalization of the model to the effective features of objects. The effect is shown in Fig. 4.
2. Mirroring method: flipping the original image horizontally doubles the number of effective samples without any loss of quality, making it a very effective data augmentation method. The effect is shown in Fig. 5.
3. Random erasing method: several rectangular regions of a given size are generated at random positions in the image and filled with the average pixel value of the whole image. This simulates occlusion of objects and greatly improves the generalization of the model. The effect is shown in Fig. 5.
4. Random cutout method: several rectangular regions of a given size are generated at random positions in the image and filled with zeros, i.e. black. Similar to random erasing, this method is derived from Cutout, a regularization method commonly used in neural network training; it effectively prevents the model from learning to recognize only a local part of an object and improves its recognition ability. The effect is shown in Fig. 5.
5. Random occlusion method: the image is divided into S×S regions and the pixel values of randomly chosen regions are set to 0. Like random erasing and random cutout, it simulates occlusion and keeps the model from attending only to a local part of an object, making it another effective data augmentation method. The effect is shown in Fig. 5.
6. Array occlusion method: a rectangular array of zero-valued blocks is used to occlude the original image, forcing the model to learn every part of an object and improving its recognition ability. The effect is shown in Fig. 6.
7. Mixing method: two objects from different images are blended together and their labels are averaged. For example, at the output of the neural network, if the label (ideal output) of object A is [1 0] and the label of object B is [0 1], the label of the blended image is [0.5 0.5]. This method borrows the label-smoothing idea from neural network training and is likewise a regularization strategy that effectively prevents overfitting. The effect is shown in Fig. 6.
8. Cut-and-mix method: a completely new image is synthesized by cutting a part from another image and pasting it onto the target image, forcing the model to learn from multiple features of an object rather than a single easy-to-learn local feature. The label of the synthesized image is set in proportion to the sizes of the pasted part and the remaining part of the target object, for example 0.6:0.4. The effect is shown in Fig. 6.
9. Mosaic method: four training images are combined into one in certain proportions, helping the model learn to recognize smaller objects. It is similar to random scaling but more efficient for learning. The effect is shown in Fig. 6.
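A similarly hedged sketch of three of the generalization-oriented augmentations above: random erasing, mixing with label averaging, and cut-and-mix pasting. Images are assumed to be NumPy arrays of identical shape and labels one-hot vectors; all region sizes and mixing ratios are illustrative.

```python
import numpy as np

rng = np.random.default_rng()

def random_erase(img, max_frac=0.3):
    """Method 3: fill a random rectangle with the mean pixel value of the whole image."""
    h, w = img.shape[:2]
    eh = rng.integers(1, int(h * max_frac) + 1)
    ew = rng.integers(1, int(w * max_frac) + 1)
    y, x = rng.integers(0, h - eh + 1), rng.integers(0, w - ew + 1)
    out = img.copy()
    out[y:y + eh, x:x + ew] = img.mean(axis=(0, 1)).astype(img.dtype)
    return out

def mixup(img_a, label_a, img_b, label_b, lam=0.5):
    """Method 7: blend two images and average their labels in the same proportion."""
    img = (lam * img_a.astype(np.float32) + (1 - lam) * img_b.astype(np.float32)).astype(img_a.dtype)
    label = lam * np.asarray(label_a, np.float32) + (1 - lam) * np.asarray(label_b, np.float32)
    return img, label

def cutmix(img_a, label_a, img_b, label_b, frac=0.4):
    """Method 8: paste a rectangle cut from image B onto image A and weight the labels
    by the pasted area (e.g. roughly 0.6:0.4)."""
    h, w = img_a.shape[:2]
    ch, cw = int(h * frac), int(w * frac)
    y, x = rng.integers(0, h - ch + 1), rng.integers(0, w - cw + 1)
    out = img_a.copy()
    out[y:y + ch, x:x + cw] = img_b[y:y + ch, x:x + cw]
    area = (ch * cw) / (h * w)
    label = (1 - area) * np.asarray(label_a, np.float32) + area * np.asarray(label_b, np.float32)
    return out, label
```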
It can be understood that the methods listed above are only examples and do not limit the image processing strategy.
S103: Recognize the target ground image to obtain the target objects in the target ground image and their position information; there are at least two target objects.
The device recognizes the target ground image and obtains the target objects in it and their position information. In this embodiment, the recognition method for the target ground image is not limited, as long as it can identify the target objects and their position information.
Specifically, the target objects may include workers as well as construction machinery and vehicles; the position information may be the center coordinates (x, y) of a target object's bounding box and the size (w, h) of that bounding box.
As shown in Fig. 7, which is a schematic diagram of the target objects and their position information in the target ground image, there must be at least two target objects for a collision risk to be possible.
In one implementation, in order to accurately identify the target objects and their position information in the target ground image, the target ground image can be recognized by a neural network, which can process the target objects and their position information quickly and accurately.
The device inputs the target ground image into a trained object recognition model for recognition and obtains the target objects in the target ground image and their position information. The trained object recognition model can be preset in the device or called from another device. It may include an input layer, hidden layers and an output layer (loss function layer). The input layer includes an input node that receives the target ground image from outside. The hidden layers process the target ground image and extract the target objects and their position information. The output layer outputs the target objects and their position information.
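The object recognition model is likewise left open by the text. The sketch below only fixes the expected interface: a ground image goes in, and a list of detections (class label, bounding-box center and size) comes out. The Detection record and the run_detector callable are assumptions for illustration.

```python
from dataclasses import dataclass
from typing import Callable, List, Tuple

@dataclass
class Detection:
    label: str                      # e.g. "worker" or "vehicle" (assumed class names)
    center: Tuple[float, float]     # bounding-box center (x, y) in image coordinates
    size: Tuple[float, float]       # bounding-box size (w, h)

def detect_objects(ground_image, run_detector: Callable) -> List[Detection]:
    """run_detector is any trained object recognition model that returns
    (label, x, y, w, h) tuples for one image; it is supplied by the caller."""
    return [Detection(label, (x, y), (w, h)) for label, x, y, w, h in run_detector(ground_image)]
```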
In a possible implementation, the object recognition model is trained in advance by the local device. Its training can follow the training procedure of the region recognition model described in S102 and is not repeated here.
The target objects and position information obtained here can also be added to the network's training set to further optimize the model, so that the optimized model adapts better to the current engineering site, improving its robustness to interference and its accuracy on that site.
S104: Calculate a movement risk index between the target objects according to the target objects and their position information.
From the motion information of dynamic entities obtained from the target ground images, such as the target objects and their positions, the factors affecting the safety of a target object and the corresponding level of danger can be analyzed, so that reliable warnings can be provided to the target object. The device calculates the movement risk index between target objects from the objects and their position information; then, if a target collision event is a real collision event, the movement risk index between the target worker and the target vehicle is calculated according to a preset movement risk index calculation rule.
A larger movement risk index means a higher danger and a collision that may be imminent; a smaller value means a lower danger and a low or non-existent probability of collision.
Specifically, the target objects may include a target worker and a target vehicle. The position information of the target worker includes a first center coordinate (x, y) and a first size (w, h) of a first bounding box; the position information of the target vehicle includes a second center coordinate (x', y') and a second size (w', h') of a second bounding box.
The device calculates the estimated collision time of a target collision event between the target worker and the target vehicle from the first center coordinate, the first size, the second center coordinate and the second size. Because the position of any target object at any moment is available, the relative motion and the estimated collision time between any two target objects can be computed.
Because a single estimate of the collision time may contain a large error and is not reliable enough, this embodiment computes several initial collision times and derives a more accurate estimated collision time from them. Specifically, the target ground image includes multiple groups of image frames captured by the same image acquisition device. For each frame group, the initial collision time of the target collision event between the target worker and the target vehicle is calculated from the first center coordinate, the first size, the second center coordinate and the second size; the average of all the initial collision times is then taken as the estimated collision time. For example, the device may compute an initial collision time T every n frames (0 < n < 30), group every m estimates (2 < m < 20), and use their average as the estimated collision time Tcollision.
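A minimal sketch of this averaging step, assuming the per-frame-group computation of T is supplied by the caller. The class name, the default values of n and m, and the decision to reset the buffer after each group are illustrative choices within the ranges stated above.

```python
class CollisionTimeEstimator:
    """Collects one initial collision time T every n frames and averages every
    group of m estimates into the expected collision time Tcollision."""

    def __init__(self, n: int = 10, m: int = 5):
        self.n = n          # sample an initial collision time every n frames
        self.m = m          # number of estimates averaged per group
        self.samples = []

    def update(self, frame_index: int, compute_initial_time):
        """compute_initial_time() returns T for the current frame group.
        Returns Tcollision once a full group of m estimates is available,
        otherwise None."""
        if frame_index % self.n != 0:
            return None
        self.samples.append(compute_initial_time())
        if len(self.samples) < self.m:
            return None
        t_collision = sum(self.samples) / len(self.samples)
        self.samples.clear()
        return t_collision
```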
In one embodiment, to estimate the collision time more accurately, each frame group includes at least three consecutive frames when computing the initial collision time.
When calculating the initial collision time from the first center coordinate, the first size, the second center coordinate and the second size of each frame group, the worker and the construction machinery are both assumed to move with uniform acceleration. The device first calculates the first acceleration and the first speed of the target worker from the first center coordinates of the three consecutive frames and a first preset calculation rule, and then computes the second speed from the first acceleration and the first speed. Here, the first speed is the speed of the target worker in the first of the three consecutive frames, and the second speed is the speed of the target worker in the third frame.
Specifically, if the frame rate of the three consecutive frames is f, the time difference between adjacent frames is Δt = 1/f. Let the image positions of the target worker in the three consecutive frames be p1, p2, p3 and those of the target vehicle be p1', p2', p3' (the subscripts 1, 2, 3 denote the three frames); let their speeds be v1, v2, v3 and v1', v2', v3' and their accelerations a and a', respectively; and let the average widths and heights of their bounding boxes over the three frames be w, h and w', h'. The two inter-frame motions of the target worker can then be described by:
v2 = v1 + aΔt
where
p2 − p1 = v1Δt + (1/2)aΔt²,  p3 − p2 = v2Δt + (1/2)aΔt²
From these equations, the first acceleration of the target worker is:
a = (p3 − 2p2 + p1) / Δt²
the first speed of the target worker is:
v1 = (p2 − p1)/Δt − (1/2)aΔt
and the second speed of the target worker is:
v3 = v1 + 2aΔt
The device then calculates the second acceleration and the third speed of the target vehicle from the second center coordinates of the three consecutive frames and a second preset calculation rule, and computes the fourth speed from the second acceleration and the third speed. Here, the third speed is the speed of the target vehicle in the first of the three consecutive frames, and the fourth speed is its speed in the third frame. The calculation follows the same method as for the first acceleration, the first speed and the second speed above and is not repeated in full.
Applying that method, the second acceleration, the third speed and the fourth speed are:
a' = (p3' − 2p2' + p1') / Δt²,  v1' = (p2' − p1')/Δt − (1/2)a'Δt,  v3' = v1' + 2a'Δt
The device then calculates the target distance between the target worker and the target vehicle in the third of the three consecutive frames from the first size, the second size, the second speed and the fourth speed, using the corresponding distance formula.
The target distance X3 in the third frame is computed in order to judge whether the target worker and the target vehicle are at risk of colliding; let X1 be the corresponding target distance in the first frame. Before computing the initial collision time, the relative direction of motion must be determined: if X1 > X3, the two are moving toward each other, or in the same direction with a closing gap, a collision risk exists, and the estimated collision time is computed. That is, when the device determines from the target distance that there is a collision risk between the target worker and the target vehicle, it calculates the initial collision time of the target collision event from the first acceleration, the second acceleration, the first size, the second size, the second speed and the fourth speed.
Specifically, the initial collision time of the target collision event can then be calculated with the corresponding collision-time formula.
When judging whether there is a collision risk, if the target distance indicates no risk, that is, X1 < X3, the two are moving apart or in the same direction with a growing gap, there is no collision risk, and the estimated collision time of the target collision event is taken to be infinite.
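The distance and collision-time formulas themselves are not reproduced in the text above, so the sketch below is only one plausible one-dimensional formulation of the constant-acceleration estimate: positions are assumed to have been projected onto the line joining the two objects, only the box widths are used for the size correction, and the quadratic solution for T is an assumption rather than the patented formula. The acceleration and velocity expressions, however, follow directly from the uniform-acceleration equations given above.

```python
import math

def _solve_quadratic(a: float, b: float, c: float):
    """Real roots of a*t^2 + b*t + c = 0 (degenerate cases handled)."""
    if abs(a) < 1e-9:
        return [-c / b] if abs(b) > 1e-9 else []
    disc = b * b - 4.0 * a * c
    if disc < 0.0:
        return []
    r = math.sqrt(disc)
    return [(-b - r) / (2.0 * a), (-b + r) / (2.0 * a)]

def initial_collision_time(p_worker, p_vehicle, box_w, box_w_v, f):
    """p_worker, p_vehicle: centre positions of the worker and vehicle in three
    consecutive frames, projected onto the line joining the two objects;
    box_w, box_w_v: average bounding-box widths; f: frame rate."""
    dt = 1.0 / f
    # Worker kinematics under uniform acceleration, from three positions.
    a = (p_worker[2] - 2.0 * p_worker[1] + p_worker[0]) / dt ** 2
    v1 = (p_worker[1] - p_worker[0]) / dt - 0.5 * a * dt
    v3 = v1 + 2.0 * a * dt
    # Vehicle kinematics, same construction.
    a_v = (p_vehicle[2] - 2.0 * p_vehicle[1] + p_vehicle[0]) / dt ** 2
    v1_v = (p_vehicle[1] - p_vehicle[0]) / dt - 0.5 * a_v * dt
    v3_v = v1_v + 2.0 * a_v * dt
    # Centre-to-centre gap minus half the two box widths, in frames 1 and 3.
    half = 0.5 * (box_w + box_w_v)
    x1 = abs(p_vehicle[0] - p_worker[0]) - half
    x3 = abs(p_vehicle[2] - p_worker[2]) - half
    if x1 <= x3:
        return math.inf          # gap is not shrinking: no collision risk
    # Closing speed and closing acceleration along the joining line.
    sign = 1.0 if p_vehicle[2] >= p_worker[2] else -1.0
    closing_v = sign * (v3 - v3_v)
    closing_a = sign * (a - a_v)
    # Smallest positive T with 0.5*closing_a*T^2 + closing_v*T = x3.
    roots = [t for t in _solve_quadratic(0.5 * closing_a, closing_v, -x3) if t > 0.0]
    return min(roots) if roots else math.inf
```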
After the estimated collision time is obtained, a given camera view may predict that a construction machine or vehicle will collide with a worker at a certain time. However, because the camera only provides a two-dimensional view, such a prediction may be invalid: for example, if the two objects are at different distances from the camera, one far and one near, they will merely pass each other. The device therefore also needs to check whether the target collision event is a real collision event. If it is, the movement risk index between the target worker and the target vehicle is calculated according to the preset movement risk index calculation rule.
In this embodiment, several ground cameras can be installed to check whether a collision is real. FIG. 8 is a schematic diagram of the ground camera arrangement. When two ground cameras are placed axisymmetrically with their lenses facing each other in parallel, they are completely equivalent for analyzing object motion. Ground cameras on the engineering site should therefore avoid an axisymmetric arrangement, and when only a few cameras are used, an odd number should be chosen to avoid wasting hardware resources.
Specifically, a first preset warning time threshold and a second preset warning time threshold are configured in the device. If the estimated collision time is smaller than the first threshold, the device obtains the backup collision time corresponding to the estimated collision time, i.e., the collision time derived from the target ground images captured by the other cameras. If the backup collision time is larger than the second threshold but smaller than the first threshold, the movement risk index between the target worker and the target vehicle is calculated from a preset coefficient and the estimated collision time; in this case the collision event is not yet urgent, so the movement risk index can be computed as:
αj = c1·exp(−c2·Tcollision) + c3
where c1, c2 and c3 are constants and Tcollision is the estimated collision time.
If the backup collision time is smaller than the second preset warning time threshold, a real collision between the target worker and the target vehicle is imminent and the situation is urgent; the movement risk index is not computed further, and a collision warning is issued to the target worker directly, further raising the safety level.
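A compact sketch of this two-threshold logic, assuming t_warn_1 > t_warn_2 and treating the branches the description leaves open (estimated time above the first threshold, or backup time above it) as "no imminent danger". The constants c1, c2, c3 and the returned placeholder values are illustrative, not values from the patent.

```python
import math

def movement_risk_index(t_collision, t_backup, t_warn_1, t_warn_2,
                        c1=100.0, c2=0.5, c3=0.0):
    """Returns (alpha, urgent). urgent=True means a collision warning should be
    issued directly without computing alpha."""
    if t_collision >= t_warn_1:
        return 0.0, False            # not imminent in this camera's view
    if t_backup < t_warn_2:
        return math.inf, True        # real collision imminent: warn directly
    if t_backup < t_warn_1:
        return c1 * math.exp(-c2 * t_collision) + c3, False
    return 0.0, False                # other views do not confirm a real collision
```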
S105: Determine the target area to which each target object belongs according to the region type of each pixel, and determine a regional risk index according to the target area.
The device determines the target area of each target object from the region type of each pixel. During normal work, construction machinery tends to appear with high frequency in a particular area, namely its working area. Analysis of construction site accidents shows that the vicinity of operating machinery, site roads, parking lot exits, and the edges of deep pits or depressions are accident-prone areas; workers appearing in these areas usually face a higher safety risk. The device therefore assigns a higher regional risk index when a target object appears in a higher-risk area: for example, the index is large inside a machinery working area or on a road, and small in an office or idle area.
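An illustrative lookup of the regional risk index β, assuming the bird's-eye-view segmentation from S102 is available as a per-pixel map of region names and that the target object's position has already been mapped into that view. The region names and numeric values are examples only; the patent does not fix them.

```python
# Example region types and hazard values; both are illustrative.
REGION_RISK = {
    "machinery_work_area": 80,
    "site_road": 70,
    "pit_edge": 90,
    "parking_exit": 75,
    "office_area": 10,
    "idle_area": 5,
}

def regional_risk_index(region_map, col, row):
    """region_map[row][col] holds the region type of each bird's-eye-view pixel,
    as produced by the region-recognition model in S102; (col, row) is the
    object's position mapped into that view."""
    return REGION_RISK.get(region_map[int(row)][int(col)], 0)
```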
S106: Calculate the risk coefficient of the target object according to the movement risk index and the regional risk index.
Having obtained the movement risk index α and the regional risk index β, the device calculates the risk coefficient of the target object by setting reasonable weight coefficients kα and kβ:
Γj = kα × αj + kβ × βw,h
where Γj is the risk coefficient of target object j; αj is its movement risk index, indicating the collision risk it currently faces (the larger the value, the higher the danger and the more imminent the collision; the smaller the value, the lower the danger and the less likely a collision); βw,h is the regional risk index faced by target object j at image position (w, h), which is large inside a machinery working area or on a road and small in an office or idle area; and kα and kβ are the weight coefficients of the movement risk index and the regional risk index, respectively.
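The weighted combination itself is a one-liner; the weight values below are placeholders, since the description only requires that reasonable weights kα and kβ be set.

```python
def danger_coefficient(alpha_j, beta_wh, k_alpha=0.6, k_beta=0.4):
    """Gamma_j = k_alpha * alpha_j + k_beta * beta_{w,h}."""
    return k_alpha * alpha_j + k_beta * beta_wh
```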
S107: Perform safety monitoring on the engineering site according to the risk coefficient.
The device monitors site safety according to the risk coefficient, i.e., it outputs a warning signal whose level corresponds to the computed coefficient. Different ranges of the risk coefficient can be preset to map to different warning levels: for example, a coefficient of 0-30 corresponds to low risk, 30-60 to medium risk, 60-80 to high risk, and 80-100 to fatal risk. Safety monitoring of the engineering site is thus realized through warning signals of different severities.
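A sketch of the mapping from risk coefficient to warning level using the example bands above; assigning each boundary value to the lower band is an arbitrary choice.

```python
def warning_level(gamma):
    """Map the danger coefficient to the example warning bands given above."""
    if gamma < 30:
        return "low risk"
    if gamma < 60:
        return "medium risk"
    if gamma < 80:
        return "high risk"
    return "fatal risk"
```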
In the embodiments of this application, the target bird's-eye-view image and the target ground image of the engineering site are acquired; the bird's-eye-view image is recognized to obtain the region type of each of its pixels; the ground image is recognized to obtain the target objects and their position information; the movement risk index between target objects is calculated from the objects and their positions; the target area of each object is determined from the per-pixel region types and a regional risk index is determined from that area; the risk coefficient of the target object is calculated from the movement risk index and the regional risk index; and the engineering site is monitored for safety according to the risk coefficient. Starting from the dynamic operation of the whole site, this scheme comprehensively acquires multi-dimensional site information, combines the movement risk index and the regional risk index into a risk coefficient, and uses that coefficient to give more comprehensive safety warnings and more comprehensive monitoring of site safety.
It should be understood that the numbering of the steps in the above embodiments does not imply an execution order; the execution order of each process is determined by its function and internal logic and does not limit the implementation of the embodiments of this application.
Referring to FIG. 9, FIG. 9 is a schematic diagram of a safety monitoring apparatus for an engineering site provided by the second embodiment of this application. The included units are used to execute the steps of the embodiment corresponding to FIG. 1; for details, refer to the related description of that embodiment. For ease of explanation, only the parts relevant to this embodiment are shown. Referring to FIG. 9, the safety monitoring apparatus 9 for an engineering site includes:
a first acquisition unit 910, configured to acquire a target bird's-eye-view image and a target ground image of the engineering site;
a first recognition unit 920, configured to recognize the target bird's-eye-view image to obtain the region type to which each pixel of the image belongs;
a second recognition unit 930, configured to recognize the target ground image to obtain the target objects in the image and their position information, the number of target objects being at least two;
a first calculation unit 940, configured to calculate the movement risk index between the target objects according to the target objects and their position information;
a first processing unit 950, configured to determine the target area to which each target object belongs according to the region type of each pixel, and to determine a regional risk index according to the target area;
a second calculation unit 960, configured to calculate the risk coefficient of the target object according to the movement risk index and the regional risk index;
a second processing unit 970, configured to perform safety monitoring on the engineering site according to the risk coefficient.
Further, the target objects include a target worker and a target vehicle; the position information of the target worker includes a first center coordinate and a first size of a first bounding box; the position information of the target vehicle includes a second center coordinate and a second size of a second bounding box;
the first calculation unit 940 is specifically configured to:
calculate the estimated collision time of a target collision event between the target worker and the target vehicle according to the first center coordinate, the first size, the second center coordinate and the second size;
if the target collision event is a real collision event, calculate the movement risk index between the target worker and the target vehicle according to a preset movement risk index calculation rule.
Further, the target ground image includes multiple groups of image frames captured by the same image acquisition device;
the first calculation unit 940 is specifically configured to:
calculate the initial collision time of the target collision event between the target worker and the target vehicle according to the first center coordinate, the first size, the second center coordinate and the second size of each frame group;
calculate the average of all the initial collision times to obtain the estimated collision time of the target collision event between the target worker and the target vehicle.
Further, each frame group includes at least three consecutive frames;
the first calculation unit 940 is specifically configured to:
calculate the first acceleration and the first speed of the target worker according to the first center coordinates of the three consecutive frames and a first preset calculation rule, and calculate the second speed according to the first acceleration and the first speed, where the first speed is the speed of the target worker in the first of the three consecutive frames and the second speed is its speed in the third frame;
calculate the second acceleration and the third speed of the target vehicle according to the second center coordinates of the three consecutive frames and a second preset calculation rule, and calculate the fourth speed according to the second acceleration and the third speed, where the third speed is the speed of the target vehicle in the first of the three consecutive frames and the fourth speed is its speed in the third frame;
calculate the target distance between the target worker and the target vehicle in the third of the three consecutive frames according to the first size, the second size, the second speed and the fourth speed;
if it is determined from the target distance that there is a collision risk between the target worker and the target vehicle, calculate the initial collision time of the target collision event according to the first acceleration, the second acceleration, the first size, the second size, the second speed and the fourth speed.
Further, the first calculation unit 940 is also specifically configured to:
if it is determined from the target distance that there is no collision risk between the target worker and the target vehicle, set the estimated collision time of the target collision event to infinity.
Further, the first calculation unit 940 is specifically configured to:
if the estimated collision time is smaller than a first preset warning time threshold, obtain the backup collision time corresponding to the estimated collision time;
if the backup collision time is larger than a second preset warning time threshold and smaller than the first preset warning time threshold, calculate the movement risk index between the target worker and the target vehicle according to a preset coefficient and the estimated collision time.
Further, the first recognition unit 920 is specifically configured to:
input the target bird's-eye-view image into a trained region recognition model for recognition to obtain the region type to which each pixel of the image belongs.
Further, the first recognition unit 920 is also specifically configured to:
obtain a sample training set, where the sample training set includes sample bird's-eye-view images and the sample region type to which each of their pixels belongs;
train an initial recognition model with the sample training set to obtain the trained region recognition model.
Further, the first recognition unit 920 is also specifically configured to:
obtain an initial bird's-eye-view image and process it according to a preset image processing strategy to obtain a sample bird's-eye-view image, where the image processing strategy includes one or more of a brightness adjustment strategy, a hue adjustment strategy, a saturation adjustment strategy, a contrast adjustment strategy, a noise adjustment strategy, an edge enhancement strategy, an image mirroring strategy, an image scaling strategy, an image removal strategy and an image mixing strategy;
obtain the sample region type to which each pixel of the sample bird's-eye-view image belongs, and determine the sample training set according to the sample bird's-eye-view images and the sample region types of their pixels.
FIG. 10 is a schematic diagram of a safety monitoring device for an engineering site provided by the third embodiment of this application. As shown in FIG. 10, the safety monitoring device 10 of this embodiment includes a processor 100, a memory 101, and a computer program 102 stored in the memory 101 and executable on the processor 100, for example a site safety monitoring program. When the processor 100 executes the computer program 102, it implements the steps of the above safety monitoring method embodiments, for example steps 101 to 107 shown in FIG. 1; alternatively, it implements the functions of the modules/units of the above apparatus embodiments, for example modules 910 to 970 shown in FIG. 9.
Exemplarily, the computer program 102 may be divided into one or more modules/units, which are stored in the memory 101 and executed by the processor 100 to complete this application. The one or more modules/units may be a series of computer program instruction segments capable of completing specific functions, used to describe the execution of the computer program 102 in the safety monitoring device 10. For example, the computer program 102 may be divided into a first acquisition unit, a first recognition unit, a second recognition unit, a first calculation unit, a first processing unit, a second calculation unit and a second processing unit, whose specific functions are as follows:
a first acquisition unit, configured to acquire a target bird's-eye-view image and a target ground image of the engineering site;
a first recognition unit, configured to recognize the target bird's-eye-view image to obtain the region type to which each pixel of the image belongs;
a second recognition unit, configured to recognize the target ground image to obtain the target objects in the image and their position information, the number of target objects being at least two;
a first calculation unit, configured to calculate the movement risk index between the target objects according to the target objects and their position information;
a first processing unit, configured to determine the target area to which each target object belongs according to the region type of each pixel, and to determine a regional risk index according to the target area;
a second calculation unit, configured to calculate the risk coefficient of the target object according to the movement risk index and the regional risk index;
a second processing unit, configured to perform safety monitoring on the engineering site according to the risk coefficient.
The safety monitoring device for the engineering site may include, but is not limited to, the processor 100 and the memory 101. Those skilled in the art will understand that FIG. 10 is only an example of the safety monitoring device 10 and does not limit it; the device may include more or fewer components than shown, combine certain components, or use different components. For example, it may also include input/output devices, network access devices, a bus, and so on.
The processor 100 may be a central processing unit (CPU), another general-purpose processor, a digital signal processor (DSP), an application-specific integrated circuit (ASIC), a field-programmable gate array (FPGA) or other programmable logic device, a discrete gate or transistor logic device, a discrete hardware component, etc. A general-purpose processor may be a microprocessor or any conventional processor.
The memory 101 may be an internal storage unit of the safety monitoring device 10, such as its hard disk or internal memory. It may also be an external storage device of the safety monitoring device 10, such as a plug-in hard disk, a smart media card (SMC), a secure digital (SD) card, or a flash card equipped on the device. Further, the safety monitoring device 10 may include both an internal storage unit and an external storage device. The memory 101 is used to store the computer program and the other programs and data required by the device, and may also temporarily store data that has been output or is to be output.
It should be noted that, because the information exchange and execution processes between the above apparatuses/units are based on the same concept as the method embodiments of this application, their specific functions and technical effects can be found in the method embodiments and are not repeated here.
An embodiment of this application also provides a network device, which includes at least one processor, a memory, and a computer program stored in the memory and executable on the at least one processor; when the processor executes the computer program, the steps of any of the above method embodiments are implemented.
An embodiment of this application also provides a computer-readable storage medium storing a computer program; when the computer program is executed by a processor, the steps of the above method embodiments are implemented.
An embodiment of this application provides a computer program product; when the computer program product runs on a mobile terminal, the mobile terminal implements the steps of the above method embodiments.
If the integrated unit is implemented in the form of a software functional unit and sold or used as an independent product, it may be stored in a computer-readable storage medium. Based on this understanding, all or part of the processes of the above method embodiments may be implemented by a computer program instructing the relevant hardware; the computer program may be stored in a computer-readable storage medium and, when executed by a processor, implements the steps of the above method embodiments. The computer program includes computer program code, which may be in source code form, object code form, an executable file, or some intermediate form. The computer-readable medium may include at least: any entity or device capable of carrying the computer program code to the photographing apparatus/terminal device, a recording medium, a computer memory, a read-only memory (ROM), a random access memory (RAM), an electrical carrier signal, a telecommunication signal, and a software distribution medium, for example a USB flash drive, a removable hard disk, a magnetic disk or an optical disc. In some jurisdictions, according to legislation and patent practice, computer-readable media may not include electrical carrier signals and telecommunication signals.
In the above embodiments, each embodiment is described with its own emphasis; for parts not detailed in one embodiment, refer to the related descriptions of the other embodiments.
Those of ordinary skill in the art will appreciate that the units and algorithm steps of the examples described in connection with the embodiments disclosed herein can be implemented in electronic hardware or in a combination of computer software and electronic hardware. Whether these functions are performed in hardware or software depends on the specific application and the design constraints of the technical solution. Skilled artisans may use different methods to implement the described functions for each particular application, but such implementations should not be considered beyond the scope of this application.
In the embodiments provided by this application, it should be understood that the disclosed apparatus/network device and method may be implemented in other ways. For example, the apparatus/network device embodiments described above are only illustrative: the division into modules or units is only a logical functional division, and other divisions are possible in actual implementation; multiple units or components may be combined or integrated into another system, and some features may be omitted or not executed. In addition, the mutual couplings, direct couplings or communication connections shown or discussed may be indirect couplings or communication connections through interfaces, apparatuses or units, and may be electrical, mechanical or in other forms.
The units described as separate components may or may not be physically separate, and components shown as units may or may not be physical units; they may be located in one place or distributed over multiple network units. Some or all of the units may be selected according to actual needs to achieve the purpose of the solution of this embodiment.
The above embodiments are only intended to illustrate the technical solutions of this application, not to limit them. Although this application has been described in detail with reference to the foregoing embodiments, those of ordinary skill in the art should understand that they may still modify the technical solutions described in the foregoing embodiments or make equivalent replacements of some of their technical features; such modifications or replacements do not depart the essence of the corresponding technical solutions from the spirit and scope of the technical solutions of the embodiments of this application, and shall all fall within the protection scope of this application.