CN117173639B - Behavior analysis and safety early warning method and system based on multi-source equipment - Google Patents
Abstract
The invention discloses a behavior analysis and safety early warning method and system based on multi-source devices. Complex scenes in which a person approaches the equipment to be detected are detected from the monitoring image, sound wave image and infrared image acquired by the multi-source devices. The overlapping part is judged from the human body contour and the equipment contour, so the overlapping region can be obtained even when the human body is occluded. The proportion of the different image types is adjusted according to the detection conditions, so that limbs can be extracted more accurately. Meanwhile, because the coincident limb part is occluded, the category of the coincident limb is inferred from the category of the contact limb via the connection relations of the human limbs. Judgment is then made from the movement of the limbs, so that persons who merely enter the area by mistake are not flagged. In this way, the persons actually performing a dangerous operation are identified accurately, and a reminder is issued.
Description
Technical Field
The invention relates to the technical field of computers, in particular to a behavior analysis and safety early warning method and system based on multi-source equipment.
Background
Currently, in factories, mountainous areas, construction sites and similar places, there is dangerous equipment which, if operated improperly, can cause personal injury and property damage, so operation by unauthorized personnel is prohibited. If monitoring relies on manpower alone, lapses are unavoidable and human and material resources are wasted. When monitoring is performed with camera equipment alone, however, the monitoring becomes inaccurate in complex situations, for example when a person enters the area by mistake or is merely attending to other business nearby.
Disclosure of Invention
The invention aims to provide a behavior analysis and safety early warning method and system based on multi-source equipment, which are used for solving the problems in the prior art.
In a first aspect, an embodiment of the present invention provides a behavior analysis and security early warning method based on a multi-source device, including:
acquiring a monitoring image, a sound wave image and an infrared image; the monitoring image, the sound wave image and the infrared image are three images captured by multi-source devices aimed at the equipment to be detected;
acquiring an acoustic wave feature map, a monitoring feature map, an infrared feature map, a human fusion feature map and a human contour based on the monitoring image, the acoustic wave image and the infrared image;
extracting the characteristics of equipment to be detected according to the monitoring image to obtain equipment contours;
judging whether the human body is overlapped with the equipment according to the human body outline and the equipment outline to obtain an overlapped frame; the overlapping frame is rectangular and comprises an overlapping part of a human body and equipment;
if the human body is overlapped with the equipment, according to the sound wave image, the sound wave characteristic diagram, the monitoring characteristic diagram, the infrared characteristic diagram, the human body outline and the overlapped frame, carrying out limb segmentation on the human body outline to obtain overlapped limbs and contact limbs; obtaining a plurality of coincident limbs and a plurality of contact limbs correspondingly at a plurality of time points; the coincident limb is an operable limb or a fixed limb; the operable limb is a limb capable of operating the device; the immobilized limb is a limb that cannot operate the device;
If the coincident limb is an operable limb, judging whether the human body is in a static state or a motion state according to the coincident limb at a plurality of time points and the contact limb at a plurality of time points;
and if the human body is in a motion state, carrying out illegal operation reminding.
Optionally, the obtaining the acoustic wave feature map, the monitoring feature map, the infrared feature map, the human fusion feature map and the human contour based on the monitoring image, the acoustic wave image and the infrared image includes:
convolving the monitoring image through a monitoring convolution network, extracting image characteristics, and obtaining a monitoring characteristic diagram;
convoluting the sound wave image through a sound wave convolution network, extracting sound wave imaging characteristics, and obtaining a sound wave characteristic diagram;
convoluting the infrared image through an infrared convolution network, extracting infrared imaging characteristics, and obtaining an infrared characteristic diagram;
fusing the camera imaging information in the monitoring feature map at the same position with the distance information of the sound wave reflection in the sound wave feature map to obtain a monitoring sound wave feature map;
fusing the camera imaging information in the monitoring feature map and the light reflection distance information in the infrared feature map at the same position to obtain a monitoring infrared feature map;
fusing the monitoring sound wave feature map and the monitoring infrared feature map, whereby the edge shape features of the monitoring feature map are acquired twice, to obtain a human fusion feature map;
and detecting according to the human fusion feature map to obtain the human outline.
Optionally, the step of judging whether the human body is coincident with the equipment according to the human body outline and the equipment outline to obtain a coincident frame includes:
calculating coordinate distances according to the human body contour and the equipment contour to obtain a contour coordinate pair; the contour coordinate pair is the pair consisting of one coordinate on the human body contour and one coordinate on the equipment contour whose mutual distance is smaller than that of any other such pair;
connecting two coordinates in the contour coordinate pair to obtain a connecting line and a contour distance; the contour distance is the distance of the connecting line;
dividing the contour distance by 2 to obtain an extension distance;
according to the extension distance, extending the two ends of the connecting wire to obtain an extension line;
obtaining a coincidence judgment length; the coincidence judgment length is the length used to construct the two vertical lines of the coincident frame;
respectively taking two end points of the extension line as central points to construct two vertical lines with coincident judging lengths;
and judging whether the human body is overlapped with the equipment according to the two vertical lines to obtain an overlapped frame.
Optionally, the determining whether the human body coincides with the device according to the two vertical lines to obtain a coinciding frame includes:
constructing a closed overlapping frame according to the two vertical lines; the two vertical lines comprise a first vertical line and a second vertical line;
respectively acquiring intersection points of the human body contour and the equipment contour in the coincident frame and two vertical lines to obtain a first vertical intersection point pair and a second vertical intersection point pair; the first vertical intersection point pair is two intersection points on a first vertical line; the second vertical intersection point pair is two intersection points on a second vertical line;
respectively making vertical lines according to the first vertical intersection point pairs to obtain a first region; respectively making vertical lines according to the second vertical intersection point pairs to obtain a second region;
taking the area where the first area and the second area intersect as a superposition area;
taking the area of the overlapping area as an overlapping value;
if the coincidence value is smaller than the coincidence threshold value, setting the human body and the equipment to be non-coincident;
if the coincidence value is greater than or equal to the coincidence threshold, the human body and the equipment are set to coincide.
Optionally, if the human body coincides with the device, the body segmentation of the human body contour is performed according to the acoustic image, the acoustic feature map, the monitoring feature map, the infrared feature map, the human body contour and the coinciding frame to obtain the coinciding body and the contact body, including:
Extracting an area inside the human body outline in the sound wave image to obtain a human body outline image;
the human body contour image is convolved and then fused with the sound wave feature image, and the structural features in the human body are extracted to obtain a human body contour sound wave feature image;
judging the skeleton structure inside the human body contour according to the human body contour acoustic wave feature map, the monitoring feature map and the infrared feature map to obtain a plurality of segmented human body areas and a plurality of segmented limb categories; the split limb category is the category of the detected human body component; one segmented body region corresponds to one segmented limb category;
and obtaining coincident limbs and contact limbs according to the plurality of segmented body regions and the plurality of segmented limb categories.
Optionally, the obtaining the coincident limb and the contact limb according to the multiple segmented body regions and the multiple segmented limb categories includes:
obtaining a labeling segmentation limb category set and a connection relation; the labeling segmentation limb category set comprises all categories of limbs forming a human body; the connection relation represents the connection condition between limbs of a human body;
according to the plurality of segmented human body regions, the segmented human body regions connected to the coincident frame are set as contact limbs;
Taking elements of the labeling segmented limb category set except the plurality of segmented limb categories as undetected segmented limb categories to obtain one or more undetected segmented limb categories;
according to the one or more undetected segmented limb categories, the undetected segmented limb category that is connected to the contact limb and associated with the coincident frame is taken as the coincident limb; the connection relation between the coincident limb and the contact limb is one-to-one.
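The lookup above can be sketched as a simple set difference plus an adjacency check. This is a hedged illustration: the category names, the adjacency table and the function name are assumptions for demonstration, not the patent's data.

```python
# Hedged sketch: inferring the occluded (coincident) limb category from the
# detected contact limb and a fixed limb-connection relation.

# Full set of labeled limb categories (mirroring the embodiment's example list).
ALL_LIMBS = {"head", "neck", "torso", "left_arm", "left_hand",
             "right_arm", "right_hand", "left_leg", "left_foot",
             "right_leg", "right_foot"}

# Connection relation: which limbs are directly joined on the human body
# (an illustrative assumption, not the patent's exact table).
CONNECTIONS = {
    "left_hand": {"left_arm"}, "left_arm": {"left_hand", "torso"},
    "right_hand": {"right_arm"}, "right_arm": {"right_hand", "torso"},
    "left_foot": {"left_leg"}, "left_leg": {"left_foot", "torso"},
    "right_foot": {"right_leg"}, "right_leg": {"right_foot", "torso"},
    "head": {"neck"}, "neck": {"head", "torso"},
    "torso": {"neck", "left_arm", "right_arm", "left_leg", "right_leg"},
}

def find_coincident_limb(detected, contact_limb):
    """Return the undetected category connected one-to-one to the contact limb."""
    undetected = ALL_LIMBS - set(detected)          # categories hidden by occlusion
    candidates = undetected & CONNECTIONS[contact_limb]
    # The method assumes a one-to-one relation between coincident and contact limb.
    return candidates.pop() if len(candidates) == 1 else None
```

For example, if everything but the left hand is detected and the contact limb is the left arm, the coincident limb resolves to the left hand.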
Optionally, the determining the skeleton structure inside the human body contour according to the human body contour acoustic wave feature map, the monitoring feature map and the infrared feature map to obtain a plurality of segmented human body regions and a plurality of segmented limb categories includes:
fusing the monitoring feature map and the human body contour acoustic wave feature map to obtain a human body contour monitoring acoustic wave feature map;
the infrared characteristic diagram and the human body contour acoustic wave characteristic diagram are fused and then are convolved, so that the human body contour infrared acoustic wave characteristic diagram is obtained;
fusing the human body contour monitoring acoustic wave feature map and the human body contour infrared acoustic wave feature map twice, and adding the features of the human body composition structure to obtain a human body contour fusion feature map;
and judging the skeleton structure inside the human body contour according to the human body contour fusion feature diagram to obtain a plurality of segmented human body areas and a plurality of segmented limb categories.
Optionally, if the coincident limb is an operable limb, the determining that the human body is in a stationary state or a moving state according to the coincident limb at a plurality of time points and the contact limb at a plurality of time points includes:
inputting the contact limbs at a plurality of time points into a time convolution network to obtain a contact limb similarity value;
if the similarity value of the contact limb is larger than the contact limb threshold value, setting the contact limb to be in a static state; if the similarity value of the contact limb is smaller than or equal to the contact limb threshold value, setting the contact limb to be in a motion state;
judging whether the coincident limb is in a static state or a moving state based on the coincident limb at a plurality of time points;
if the coincident limb is in a motion state or the contact limb is in a motion state, setting the human body to be in a motion state;
if the coincident limb is in a static state and the contact limb is in a static state, the human body is set to be in a static state.
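The stationary/moving decision described above reduces to two small rules. A minimal sketch, assuming the similarity scores have already been produced (by the temporal convolution network in the method) and using illustrative state labels:

```python
# Hedged sketch of the stationary/moving decision logic; thresholds and
# state labels are illustrative assumptions.

def contact_limb_state(similarity, threshold):
    """The contact limb is static when its frames stay similar over time."""
    return "static" if similarity > threshold else "moving"

def human_state(coincident_state, contact_state):
    """The human body is static only when both limbs are static."""
    if coincident_state == "moving" or contact_state == "moving":
        return "moving"
    return "static"
```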
Optionally, determining that the coincident limb is in a stationary state or a moving state based on the coincident limb at the plurality of time points includes:
judging whether the types of undetected segmented limbs in the coincident limbs at a plurality of time points are the same or not;
if the types of undetected segmented limbs in the coincident limbs at the multiple time points are the same, judging the similarity of the contours in the coincident frames at the multiple time points in pairs to obtain multiple coincident limb similarity values;
If all the coincident limb similarity values are smaller than the coincidence threshold, the coincident limb is set to the static state; if any coincident limb similarity value is greater than or equal to the coincidence threshold, the coincident limb is set to the motion state.
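A hedged sketch of this pairwise judgment, following the text's convention that the limb is static when all pairwise values stay below the coincidence threshold (the values therefore appear to measure change between time points); the comparison function is a placeholder assumption:

```python
from itertools import combinations

# Hedged sketch of the coincident-limb state decision across time points.

def pairwise_values(contours, compare):
    """Compare every pair of per-time-point contours inside the coincident frame."""
    return [compare(a, b) for a, b in combinations(contours, 2)]

def coincident_limb_state(values, threshold):
    """Static only if ALL pairwise values fall below the coincidence threshold."""
    return "static" if all(v < threshold for v in values) else "moving"
```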
In a second aspect, an embodiment of the present invention provides a behavior analysis and security early warning system based on a multi-source device, including:
the acquisition module is used for: acquiring a monitoring image, a sound wave image and an infrared image; the monitoring image, the sound wave image and the infrared image are three images captured by multi-source devices aimed at the equipment to be detected;
human body contour and feature extraction module: acquiring an acoustic wave feature map, a monitoring feature map, an infrared feature map, a human fusion feature map and a human contour based on the monitoring image, the acoustic wave image and the infrared image;
the equipment profile extraction module: extracting the characteristics of equipment to be detected according to the monitoring image to obtain equipment contours;
and (3) overlapping frame modules: judging whether the human body is overlapped with the equipment according to the human body outline and the equipment outline to obtain an overlapped frame; the overlapping frame is rectangular and comprises an overlapping part of a human body and equipment;
coincident limb and contact limb extraction module: if the human body is overlapped with the equipment, according to the sound wave image, the sound wave characteristic diagram, the monitoring characteristic diagram, the infrared characteristic diagram, the human body outline and the overlapped frame, carrying out limb segmentation on the human body outline to obtain overlapped limbs and contact limbs; obtaining a plurality of coincident limbs and a plurality of contact limbs correspondingly at a plurality of time points; the coincident limb is an operable limb or a fixed limb; the operable limb is a limb capable of operating the device; the immobilized limb is a limb that cannot operate the device;
The motion state judging module is used for: if the coincident limb is an operable limb, judging whether the human body is in a static state or a motion state according to the coincident limb at a plurality of time points and the contact limb at a plurality of time points;
a reminding module: and if the human body is in a motion state, carrying out illegal operation reminding.
Compared with the prior art, the embodiment of the invention achieves the following beneficial effects:
The embodiment of the invention further provides a behavior analysis and safety early warning method and system based on multi-source devices, wherein the method comprises the following steps: acquiring a monitoring image, a sound wave image and an infrared image, the three images being captured by multi-source devices aimed at the equipment to be detected; obtaining an acoustic wave feature map, a monitoring feature map, an infrared feature map, a human fusion feature map and a human body contour based on the monitoring image, the sound wave image and the infrared image; extracting features of the equipment to be detected from the monitoring image to obtain an equipment contour; judging whether the human body coincides with the equipment according to the human body contour and the equipment contour to obtain a coincident frame, the coincident frame being rectangular and containing the overlapping part of the human body and the equipment; if the human body coincides with the equipment, performing limb segmentation on the human body contour according to the sound wave image, the acoustic wave feature map, the monitoring feature map, the infrared feature map, the human body contour and the coincident frame, to obtain a coincident limb and a contact limb, a plurality of time points correspondingly yielding a plurality of coincident limbs and contact limbs; the coincident limb is either an operable limb, i.e. a limb capable of operating the equipment, or a fixed limb, i.e. a limb that cannot operate the equipment; if the coincident limb is an operable limb, judging whether the human body is in a static state or a motion state according to the coincident limbs at the plurality of time points and the contact limbs at the plurality of time points.
And if the human body is in a motion state, carrying out illegal operation reminding.
Complex scenes in which a person approaches the equipment to be detected are analyzed from the monitoring image, the sound wave image and the infrared image acquired by the multi-source devices. The overlapping part is judged from the human body contour and the equipment contour, so the overlapping region can be obtained even when the human body is occluded, and the extracted human contour information can be fused into later judgments. The proportion of the different image types is increased according to the detection conditions; for example, during limb segmentation the weight of features from the sound wave image is increased, so limbs can be extracted more accurately. Meanwhile, because the coincident limb part is occluded, the contact limb connected to it is found from the connection relations of human limbs, and the category of the coincident limb is then inferred from the category of the contact limb. Moreover, owing to the characteristics of human joints, some limbs cannot operate the equipment, so this must be judged. If the limb is operable, its motion is judged, so that persons who merely enter the area by mistake are not flagged. In this way, the persons actually performing a dangerous operation are identified accurately, and a reminder is issued.
Drawings
Fig. 1 is a flowchart of a behavior analysis and security early warning method based on a multi-source device according to an embodiment of the present invention.
Detailed Description
The present invention will be described in detail with reference to the accompanying drawings.
Example 1
As shown in fig. 1, an embodiment of the present invention provides a behavior analysis and security early warning method based on a multi-source device, where the method includes:
s101: and acquiring a monitoring image, a sound wave image and an infrared image. The monitoring image, the sound wave image and the infrared image are three images shot by multi-source equipment which is required to be detected in the direction of the equipment.
The monitoring image, the sound wave image and the infrared image are images with the same size and shot at the same time point.
S102: and obtaining an acoustic wave characteristic image, a monitoring characteristic image, an infrared characteristic image, a human fusion characteristic image and a human contour based on the monitoring image, the acoustic wave image and the infrared image.
The human body contour is obtained by fusing and segmenting the monitoring feature map, the sound wave feature map and the infrared feature map. The human body contour is represented by a plurality of coordinate points, the coordinate points lying in a two-dimensional coordinate system whose origin is the lower left corner of the monitoring image, the sound wave image or the infrared image.
S103: and extracting the characteristics of the equipment to be detected according to the monitoring image to obtain the equipment outline.
Wherein the device profile is a profile of a structure in the device that can be operated. Such as the profile of the pressing portion of the switch.
S104: and judging whether the human body is overlapped with the equipment according to the human body outline and the equipment outline to obtain an overlapped frame. The overlapping frame is rectangular and comprises an overlapping part of a human body and equipment.
S105: if the human body is overlapped with the equipment, the limb segmentation of the human body outline is carried out according to the sound wave image, the sound wave characteristic diagram, the monitoring characteristic diagram, the infrared characteristic diagram, the human body outline and the overlapped frame, so as to obtain overlapped limbs and contact limbs. Multiple time points correspond to multiple coincident limbs and multiple contact limbs. The coincident limb is an operable limb or a fixed limb. The operable limb is a limb capable of operating the device. The immobilized limb is a limb that is not operable with the device.
Wherein the operable limb is a limb of an operating device capable of independent movement.
Wherein, because the coincident limb is occluded, its limb category cannot be detected accurately, so the contact limb connected to it needs to be judged. Moreover, if the coincident limb is the left hand, it can operate the equipment, whereas if the coincident limb is the left arm, operating the equipment is considerably more difficult.
S106: if the coincident limb is an operable limb, judging whether the human body is in a static state or a motion state according to the coincident limb at a plurality of time points and the contact limb at a plurality of time points.
Wherein the motion state judgment determines whether the human body structure in contact with the equipment is moving. Since the image is two-dimensional, the human body may merely occlude the equipment without touching it, so changes across multiple time points are required to determine whether the person is dynamically operating the equipment rather than merely occluding it.
S107: and if the human body is in a motion state, carrying out illegal operation reminding.
Optionally, the obtaining the acoustic wave feature map, the monitoring feature map, the infrared feature map, the human fusion feature map and the human contour based on the monitoring image, the acoustic wave image and the infrared image includes:
and convolving the monitoring image through a monitoring convolution network, extracting image features, and obtaining a monitoring feature map.
And convolving the sound wave image through a sound wave convolution network, extracting sound wave imaging characteristics, and obtaining a sound wave characteristic diagram.
And convoluting the infrared image through an infrared convolution network, extracting infrared imaging characteristics, and obtaining an infrared characteristic diagram.
Wherein the monitoring convolutional network, the acoustic wave convolutional network and the infrared convolutional network are convolutional neural networks (Convolutional Neural Networks, CNN) with different structures.
The monitoring feature map, the sound wave feature map and the infrared feature map are the same in size.
And fusing the camera imaging information in the monitoring feature map and the distance information of the sound wave reflection in the sound wave feature map at the same position to obtain the monitoring sound wave feature map.
And fusing the camera imaging information in the monitoring feature map and the light reflection distance information in the infrared feature map at the same position to obtain the monitoring infrared feature map.
And respectively carrying out deconvolution after fusion, wherein the obtained monitoring sound wave characteristic diagram and the monitoring infrared characteristic diagram have the same size.
And fusing the monitoring sound wave feature map and the monitoring infrared feature map, whereby the edge shape features of the monitoring feature map are acquired twice, to obtain the human fusion feature map.
And detecting according to the human fusion feature map to obtain the human outline.
Wherein the convolutional neural network (Convolutional Neural Networks, CNN) is trained by a training set of labeling human contours.
By this method, features are extracted from the images formed by the multi-source devices, and the features of the monitoring image are fused with the features of the infrared image and with the features of the sound wave image respectively; the monitoring image features are thus used twice, which raises the proportion of pixel-image features and enhances detection accuracy.
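The double use of the monitoring features can be illustrated with a minimal fusion sketch. Channel concatenation is an assumption (the patent does not fix the fusion operator), and the CNN stages themselves are omitted; all array sizes are illustrative.

```python
import numpy as np

def fuse(a, b):
    """Fuse two same-sized feature maps by concatenating along the channel axis."""
    return np.concatenate([a, b], axis=-1)

h, w, c = 64, 64, 8
monitor_feat = np.random.rand(h, w, c)    # output of the monitoring CNN
acoustic_feat = np.random.rand(h, w, c)   # output of the acoustic CNN
infrared_feat = np.random.rand(h, w, c)   # output of the infrared CNN

monitor_acoustic = fuse(monitor_feat, acoustic_feat)     # camera + echo distance
monitor_infrared = fuse(monitor_feat, infrared_feat)     # camera + IR distance
human_fusion = fuse(monitor_acoustic, monitor_infrared)  # camera features appear twice
```

Because the monitoring features enter both intermediate maps, half of the final channels originate from the camera image, matching the stated emphasis on pixel-image features.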
Optionally, the step of judging whether the human body is coincident with the equipment according to the human body outline and the equipment outline to obtain a coincident frame includes:
and calculating a coordinate distance according to the human body contour and the equipment contour to obtain a contour coordinate pair. The contour coordinate pair is a coordinate pair in which the distance between the coordinates of the human body contour and the coordinates of the equipment contour is smaller than other distances.
And connecting two coordinates in the contour coordinate pair to obtain a connecting line and a contour distance. The contour distance is the distance of the connecting line. The contour distance is the distance of the connecting line.
Dividing the contour distance by 2 gives an extension distance.
And according to the extension distance, extending the two ends of the connecting wire to obtain an extension line.
And obtaining the coincidence judgment length, i.e. the length used to construct the two vertical lines of the coincident frame.
In this embodiment, the coincidence determination length is a value obtained by multiplying the reciprocal of the distance (in decimeters) from the device to be detected to the multi-source device by 1280.
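This rule transcribes directly; the constant 1280 and the decimeter unit are as stated in the embodiment:

```python
def coincidence_judgment_length(distance_dm):
    """1280 times the reciprocal of the device-to-camera distance in decimeters,
    so the judgment length shrinks as the monitored equipment is farther away."""
    return 1280.0 / distance_dm
```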
And respectively taking two end points of the extension line as central points to construct two vertical lines with coincident judging lengths.
And judging whether the human body is overlapped with the equipment according to the two vertical lines to obtain an overlapped frame.
By this method, since human body postures change constantly, it is difficult to judge coincidence directly; judging by the distance between the human body contour and the equipment contour is more convenient and accurate.
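The geometric construction above (closest contour pair, half-length extension, two perpendiculars) can be sketched as follows. Contours are taken as lists of (x, y) points, a brute-force closest-pair search is used purely for illustration, and the contours are assumed not to touch (distance greater than zero).

```python
import math

# Hedged geometric sketch of the coincident-frame construction.

def closest_pair(body, device):
    """Closest pair of points between the two contours (brute force)."""
    return min(((p, q) for p in body for q in device),
               key=lambda pq: math.dist(*pq))

def coincident_frame(body, device, judge_len):
    (x1, y1), (x2, y2) = closest_pair(body, device)
    d = math.dist((x1, y1), (x2, y2))        # contour distance (assumed > 0)
    ext = d / 2.0                            # extension distance = contour distance / 2
    ux, uy = (x2 - x1) / d, (y2 - y1) / d    # unit vector along the connecting line
    a = (x1 - ux * ext, y1 - uy * ext)       # extended endpoint on the body side
    b = (x2 + ux * ext, y2 + uy * ext)       # extended endpoint on the device side
    nx, ny = -uy, ux                         # perpendicular direction
    half = judge_len / 2.0
    # Each vertical line has the coincidence judgment length and is centred
    # on an endpoint of the extension line.
    line_a = ((a[0] - nx * half, a[1] - ny * half),
              (a[0] + nx * half, a[1] + ny * half))
    line_b = ((b[0] - nx * half, b[1] - ny * half),
              (b[0] + nx * half, b[1] + ny * half))
    return line_a, line_b                    # the two sides of the closed frame
```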
Optionally, the determining whether the human body coincides with the device according to the two vertical lines to obtain a coinciding frame includes:
and constructing a closed overlapped frame according to the two vertical lines.
And respectively acquiring the intersection points of the human body contour and the equipment contour in the coincident frame and the two vertical lines to obtain a first vertical intersection point pair and a second vertical intersection point pair. The first vertical intersection point pair is two intersection points on the first vertical line. The second vertical intersection point pair is two intersection points on the second vertical line.
And respectively making vertical lines according to the first vertical intersection point pair to obtain a first region. And respectively making vertical lines according to the second vertical intersection point pairs to obtain a second region.
And taking the area where the first area and the second area intersect as a superposition area.
The area of the overlapping region is taken as an overlapping value.
Wherein, the coincidence is judged from the coincidence value as follows:
if the coincidence value is smaller than the coincidence threshold value, the human body and the equipment are set to be non-coincident.
If the coincidence value is greater than or equal to the coincidence threshold, the human body and the equipment are set to coincide.
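A minimal sketch of this area test, assuming for illustration that the first and second regions are axis-aligned rectangles:

```python
def rect_intersection_area(r1, r2):
    """Area of intersection of two axis-aligned rectangles (x0, y0, x1, y1)."""
    w = min(r1[2], r2[2]) - max(r1[0], r2[0])
    h = min(r1[3], r2[3]) - max(r1[1], r2[1])
    return max(w, 0) * max(h, 0)

def is_coincident(region1, region2, threshold):
    """Human and equipment coincide when the overlap area reaches the threshold."""
    return rect_intersection_area(region1, region2) >= threshold
```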
Optionally, if the human body coincides with the device, the body segmentation of the human body contour is performed according to the acoustic image, the acoustic feature map, the monitoring feature map, the infrared feature map, the human body contour and the coinciding frame to obtain the coinciding body and the contact body, including:
And extracting an area inside the human body outline in the sound wave image to obtain a human body outline image.
Wherein, the area outside the human body outline in the acoustic wave image is set to be white.
And convolving the human body contour image, fusing it with the acoustic wave feature map, and extracting structural features inside the human body to obtain the human body contour acoustic wave feature map.
Wherein the coincident limb is typically a hand, a foot, or a similar part. Because these parts are occluded, they cannot be judged well from the monitoring image alone; other cues, including the acoustic wave reflections, are needed to infer the position of the underlying skeleton.
And judging the skeleton structure inside the human body contour according to the human body contour acoustic wave feature map, the monitoring feature map and the infrared feature map to obtain the segmented human body regions and the segmented limb categories. The segmented limb category is the category of the detected human body component. One segmented human body region corresponds to one segmented limb category.
A convolutional neural network (Convolutional Neural Network, CNN) is trained for this task, using the labeled segmented human body regions and their segmented limb categories as the training set.
In this embodiment, the segmented limb categories are head, neck, torso, left arm, left hand, right arm, right hand, left leg, left foot, right leg, and right foot.
And obtaining coincident limbs and contact limbs according to the plurality of segmented body regions and the plurality of segmented limb categories.
By this method, the human body is divided into different parts according to the reflection distance of the sound waves, so the occluded, coinciding part can be judged more accurately.
Optionally, the obtaining the coincident limb and the contact limb according to the multiple segmented body regions and the multiple segmented limb categories includes:
and obtaining the labeled segmented limb category set and the connection relation. The labeled segmented limb category set includes all categories of limbs that make up the human body. The connection relation represents how the limbs of the human body are connected to one another.
The segmented limb categories obtained from the human body contour acoustic wave feature map, the monitoring feature map and the infrared feature map do not necessarily cover every category in the labeled segmented limb category set. For example, the head, neck, torso, left arm, right hand, right leg and right foot may be detected, while the left hand, left leg and left foot go undetected because of occlusion.
And taking the elements of the labeling segmented limb category set except the plurality of segmented limb categories as undetected segmented limb categories to obtain one or more undetected segmented limb categories.
And according to the plurality of undetected segmented limb categories, determining the undetected segmented limb category that is connected to both the contact limb and the coincident frame to obtain the coincident limb. The connection relation between the coincident limb and the contact limb is one-to-one.
For example, if the left hand is occluded while operating the device, only the left arm is detected. Given the human body structure, the torso connected to one end of the left arm has already been detected, so the other end can only connect to the undetected left hand; the coincident limb is therefore the left hand.
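The left-hand example above can be sketched as follows. All category names and the connection table are illustrative assumptions modeled on the embodiment, not the patent's implementation.

```python
# Hypothetical sketch of inferring the occluded coincident limb from the
# undetected categories; the connection table is an assumption modeled on
# human body structure.

ALL_LIMB_CATEGORIES = {
    "head", "neck", "torso", "left arm", "left hand", "right arm",
    "right hand", "left leg", "left foot", "right leg", "right foot",
}

# Each extremity connects one-to-one to the limb it hangs from.
CONNECTED_TO = {
    "left hand": "left arm", "right hand": "right arm",
    "left foot": "left leg", "right foot": "right leg",
}

def find_coincident_limb(detected_categories, contact_limb):
    """Return the undetected category connected to the contact limb, if any."""
    undetected = ALL_LIMB_CATEGORIES - set(detected_categories)
    for category in undetected:
        if CONNECTED_TO.get(category) == contact_limb:
            return category
    return None  # nothing occluded hangs from this contact limb
```

With the left hand occluded and the left arm detected as the contact limb, the only undetected category connected to the left arm is the left hand.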
Optionally, the determining the skeleton structure inside the human body contour according to the human body contour acoustic wave feature map, the monitoring feature map and the infrared feature map to obtain a plurality of segmented human body regions and a plurality of segmented limb categories includes:
and fusing the monitoring feature map and the human body contour acoustic wave feature map to obtain the human body contour monitoring acoustic wave feature map.
And fusing and then deconvolving the infrared characteristic diagram and the human body contour acoustic wave characteristic diagram to obtain the human body contour infrared acoustic wave characteristic diagram.
And fusing the human body contour monitoring acoustic wave characteristic diagram and the human body contour infrared acoustic wave characteristic diagram twice, and adding the characteristics of the human body composition structure to obtain a human body contour fusion characteristic diagram.
And judging the skeleton structure inside the human body contour according to the human body contour fusion feature diagram to obtain a plurality of segmented human body areas and a plurality of segmented limb categories.
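The repeated pairwise fusion of aligned feature maps described above can be sketched as channel-wise concatenation followed by a 1x1 mixing step. The patent does not specify the fusion operator, so this is purely an illustrative assumption; the random weights stand in for learned parameters.

```python
import numpy as np

# Toy sketch of fusing two spatially aligned feature maps; the fusion
# operator (concatenation + 1x1 channel mix) is an assumption.

def fuse_feature_maps(feat_a: np.ndarray, feat_b: np.ndarray) -> np.ndarray:
    """Fuse two aligned (C, H, W) feature maps at the same spatial positions."""
    assert feat_a.shape[1:] == feat_b.shape[1:], "spatial sizes must match"
    stacked = np.concatenate([feat_a, feat_b], axis=0)  # (Ca + Cb, H, W)
    # 1x1 "convolution": a linear mix over channels at every position;
    # random weights stand in for learned ones.
    rng = np.random.default_rng(0)
    weights = rng.standard_normal((feat_a.shape[0], stacked.shape[0]))
    return np.einsum("oc,chw->ohw", weights, stacked)
```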
Optionally, if the coincident limb is an operable limb, the determining that the human body is in a stationary state or a moving state according to the coincident limb at a plurality of time points and the contact limb at a plurality of time points includes:
and inputting the contact limb at a plurality of time points into a time convolution network to obtain the contact limb similarity value.
Wherein the contact limbs are input into the time convolution network in chronological order, from the earliest time point to the most recent.
The contact limb input to the time convolution network is, specifically, an image that is white everywhere except the segmented human body region containing the contact limb.
And if the contact limb similarity value is larger than the contact limb threshold value, setting the contact limb to be in a static state. And if the contact limb similarity value is smaller than or equal to the contact limb threshold value, setting the contact limb to be in a motion state.
Wherein, in this embodiment, the contact limb threshold is 0.95.
And judging whether the coincident limb is in a static state or a moving state based on the coincident limb at a plurality of time points.
If the coincident limb is in a motion state or the contact limb is in a motion state, the human body is set to be in a motion state.
If the coincident limb is in a static state and the contact limb is in a static state, the human body is set to be in a static state.
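The two decisions above — thresholding the contact limb similarity, then combining the two limb states — can be sketched as follows. The function names are assumptions; the threshold 0.95 is the value given in this embodiment.

```python
# Illustrative sketch of the motion-state decision; names are assumptions.

CONTACT_LIMB_THRESHOLD = 0.95  # value used in this embodiment

def contact_limb_state(similarity: float) -> str:
    # Stationary when consecutive frames of the contact limb stay similar.
    return "stationary" if similarity > CONTACT_LIMB_THRESHOLD else "moving"

def human_state(coincident_state: str, contact_state: str) -> str:
    # The body is moving if either limb moves; stationary only if both are.
    if "moving" in (coincident_state, contact_state):
        return "moving"
    return "stationary"
```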
Optionally, determining that the coincident limb is in a stationary state or a moving state based on the coincident limb at the plurality of time points includes:
and judging whether the types of the undetected segmented limbs in the coincident limbs at a plurality of time points are the same or not.
If the undetected segmented limb categories in the coincident limbs at the multiple time points are the same, the contours within the coincident frames at the multiple time points are compared pairwise for similarity to obtain a plurality of coincident limb similarity values.
And if all the coincident limb similarity values are smaller than the coincidence threshold, the coincident limb is set to a static state. If any coincident limb similarity value is greater than or equal to the coincidence threshold, the coincident limb is set to a motion state.
In this embodiment, the overlapping threshold is 0.9.
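The coincident-limb decision can be sketched as below, following the rule exactly as stated in the description (static only when every pairwise similarity is below the threshold). The function name is an assumption; 0.9 is the value given in this embodiment.

```python
# Illustrative sketch of the coincident-limb state rule as stated above.

COINCIDENCE_THRESHOLD = 0.9  # value used in this embodiment

def coincident_limb_state(similarity_values) -> str:
    # Per the description: static only when every pairwise contour
    # similarity is below the coincidence threshold.
    if all(s < COINCIDENCE_THRESHOLD for s in similarity_values):
        return "stationary"
    return "moving"
```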
Example 2
Based on the behavior analysis and safety early warning method based on the multi-source equipment, the embodiment of the invention also provides a behavior analysis and safety early warning system based on the multi-source equipment, which comprises an acquisition module, a human body contour and characteristic extraction module, an equipment contour extraction module, a coincidence frame module, a coincidence limb and contact limb extraction module, a motion state judgment module and a reminding module.
The acquisition module is used for acquiring the monitoring image, the sound wave image and the infrared image. The monitoring image, the sound wave image and the infrared image are three images captured by the multi-source devices pointed at the equipment to be detected.
The human body contour and feature extraction module is used for obtaining an acoustic wave feature map, a monitoring feature map, an infrared feature map, a human body fusion feature map and a human body contour based on the monitoring image, the acoustic wave image and the infrared image.
And the equipment contour extraction module is used for extracting the characteristics of equipment to be detected according to the monitoring image to obtain the equipment contour.
And the overlapping frame module is used for judging whether the human body is overlapped with the equipment according to the human body outline and the equipment outline to obtain an overlapping frame. The overlapping frame is rectangular and comprises an overlapping part of a human body and equipment.
The coincident limb and contact limb extraction module is used for carrying out limb segmentation on the human body outline according to the sound wave image, the sound wave characteristic diagram, the monitoring characteristic diagram, the infrared characteristic diagram, the human body outline and the coincident frame to obtain a coincident limb and a contact limb if the human body is coincident with the device. Multiple time points correspond to multiple coincident limbs and multiple contact limbs. The coincident limb is an operable limb or a fixed limb. The operable limb is a limb capable of operating the device. The immobilized limb is a limb that is not operable with the device.
And the movement state judging module is used for judging whether the human body is in a static state or a movement state according to the coincident limb at a plurality of time points and the contact limb at a plurality of time points if the coincident limb is an operable limb.
The reminding module is used for carrying out illegal operation reminding if the human body is in a motion state.
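The seven modules above might be composed as sketched below. Every class, method, and parameter name here is an illustrative assumption, not the patent's implementation; each module is passed in as a callable so the wiring of the pipeline is what the sketch shows.

```python
# Hypothetical skeleton of the module composition; all names and
# signatures are assumptions made for illustration.

class EarlyWarningPipeline:
    def __init__(self, acquire, extract_contours, extract_device_contour,
                 build_coincident_frame, segment_limbs, judge_state, remind):
        self.acquire = acquire                          # acquisition module
        self.extract_contours = extract_contours        # human contour + features
        self.extract_device_contour = extract_device_contour
        self.build_coincident_frame = build_coincident_frame
        self.segment_limbs = segment_limbs              # coincident/contact limbs
        self.judge_state = judge_state                  # motion state module
        self.remind = remind                            # reminding module

    def run(self) -> bool:
        """Return True when an illegal-operation reminder was issued."""
        images = self.acquire()                         # monitoring, acoustic, infrared
        human = self.extract_contours(images)
        device = self.extract_device_contour(images)
        frame = self.build_coincident_frame(human, device)
        if frame is None:                               # human and device do not coincide
            return False
        coincident, contact = self.segment_limbs(images, human, frame)
        if self.judge_state(coincident, contact) == "moving":
            self.remind()                               # illegal-operation reminder
            return True
        return False
```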
The algorithms and displays presented herein are not inherently related to any particular computer, virtual system, or other apparatus. Various general-purpose systems may also be used with the teachings herein. The required structure for a construction of such a system is apparent from the description above. In addition, the present invention is not directed to any particular programming language. It will be appreciated that the teachings of the present invention described herein may be implemented in a variety of programming languages, and the above description of specific languages is provided for disclosure of enablement and best mode of the present invention.
In the description provided herein, numerous specific details are set forth. However, it is understood that embodiments of the invention may be practiced without these specific details. In some instances, well-known methods, structures and techniques have not been shown in detail in order not to obscure an understanding of this description.
Similarly, it should be appreciated that in the foregoing description of exemplary embodiments of the invention, various features of the invention are sometimes grouped together in a single embodiment, figure, or description thereof for the purpose of streamlining the disclosure and aiding in the understanding of one or more of the various inventive aspects. However, the disclosed method should not be construed as reflecting the intention that: i.e., the claimed invention requires more features than are expressly recited in each claim. Rather, as the following claims reflect, inventive aspects lie in less than all features of a single foregoing disclosed embodiment. Thus, the claims following the detailed description are hereby expressly incorporated into this detailed description, with each claim standing on its own as a separate embodiment of this invention.
Those skilled in the art will appreciate that the modules in the apparatus of the embodiments may be adaptively changed and disposed in one or more apparatuses different from the embodiments. The modules or units or components of the embodiments may be combined into one module or unit or component and, furthermore, they may be divided into a plurality of sub-modules or sub-units or sub-components. Any combination of all features disclosed in this specification (including any accompanying claims, abstract and drawings), and all of the processes or units of any method or apparatus so disclosed, may be used in combination, except insofar as at least some of such features and/or processes or units are mutually exclusive. Each feature disclosed in this specification (including any accompanying claims, abstract and drawings), may be replaced by alternative features serving the same, equivalent or similar purpose, unless expressly stated otherwise.
Furthermore, those skilled in the art will appreciate that while some embodiments herein include some features but not others included in other embodiments, combinations of features of different embodiments are meant to be within the scope of the invention and form different embodiments. For example, in the following claims, any of the claimed embodiments can be used in any combination.
Various component embodiments of the invention may be implemented in hardware, or in software modules running on one or more processors, or in a combination thereof. Those skilled in the art will appreciate that some or all of the functions of some or all of the components in an apparatus according to embodiments of the present invention may be implemented in practice using a microprocessor or Digital Signal Processor (DSP). The present invention can also be implemented as an apparatus or device program (e.g., a computer program and a computer program product) for performing a portion or all of the methods described herein. Such a program embodying the present invention may be stored on a computer readable medium, or may have the form of one or more signals. Such signals may be downloaded from an internet website, provided on a carrier signal, or provided in any other form.
Claims (9)
1. A behavior analysis and safety early warning method based on multi-source equipment is characterized by comprising the following steps:
acquiring a monitoring image, a sound wave image and an infrared image; the monitoring image, the sound wave image and the infrared image are three images captured by the multi-source devices pointed at the equipment to be detected;
acquiring an acoustic wave feature map, a monitoring feature map, an infrared feature map, a human fusion feature map and a human contour based on the monitoring image, the acoustic wave image and the infrared image;
extracting the characteristics of equipment to be detected according to the monitoring image to obtain equipment contours;
judging whether the human body is overlapped with the equipment according to the human body outline and the equipment outline to obtain an overlapped frame; the overlapping frame is rectangular and comprises an overlapping part of a human body and equipment;
if the human body is overlapped with the equipment, according to the sound wave image, the sound wave characteristic diagram, the monitoring characteristic diagram, the infrared characteristic diagram, the human body outline and the overlapped frame, carrying out limb segmentation on the human body outline to obtain overlapped limbs and contact limbs; obtaining a plurality of coincident limbs and a plurality of contact limbs correspondingly at a plurality of time points; the coincident limb is an operable limb or a fixed limb; the operable limb is a limb capable of operating the device; the immobilized limb is a limb that cannot operate the device;
If the coincident limb is an operable limb, judging whether the human body is in a static state or a motion state according to the coincident limb at a plurality of time points and the contact limb at a plurality of time points;
if the human body is in a motion state, carrying out illegal operation reminding;
based on the monitoring image, the sound wave image and the infrared image, obtaining a sound wave feature map, a monitoring feature map, an infrared feature map, a human fusion feature map and a human contour comprises the following steps:
convolving the monitoring image through a monitoring convolution network, extracting image characteristics, and obtaining a monitoring characteristic diagram;
convoluting the sound wave image through a sound wave convolution network, extracting sound wave imaging characteristics, and obtaining a sound wave characteristic diagram;
convoluting the infrared image through an infrared convolution network, extracting infrared imaging characteristics, and obtaining an infrared characteristic diagram;
fusing the camera imaging information in the monitoring feature map at the same position with the distance information of the sound wave reflection in the sound wave feature map to obtain a monitoring sound wave feature map;
fusing the camera imaging information in the monitoring feature map and the light reflection distance information in the infrared feature map at the same position to obtain a monitoring infrared feature map;
fusing the monitoring acoustic feature map and the monitoring infrared feature map, and extracting edge shape features twice to obtain a human fusion feature map;
And detecting according to the human fusion feature map to obtain the human outline.
2. The behavior analysis and safety pre-warning method based on multi-source equipment according to claim 1, wherein the step of judging whether the human body and the equipment are coincident according to the human body outline and the equipment outline to obtain a coincident frame comprises the following steps:
calculating a coordinate distance according to the human body contour and the equipment contour to obtain a contour coordinate pair; the contour coordinate pair is a coordinate pair with the distance between the coordinates of the human body contour and the coordinates of the equipment contour smaller than other distances;
connecting two coordinates in the contour coordinate pair to obtain a connecting line and a contour distance; the contour distance is the distance of the connecting line;
dividing the contour distance by 2 to obtain an extension distance;
according to the extension distance, extending the two ends of the connecting wire to obtain an extension line;
obtaining a coincidence judgment length; the coincidence judgment length is a preset length of the vertical lines used for the coincidence judgment;
respectively taking two end points of the extension line as central points to construct two vertical lines with coincident judging lengths;
and judging whether the human body is overlapped with the equipment according to the two vertical lines to obtain an overlapped frame.
3. The behavior analysis and safety pre-warning method based on multi-source equipment according to claim 2, wherein the step of judging whether the human body coincides with the equipment according to two vertical lines to obtain a coincidence frame comprises the following steps:
Constructing a closed overlapping frame according to the two vertical lines; the two vertical lines comprise a first vertical line and a second vertical line;
respectively acquiring intersection points of the human body contour and the equipment contour in the coincident frame and two vertical lines to obtain a first vertical intersection point pair and a second vertical intersection point pair; the first vertical intersection point pair is two intersection points on a first vertical line; the second vertical intersection point pair is two intersection points on a second vertical line;
respectively making vertical lines according to the first vertical intersection point pairs to obtain a first region; respectively making vertical lines according to the second vertical intersection point pairs to obtain a second region;
taking the area where the first area and the second area intersect as a superposition area;
taking the area of the overlapping area as an overlapping value;
if the coincidence value is smaller than the coincidence threshold value, setting the human body and the equipment to be non-coincident;
if the coincidence value is greater than or equal to the coincidence threshold, the human body and the equipment are set to coincide.
4. The behavior analysis and safety pre-warning method based on multi-source equipment according to claim 1, wherein if the human body is coincident with the equipment, the limb segmentation of the human body contour is performed according to the acoustic image, the acoustic feature map, the monitoring feature map, the infrared feature map, the human body contour and the coincident frame to obtain coincident limbs and contact limbs, comprising:
Extracting an area inside the human body outline in the sound wave image to obtain a human body outline image;
the human body contour image is convolved and then fused with the sound wave feature image, and the structural features in the human body are extracted to obtain a human body contour sound wave feature image;
judging the skeleton structure inside the human body contour according to the human body contour acoustic wave feature map, the monitoring feature map and the infrared feature map to obtain a plurality of segmented human body areas and a plurality of segmented limb categories; the split limb category is the category of the detected human body component; one segmented body region corresponds to one segmented limb category;
and obtaining coincident limbs and contact limbs according to the plurality of segmented body regions and the plurality of segmented limb categories.
5. The method for analyzing behavior and pre-warning safety based on multi-source device according to claim 4, wherein obtaining coincident limbs and contact limbs according to the plurality of divided human body areas and the plurality of divided limb categories comprises:
obtaining a labeling segmentation limb category set and a connection relation; the labeling segmentation limb category set comprises all categories of limbs forming a human body; the connection relation represents the connection condition between limbs of a human body;
according to the plurality of segmented human body regions, setting the segmented human body region connected to the coincident frame as the contact limb;
taking elements of the labeling segmented limb category set except the plurality of segmented limb categories as undetected segmented limb categories to obtain one or more undetected segmented limb categories;
according to the plurality of undetected segmented limb categories, constructing a relation between the undetected segmented limb categories connected with the contact limb and the overlapping frame to obtain an overlapping limb; the connection relation between the coincident limb and the contact limb is 1 to 1.
6. The behavioral analysis and safety precaution method based on multi-source equipment according to claim 4, wherein the determining the skeleton structure inside the human body contour according to the human body contour acoustic wave feature map, the monitoring feature map and the infrared feature map to obtain a plurality of segmented human body regions and a plurality of segmented limb categories comprises:
fusing the monitoring feature map and the human body contour acoustic wave feature map to obtain a human body contour monitoring acoustic wave feature map;
the infrared characteristic diagram and the human body contour acoustic wave characteristic diagram are fused and then are convolved, so that the human body contour infrared acoustic wave characteristic diagram is obtained;
fusing the human body contour monitoring acoustic wave feature map and the human body contour infrared acoustic wave feature map twice, and adding the features of the human body composition structure to obtain a human body contour fusion feature map;
And judging the skeleton structure inside the human body contour according to the human body contour fusion feature diagram to obtain a plurality of segmented human body areas and a plurality of segmented limb categories.
7. The behavior analysis and safety pre-warning method based on multi-source equipment according to claim 1, wherein if the coincident limb is an operable limb, determining that the human body is in a stationary state or a moving state according to the coincident limb at a plurality of time points and the contact limb at a plurality of time points comprises:
inputting the contact limbs at a plurality of time points into a time convolution network to obtain a contact limb similarity value;
if the similarity value of the contact limb is larger than the contact limb threshold value, setting the contact limb to be in a static state; if the similarity value of the contact limb is smaller than or equal to the contact limb threshold value, setting the contact limb to be in a motion state;
judging whether the coincident limb is in a static state or a moving state based on the coincident limb at a plurality of time points;
if the coincident limb is in a motion state or the contact limb is in a motion state, setting the human body to be in a motion state;
if the coincident limb is in a static state and the contact limb is in a static state, the human body is set to be in a static state.
8. The method for behavioral analysis and safety precaution based on a multi-source device according to claim 7, wherein determining whether the coincident limb is in a stationary state or a moving state based on coincident limbs at a plurality of time points comprises:
Judging whether the types of undetected segmented limbs in the coincident limbs at a plurality of time points are the same or not;
if the types of undetected segmented limbs in the coincident limbs at the multiple time points are the same, judging the similarity of the contours in the coincident frames at the multiple time points in pairs to obtain multiple coincident limb similarity values;
if all the coincident limb similarity values are smaller than the coincident threshold value, setting the coincident limb as a static state; if the similarity value of the coincident limb is larger than or equal to the coincident threshold value, the coincident limb is set to be in a motion state.
9. A behavior analysis and safety early warning system based on multi-source equipment is characterized by comprising:
the acquisition module is used for: acquiring a monitoring image, a sound wave image and an infrared image; the monitoring image, the sound wave image and the infrared image are three images captured by the multi-source devices pointed at the equipment to be detected;
human body contour and feature extraction module: acquiring an acoustic wave feature map, a monitoring feature map, an infrared feature map, a human fusion feature map and a human contour based on the monitoring image, the acoustic wave image and the infrared image;
the equipment profile extraction module: extracting the characteristics of equipment to be detected according to the monitoring image to obtain equipment contours;
And (3) overlapping frame modules: judging whether the human body is overlapped with the equipment according to the human body outline and the equipment outline to obtain an overlapped frame; the overlapping frame is rectangular and comprises an overlapping part of a human body and equipment;
coincident limb and contact limb extraction module: if the human body is overlapped with the equipment, according to the sound wave image, the sound wave characteristic diagram, the monitoring characteristic diagram, the infrared characteristic diagram, the human body outline and the overlapped frame, carrying out limb segmentation on the human body outline to obtain overlapped limbs and contact limbs; obtaining a plurality of coincident limbs and a plurality of contact limbs correspondingly at a plurality of time points; the coincident limb is an operable limb or a fixed limb; the operable limb is a limb capable of operating the device; the immobilized limb is a limb that cannot operate the device;
the motion state judging module is used for: if the coincident limb is an operable limb, judging whether the human body is in a static state or a motion state according to the coincident limb at a plurality of time points and the contact limb at a plurality of time points;
a reminding module: if the human body is in a motion state, carrying out illegal operation reminding;
based on the monitoring image, the sound wave image and the infrared image, obtaining a sound wave feature map, a monitoring feature map, an infrared feature map, a human fusion feature map and a human contour comprises the following steps:
Convolving the monitoring image through a monitoring convolution network, extracting image characteristics, and obtaining a monitoring characteristic diagram;
convoluting the sound wave image through a sound wave convolution network, extracting sound wave imaging characteristics, and obtaining a sound wave characteristic diagram;
convoluting the infrared image through an infrared convolution network, extracting infrared imaging characteristics, and obtaining an infrared characteristic diagram;
fusing the camera imaging information in the monitoring feature map at the same position with the distance information of the sound wave reflection in the sound wave feature map to obtain a monitoring sound wave feature map;
fusing the camera imaging information in the monitoring feature map and the light reflection distance information in the infrared feature map at the same position to obtain a monitoring infrared feature map;
fusing the monitoring acoustic feature map and the monitoring infrared feature map, and extracting edge shape features twice to obtain a human fusion feature map;
and detecting according to the human fusion feature map to obtain the human outline.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202311434334.5A CN117173639B (en) | 2023-11-01 | 2023-11-01 | Behavior analysis and safety early warning method and system based on multi-source equipment |
Publications (2)
Publication Number | Publication Date |
---|---|
CN117173639A CN117173639A (en) | 2023-12-05 |
CN117173639B true CN117173639B (en) | 2024-02-06 |
Family
ID=88947096
Citations (10)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN106383515A (en) * | 2016-09-21 | 2017-02-08 | Harbin University of Science and Technology | Obstacle-avoidance control system for wheeled mobile robots based on multi-sensor information fusion |
CN107403436A (en) * | 2017-06-26 | 2017-11-28 | Sun Yat-sen University | Fast human-silhouette detection and tracking method based on depth images |
KR20170135288A (en) * | 2016-05-31 | 2017-12-08 | Electronics and Telecommunications Research Institute (ETRI) | Vision-based obstacle detection apparatus and method using radar and camera |
CN107703935A (en) * | 2017-09-12 | 2018-02-16 | Anhui Shengjiahe Electronic Technology Co., Ltd. | Obstacle-avoidance method based on weighted fusion of multiple data, storage device and mobile terminal |
CN108846848A (en) * | 2018-06-25 | 2018-11-20 | Electric Power Research Institute of Guangdong Power Grid Co., Ltd. | Work-site early-warning method and device fusing UWB positioning and video recognition |
CN110348312A (en) * | 2019-06-14 | 2019-10-18 | Wuhan University | Real-time recognition method for human action behaviors in regional video |
CN114699661A (en) * | 2022-05-07 | 2022-07-05 | Guangzhou Klarity Medical & Equipment Co., Ltd. | Pose association determination and display method |
WO2023015799A1 (en) * | 2021-08-10 | 2023-02-16 | Shenzhen Institute of Advanced Technology, Chinese Academy of Sciences | Multimodal-fusion obstacle detection method and apparatus for artificial-intelligence blind guidance |
CN115909185A (en) * | 2022-09-28 | 2023-04-04 | Sany Automobile Manufacturing Co., Ltd. | Method and device for determining target behavior, pumping machine and readable storage medium |
CN116311727A (en) * | 2023-03-17 | 2023-06-23 | Suzhou Inspur Intelligent Technology Co., Ltd. | Intrusion response method, device, equipment and readable storage medium |
Non-Patent Citations (2)
Title |
---|
Multimodal feature fusion for illumination-invariant recognition of abnormal human behaviors; Nguyen, VA et al.; Information Fusion; Vol. 100; 1-11 *
Research on obstacle avoidance of mobile robots based on multi-sensor information fusion; Wang Binming; China Master's Theses Full-text Database, Information Science and Technology (No. 01); I140-140 *
Also Published As
Publication number | Publication date |
---|---|
CN117173639A (en) | 2023-12-05 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN106952303B (en) | Vehicle distance detection method, device and system | |
CN110210302B (en) | Multi-target tracking method, device, computer equipment and storage medium | |
US9898651B2 (en) | Upper-body skeleton extraction from depth maps | |
US20180341823A1 (en) | Processing method for distinguishing a three dimensional object from a two dimensional object using a vehicular system | |
JP7012880B2 (en) | Target detection method and equipment, equipment and storage media | |
CN112528831A (en) | Multi-target attitude estimation method, multi-target attitude estimation device and terminal equipment | |
US9367747B2 (en) | Image processing apparatus, image processing method, and program | |
CN113744348A (en) | Parameter calibration method and device and radar vision fusion detection equipment | |
CN112115820B (en) | Vehicle-mounted driving assisting method and device, computer device and readable storage medium | |
US20130114858A1 (en) | Method for Detecting a Target in Stereoscopic Images by Learning and Statistical Classification on the Basis of a Probability Law | |
CN110392239B (en) | Designated area monitoring method and device | |
Guðmundsson et al. | ToF imaging in smart room environments towards improved people tracking | |
CN112183476A (en) | Obstacle detection method and device, electronic equipment and storage medium | |
JPH11257931A (en) | Object recognizing device | |
CN116343085A (en) | Method, system, storage medium and terminal for detecting obstacle on highway | |
CN110689556A (en) | Tracking method and device and intelligent equipment | |
CN117173639B (en) | Behavior analysis and safety early warning method and system based on multi-source equipment | |
CN114758414A (en) | Pedestrian behavior detection method, device, equipment and computer storage medium | |
CN110673607A (en) | Feature point extraction method and device in dynamic scene and terminal equipment | |
Fang et al. | Lane boundary detection algorithm based on vector fuzzy connectedness | |
JP2007200364A (en) | Stereo calibration apparatus and stereo image monitoring apparatus using the same | |
Hu et al. | Automatic detection and evaluation of 3D pavement defects using 2D and 3D information at the high speed | |
KR101958270B1 (en) | Intelligent Image Analysis System using Image Separation Image Tracking | |
Lu et al. | Pedestrian detection based on center, temperature, scale and ratio prediction in thermal imagery | |
CN115580830B (en) | Passenger violation path detection method and device based on AP probe multipoint positioning |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||