CN113239802A - Safety monitoring method, device, medium and electronic equipment - Google Patents

Safety monitoring method, device, medium and electronic equipment

Info

Publication number
CN113239802A
CN113239802A (application number CN202110520826.0A)
Authority
CN
China
Prior art keywords
marker
images
motion information
information
frames
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202110520826.0A
Other languages
Chinese (zh)
Inventor
朱孟江
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Hyperfusion Intelligent Technology Shanghai Co ltd
Original Assignee
Hyperfusion Intelligent Technology Shanghai Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Hyperfusion Intelligent Technology Shanghai Co ltd filed Critical Hyperfusion Intelligent Technology Shanghai Co ltd
Priority to CN202110520826.0A priority Critical patent/CN113239802A/en
Publication of CN113239802A publication Critical patent/CN113239802A/en
Pending legal-status Critical Current

Classifications

    • G: PHYSICS; G06: COMPUTING, CALCULATING OR COUNTING; G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/20: Recognition of biometric, human-related or animal-related patterns in image or video data; movements or behaviour, e.g. gesture recognition
    • G06V10/245: Image preprocessing; aligning, centring, orientation detection or correction of the image by locating a pattern; special marks for positioning
    • G06V10/44: Extraction of image or video features; local feature extraction by analysis of parts of the pattern, e.g. by detecting edges, contours, loops, corners, strokes or intersections; connectivity analysis, e.g. of connected components
    • G06V10/752: Image or video pattern matching; organisation of the matching processes; contour matching
    • G06V20/42: Scene-specific elements in video content; higher-level, semantic clustering, classification or understanding of video scenes, e.g. of sport video content
    • G06V20/52: Context or environment of the image; surveillance or monitoring of activities, e.g. for recognising suspicious objects

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • General Health & Medical Sciences (AREA)
  • Software Systems (AREA)
  • Health & Medical Sciences (AREA)
  • Computing Systems (AREA)
  • Artificial Intelligence (AREA)
  • Databases & Information Systems (AREA)
  • Evolutionary Computation (AREA)
  • Computational Linguistics (AREA)
  • Medical Informatics (AREA)
  • Psychiatry (AREA)
  • Social Psychology (AREA)
  • Human Computer Interaction (AREA)
  • Alarm Systems (AREA)
  • Image Analysis (AREA)

Abstract

The embodiment of the application discloses a safety monitoring method, device, medium and electronic equipment. The method comprises the following steps: acquiring at least two frames of images of a monitored scene, and determining motion information of a marker in the at least two frames of images; acquiring motion information of personnel in the monitored scene through detection equipment; and determining, according to the motion information of the marker and the motion information of the personnel, whether a prompt condition for safety monitoring is met, and if so, generating prompt information for safety monitoring. By means of image recognition, collision calculation and similar techniques, the method can predict and give prompts for possible collisions or potential safety hazards in the monitored scene, thereby improving safety during stage performances.

Description

Safety monitoring method, device, medium and electronic equipment
Technical Field
The embodiment of the application relates to the technical field of computers, in particular to a safety monitoring method, a safety monitoring device, a safety monitoring medium and electronic equipment.
Background
With rapid economic development and the gradual improvement of living standards, the demand for watching artistic performances and the like is growing day by day. As the performance industry develops rapidly, in many performance scenes, if various moving shooting devices are arranged on the stage or movable props are arranged on the ground, collisions between actors and the devices or props can easily occur, posing potential safety hazards. From the actors' own perspective, it is difficult for them to notice props they may collide with, because of stage lighting and the need to concentrate on the performance. Moreover, the quality of a show often depends on the impact its content has on the audience. Therefore, how to monitor the safety of personnel, equipment, props and the like in a specific scene is a technical problem that urgently needs to be solved.
Disclosure of Invention
The embodiment of the application provides a safety monitoring method, a safety monitoring device, a safety monitoring medium and electronic equipment, and measures such as an image recognition technology and collision calculation can be adopted to predict and prompt possible collision or potential safety hazards in a monitoring scene, so that the safety in the stage performance process is improved.
In a first aspect, an embodiment of the present application provides a security monitoring method, where the method includes:
acquiring at least two frames of images of a monitored scene, and determining motion information of markers in the at least two frames of images;
acquiring motion information of personnel in a monitoring scene through detection equipment;
and determining whether the prompt conditions of safety monitoring are met or not according to the motion information of the marker and the motion information of the personnel, and if so, generating the prompt information of safety monitoring.
Further, acquiring at least two frames of images of a monitored scene, and determining motion information of the markers in the at least two frames of images includes:
acquiring video data of a monitored scene through a camera;
selecting at least two frames of images from the video data, and carrying out marker identification on the at least two frames of images to obtain position information of a marker in the at least two frames of images;
and determining the motion information of the marker according to the position information of the marker in the at least two frames of images.
Further, performing marker identification on the at least two frames of images to obtain position information of a marker in the at least two frames of images, including:
determining a characteristic value of each image by adopting a pattern recognition technology; wherein the pattern recognition techniques include template matching and/or edge detection;
and comparing the characteristic value with a pre-stored template characteristic value to determine the position information of the marker in each image.
Further, determining the motion information of the marker according to the position information of the marker in the at least two frames of images, including:
and determining the offset distance and the offset angle of the marker in the at least two frames of images according to the position of the reference point and the contour form of the marker in the at least two frames of images.
Further, after determining the motion information of the marker according to the position information of the marker in the at least two frames of images, the method further comprises:
determining a virtual frame of the marker, and generating a virtual frame motion track of the marker according to the motion information of the marker;
combining the motion trail of the virtual frame with the video of the monitoring environment to generate augmented reality display information;
and displaying the augmented reality display information in preset display equipment.
Further, after generating the virtual frame motion trajectory of the marker, the method further comprises:
extracting motion trail parameters of the virtual frame;
correspondingly, determining whether the prompt condition for safety monitoring is met according to the motion information of the marker and the motion information of the personnel includes:
determining, according to the motion trajectory parameters of the virtual frame and the motion information of the personnel, whether the distance between the motion trajectory of the virtual frame and the motion trajectory of the personnel is smaller than a set distance;
and if the distance is less than the set distance, determining that the prompt condition of safety monitoring is met.
Further, the monitoring scene comprises a performance stage; the detection device comprises an infrared detector.
In a second aspect, an embodiment of the present application provides a security monitoring apparatus, where the apparatus includes:
the first motion information identification module is used for acquiring at least two frames of images of a monitored scene and determining motion information of a marker in the at least two frames of images;
the second motion information identification module is used for acquiring motion information of personnel in the monitoring scene through the detection equipment;
and the safety monitoring module is used for determining whether the prompt conditions of safety monitoring are met or not according to the motion information of the marker and the motion information of the personnel, and if so, generating the prompt information of safety monitoring.
Further, the first motion information identification module includes:
the video data acquisition unit is used for acquiring video data of a monitored scene through a camera;
the position information identification unit is used for selecting at least two frames of images from the video data and carrying out marker identification on the at least two frames of images so as to obtain the position information of the marker in the at least two frames of images;
and the motion information determining unit is used for determining the motion information of the marker according to the position information of the marker in the at least two frames of images.
Further, the location information identifying unit includes:
the characteristic value identification subunit is used for determining the characteristic value of each image by adopting a mode identification technology; wherein the pattern recognition techniques include template matching and/or edge detection;
and the position information determining subunit is used for comparing the characteristic value with a pre-stored template characteristic value to determine the position information of the marker in each image.
Further, the motion information determining unit is specifically configured to:
and determining the offset distance and the offset angle of the marker in the at least two frames of images according to the position of the reference point and the contour form of the marker in the at least two frames of images.
Further, the device further comprises a display information display module, wherein the display information display module comprises:
the motion track determining unit is used for determining the virtual frame of the marker and generating the motion track of the virtual frame of the marker according to the motion information of the marker;
the display information generating unit is used for combining the motion trail of the virtual frame with the video of the monitoring environment to generate augmented reality display information;
and the display unit is used for displaying the augmented reality display information in preset display equipment.
Further, the display information display module further includes:
the motion trail parameter extraction unit is used for extracting the motion trail parameters of the virtual frame;
correspondingly, the safety monitoring module is specifically configured to:
determining, according to the motion trajectory parameters of the virtual frame and the motion information of the personnel, whether the distance between the motion trajectory of the virtual frame and the motion trajectory of the personnel is smaller than a set distance;
and if the distance is less than the set distance, determining that the prompt condition of safety monitoring is met.
Further, the monitoring scene comprises a performance stage; the detection device comprises an infrared detector.
In a third aspect, an embodiment of the present application provides a computer-readable storage medium, on which a computer program is stored, where the computer program, when executed by a processor, implements a security monitoring method according to an embodiment of the present application.
In a fourth aspect, an embodiment of the present application provides an electronic device, which includes a memory, a processor, and a computer program stored on the memory and executable on the processor, where the processor executes the computer program to implement the security monitoring method according to the embodiment of the present application.
According to the technical scheme provided by the embodiment of the application, at least two frames of images of a monitored scene are obtained, and the motion information of markers in the at least two frames of images is determined; acquiring motion information of personnel in a monitoring scene through detection equipment; and determining whether the prompt conditions of safety monitoring are met or not according to the motion information of the marker and the motion information of the personnel, and if so, generating the prompt information of safety monitoring. By executing the technical scheme, means such as an image recognition technology, collision calculation and the like can be adopted to predict and prompt the possible collision or potential safety hazard in the monitored scene, so that the safety in the stage performance process is improved.
Drawings
Fig. 1 is a flowchart of a security monitoring method according to an embodiment of the present application;
fig. 2 is a schematic flowchart of a security monitoring method according to a second embodiment of the present application;
fig. 3 is a schematic flowchart of a security monitoring method according to a second embodiment of the present application;
FIG. 4 is a schematic flow chart illustrating the framework identification and tracking provided by the second embodiment of the present application;
fig. 5 is a schematic flow chart of motion detection and early warning provided in the second embodiment of the present application;
fig. 6 is a schematic structural diagram of a safety monitoring device according to a third embodiment of the present application;
fig. 7 is a schematic structural diagram of an electronic device according to a fifth embodiment of the present application.
Detailed Description
The present application will be described in further detail with reference to the following drawings and examples. It is to be understood that the specific embodiments described herein are merely illustrative of the application and are not limiting of the application. It should be further noted that, for the convenience of description, only some of the structures related to the present application are shown in the drawings, not all of the structures.
Before discussing exemplary embodiments in more detail, it should be noted that some exemplary embodiments are described as processes or methods depicted as flowcharts. Although a flowchart may describe the steps as a sequential process, many of the steps can be performed in parallel, concurrently or simultaneously. In addition, the order of the steps may be rearranged. The process may be terminated when its operations are completed, but may have additional steps not included in the figure. The processes may correspond to methods, functions, procedures, subroutines, and the like.
Example one
Fig. 1 is a flowchart of a security monitoring method according to an embodiment of the present application, where the present embodiment is applicable to a situation of security monitoring in a scene such as a stage, and the method may be executed by a security monitoring apparatus according to an embodiment of the present application, where the apparatus may be implemented by software and/or hardware, and may be integrated in an electronic device for security monitoring.
As shown in fig. 1, the security monitoring method includes:
s110, at least two frames of images of a monitored scene are obtained, and the motion information of the markers in the at least two frames of images is determined.
Wherein, the at least two frames of images can be acquired by a camera. For example, at least two frames of images are selected from video data acquired by the camera. The camera here may be a panoramic camera used to capture panoramic images, for example a 360-degree panorama, i.e., a picture obtained by capturing image information of the entire scene with a professional camera and stitching the pictures with software, or a picture rendered with modeling software. The camera can also be an ordinary camera; in that case, its image range needs to cover the whole monitored scene.
In the scheme, the monitoring scene can be a performance stage. The marker can be a prop or an instrument and equipment in a performance stage.
Specifically, after the images are acquired, each frame of image can be recognized to determine whether it contains the marker. It can be understood that, when the marker appears in multiple frames of images, the motion trajectory of the marker can be determined based on the position and posture of the marker in those images. For example, the device may move along a straight line at a constant speed or with uniform deceleration, or along a certain curve. By acquiring the motion information, the motion of the marker can be predicted, which provides a data basis for subsequent safety monitoring.
It will be appreciated that, to calculate the magnitude of the velocity, the time interval between the at least two frames of images within the overall video data needs to be determined. For example, if three or more frames of images are selected, the frames may be extracted at equal time intervals, and the time interval may be set to a default value or manually set by the user, such as 1 second. Extracting at equal time intervals facilitates the calculation of the movement speed of the marker.
And S120, acquiring the motion information of the personnel in the monitoring scene through the detection equipment.
The person moves continuously in the monitored scene, and the moving speed is not fixed. The detection device may acquire the position of the person at a preset time interval, for example every 0.1 second, or may acquire it continuously; similar to the motion information of the marker described above, a certain extraction rule may be set to extract the acquired information and perform further calculation on it. In this scheme, the motion information of the person may include the person's real-time position, movement direction, movement speed, and the like. Real-time acquisition may be adopted in this scheme because it determines the real-time position of personnel more accurately, and when several persons are on the performance stage at the same time, there is no need to distinguish the position information of each individual.
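As an illustration only (not part of the patent), the following minimal Python sketch shows how two position samples taken 0.1 second apart, as in the example above, could be turned into a speed and a movement direction; the coordinates, units and helper name are assumptions.

```python
import math

def person_motion(p_prev, p_curr, dt=0.1):
    """Estimate speed (m/s) and heading (radians) of a person from two
    position samples (x, y) in metres taken dt seconds apart."""
    dx = p_curr[0] - p_prev[0]
    dy = p_curr[1] - p_prev[1]
    speed = math.hypot(dx, dy) / dt
    heading = math.atan2(dy, dx)      # direction of movement in the stage plane
    return speed, heading

# Example: a person moved from (1.0, 2.0) to (1.05, 2.10) within 0.1 s
speed, heading = person_motion((1.0, 2.0), (1.05, 2.10))
print(f"speed={speed:.2f} m/s, heading={math.degrees(heading):.1f} deg")
```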
In this scheme, optionally, the monitoring scene includes a performance stage; the detection device comprises an infrared detector.
Because a stage performance takes place in real time, monitoring the performance stage with this scheme ensures the safety of personnel during the performance while preserving the performance effect.
In addition, in this scheme, an infrared detector can be used to detect the positions of personnel. The infrared detector comprises an infrared transmitter, a receiver and a signal processor; the signal output end of the signal processor is connected with the infrared transmitter through an infrared transmitting circuit, the signal input end is connected with the infrared receiver through an infrared receiving circuit, and the feedback signal output end is connected with a peripheral control circuit. This technology uses a single-chip microcontroller as the signal processor to generate a coded signal, drives the infrared transmitter to emit an infrared signal carrying the coded signal, and detects in real time the reflected signal processed by an amplifying circuit. The coded signal ensures that several sensors of the same model can work at the same time and in the same place without interfering with one another. The working frequency is consistent, reliability is high, and power consumption is low.
It can be understood that, in the present embodiment, the two processes of determining the motion information of the marker and acquiring the motion information of the person may be performed simultaneously, or it is also possible to acquire the motion information of the person first and then determine the motion information of the marker. Because the two data are the basis of subsequent safety monitoring, the sequence of the two steps is not strictly limited by the scheme.
S130, determining whether the prompt conditions of safety monitoring are met or not according to the motion information of the marker and the motion information of the personnel, and if so, generating prompt information of safety monitoring.
After the motion information of the marker and the motion information of the person are obtained, whether a collision may occur can be determined according to the directions of the two motion trajectories. For example, the current position of the marker is point A and the current position of the person is point B, but according to the motion information of both it is determined that both will reach point C at the same time, which indicates that there is a possibility of a collision between the two. Besides collision, when the marker interferes with the movement of a person, the scheme can also consider that the prompt condition for safety monitoring is met.
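The "both reach point C at the same time" reasoning can be treated as a closest-approach computation under a constant-velocity assumption. The sketch below is only one possible way to express it and is not the patent's exact collision calculation; all positions and velocities are hypothetical 2-D values in metres and metres per second.

```python
import numpy as np

def closest_approach(p_marker, v_marker, p_person, v_person):
    """Time and distance of closest approach for two points moving at
    constant velocity; returns (t_star, min_distance)."""
    dp = np.asarray(p_person, float) - np.asarray(p_marker, float)
    dv = np.asarray(v_person, float) - np.asarray(v_marker, float)
    denom = float(dv @ dv)
    t_star = 0.0 if denom == 0.0 else max(0.0, -float(dp @ dv) / denom)
    min_dist = float(np.linalg.norm(dp + dv * t_star))
    return t_star, min_dist

# Marker at point A moving right, person at point B moving up:
# they meet at point C = (3, 0) after 3 seconds (distance 0 -> collision).
t, d = closest_approach(p_marker=(0, 0), v_marker=(1.0, 0.0),
                        p_person=(3, -3), v_person=(0.0, 1.0))
print(f"closest approach after {t:.1f} s at distance {d:.2f} m")
```

A distance of zero (or below a safety threshold) at the predicted time would correspond to the prompt condition described above.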
After it is determined that the prompt condition for safety monitoring is met, prompt information for safety monitoring is generated and a safety prompt is given. Specifically, the prompt may be given by means of sound, light, or the like; for example, a collision risk is shown on a display, or a specific warning tone is played through a speaker to alert the person.
According to the technical scheme provided by the embodiment, at least two frames of images of a monitored scene are obtained, and the motion information of the marker in the at least two frames of images is determined; acquiring motion information of personnel in a monitoring scene through detection equipment; and determining whether the prompt conditions of safety monitoring are met or not according to the motion information of the marker and the motion information of the personnel, and if so, generating the prompt information of safety monitoring. By executing the scheme, means such as an image recognition technology, collision calculation and the like can be adopted to predict and prompt the possible collision or potential safety hazard in the monitored scene, so that the safety in the stage performance process is improved.
In a possible embodiment, optionally, acquiring at least two images of a monitored scene, and determining motion information of a marker in the at least two images includes:
acquiring video data of a monitored scene through a camera;
selecting at least two frames of images from the video data, and carrying out marker identification on the at least two frames of images to obtain position information of a marker in the at least two frames of images;
and determining the motion information of the marker according to the position information of the marker in the at least two frames of images.
The video data may be recorded by a camera. The camera may record at a preset frame rate, for example, 25 frames per second. After the video data are obtained, two or more frames of images can be selected from them, and marker recognition is performed on each selected image. For example, the 1st frame image and the 26th frame image are selected for marker recognition, from which the position of the marker in the 1st frame image and in the 26th frame image can be determined. In connection with the above example, at 25 frames per second the time interval between the 1st frame image and the 26th frame image is 1 second. Thus, the position change and the time interval of the marker are both available, and the movement speed of the marker can be determined. Further, if three or more images are used, the movement track of the marker can be obtained. Based on these data, the motion information of the marker can be determined.
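As a concrete illustration of the arithmetic in this example (frame 1 and frame 26 at 25 frames per second are 1 second apart), the sketch below reads two frames from a video with OpenCV and converts a pixel displacement of the marker into a speed. The video path, the frame indices, the pixel-to-metre scale and the locate_marker() helper are assumptions, not elements defined by the patent.

```python
import cv2
import numpy as np

def read_frame(path, index):
    """Grab a single frame by index from a video file."""
    cap = cv2.VideoCapture(path)
    cap.set(cv2.CAP_PROP_POS_FRAMES, index)
    ok, frame = cap.read()
    cap.release()
    if not ok:
        raise IOError(f"cannot read frame {index} from {path}")
    return frame

def marker_speed(pos0, pos1, fps=25.0, frame_gap=25, metres_per_pixel=0.01):
    """Speed of the marker given its pixel positions in two frames."""
    dt = frame_gap / fps                           # 25 frames at 25 fps -> 1 s
    shift_px = float(np.linalg.norm(np.subtract(pos1, pos0)))
    return shift_px * metres_per_pixel / dt        # metres per second

# Hypothetical usage: frames 0 and 25 (i.e. the 1st and 26th frames)
f1 = read_frame("stage.mp4", 0)
f26 = read_frame("stage.mp4", 25)
# locate_marker() stands in for the marker recognition described in embodiment two:
# speed = marker_speed(locate_marker(f1), locate_marker(f26))
```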
According to the scheme, the position information of the marker in different images is determined, the motion information of the marker is determined, the calculation mode is simple and convenient, the accuracy is high, and meanwhile, the calculation precision of the motion information can be determined according to the requirements of a user.
Example two
The embodiment is further optimized on the basis of the above embodiment, and specifically optimized as follows: performing marker identification on the at least two frames of images to obtain position information of a marker in the at least two frames of images, including: determining a characteristic value of each image by adopting a pattern recognition technology; wherein the pattern recognition techniques include template matching and/or edge detection; and comparing the characteristic value with a pre-stored template characteristic value to determine the position information of the marker in each image.
Fig. 2 is a schematic flowchart of a security monitoring method according to a second embodiment of the present application. As shown in fig. 2, the security monitoring method includes:
and S210, acquiring video data of a monitored scene through a camera.
S220, determining a characteristic value of each image by adopting a mode identification technology; wherein the pattern recognition techniques include template matching and/or edge detection.
The feature value may be information of pixels by which a feature in the image can be identified, for example, the luminance of a pixel. In addition, the feature value may be pixels capable of representing the shape of the marker; for example, if the marker is a shelf, the feature value may be the inflection points (corners) of the shelf.
In this scheme, pattern recognition includes template matching, edge detection, or both techniques. Template matching may be performed by matching the features of the current image against pre-stored features, such as inflection points or the features of a certain plane or side of an object, so as to determine which feature values exist in the current image.
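A minimal template-matching sketch with OpenCV is given below, assuming a pre-stored grayscale template image of the prop; the file names and the acceptance threshold are illustrative assumptions.

```python
import cv2

def locate_marker_by_template(frame_gray, template_gray, threshold=0.8):
    """Return the top-left corner of the best template match in the frame,
    or None if the normalised match score is below the threshold."""
    result = cv2.matchTemplate(frame_gray, template_gray, cv2.TM_CCOEFF_NORMED)
    _, max_val, _, max_loc = cv2.minMaxLoc(result)
    return max_loc if max_val >= threshold else None

# Hypothetical usage with an assumed frame and a stored prop template
frame = cv2.imread("frame_0001.png", cv2.IMREAD_GRAYSCALE)
template = cv2.imread("prop_template.png", cv2.IMREAD_GRAYSCALE)
if frame is not None and template is not None:
    print(locate_marker_by_template(frame, template))
```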
The other approach is edge detection. The purpose of edge detection is to find the set of pixels in an image where the brightness changes sharply; this set often forms a contour. If the edges in the image can be accurately measured and located, the actual object can be located and measured as well, including its area, diameter and shape. Commonly used edge detection techniques can detect the following four cases: discontinuities in depth (e.g., objects on different planes); discontinuities in surface orientation (e.g., two different faces of a cube); different materials (e.g., two materials with different light reflection coefficients); and differences in illumination within the scene (e.g., the ground onto which light filtered through trees is projected).
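For the edge-detection route, a sketch using Canny edges and contour comparison is shown below; cv2.matchShapes is used here only as one plausible way to compare a detected contour with a stored template contour, and the thresholds are assumptions.

```python
import cv2

def find_marker_by_contour(frame_gray, template_contour, max_score=0.2):
    """Detect edges, extract external contours, and return the centroid of the
    contour that best matches the stored template contour (or None)."""
    edges = cv2.Canny(frame_gray, 50, 150)
    contours, _ = cv2.findContours(edges, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    best, best_score = None, max_score
    for c in contours:
        score = cv2.matchShapes(c, template_contour, cv2.CONTOURS_MATCH_I1, 0.0)
        if score < best_score:
            best, best_score = c, score
    if best is None:
        return None
    m = cv2.moments(best)
    if m["m00"] == 0:
        return None
    return (m["m10"] / m["m00"], m["m01"] / m["m00"])   # contour centroid (x, y)
```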
And S230, comparing the characteristic value with a pre-stored template characteristic value, and determining the position information of the marker in each image.
The template feature values may be obtained by acquiring, in advance, images of the fixed or movable objects in the monitored scene and extracting their features. The feature values of images captured during monitoring are then matched against the stored feature values of each object; if the matching succeeds, the marker in the current image is that object. Whether the matching succeeds can be determined from the degree of matching.
In this scheme, optionally, determining motion information of the marker according to the position information of the marker in the at least two frames of images includes:
and determining the offset distance and the offset angle of the marker in the at least two frames of images according to the position of the reference point and the contour form of the marker in the at least two frames of images.
The reference point position may be the position of an inflection point of the object, the position of the centre point of a certain plane, and the like. The contour form is the shape structure of the object. It can be understood that, after the reference point position and the contour form of the marker in the at least two frames of images are obtained, the offset distance and the offset angle of the marker in the at least two frames of images can be determined. Here, the offset distance may be the amount of change in position, and the offset angle may be the amount of change in angle, e.g., a rotation of 60 degrees counterclockwise as viewed from above.
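As one possible realisation (an assumption, not the patent's prescribed computation), the offset distance and offset angle could be derived from the shift of a reference point and the change of orientation of a minimum-area rectangle fitted to the marker's contour in each frame:

```python
import math
import cv2
import numpy as np

def offset_between_frames(contour_a, contour_b):
    """Offset distance (pixels) and offset angle (degrees) of a marker between
    two frames, using the contour's min-area rectangle centre as reference point.
    Note: the angle convention of cv2.minAreaRect depends on the OpenCV version."""
    (cxa, cya), _, ang_a = cv2.minAreaRect(contour_a)
    (cxb, cyb), _, ang_b = cv2.minAreaRect(contour_b)
    distance = math.hypot(cxb - cxa, cyb - cya)     # shift of the reference point
    angle = ang_b - ang_a                           # change of orientation
    return distance, angle

# Toy example: a 40 px square shifted by (30, 0) px and rotated by 10 degrees
sq = np.array([[0, 0], [40, 0], [40, 40], [0, 40]], dtype=np.float32)
rot = cv2.getRotationMatrix2D((20, 20), 10, 1.0)
sq2 = cv2.transform(sq.reshape(-1, 1, 2), rot).reshape(-1, 2) + [30, 0]
print(offset_between_frames(sq, sq2.astype(np.float32)))
```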
With this scheme, the offset distance and offset angle of the marker can be obtained, and subsequent safety monitoring based on both of them yields more accurate monitoring results. For example, if the moving track of an object is estimated while the object is moving but the offset angle is not considered, the risk of collision still exists during the monitoring process.
S240, determining the motion information of the marker according to the position information of the marker in the at least two frames of images.
And S250, acquiring the motion information of the personnel in the monitoring scene through the detection equipment.
S260, determining whether the prompt conditions of safety monitoring are met or not according to the motion information of the marker and the motion information of the personnel, and if so, generating the prompt information of safety monitoring.
On the basis of the above embodiments, the present embodiment provides a method for determining a marker in an image in a manner of comparing feature values, and through such setting, the marker in the image can be identified more accurately, so as to provide a data basis for subsequent security monitoring.
On the basis of the above technical solutions, optionally, after determining the motion information of the marker according to the position information of the marker in the at least two frames of images, the method further includes:
determining a virtual frame of the marker, and generating a virtual frame motion track of the marker according to the motion information of the marker;
combining the motion trail of the virtual frame with the video of the monitoring environment to generate augmented reality display information;
and displaying the augmented reality display information in preset display equipment.
The virtual frame may be a virtual form for displaying the marker, and may be a frame having the same shape and size as the marker. In the scheme, the augmented reality display can be realized by generating the virtual frame and determining the motion trail of the virtual frame. For example, a display device may be disposed at a certain position of the monitoring scene, and the display device is configured to display an actually acquired monitoring scene video, and fit the position of the virtual frame to the video for real-time display. The monitoring manager can determine the position or the motion track of each marker on the stage by looking at the display equipment. Therefore, monitoring management personnel can determine whether the personnel in the monitoring scene have collision danger or not according to the content displayed in real time.
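One way such an augmented-reality overlay might look in code is sketched below, drawing the virtual frame and its predicted trajectory onto a video frame with OpenCV primitives; the frame geometry, trajectory points and output file are illustrative.

```python
import cv2
import numpy as np

def draw_virtual_frame(frame, box_points, trajectory, colour=(0, 0, 255)):
    """Overlay the marker's virtual frame (a quadrilateral) and its predicted
    trajectory (a polyline of future centre positions) on a video frame."""
    overlay = frame.copy()
    box = np.int32(box_points).reshape(-1, 1, 2)
    path = np.int32(trajectory).reshape(-1, 1, 2)
    cv2.polylines(overlay, [box], True, colour, 2)         # virtual frame outline
    cv2.polylines(overlay, [path], False, (0, 255, 0), 1)  # predicted motion track
    return overlay

# Hypothetical usage with a blank stand-in image instead of a live video frame
frame = np.zeros((480, 640, 3), dtype=np.uint8)
box = [(100, 100), (180, 100), (180, 160), (100, 160)]
path = [(140, 130), (200, 150), (260, 170)]
cv2.imwrite("ar_preview.png", draw_virtual_frame(frame, box, path))
```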
In the foregoing solution, specifically, after generating the virtual frame motion trajectory of the marker, the method further includes:
extracting motion trail parameters of the virtual frame;
correspondingly, determining whether the prompt condition for safety monitoring is met according to the motion information of the marker and the motion information of the personnel includes:
determining, according to the motion trajectory parameters of the virtual frame and the motion information of the personnel, whether the distance between the motion trajectory of the virtual frame and the motion trajectory of the personnel is smaller than a set distance;
and if the distance is less than the set distance, determining that the prompt condition of safety monitoring is met.
In this scheme, after the virtual frame motion trajectory of the marker is generated, the corresponding motion trajectory parameters can be extracted, and it can be determined whether the motion trajectory of the virtual frame intersects the motion trajectory of the person, or whether the real-time distance between them is smaller than a set distance, such as 0.5 m. If it is smaller than the set distance, the prompt condition for safety monitoring is met and a safety prompt is given.
With this scheme, the motion of personnel in the monitored scene and the motion of the marker can be combined to determine whether the person and the marker will reach a certain position at the same time, or whether the distance between them at a certain moment is smaller than the set safety distance, and this is used as the safety prompt condition.
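The check against a set distance such as 0.5 m could be realised, for example, by sampling both predicted trajectories at the same future time steps and comparing the minimum separation with the threshold; the sampled positions below are hypothetical.

```python
import numpy as np

SAFE_DISTANCE = 0.5   # metres, the example threshold from the text

def meets_prompt_condition(frame_traj, person_traj, safe_dist=SAFE_DISTANCE):
    """frame_traj, person_traj: (T, 2) arrays of positions predicted for the
    same future time steps; True if they ever come closer than safe_dist."""
    gaps = np.linalg.norm(np.asarray(frame_traj, float) -
                          np.asarray(person_traj, float), axis=1)
    return bool((gaps < safe_dist).any())

# Hypothetical predicted positions for the next three time steps
virtual_frame = [(0.0, 0.0), (0.5, 0.0), (1.0, 0.0)]
person = [(1.0, 1.0), (0.8, 0.5), (1.0, 0.2)]
if meets_prompt_condition(virtual_frame, person):
    print("safety-monitoring prompt condition met")
```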
Fig. 3 is a schematic flowchart of a security monitoring method according to a second embodiment of the present application. As shown in fig. 3, the present solution mainly includes the following processes:
step 1, collecting image information through equipment such as a camera;
step 2, converting the video stream of the real scene into a digital image, carrying out image processing, and identifying a preset marker;
step 3, comparing characteristic values of the image processing result, determining a reference point and a contour of the marker, and determining the position and the direction of the virtual frame in the AR;
step 4, synthesizing the real scene video image to generate display information of the user side;
and 5, performing real-time motion calculation according to the detection information, and synchronizing with the control system through data communication if a risk exists.
Fig. 4 is a schematic flowchart of framework identification and tracking according to the second embodiment of the present application. As shown in fig. 4, the present solution mainly includes the following processes:
converting a video stream of a real scene acquired by a camera into a digital image;
using pattern recognition technology (template matching, edge detection, etc.), recognizing characteristic values (markers) preset in the digital image;
comparing the characteristic value (marker) obtained by identification with the characteristic value of a template stored in advance, and calculating and positioning;
and determining corresponding reference points and outlines in the digital images, and calculating the position and the direction of the virtual frame according to the offset distance and the deflection angle in the multi-frame image process.
The ATR technique in fig. 4 is an Automatic Target Recognition (ATR) technique.
Fig. 5 is a schematic flow chart of motion detection and early warning provided in the second embodiment of the present application. As shown in fig. 5, the present solution mainly includes the following processes:
acquiring a virtual frame obtained by calculation in the previous flow, and the movement speed, direction and position of the virtual frame;
the method comprises the following steps of (1) acquiring the on-site real-time personnel motion conditions, mainly personnel positions and the like, by combining with other detection equipment (an infrared detector and the like);
calculating, from the parameters of the virtual frames, whether risks such as collision or interference exist among the virtual frames, and whether any virtual frame is too close to on-site personnel;
if a risk exists, notifying the control system through data communication so that the risk is handled by the control system (a minimal loop sketch of this process is given after the list).
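A minimal monitoring-loop skeleton tying these steps together is sketched below; every helper (get_virtual_frame_position, get_person_positions, notify_control_system) is a hypothetical placeholder for the components described above, and the threshold and period are example values, not an API defined by the patent.

```python
import math
import time

def monitoring_loop(get_virtual_frame_position, get_person_positions,
                    notify_control_system, safe_dist=0.5, period=0.1):
    """Periodically combine the virtual-frame position from the image branch
    with person positions from the detectors (e.g. infrared), and notify the
    control system over the data link when the distance falls below safe_dist."""
    while True:
        marker = get_virtual_frame_position()    # (x, y) of the virtual frame
        for person in get_person_positions():    # [(x, y), ...] of on-site staff
            if math.dist(marker, person) < safe_dist:
                notify_control_system("collision risk", marker, person)
        time.sleep(period)
```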
According to the scheme, the situation of a complex site can be detected in real time by means of the high-speed computing power of the computer, the emergency situation can be timely and effectively processed, the safety factor of the performance site is improved, and the safety problem possibly caused by negligence or omission of personnel is avoided.
EXAMPLE III
Fig. 6 is a schematic structural diagram of a safety monitoring device according to a third embodiment of the present application. As shown in fig. 6, the security monitoring apparatus includes:
the first motion information identification module 610 is configured to acquire at least two frames of images of a monitored scene, and determine motion information of a marker in the at least two frames of images;
the second motion information identification module 620 is configured to obtain motion information of a person in a monitored scene through the detection device;
and the safety monitoring module 630 is configured to determine whether a prompt condition for safety monitoring is met according to the motion information of the marker and the motion information of the person, and if so, generate a prompt message for safety monitoring.
In this embodiment, optionally, the first motion information identifying module includes:
the video data acquisition unit is used for acquiring video data of a monitored scene through a camera;
the position information identification unit is used for selecting at least two frames of images from the video data and carrying out marker identification on the at least two frames of images so as to obtain the position information of the marker in the at least two frames of images;
and the motion information determining unit is used for determining the motion information of the marker according to the position information of the marker in the at least two frames of images.
In this embodiment, optionally, the location information identifying unit includes:
the characteristic value identification subunit is used for determining the characteristic value of each image by adopting a mode identification technology; wherein the pattern recognition techniques include template matching and/or edge detection;
and the position information determining subunit is used for comparing the characteristic value with a pre-stored template characteristic value to determine the position information of the marker in each image.
In this embodiment, optionally, the motion information determining unit is specifically configured to:
and determining the offset distance and the offset angle of the marker in the at least two frames of images according to the position of the reference point and the contour form of the marker in the at least two frames of images.
In this embodiment, optionally, the apparatus further includes a display information display module, where the display information display module includes:
the motion track determining unit is used for determining the virtual frame of the marker and generating the motion track of the virtual frame of the marker according to the motion information of the marker;
the display information generating unit is used for combining the motion trail of the virtual frame with the video of the monitoring environment to generate augmented reality display information;
and the display unit is used for displaying the augmented reality display information in preset display equipment.
In this embodiment, optionally, the display information display module further includes:
the motion trail parameter extraction unit is used for extracting the motion trail parameters of the virtual frame;
correspondingly, the safety monitoring module is specifically configured to:
determining, according to the motion trajectory parameters of the virtual frame and the motion information of the personnel, whether the distance between the motion trajectory of the virtual frame and the motion trajectory of the personnel is smaller than a set distance;
and if the distance is less than the set distance, determining that the prompt condition of safety monitoring is met.
In this embodiment, optionally, the monitoring scene includes a performance stage; the detection device comprises an infrared detector.
The product can execute the method provided by the embodiment of the application, and has the corresponding functional modules and beneficial effects of the execution method.
Example four
A fourth embodiment of the present application further provides a storage medium containing computer-executable instructions, which when executed by a computer processor, perform a security monitoring method, the method including:
acquiring at least two frames of images of a monitored scene, and determining motion information of markers in the at least two frames of images;
acquiring motion information of personnel in a monitoring scene through detection equipment;
and determining whether the prompt conditions of safety monitoring are met or not according to the motion information of the marker and the motion information of the personnel, and if so, generating the prompt information of safety monitoring.
A storage medium refers to any of various types of memory devices or storage devices. The term "storage medium" is intended to include: installation media such as CD-ROM, floppy disk, or tape devices; computer system memory or random access memory such as DRAM, DDR RAM, SRAM, EDO RAM, Rambus RAM, etc.; non-volatile memory such as flash memory or magnetic media (e.g., a hard disk or optical storage); and registers or other similar types of memory elements, etc. The storage medium may also include other types of memory or combinations thereof. In addition, the storage medium may be located in the computer system in which the program is executed, or may be located in a different, second computer system connected to the first computer system through a network (such as the Internet). The second computer system may provide the program instructions to the computer for execution. The term "storage medium" may include two or more storage media that may reside in different locations (e.g., in different computer systems connected by a network). The storage medium may store program instructions (e.g., embodied as a computer program) that are executable by one or more processors.
Of course, the storage medium provided in the embodiments of the present application and containing computer-executable instructions is not limited to the security monitoring operation described above, and may also perform related operations in the security monitoring method provided in any embodiment of the present application.
EXAMPLE five
An embodiment of the present invention provides an electronic device, where the security monitoring apparatus provided in the embodiment of the present invention may be integrated into the electronic device, and the electronic device may be configured in a system, or may be a device that performs part or all of functions in the system. Fig. 7 is a schematic structural diagram of an electronic device according to a fifth embodiment of the present application. As shown in fig. 7, the present embodiment provides an electronic device 700, which includes: one or more processors 720; the storage device 710 is configured to store one or more programs, and when the one or more programs are executed by the one or more processors 720, the one or more processors 720 implement the security monitoring method provided in the embodiment of the present application, the method includes:
acquiring at least two frames of images of a monitored scene, and determining motion information of markers in the at least two frames of images;
acquiring motion information of personnel in a monitoring scene through detection equipment;
and determining whether the prompt conditions of safety monitoring are met or not according to the motion information of the marker and the motion information of the personnel, and if so, generating the prompt information of safety monitoring.
The electronic device 700 shown in fig. 7 is only an example, and should not bring any limitation to the functions and the scope of use of the embodiments of the present application.
As shown in fig. 7, the electronic device 700 includes a processor 720, a storage 710, an input 730, and an output 740; the number of the processors 720 in the electronic device may be one or more, and one processor 720 is taken as an example in fig. 7; the processor 720, the storage device 710, the input device 730, and the output device 740 in the electronic apparatus may be connected by a bus or other means, and are exemplified by a bus 750 in fig. 7.
The storage device 710 is a computer-readable storage medium, and can be used to store software programs, computer-executable programs, and module units, such as program instructions corresponding to the security monitoring method in the embodiment of the present application.
The storage device 710 may mainly include a storage program area and a storage data area, wherein the storage program area may store an operating system, an application program required for at least one function; the storage data area may store data created according to the use of the terminal, and the like. Further, the storage 710 may include high speed random access memory and may also include non-volatile memory, such as at least one magnetic disk storage device, flash memory device, or other non-volatile solid state storage device. In some examples, storage 710 may further include memory located remotely from processor 720, which may be connected via a network. Examples of such networks include, but are not limited to, the internet, intranets, local area networks, mobile communication networks, and combinations thereof.
The input device 730 may be used to receive input numbers, character information, or voice information, and to generate key signal inputs related to user settings and function control of the electronic apparatus. The output device 740 may include a display screen, a speaker, and other electronic devices.
The electronic equipment provided by the embodiment of the application can adopt means such as an image recognition technology and collision calculation to predict and prompt the possible collision or potential safety hazard in a monitoring scene, so that the safety in the stage performance process is improved.
The security monitoring device, the medium and the electronic device provided in the above embodiments may execute the security monitoring method provided in any embodiment of the present application, and have corresponding functional modules and beneficial effects for executing the method. For technical details that are not described in detail in the above embodiments, reference may be made to a security monitoring method provided in any embodiment of the present application.
It is to be noted that the foregoing is only illustrative of the preferred embodiments of the present application and the technical principles employed. It will be understood by those skilled in the art that the present application is not limited to the particular embodiments described herein, but is capable of various obvious changes, rearrangements and substitutions as will now become apparent to those skilled in the art without departing from the scope of the application. Therefore, although the present application has been described in more detail with reference to the above embodiments, the present application is not limited to the above embodiments, and may include other equivalent embodiments without departing from the spirit of the present application, and the scope of the present application is determined by the scope of the appended claims.

Claims (10)

1. A security monitoring method, comprising:
acquiring at least two frames of images of a monitored scene, and determining motion information of markers in the at least two frames of images;
acquiring motion information of personnel in a monitoring scene through detection equipment;
and determining whether the prompt conditions of safety monitoring are met or not according to the motion information of the marker and the motion information of the personnel, and if so, generating the prompt information of safety monitoring.
2. The method of claim 1, wherein acquiring at least two images of a monitored scene, determining motion information of markers in the at least two images, comprises:
acquiring video data of a monitored scene through a camera;
selecting at least two frames of images from the video data, and carrying out marker identification on the at least two frames of images to obtain position information of a marker in the at least two frames of images;
and determining the motion information of the marker according to the position information of the marker in the at least two frames of images.
3. The method according to claim 2, wherein performing marker recognition on the at least two images to obtain position information of a marker in the at least two images comprises:
determining a characteristic value of each image by adopting a pattern recognition technology; wherein the pattern recognition techniques include template matching and/or edge detection;
and comparing the characteristic value with a pre-stored template characteristic value to determine the position information of the marker in each image.
4. The method of claim 3, wherein determining motion information of the marker based on the position information of the marker in the at least two images comprises:
and determining the offset distance and the offset angle of the marker in the at least two frames of images according to the position of the reference point and the contour form of the marker in the at least two frames of images.
5. The method of claim 2, wherein after determining motion information of a marker based on position information of the marker in the at least two images, the method further comprises:
determining a virtual frame of the marker, and generating a virtual frame motion track of the marker according to the motion information of the marker;
combining the motion trail of the virtual frame with the video of the monitoring environment to generate augmented reality display information;
and displaying the augmented reality display information in preset display equipment.
6. The method of claim 5, wherein after generating the virtual frame motion profile of the marker, the method further comprises:
extracting motion trail parameters of the virtual frame;
correspondingly, determining whether the prompt condition for safety monitoring is met according to the motion information of the marker and the motion information of the personnel includes:
determining, according to the motion trajectory parameters of the virtual frame and the motion information of the personnel, whether the distance between the motion trajectory of the virtual frame and the motion trajectory of the personnel is smaller than a set distance;
and if the distance is less than the set distance, determining that the prompt condition of safety monitoring is met.
7. The method of claim 1, wherein the monitoring scene comprises a performance stage; the detection device comprises an infrared detector.
8. A security monitoring device, comprising:
the first motion information identification module is used for acquiring at least two frames of images of a monitored scene and determining motion information of a marker in the at least two frames of images;
the second motion information identification module is used for acquiring motion information of personnel in the monitoring scene through the detection equipment;
and the safety monitoring module is used for determining whether the prompt conditions of safety monitoring are met or not according to the motion information of the marker and the motion information of the personnel, and if so, generating the prompt information of safety monitoring.
9. A computer-readable storage medium, on which a computer program is stored, which, when being executed by a processor, carries out the security monitoring method according to any one of claims 1 to 7.
10. An electronic device comprising a memory, a processor and a computer program stored on the memory and executable on the processor, characterized in that the processor implements the security monitoring method according to any of claims 1-7 when executing the computer program.
CN202110520826.0A 2021-05-13 2021-05-13 Safety monitoring method, device, medium and electronic equipment Pending CN113239802A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110520826.0A CN113239802A (en) 2021-05-13 2021-05-13 Safety monitoring method, device, medium and electronic equipment

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202110520826.0A CN113239802A (en) 2021-05-13 2021-05-13 Safety monitoring method, device, medium and electronic equipment

Publications (1)

Publication Number Publication Date
CN113239802A 2021-08-10

Family

ID=77133999

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110520826.0A Pending CN113239802A (en) 2021-05-13 2021-05-13 Safety monitoring method, device, medium and electronic equipment

Country Status (1)

Country Link
CN (1) CN113239802A (en)

Citations (15)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102323822A (en) * 2011-05-09 2012-01-18 无锡引域智能机器人有限公司 Method for preventing industrial robot from colliding with worker
CN103093427A (en) * 2013-01-15 2013-05-08 信帧电子技术(北京)有限公司 Monitoring method and monitoring system of personnel stay
CN105956232A (en) * 2016-04-20 2016-09-21 国网电力科学研究院武汉南瑞有限责任公司 Transformer station three-dimensional real scene simulation system and implementation method
CN106454282A (en) * 2016-12-09 2017-02-22 南京创维信息技术研究院有限公司 Security and protection monitoring method, apparatus and system
CN106875081A (en) * 2016-12-22 2017-06-20 国网浙江省电力公司杭州供电公司 A kind of enhancing virtual reality method for electricity substation
CN108583432A (en) * 2018-07-05 2018-09-28 广东机电职业技术学院 A kind of intelligent pillar A blind prior-warning device and method based on image recognition technology
CN109145883A (en) * 2018-10-10 2019-01-04 百度在线网络技术(北京)有限公司 Method for safety monitoring, device, terminal and computer readable storage medium
CN109376383A (en) * 2018-09-14 2019-02-22 长安大学 A kind of explosive view generation method based on collision detection
CN110255380A (en) * 2019-06-25 2019-09-20 广州供电局有限公司 Crane operation method and apparatus
CN110253570A (en) * 2019-05-27 2019-09-20 浙江工业大学 The industrial machinery arm man-machine safety system of view-based access control model
CN110796740A (en) * 2019-10-30 2020-02-14 佛山市艾温特智能科技有限公司 Security protection method, system and readable storage medium based on AR game
CN110880250A (en) * 2018-09-05 2020-03-13 奥迪股份公司 Danger early warning system and method
CN111414686A (en) * 2020-03-18 2020-07-14 北京北特圣迪科技发展有限公司 Monitoring and early warning system for operating risks of theater mechanical equipment
CN112016414A (en) * 2020-08-14 2020-12-01 熵康(深圳)科技有限公司 Method and device for detecting high-altitude parabolic event and intelligent floor monitoring system
CN112055169A (en) * 2020-08-05 2020-12-08 浙江大丰实业股份有限公司 Online performance equipment state monitoring system

Similar Documents

Publication Publication Date Title
JP6043856B2 (en) Head pose estimation using RGBD camera
JP5740884B2 (en) AR navigation for repeated shooting and system, method and program for difference extraction
WO2017114508A1 (en) Method and device for three-dimensional reconstruction-based interactive calibration in three-dimensional surveillance system
US9373174B2 (en) Cloud based video detection and tracking system
US11398049B2 (en) Object tracking device, object tracking method, and object tracking program
WO2023093217A1 (en) Data labeling method and apparatus, and computer device, storage medium and program
CN107710280B (en) Object visualization method
WO2006015236A3 (en) Audio-visual three-dimensional input/output
CN102938844A (en) Generating free viewpoint video through stereo imaging
KR20130051501A (en) Online reference generation and tracking for multi-user augmented reality
CN109934873B (en) Method, device and equipment for acquiring marked image
CN113313097B (en) Face recognition method, terminal and computer readable storage medium
US11989827B2 (en) Method, apparatus and system for generating a three-dimensional model of a scene
KR20120065063A (en) System and method for measuring flight information of a spheric object with a high-speed stereo camera
WO2016139868A1 (en) Image analysis device, image analysis method, and image analysis program
CN114140832A (en) Method and device for detecting pedestrian boundary crossing risk in well, electronic equipment and storage medium
US20230260207A1 (en) Shadow-based estimation of 3d lighting parameters from reference object and reference virtual viewpoint
JP2001148025A (en) Device and method for detecting position, and device and method for detecting plane posture
CN112073640B (en) Panoramic information acquisition pose acquisition method, device and system
CN113239802A (en) Safety monitoring method, device, medium and electronic equipment
KR20130062489A (en) Device for tracking object and method for operating the same
JP2008203991A (en) Image processor
CN107802468B (en) Blind guiding method and blind guiding system
JPH0635443A (en) Monitor device
TWI460683B (en) The way to track the immediate movement of the head

Legal Events

PB01: Publication
SE01: Entry into force of request for substantive examination
RJ01: Rejection of invention patent application after publication (application publication date: 20210810)