CN112989957A - Safety monitoring method and system suitable for equipment cabinet - Google Patents

Safety monitoring method and system suitable for equipment cabinet

Info

Publication number
CN112989957A
CN112989957A (application CN202110193401.3A)
Authority
CN
China
Prior art keywords
equipment cabinet
cabinet
equipment
information
video information
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202110193401.3A
Other languages
Chinese (zh)
Inventor
李斯
赵齐辉
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Dongpu Software Co Ltd
Original Assignee
Dongpu Software Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Dongpu Software Co Ltd filed Critical Dongpu Software Co Ltd
Priority to CN202110193401.3A
Publication of CN112989957A
Legal status: Pending

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00 Scenes; Scene-specific elements
    • G06V20/40 Scenes; Scene-specific elements in video content
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition
    • G06F18/20 Analysing
    • G06F18/21 Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F18/214 Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06Q INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q10/00 Administration; Management
    • G06Q10/08 Logistics, e.g. warehousing, loading or distribution; Inventory or stock management
    • G06Q10/087 Inventory or stock management, e.g. order filling, procurement or balancing against orders
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/20 Image preprocessing
    • G06V10/22 Image preprocessing by selection of a specific region containing or referencing a pattern; Locating or processing of specific regions to guide the detection or recognition
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00 Scenes; Scene-specific elements
    • G06V20/50 Context or environment of the image
    • G06V20/52 Surveillance or monitoring of activities, e.g. for recognising suspicious objects
    • G PHYSICS
    • G07 CHECKING-DEVICES
    • G07F COIN-FREED OR LIKE APPARATUS
    • G07F17/00 Coin-freed apparatus for hiring articles; Coin-freed facilities or services
    • G07F17/10 Coin-freed apparatus for hiring articles; Coin-freed facilities or services for means for safe-keeping of property, left temporarily, e.g. by fastening the property

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Business, Economics & Management (AREA)
  • Multimedia (AREA)
  • Data Mining & Analysis (AREA)
  • Economics (AREA)
  • Evolutionary Biology (AREA)
  • Development Economics (AREA)
  • General Engineering & Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Artificial Intelligence (AREA)
  • Accounting & Taxation (AREA)
  • Finance (AREA)
  • Evolutionary Computation (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Entrepreneurship & Innovation (AREA)
  • Human Resources & Organizations (AREA)
  • Marketing (AREA)
  • Operations Research (AREA)
  • Quality & Reliability (AREA)
  • Strategic Management (AREA)
  • Tourism & Hospitality (AREA)
  • General Business, Economics & Management (AREA)
  • Closed-Circuit Television Systems (AREA)

Abstract

The application provides a safety monitoring method and system suitable for an equipment cabinet, where the equipment cabinet comprises a cabinet body and a cabinet door mounted on the cabinet body. The safety monitoring method comprises the following steps: acquiring first video information of an area corresponding to the equipment cabinet in real time, wherein the first video information comprises at least partial image information of the cabinet body and at least partial image information of the cabinet door; and inputting the first video information into a recognition model and determining the state of the equipment cabinet, wherein the state of the equipment cabinet includes the cabinet door being closed or the cabinet door being open. The safety monitoring method can intelligently monitor the state of the equipment cabinet in real time and helps avoid potential safety hazards.

Description

Safety monitoring method and system suitable for equipment cabinet
Technical Field
The application relates to the technical field of logistics, in particular to a safety monitoring method and system suitable for an equipment cabinet.
Background
In recent years, the express delivery industry has developed rapidly, and logistics distribution has become an important field of high-speed development. In the few decades of the express industry's development, people's living and consumption habits have gradually changed; express delivery has become well integrated into people's work and life and has become part of both. Express delivery work adds the basic functions of modern logistics, such as warehousing, collection and payment, and information processing, to the four traditional links of marketing, distribution, transportation and delivery. The service objects of the express distribution center responsible for distribution are numerous production enterprises and commercial outlets (such as supermarkets and chain stores); the assembled goods are delivered to users in time according to their requirements, meeting production and consumption needs. Ensuring the rapid and safe operation of the distribution center plays an important role in express network circulation.
At present, express distribution center management mainly relies on manual supervision or camera monitoring devices: maintenance/monitoring personnel learn the conditions in the express distribution center by checking video images. In other words, in most cases the potential safety hazards in the express distribution center are monitored by people, so the identification precision is low and the real-time performance is poor. Based on this, it is necessary to improve the existing express distribution center management method.
Disclosure of Invention
The application aims to provide a safety monitoring method and a safety monitoring system suitable for an equipment cabinet, achieving real-time and accurate safety monitoring.
The purpose of the application is realized by adopting the following technical scheme:
In a first aspect, the present application provides a safety monitoring method applicable to an equipment cabinet, where the equipment cabinet includes a cabinet body and a cabinet door installed on the cabinet body. The safety monitoring method includes: acquiring first video information of an area corresponding to the equipment cabinet in real time, wherein the first video information comprises at least partial image information of the cabinet body and at least partial image information of the cabinet door; and inputting the first video information into a recognition model and determining the state of the equipment cabinet, wherein the state of the equipment cabinet includes the cabinet door being closed or the cabinet door being open. This technical scheme has the advantage that the recognition model obtains the state of the equipment cabinet from the image information, so long periods of manual monitoring by monitoring personnel can be avoided, the monitoring efficiency of the equipment cabinet is improved, the result is generated automatically, and the real-time performance is high; oversights caused by manual monitoring are avoided, improving the accuracy and security of equipment cabinet monitoring.
In some optional embodiments, the recognition model is a network model based on the CenterNet algorithm; inputting the first video information into the recognition model and determining the state of the equipment cabinet comprises: parsing the first video information into multiple frames of images; inputting the multiple frames of images into the network model based on the CenterNet algorithm and determining the center feature and the corner features of the equipment cabinet; and determining the state of the equipment cabinet according to the center feature and the corner features of the equipment cabinet. This technical scheme has the advantage that the recognition model adopts a network model based on the CenterNet algorithm, which has the ability to perceive the internal information of an object and higher detection precision for the target object; the target is determined by a combination of key points such as the center feature and the corner features, which, compared with using a single key point, can reduce or even eliminate false detection boxes and reduce the probability of falsely detecting the equipment cabinet state.
In some optional embodiments, the network model based on the CenterNet algorithm comprises a center pooling module configured to: determine the maximum value in the horizontal direction for the equipment cabinet according to at least one of the multiple frames of images; determine the maximum value in the vertical direction for the equipment cabinet according to at least one of the multiple frames of images; and determine the center feature of the equipment cabinet according to the maximum value in the horizontal direction and the maximum value in the vertical direction. This technical scheme has the advantage that the center pooling module extracts and adds the maximum values of the equipment cabinet's center point in the horizontal and vertical directions, thereby providing information beyond the position of the center point and giving the equipment cabinet strong semantic information that is easy to distinguish from other categories.
In some optional embodiments, the network model based on the CenterNet algorithm comprises a cascade corner pooling module for determining the top-left corner feature or the bottom-right corner feature of the equipment cabinet. This technical scheme has the advantage that determining the top-left or bottom-right corner feature of the equipment cabinet provides richer semantic information of the associated object for the corner features, so that the semantic information of the equipment cabinet can be acquired even if a corner point lies outside the equipment cabinet, reducing the difficulty of corner detection.
In some optional embodiments, the equipment cabinet is placed in an express distribution center, and the safety monitoring method further includes: generating operation judgment information according to the first video information, wherein the operation judgment information comprises whether there is an operation of taking an object from or storing an object in the equipment cabinet. This technical scheme has the advantage that, by obtaining the operation judgment information, it is possible to learn whether an operator stores equipment in the equipment cabinet or takes equipment out of it, providing more valuable information and making the safety monitoring more intelligent; at the same time, after a safety accident occurs, it provides more reference information for subsequently determining the person in charge.
In some optional embodiments, the safety monitoring method further comprises: generating prompt information when the state of the equipment cabinet is that the cabinet door is open and the operation judgment information indicates that there is neither an operation of taking an object from the equipment cabinet nor an operation of storing an object in it. This technical scheme has the advantage that whether prompt information is generated is determined jointly by the state of the equipment cabinet and the operation judgment information, so that false alarm information is not generated in the two normal situations where the cabinet door is open because of a fetch operation or a store operation; this judgment simulates the manual judgment process and avoids generating prompt information in a normal state.
In a second aspect, the present application provides a safety monitoring system suitable for an equipment cabinet, where the equipment cabinet includes a cabinet body and a cabinet door installed on the cabinet body. The safety monitoring system includes: a video acquisition device arranged toward the equipment cabinet and used for acquiring first video information of an area corresponding to the equipment cabinet in real time, wherein the first video information comprises at least partial cabinet body image information and at least partial cabinet door image information; and a processor connected with the video acquisition device and used for receiving the first video information and determining the state of the equipment cabinet according to the first video information, wherein the state of the equipment cabinet includes the cabinet door being closed or the cabinet door being open.
In some optional embodiments, determining the state of the equipment cabinet from the first video information comprises: parsing the first video information into multiple frames of images; determining the center feature and the corner features of the equipment cabinet from at least one of the multiple frames of images; and determining the state of the equipment cabinet according to the center feature and the corner features of the equipment cabinet.
In some optional embodiments, the processor is further configured to: generate operation judgment information according to the first video information, wherein the operation judgment information comprises whether there is an operation of taking an object from or storing an object in the equipment cabinet; and generate a prompt instruction when the state of the equipment cabinet is that the cabinet door is open and the operation judgment information indicates that there is neither a fetch operation nor a store operation.
In some optional embodiments, the equipment cabinet is placed in an express distribution center, and the safety monitoring system further includes an alarm device connected to the processor and configured to generate an alarm message in response to the prompt instruction.
In a third aspect, the present application provides an electronic device, where the electronic device includes a memory, a processor, and a hardware module for performing tasks; the memory stores a computer program, and when executing the computer program the processor implements: acquiring first video information of an area corresponding to the equipment cabinet, wherein the first video information comprises at least partial cabinet body image information and at least partial cabinet door image information; and determining the state of the equipment cabinet according to the first video information, wherein the state of the equipment cabinet includes the cabinet door being closed or the cabinet door being open. This technical scheme has the advantage that the state of the equipment cabinet is acquired automatically, so long periods of manual monitoring by monitoring personnel can be avoided, the monitoring efficiency of the equipment cabinet is improved, the result is generated automatically, and the real-time performance is high; oversights caused by manual monitoring are avoided, improving the accuracy and security of equipment cabinet monitoring.
In a fourth aspect, the present application provides a computer-readable storage medium storing a computer program which, when executed by a processor, implements the steps of any of the methods described above.
Drawings
The present application is further described below with reference to the drawings and examples.
Fig. 1 is a schematic flowchart of a safety monitoring method applied to an equipment cabinet according to an embodiment of the present application;
fig. 2 is a schematic flowchart of determining a state of an equipment cabinet according to an embodiment of the present disclosure;
fig. 3 is a schematic diagram of a SiLU function included in a recognition model according to an embodiment of the present application;
fig. 4 is a schematic structural diagram of a network model based on the CenterNet algorithm according to an embodiment of the present application;
fig. 4A is a schematic structural diagram of a network model based on the CenterNet algorithm according to an embodiment of the present application;
FIG. 5 is a schematic flow chart illustrating a process for determining a center feature according to an embodiment of the present application;
FIG. 6 is a schematic structural diagram of a center pooling module according to an embodiment of the present disclosure;
FIG. 7 is a schematic structural diagram of a cascade corner pooling module provided in an embodiment of the present application;
FIG. 8 is a schematic structural diagram of another cascade corner pooling module provided in an embodiment of the present application;
fig. 9 is a schematic flowchart of another safety monitoring method suitable for an equipment cabinet according to an embodiment of the present application;
fig. 9A is a schematic flowchart of another safety monitoring method suitable for an equipment cabinet according to an embodiment of the present application;
fig. 10 is a schematic structural diagram of a safety monitoring device suitable for an equipment cabinet according to an embodiment of the present application;
fig. 11 is a schematic structural diagram of an identification module according to an embodiment of the present application;
fig. 12 is a schematic structural diagram of an electronic device provided in an embodiment of the present application;
fig. 13 is a schematic structural diagram of a program product for implementing a safety monitoring method according to an embodiment of the present application;
fig. 14 is a schematic structural diagram of a safety monitoring system suitable for an equipment cabinet according to an embodiment of the present application;
fig. 15 is a schematic structural diagram of another safety monitoring system suitable for an equipment cabinet according to an embodiment of the present application;
fig. 16 is a schematic diagram of a display result provided in the embodiment of the present application.
Detailed Description
The present application is further described with reference to the accompanying drawings and the detailed description, and it should be noted that, in the present application, the embodiments or technical features described below may be arbitrarily combined to form a new embodiment without conflict.
Referring to fig. 1, an embodiment of the present application provides a safety monitoring method suitable for an equipment cabinet, where the equipment cabinet includes a cabinet body and a cabinet door mounted on the cabinet body, and the safety monitoring method includes steps S101 to S102.
Step S101: the method comprises the steps of acquiring first video information of a corresponding area of an equipment cabinet in real time, wherein the first video information comprises at least partial cabinet body image information and at least partial cabinet door image information. The equipment cabinet can be arranged in one or more places such as a supermarket, a home kitchen, a bank system, a dock or an express distribution center. The equipment cabinet can have a storage space for storing operating equipment, operating tools, power distribution equipment, control equipment, etc. The equipment cabinet body can be a cuboid, a cube, a cone or other polygonal bodies. The cabinet door can be arranged on the side surface or the top surface of the cabinet body, etc.
In a specific embodiment, the application scenario of an express distribution center is taken as an example. A number of devices or objects such as equipment cabinets (operating equipment cabinets), parcels to be distributed and operation consoles are usually placed in the express distribution center. Operators/sorting personnel manually sort the parcels to be distributed that are placed on the console, and during this work they often need to take equipment/tools out of the equipment cabinet or store used equipment in it. In this embodiment, a video acquisition device is provided in the express distribution center and can face the equipment cabinet, for example arranged directly facing the cabinet door, directly above or obliquely above the equipment cabinet, or even inside the equipment cabinet. The video acquisition device can acquire first video information of the area corresponding to the equipment cabinet, and the first video information comprises multiple frames of images distributed over time. Of course, the shooting range of the video acquisition device in this embodiment is not limited to the local area corresponding to the equipment cabinet, as long as it can capture part or all of the image information of the cabinet body and part or all of the image information of the cabinet door. For example, the shooting range of the video acquisition device can cover the areas corresponding to several objects such as parcels to be distributed and operation consoles.
In particular, the video acquisition device may include an image sensor, a camera, a video recorder, and the like. The video acquisition device can acquire panoramic video information, and multiple video acquisition devices can be distributed at different positions; correspondingly, the panoramic video information can be obtained by stitching videos collected from different directions or angles by the multiple video acquisition devices. Specifically, the video acquisition device can adopt a wide-angle camera with a field angle exceeding 180 degrees, which helps to fully acquire video information of the area corresponding to the equipment cabinet. Of course, the video acquisition device may also be a network camera or a local camera. For example, a network camera uses an RJ45 interface and pulls streams through GB28181; a local camera uses a USB interface or a CMOS Serial Interface (CSI), and video frames are read through file handles.
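As an illustration only, the following sketch shows one way the first video information could be pulled frame by frame with OpenCV. The RTSP address, the local device index and the sampling interval are assumptions, not values prescribed by this application; GB28181 streams are typically ingested through a streaming gateway that re-exposes them as RTSP.

```python
# Illustrative only: pulling frames from a network or local camera with OpenCV.
# The RTSP URL, device index and sampling interval below are placeholders.
import cv2

def open_capture(source="rtsp://192.168.1.64/stream1"):
    """Open a network stream (e.g. behind a GB28181-to-RTSP gateway) or a local USB/CSI camera."""
    cap = cv2.VideoCapture(source)          # pass an int (e.g. 0) for a local camera
    if not cap.isOpened():
        raise RuntimeError(f"cannot open video source: {source}")
    return cap

def frames(cap):
    """Yield frames one by one; each frame is a BGR image (numpy array)."""
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        yield frame

if __name__ == "__main__":
    cap = open_capture(0)                   # local camera for a quick test
    for i, frame in enumerate(frames(cap)):
        if i % 25 == 0:                     # sample roughly once per second at 25 fps
            print("got frame", i, frame.shape)
```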
Step S102: and inputting the first video information into the recognition model, and determining the state of the equipment cabinet, wherein the state of the equipment cabinet comprises the closing of a cabinet door or the opening of the cabinet door.
Specifically, inputting the first video information into the recognition model to determine the state of the equipment cabinet comprises: determining, with the recognition model, whether the feature point information of the equipment cabinet exists in the first video information; if the feature point information of the equipment cabinet in the recognition result of the first video information meets the set condition, judging that the equipment cabinet is in a closed state (the cabinet door is closed); and if it does not meet the set condition, judging that the equipment cabinet is in an open state (the cabinet door is open). The set condition may be, for example, the center position and the corner positions of the equipment cabinet in the closed state, acquired and stored in advance.
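A minimal sketch of this "set condition" check follows: the detected center and corner points are compared against reference positions recorded in advance for the closed cabinet. The reference coordinates and the tolerance below are placeholders, not values given in this application.

```python
# Sketch of the set-condition check: detected key points must stay near the
# pre-stored closed-state positions, otherwise the door is treated as open.
import math

REFERENCE_CLOSED = {             # pre-acquired key points of the closed cabinet (pixels)
    "center":       (640, 360),
    "top_left":     (420, 180),
    "bottom_right": (860, 540),
}
TOLERANCE = 15.0                 # maximum allowed drift in pixels (assumed value)

def door_closed(detected: dict) -> bool:
    """Return True only if every detected key point is close to its closed-state reference."""
    for name, ref in REFERENCE_CLOSED.items():
        if name not in detected:
            return False                     # missing key point -> treat as open
        dx = detected[name][0] - ref[0]
        dy = detected[name][1] - ref[1]
        if math.hypot(dx, dy) > TOLERANCE:
            return False
    return True
```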
The neural network model may be an R-CNN, Faster R-CNN, YOLO, SSD, or CenterNet network model. In a specific embodiment, the recognition model is a network model based on the CenterNet algorithm; the feature point information of the equipment cabinet contained in each frame of image can be determined through this network model, and the feature point information of the equipment cabinet can be one of, or a combination of, the center feature of the equipment cabinet and the corner features of the equipment cabinet. A PyTorch implementation of the CenterNet algorithm is chosen to facilitate later deployment.
In a specific implementation, referring to fig. 2, the step S102 may include steps S201 to S203.
Step S201: and resolving the first video information into a plurality of frames of images.
Step S202: inputting the multi-frame image into a network model based on a Centernet algorithm, and determining the central feature and the corner feature of the equipment cabinet.
Step S203: and determining the state of the equipment cabinet according to the central characteristics and the corner characteristics of the equipment cabinet.
In a specific embodiment, the process of obtaining the network model based on the CenterNet algorithm may include:
First, a sufficient number of training samples are made. Specifically, a data set with enough images of the cabinet door of the equipment cabinet in the open and closed states may be collected in advance.
Second, the samples are labeled to form a sample data set. Specifically, the objects present in the training samples are annotated with the LabelImg tool to form the sample data set. The data set needs to be converted into the format of the VOC2007 data set, with separate storage locations established: labels stores the label information; ImageSets stores the data after division into training, test and validation sets; JPEGImages stores the original images; and Annotations stores the annotation information.
Then, the sample data set is divided by script code, and the division result is applied to the initial network model for training to obtain the network model based on the CenterNet algorithm. Specifically, the trained model may be saved as a .pth file, and the division result is: the training set, validation set and test set account for 60%, 30% and 10% respectively.
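A possible script for this 60/30/10 split is sketched below. Directory names follow the VOC2007 layout mentioned above; the random seed and the file extension are assumptions.

```python
# Split the image IDs into training, validation and test lists (60/30/10),
# written into ImageSets/Main as in the VOC2007 layout described above.
import os
import random

def split_dataset(jpeg_dir="JPEGImages", out_dir="ImageSets/Main", seed=0):
    ids = [os.path.splitext(f)[0] for f in os.listdir(jpeg_dir) if f.endswith(".jpg")]
    random.Random(seed).shuffle(ids)
    n = len(ids)
    n_train, n_val = int(0.6 * n), int(0.3 * n)
    splits = {
        "train": ids[:n_train],
        "val":   ids[n_train:n_train + n_val],
        "test":  ids[n_train + n_val:],
    }
    os.makedirs(out_dir, exist_ok=True)
    for name, subset in splits.items():
        with open(os.path.join(out_dir, name + ".txt"), "w") as f:
            f.write("\n".join(subset))

if __name__ == "__main__":
    split_dataset()
```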
Specifically, the network model based on the CenterNet algorithm may include a Sigmoid-weighted Linear Unit (SiLU), whose function curve is shown in fig. 3, where the horizontal axis x represents the input feature and the vertical axis f(x) represents the output. In this embodiment, when the network model based on the CenterNet algorithm is used to train the deep network, a strong regularization scheme is adopted, and using the SiLU makes the learning process highly robust and stable.
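For reference, the SiLU referred to here is f(x) = x · sigmoid(x); a minimal PyTorch form is sketched below for illustration (recent PyTorch releases also provide torch.nn.SiLU directly).

```python
# The SiLU activation: f(x) = x * sigmoid(x).
import torch

def silu(x: torch.Tensor) -> torch.Tensor:
    return x * torch.sigmoid(x)

x = torch.linspace(-6.0, 6.0, 5)
print(silu(x))   # smooth, non-monotonic near zero, close to identity for large positive x
```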
In particular, the loss function of the network model based on the CenterNet algorithm, $L_{det}$, consists of three parts: the heatmap loss $L_K$, the loss of the target width/height prediction $L_{size}$, and the loss of the target center point offset $L_{off}$:

$$L_{det} = L_K + \lambda_{size} L_{size} + \lambda_{off} L_{off}, \qquad \lambda_{size} = 0.1,\ \lambda_{off} = 1.$$

$L_K$ is computed as follows; it is a rewritten focal loss, where $\alpha$ and $\beta$ are hyper-parameters used to balance hard samples and positive/negative samples, $N$ is the number of key points (positive samples) in the image and is used to normalize all positive focal loss to 1, the subscript $xyc$ of the summation runs over all coordinate points of all heatmaps ($c$ denotes the target class, with one heatmap per class), $\hat{Y}_{xyc}$ is the predicted value and $Y_{xyc}$ is the ground-truth value:

$$L_K = -\frac{1}{N}\sum_{xyc}\begin{cases}\left(1-\hat{Y}_{xyc}\right)^{\alpha}\log\left(\hat{Y}_{xyc}\right) & \text{if } Y_{xyc}=1\\[4pt] \left(1-Y_{xyc}\right)^{\beta}\left(\hat{Y}_{xyc}\right)^{\alpha}\log\left(1-\hat{Y}_{xyc}\right) & \text{otherwise}\end{cases}$$

(where the hyper-parameters are $\alpha = 2$ and $\beta = 4$).

$L_{off}$ is computed as follows; it only computes the offset loss of positive samples, where $\hat{O}_{\tilde{p}}$ is the predicted offset value, $p$ is the coordinate of the target center point in the picture, $R$ is the scaling factor, and $\tilde{p} = \lfloor p/R \rfloor$ is the approximate integer coordinate of the scaled center point:

$$L_{off} = \frac{1}{N}\sum_{p}\left|\hat{O}_{\tilde{p}} - \left(\frac{p}{R} - \tilde{p}\right)\right|$$

$L_{size}$ is computed as follows; it is likewise computed only over positive samples, where $\hat{S}_{p_k}$ is the predicted size and $s_k$ is the true size:

$$L_{size} = \frac{1}{N}\sum_{k=1}^{N}\left|\hat{S}_{p_k} - s_k\right|$$
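A hedged sketch of these three loss terms in PyTorch follows; tensor shapes and the masking convention over positive locations are assumptions made so the code runs as written, and the sketch is illustrative rather than the exact training code of this application.

```python
# Sketch of the three loss terms written out above.
import torch

def heatmap_focal_loss(pred, gt, alpha=2, beta=4):
    """L_K: pred and gt are heatmaps of shape (B, C, H, W), pred in (0, 1)."""
    pos = gt.eq(1).float()
    neg = 1.0 - pos
    pos_loss = pos * (1 - pred) ** alpha * torch.log(pred.clamp(min=1e-6))
    neg_loss = neg * (1 - gt) ** beta * pred ** alpha * torch.log((1 - pred).clamp(min=1e-6))
    num_pos = pos.sum().clamp(min=1)
    return -(pos_loss + neg_loss).sum() / num_pos

def masked_l1_loss(pred, target, mask):
    """Used for both L_size and L_off: plain L1 over positive locations only."""
    num_pos = mask.sum().clamp(min=1)
    return (mask * (pred - target).abs()).sum() / num_pos

def detection_loss(heat_p, heat_gt, size_p, size_gt, off_p, off_gt, pos_mask,
                   lambda_size=0.1, lambda_off=1.0):
    return (heatmap_focal_loss(heat_p, heat_gt)
            + lambda_size * masked_l1_loss(size_p, size_gt, pos_mask)
            + lambda_off * masked_l1_loss(off_p, off_gt, pos_mask))
```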
The difficulty of learning is increased by setting the loss function, and the model is forced to continuously learn more distinctive features, so that the inter-class distance is larger and the intra-class distance is smaller.
In a specific embodiment, the network model based on the CenterNet algorithm may simultaneously obtain the center feature and the corner features of the equipment cabinet, and determine the state of the equipment cabinet according to the center feature and the corner features. Therefore, the recognition model adopts a network model based on the CenterNet algorithm, which has the ability to perceive the internal information of an object and higher detection precision for the target object; a target is determined by a combination of key points such as the center feature and the corner features, which, compared with using a single key point, can reduce or even eliminate false detection boxes and reduce the probability of falsely detecting the equipment cabinet state.
Specifically, as shown in fig. 4, the network model based on the CenterNet algorithm may include a center pooling module 410 and a cascade corner pooling module 420, and a parsing module 430 may further be provided in the network model, connected to the center pooling module 410 and the cascade corner pooling module 420 respectively. The parsing module 430 is configured to parse the first video information into multiple frames of images, and its outputs serve as the inputs of the center pooling module 410 and the cascade corner pooling module 420; the center pooling module 410 is used to determine the center feature of the equipment cabinet from the multiple frames of images; the cascade corner pooling module 420 is used to determine the corner features of the equipment cabinet from the multiple frames of images. The center pooling module 410 and the cascade corner pooling module 420 may also be connected to a state determination module 440 for determining the state of the equipment cabinet based on the center feature and the corner features of the equipment cabinet.
Fig. 4A shows a network model based on the CenterNet algorithm used in an embodiment of the present application. It includes: a backbone network 450, a cascade corner pooling unit 451, an embeddings and offsets unit 452, a center pooling unit 453, and an offsets unit 454. The outputs of the backbone network 450 are simultaneously connected to the cascade corner pooling unit 451 and the center pooling unit 453; the cascade corner pooling operation is performed in the cascade corner pooling unit 451 to obtain the corner heatmaps, the center pooling operation is performed in the center pooling unit 453 to obtain the center heatmap, and the positions of the key points are predicted from the corner heatmaps and the center heatmap together.
Further, after the positions and categories of the corner points are obtained from the corner heatmaps, offsets are applied in the embeddings and offsets unit 452 to map the corner positions back to the corresponding positions in the input picture, and the corner points are then grouped through their embeddings to form a detection box. In this application, the ability to perceive the internal information of an object can be obtained while remaining one-stage and at low cost, and the network model based on the CenterNet algorithm only needs to pay attention to the center of the object, avoiding the situation where predicting a detection box requires attending to all the internal information of the object.
Furthermore, a central region is defined for each predicted box, and the offsets unit 454 performs an offsets operation to determine whether the central region of each target box contains a center point. If so, the box is kept, and the confidence of the box is the average of the confidences of the center point, the top-left corner point and the bottom-right corner point; if not, the box is removed. In this way, the network has the ability to perceive the internal information of the target region and can effectively remove wrong target boxes. Aiming at the problem that determining detection boxes by combining key points, without assistance from the internal information of the target region, leads to a large number of false detections, the embodiment of the application judges whether the central region of each target box contains a center point, so that the network has the ability to perceive the internal information of the target region and wrong target boxes can be effectively removed.
Considering that the scale of the central region affects the effect of removing wrong boxes: a central region that is too small may prevent many small-scale wrong target boxes from being removed, while a central region that is too large may prevent many large-scale wrong target boxes from being removed. In the embodiment of the application, the size of the central region is therefore made adaptive: a relatively small central region is defined when the scale of the prediction box is large, and a relatively large central region is defined when the scale of the prediction box is small.
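The application only states that the central region is adaptive; one concrete, scale-aware choice from the CenterNet literature is sketched below for illustration. The divisor n and the scale threshold are assumptions, not values specified here.

```python
# Illustrative scale-aware central region: a larger n shrinks the central region
# relative to the box, so large boxes use n = 5 and small ones n = 3 (assumed values).
def central_region(tlx, tly, brx, bry, scale_threshold=150):
    n = 5 if (brx - tlx) > scale_threshold else 3
    ctlx = ((n + 1) * tlx + (n - 1) * brx) / (2 * n)
    ctly = ((n + 1) * tly + (n - 1) * bry) / (2 * n)
    cbrx = ((n - 1) * tlx + (n + 1) * brx) / (2 * n)
    cbry = ((n - 1) * tly + (n + 1) * bry) / (2 * n)
    return ctlx, ctly, cbrx, cbry

def keep_box(center_point, box):
    """Keep a predicted box only if a detected center point falls inside its central region."""
    ctlx, ctly, cbrx, cbry = central_region(*box)
    x, y = center_point
    return ctlx <= x <= cbrx and ctly <= y <= cbry
```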
Typically, the center of an object does not necessarily contain strong semantic information that is easily distinguished from other classes. For example, a person's head contains strong semantic information that is easily distinguished from other classes, but the center point is often located in the middle of the person. The embodiment of the application enriches the center point feature by applying center pooling. As shown in FIG. 5, determining the center feature with the center pooling module 410 may include steps S501 to S503.
Step S501: and determining the maximum value of the horizontal direction of the equipment cabinet according to at least one of the multi-frame images.
Step S502: and determining the maximum value of the vertical direction of the equipment cabinet according to at least one of the multi-frame images.
Step S503: and determining the central characteristic of the equipment cabinet according to the maximum value in the horizontal direction and the maximum value in the vertical direction. In the embodiment of the application, the Center firing module extracts and adds the maximum values of the Center point of the equipment cabinet in the horizontal direction and the vertical direction, so that information except the position of the Center point of the equipment cabinet is provided, and strong semantic information which is easy to distinguish from other categories is provided for the equipment cabinet.
In one embodiment, the CenterNet key point triplet, i.e. the center point, the top-left corner point and the bottom-right corner point, may be obtained through the network model based on the CenterNet algorithm. On this basis, two further key points, the top-right corner point and the bottom-left corner point, are obtained through a network model based on the CornerNet algorithm, the state of the equipment cabinet is determined, and the position information of the four corner points of the equipment cabinet is obtained, so that the network has the ability to perceive the internal information of the object at a low cost, false detections can be effectively suppressed, and stability is improved.
Fig. 6 is a schematic diagram of a center pooling module 410 according to an embodiment of the present application, in which the center pooling module 410 is implemented by combining corner pooling in different directions; the center pooling extracts the maximum values of the center point in the horizontal and vertical directions and adds them together, thereby providing information beyond the position of the center point. In this embodiment, the center pooling module 410 includes a first branch and a second branch. The first branch of the center pooling module 410 includes a first 3 × 3 Conv-BN-ReLU unit 610, a first left pooling unit 611 and a first right pooling unit 612 connected in series, implementing a maximum-taking operation in the horizontal direction; the second branch of the center pooling module 410 includes a second 3 × 3 Conv-BN-ReLU unit 620, a first top pooling unit 621 and a first bottom pooling unit 622 connected in series, implementing a maximum-taking operation in the vertical direction. In this embodiment, input 1 of the first branch and input 2 of the second branch may be provided simultaneously, both being the multiple frames of images parsed from the first video information; the output of the first branch and the output of the second branch are added, and the result is taken as the center point feature.
In this embodiment, the maximum value in the horizontal direction through the center is obtained by connecting the first left pooling unit 611 and the first right pooling unit 612 in series, the maximum value in the vertical direction through the center is obtained by connecting the first top pooling unit 621 and the first bottom pooling unit 622 in series, and the two maxima in the horizontal and vertical directions are then added, so as to provide information beyond the position of the center. This operation gives the center point the opportunity to obtain semantic information that is more easily distinguished from other categories.
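A minimal functional sketch of this maximum-taking operation is given below. Serial left and right pooling is equivalent to taking the maximum over each whole row, and serial top and bottom pooling to taking the maximum over each whole column; in the module of Fig. 6 the two inputs come from separate 3 × 3 Conv-BN-ReLU branches.

```python
# Sketch of center pooling: row-wise maximum from one branch plus column-wise
# maximum from the other, for feature maps of shape (B, C, H, W).
import torch

def center_pooling(branch1_feat: torch.Tensor, branch2_feat: torch.Tensor) -> torch.Tensor:
    # serial left + right pooling == maximum over the whole row at every position
    horizontal_max = branch1_feat.max(dim=3, keepdim=True).values.expand_as(branch1_feat)
    # serial top + bottom pooling == maximum over the whole column at every position
    vertical_max = branch2_feat.max(dim=2, keepdim=True).values.expand_as(branch2_feat)
    return horizontal_max + vertical_max
```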
Generally, corner points are located outside the object, and their positions do not contain semantic information of the associated object, which makes corner detection difficult. In the embodiment of the present application, the cascade corner pooling module 420 is arranged to predict the top-left corner point and the bottom-right corner point of the equipment cabinet through a combination of corner pooling in different directions. The cascade corner pooling module 420 may first extract the maximum value along the boundary of the equipment cabinet, then continue to extract the maximum value inward from the location of the boundary maximum, and keep the top-left corner point, the bottom-right corner point and the maximum point extending inward from the boundary maximum, thereby providing richer semantic information of the associated object for the corner features.
Fig. 7 is a schematic diagram of a cascade corner pooling module 420 according to an embodiment of the present application, which includes: a first branch formed by a third 3 × 3 Conv-BN-ReLU unit 710 and a second left pooling unit 711; and a second branch comprising a fourth 3 × 3 Conv-BN-ReLU unit 720. After the two branches are added, a first 3 × 3 Conv-BN unit 730 and a second top pooling unit 740 are cascaded in turn; through this operation, the detection of the top-left corner point of the equipment cabinet can be realized.
Fig. 8 is a schematic diagram of a cascade corner pooling module 420 according to another embodiment of the present application, which includes: a first branch formed by a fifth 3 × 3 Conv-BN-ReLU unit 810 and a second right pooling unit 811; and a second branch comprising a sixth 3 × 3 Conv-BN-ReLU unit 820. After the two branches are added, a second 3 × 3 Conv-BN unit 830 and a second bottom pooling unit 840 are cascaded in turn; through this operation, the detection of the bottom-right corner point of the equipment cabinet can be realized.
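A sketch following the structure of Fig. 7 (top-left corner) is shown below; the Fig. 8 variant for the bottom-right corner is symmetric, using right and bottom pooling instead. The channel count, kernel size and padding are assumptions made only so the code runs as written.

```python
# Cascade corner pooling for the top-left corner, following Fig. 7:
# (Conv-BN-ReLU -> left pooling) + Conv-BN-ReLU, then Conv-BN, then top pooling.
import torch
import torch.nn as nn

def left_pool(x):   # each position sees the maximum of everything to its right
    return x.flip(3).cummax(dim=3).values.flip(3)

def top_pool(x):    # each position sees the maximum of everything below it
    return x.flip(2).cummax(dim=2).values.flip(2)

def conv_bn_relu(ch):
    return nn.Sequential(nn.Conv2d(ch, ch, 3, padding=1, bias=False),
                         nn.BatchNorm2d(ch), nn.ReLU(inplace=True))

class CascadeTopLeftCornerPooling(nn.Module):
    def __init__(self, ch=64):
        super().__init__()
        self.branch1 = conv_bn_relu(ch)        # 3x3 Conv-BN-ReLU -> left pooling
        self.branch2 = conv_bn_relu(ch)        # 3x3 Conv-BN-ReLU
        self.merge = nn.Sequential(nn.Conv2d(ch, ch, 3, padding=1, bias=False),
                                   nn.BatchNorm2d(ch))   # 3x3 Conv-BN after the addition
    def forward(self, x):
        merged = self.merge(left_pool(self.branch1(x)) + self.branch2(x))
        return top_pool(merged)                # final top pooling yields the corner feature map
```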
In the embodiment of the application, the first video information of the area corresponding to the equipment cabinet is acquired in real time and the state of the equipment cabinet is determined with the recognition model. Since the recognition model obtains the state of the equipment cabinet from the image information, long periods of manual monitoring by monitoring personnel are avoided, the monitoring efficiency of the equipment cabinet is improved, the result is generated automatically, and the real-time performance is high; in addition, oversights caused by manual monitoring are avoided, improving the accuracy and security of equipment cabinet monitoring.
In a specific embodiment, an application scenario in which an equipment cabinet is placed in an express distribution center is taken as an example for description. Referring to fig. 9, the safety monitoring method applied to the equipment cabinet includes steps S901 to S904.
Step S901: the method comprises the steps of acquiring first video information of a corresponding area of an equipment cabinet in real time, wherein the first video information comprises at least partial cabinet body image information and at least partial cabinet door image information.
Step S902: and inputting the first video information into the recognition model, and determining the state of the equipment cabinet, wherein the state of the equipment cabinet comprises the closing of the cabinet door or the opening of the cabinet door.
Step S903: and generating operation judgment information according to the first video information, wherein the operation judgment information comprises whether the object taking operation from the equipment cabinet or the object storing operation in the equipment cabinet exists. Specifically, the operation determination information may be determined as follows: and tracking target points of the operators and the handheld operation equipment by adopting a target tracking algorithm, and acquiring operation judgment information according to the relative positions of the operators, the handheld operation equipment and the equipment cabinet. Specifically, the target tracking algorithm may employ an ECO, C-COT, or KCF algorithm.
In this embodiment, step S903 is executed only when the state of the equipment cabinet is that the cabinet door is open, so as to reduce the amount of data processing. By acquiring the operation judgment information, it can be determined whether an operator stores equipment in the equipment cabinet or takes equipment out of it, which provides more valuable information and makes the safety monitoring more intelligent; at the same time, after a safety accident occurs, it provides more reference information for subsequently determining the person in charge.
Step S904: and determining whether prompt information is generated or not according to the state and operation judgment information of the equipment cabinet. Specifically, when the state of the equipment cabinet is that the cabinet door is opened, and the operation judgment information indicates that the object taking operation from the equipment cabinet does not exist and the object storing operation to the equipment cabinet does not exist, prompt information is generated. When the state of the equipment cabinet is that the cabinet door is opened and the operation judgment information indicates that the object taking operation from the equipment cabinet exists or the object storing operation to the equipment cabinet exists, no prompt information is generated, and the step S901 is returned to continue the real-time monitoring. And when the equipment cabinet is in the state that the cabinet door is closed, returning to the step S901, and continuing to monitor in real time.
In this embodiment of the application, whether to generate prompt information is determined jointly by the state of the equipment cabinet and the operation judgment information. This avoids generating wrong alarm information in the two normal situations where the cabinet door is open because of a fetch operation from the equipment cabinet or a store operation into the equipment cabinet; this judgment simulates the manual judgment process and avoids generating prompt information in a normal state.
Specifically, please refer to fig. 9A, which shows steps S910 to S970 of the safety monitoring method applicable to the equipment cabinet.
Step S910: and acquiring first video information of a corresponding area of the equipment cabinet in real time. The first video information comprises at least partial cabinet body image information and at least partial cabinet door image information.
Step S920: and inputting the first video information into the recognition model, and determining the state of the equipment cabinet. Wherein, the state of the equipment cabinet comprises that the cabinet door is closed or opened.
Step S930: and judging whether the cabinet door is closed or not. If the determination result is Y (at this time, the cabinet door is closed), the process returns to step S910 to continue monitoring. If the determination result is N (at this time, the cabinet door is not closed), step S940 is executed.
Step S940: and generating operation judgment information according to the first video information, wherein the operation judgment information comprises whether the object taking operation from the equipment cabinet or the object storing operation in the equipment cabinet exists.
Step S950: and judging whether the fetching operation from the equipment cabinet exists or not. If the determination result is Y (at this time, the cabinet door is opened, and the operator uses the equipment cabinet), the method returns to step S910 to continue monitoring. If the determination result is N, step S960 is performed. In one embodiment, the human body gesture model may be utilized to identify a human body target bounding box and a human body motion in the first video information. The human body target boundary frame is a minimum rectangular frame containing a human body target; calculating the area of the human body target boundary frame and the distance between the central point of the human body target boundary frame and the central point of the frame; filtering the human body targets which are not mainly observed through comparison with corresponding preset threshold values; and acquiring coordinates of a human body target boundary frame of an operator of the current frame image and coordinates of key points of human body actions. Optionally, the coordinates of the human body target bounding box include coordinates of an upper left corner of the human body bounding box and coordinates of a lower right corner of the human body bounding box. Further, the same method is adopted to obtain the coordinates of the human body target boundary frame of the operator and the coordinates of the key points of the human body actions of the previous frame of image, and the coordinates of the human body target boundary frame of the operator and the coordinates of the key points of the human body actions of the next frame of image. And determining the storage operation or the taking operation of an operator to the equipment cabinet according to the key point coordinates of the human body action in the previous frame image, the current frame image and the next frame image and the coordinates of the human body target boundary frame.
Step S960: and judging whether the operation of storing articles in the equipment cabinet exists or not. If the determination result is Y (at this time, the cabinet door is opened, and the operator uses the equipment cabinet), the method returns to step S910 to continue monitoring. If the determination result is N, step S970 is executed.
Step S970: and generating prompt information.
It should be noted that the above description of the monitoring method flow is provided for illustrative purposes only and is not intended to limit the scope of the present application. Many variations and modifications may be made to the teachings of the present application by those of ordinary skill in the art in light of the present disclosure. However, such changes and modifications do not depart from the scope of the present application. In some embodiments, a safety monitoring method flow applicable to an equipment cabinet may be accomplished with one or more additional operations not described and/or without one or more of the operations discussed. For example, step S910 and step S940 may be performed simultaneously, or the scheme of step S940 is performed before the scheme of step S910; for another example, step S950 and step S960 may be performed simultaneously, or the scheme of step S960 may be performed prior to the scheme of step S950.
Further, while the systems and methods disclosed herein are primarily directed to a courier distribution center, it should also be understood that this is merely one exemplary embodiment. The system and the method can be applied to other scenes with requirements on safety risk monitoring. For example, the system and method of the present application may be applied to container monitoring, transport container monitoring, elevator door monitoring, and other application scenarios.
In other embodiments, the safety monitoring method applied to the equipment cabinet may further include: acquiring second video information of the console in real time, wherein the second video information can be determined when the first video information of the corresponding area of the equipment cabinet is acquired in real time in the step S901; the second video information is input into the recognition model to determine the status of the table top. The condition of the table top includes the table top being clean or the table top having non-wrapped items present. In the embodiment, the state of the equipment cabinet and the state of the table top of the operating platform can be sent to a control room of the express distribution center, so that monitoring personnel can know the specific condition of the express distribution center in time; prompt information can be timely transmitted to the express distribution center, potential safety hazards can be timely found by operators, and safety of the express distribution center is improved.
Referring to fig. 10, an embodiment of the present application further provides a safety monitoring device 100 suitable for an equipment cabinet, and a specific implementation manner of the safety monitoring device 100 is consistent with the implementation manner and the achieved technical effect described in the embodiment of the foregoing method, and details are not repeated.
The safety monitoring device 100 includes: the acquisition module 110 is configured to acquire first video information of a corresponding area of the equipment cabinet in real time, where the first video information includes at least part of image information of the cabinet body and at least part of image information of the cabinet door; the identification module 120 is configured to determine a state of the equipment cabinet according to the first video information, where the state of the equipment cabinet includes that the cabinet door is closed or the cabinet door is opened.
Referring to fig. 11, in a specific implementation, the identification module 120 may include: an analyzing unit 121 configured to analyze the first video information into a plurality of frames of images; a center feature determination unit 122 for determining a center feature of the equipment cabinet from at least one of the plurality of frame images; a corner feature determination unit 123, configured to determine a corner feature of the equipment cabinet from at least one of the multiple frames of images; and a state determining unit 124, configured to determine a state of the equipment cabinet according to the central feature and the corner feature of the equipment cabinet.
Further, the apparatus may further include an operation information determining module 130 configured to generate operation judgment information according to the first video information, where the operation judgment information includes whether there is an operation of fetching articles from the equipment cabinet or an operation of storing articles in the equipment cabinet.
Further, the apparatus may further include a prompt instruction generating module 140, configured to determine whether to generate a prompt message according to the status and operation judgment information of the equipment cabinet. Specifically, when the state of the equipment cabinet is that the cabinet door is opened, and the operation judgment information indicates that there is no object taking operation from the equipment cabinet or there is no object storage operation to the equipment cabinet, the prompt instruction generating module 140 generates prompt information or a prompt instruction. When the state of the equipment cabinet is that the cabinet door is opened and the operation judgment information indicates that the object taking operation from the equipment cabinet exists or the object storing operation to the equipment cabinet exists, the prompt instruction generating module 140 does not generate prompt information or a prompt instruction. Similarly, when the equipment cabinet is in a state that the cabinet door is closed, the prompt instruction generation module 140 does not generate prompt information or a prompt instruction.
Referring to fig. 12, an embodiment of the present application further provides an electronic device 200, where the electronic device 200 includes at least one memory 210, at least one processor 220, and a bus 230 connecting different platform systems.
The memory 210 may include readable media in the form of volatile memory, such as Random Access Memory (RAM) 211 and/or cache memory 212, and may further include Read Only Memory (ROM) 213.
The memory 210 further stores a computer program, and the computer program can be executed by the processor 220, so that the processor 220 executes the steps of any one of the methods in the embodiments of the present application, and the specific implementation manner of the method is consistent with the implementation manner and the achieved technical effect described in the embodiments of the method, and some contents are not described again. Memory 210 may also include a program/utility 214 having a set (at least one) of program modules 215, such program modules 215 including, but not limited to: an operating system, one or more application programs, other program modules, and program data, each of which, or some combination thereof, may comprise an implementation of a network environment.
Accordingly, the processor 220 may execute the computer program described above, and may also execute the program/utility 214.
The bus 230 may represent one or more of several types of bus structures, including a memory bus or memory controller, a peripheral bus, an accelerated graphics port, a processor bus, or a local bus using any of a variety of bus architectures.
The electronic device 200 may also communicate with one or more external devices 240, such as a keyboard, pointing device, Bluetooth device, etc., and may also communicate with one or more devices capable of interacting with the electronic device 200, and/or with any devices (e.g., routers, modems, etc.) that enable the electronic device 200 to communicate with one or more other computing devices. Such communication may occur via an input/output (I/O) interface 250. Also, the electronic device 200 may communicate with one or more networks (e.g., a Local Area Network (LAN), a Wide Area Network (WAN), and/or a public network such as the Internet) via the network adapter 260. The network adapter 260 may communicate with other modules of the electronic device 200 via the bus 230. It should be appreciated that although not shown in the figures, other hardware and/or software modules may be used in conjunction with the electronic device 200, including but not limited to: microcode, device drivers, redundant processors, external disk drive arrays, RAID systems, tape drives, and data backup storage platforms, to name a few.
The embodiments of the present application further provide a computer-readable storage medium for storing a computer program. When the computer program is executed, the steps of any one of the methods in the embodiments of the present application are implemented; the specific implementation is consistent with the implementations and technical effects described in the method embodiments, and the details are not repeated here. FIG. 13 shows a program product 300 for implementing the method provided by the embodiments, which may employ a portable compact disc read-only memory (CD-ROM), include program code, and run on a terminal device such as a personal computer. However, the program product 300 of the present application is not limited thereto. In this application, a readable storage medium may be any tangible medium that can contain or store a program for use by or in connection with an instruction execution system, apparatus, or device. The program product 300 may employ any combination of one or more readable media. The readable medium may be a readable signal medium or a readable storage medium. A readable storage medium may be, for example, but not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any combination of the foregoing. More specific examples (a non-exhaustive list) of the readable storage medium include: an electrical connection having one or more wires, a portable disk, a hard disk, a random access memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing.
A readable signal medium may include a propagated data signal with readable program code embodied therein, for example, in baseband or as part of a carrier wave. Such a propagated data signal may take many forms, including, but not limited to, electromagnetic, optical, or any suitable combination thereof. A readable signal medium may also be any readable medium other than a readable storage medium that can communicate, propagate, or transport the program for use by or in connection with the instruction execution system, apparatus, or device. Program code embodied on a readable medium may be transmitted using any appropriate medium, including but not limited to wireless, wireline, optical fiber cable, RF, etc., or any suitable combination of the foregoing. Program code for carrying out the operations of the present application may be written in any combination of one or more programming languages, including an object-oriented programming language such as Java or C++ and conventional procedural programming languages such as the "C" programming language or similar programming languages. The program code may execute entirely on the user's computing device, partly on the user's device as a stand-alone software package, partly on the user's computing device and partly on a remote computing device, or entirely on the remote computing device or server. In the case of a remote computing device, the remote computing device may be connected to the user's computing device through any kind of network, including a local area network (LAN) or a wide area network (WAN), or may be connected to an external computing device (for example, through the Internet using an Internet service provider).
The embodiment of the present application further provides a safety monitoring system suitable for an equipment cabinet. As shown in FIG. 14, the safety monitoring system includes a video capture device 310 and a processor 320 connected to the video capture device 310. The equipment cabinet may include a cabinet body and a cabinet door mounted on the cabinet body. The video capture device 310 is arranged towards the equipment cabinet and is used for acquiring first video information of an area corresponding to the equipment cabinet in real time, where the first video information includes part or all of the cabinet body image information and part or all of the cabinet door image information. The processor 320 is configured to receive the first video information and determine the state of the equipment cabinet according to the first video information, where the state of the equipment cabinet includes the cabinet door being closed or the cabinet door being open. The arrows in FIG. 14 indicate the direction of instruction or information propagation. Adjacent devices may be connected by any form of wired or wireless network, or any combination thereof. By way of example only, the video capture device 310 and the processor 320 may be connected by a network, and the network may include a cable network, a wired network, a fiber optic network, a telecommunications network, an intranet, the Internet, a local area network (LAN), a wide area network (WAN), a wireless local area network (WLAN), a metropolitan area network (MAN), a public switched telephone network (PSTN), a Bluetooth network, a ZigBee network, a near field communication (NFC) network, or the like, or any combination thereof. In some embodiments, the network may include one or more network access points. For example, the network may include wired or wireless network access points, such as base stations and/or Internet switching points.
In a specific embodiment, the video capture device 310 is disposed in the express distribution center and faces the equipment cabinet, and is configured to obtain the first video information of the area corresponding to the equipment cabinet in real time. The specific process can refer to the description of step S101.
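By way of a hedged example, real-time acquisition from the video capture device 310 could look like the following sketch (the RTSP URL and the frame callback are assumptions for illustration; any camera SDK that exposes frames would serve equally well):

import cv2

def stream_first_video_information(rtsp_url, on_frame):
    """Continuously read frames covering the equipment cabinet area."""
    capture = cv2.VideoCapture(rtsp_url)  # e.g. "rtsp://<camera-ip>/stream"
    try:
        while True:
            ok, frame = capture.read()
            if not ok:
                break  # stream interrupted; a production system would reconnect
            on_frame(frame)  # hand each frame to the processor 320
    finally:
        capture.release()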
The processor 320 may be a single server or a server group. The server group may be centralized or distributed (e.g., the processor 320 may be a distributed system). For example, the processor 320 may be local or remote. As another example, the processor 320 may be implemented on a cloud platform. By way of example only, the cloud platform may include a private cloud, a public cloud, a hybrid cloud, a community cloud, a distributed cloud, an internal cloud, a multi-tiered cloud, or the like, or any combination thereof. In particular, the processor 320 may include a processing engine. The processing engine may process information and/or data to perform one or more of the functions described herein. For example, the processing engine includes a network model based on the CenterNet algorithm; the processing engine may parse the first video information into a plurality of frames of images, determine the center feature and the corner feature of the equipment cabinet from the plurality of frames of images, and determine the state of the equipment cabinet according to the center feature and the corner feature of the equipment cabinet. More specifically, the processor 320 may include the aforementioned network model based on the CenterNet algorithm as shown in FIG. 4. The processing engine may include one or more processing engines (e.g., a single-chip processing engine or a multi-chip processing engine). By way of example only, the processing engine may include one or more hardware processors, such as a central processing unit (CPU), an application-specific integrated circuit (ASIC), an application-specific instruction-set processor (ASIP), a graphics processing unit (GPU), a physics processing unit (PPU), a digital signal processor (DSP), a field-programmable gate array (FPGA), a programmable logic device (PLD), a controller, a microcontroller unit, a reduced instruction set computer (RISC), a microprocessor, or the like, or any combination thereof.
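As a rough illustration of the center pooling idea used by the CenterNet-style processing engine (for each location, the maximum response along its row and along its column are taken and summed, which emphasizes object centers), a NumPy sketch is given below; the single-channel feature-map shape and the way the pooled map would feed a center heatmap are assumptions for illustration, not the exact network definition:

import numpy as np

def center_pooling(feature_map):
    """Center pooling: for each pixel, add the horizontal maximum of its row
    and the vertical maximum of its column."""
    horizontal_max = feature_map.max(axis=1, keepdims=True)  # max over each row, shape (H, 1)
    vertical_max = feature_map.max(axis=0, keepdims=True)    # max over each column, shape (1, W)
    return horizontal_max + vertical_max                      # broadcasts to shape (H, W)

# Toy example on a 4x4 single-channel feature map.
fmap = np.random.rand(4, 4).astype(np.float32)
pooled = center_pooling(fmap)
assert pooled.shape == fmap.shape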
In a specific embodiment, the processor 320 is further configured to: generate operation judgment information according to the first video information, where the operation judgment information includes whether there is an operation of taking articles from the equipment cabinet or an operation of storing articles into the equipment cabinet; and generate a prompt instruction when the state of the equipment cabinet is that the cabinet door is open and the operation judgment information indicates that there is neither an operation of taking articles from the equipment cabinet nor an operation of storing articles into the equipment cabinet. Specifically, the processor 320 determines whether to generate prompt information according to the state of the equipment cabinet and the operation judgment information: when the state of the equipment cabinet is that the cabinet door is open and the operation judgment information indicates that there is neither an article-taking operation nor an article-storing operation, prompt information is generated; when the state of the equipment cabinet is that the cabinet door is open and the operation judgment information indicates that there is an article-taking operation or an article-storing operation, no prompt information is generated, and the video capture device 310 continues to monitor in real time; when the state of the equipment cabinet is that the cabinet door is closed, the video capture device 310 continues to monitor in real time. Correspondingly, the safety monitoring system suitable for the equipment cabinet is provided with an alarm device 330 connected to the processor 320, which may be arranged in the express distribution center and is used for issuing prompt information according to the prompt instruction. The prompt information may include one or more of an audible alarm, a light alarm, or a pop-up prompt message.
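Putting the two decisions together at the system level, the processor 320 and the alarm device 330 could interact along the lines of this sketch (the engine and alarm_device interfaces are hypothetical placeholders introduced only for illustration):

def monitor_equipment_cabinet(frame_source, engine, alarm_device):
    """System-level loop: the processor 320 drives the alarm device 330."""
    for frame in frame_source:
        door_state = engine.cabinet_state(frame)           # "door_closed" / "door_open"
        taking, storing = engine.operation_judgment(frame) # booleans from the operation check
        if door_state == "door_open" and not (taking or storing):
            alarm_device.send_prompt("Equipment cabinet door left open")
        # Otherwise: door closed, or cabinet legitimately in use - keep monitoring.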
FIG. 15 is a schematic diagram of a safety monitoring system suitable for an equipment cabinet according to another embodiment of the present application. Compared with FIG. 14, the difference is that the safety monitoring system further includes a display 340 connected to the processor 320. The display 340 may be arranged in a control center of the express distribution center, located in a different room from the alarm device 330 and the video capture device 310, and may display the image frames of the first video information corresponding to the prompt information. Optionally, the display 340 includes one or a combination of a liquid crystal display (LCD), a light emitting diode (LED) based display, a flat panel display, a curved screen, a television device, a cathode ray tube (CRT), a touch screen, and the like.
The display 340 may also be accompanied by a keyboard, a mouse, a touch screen, a microphone, and the like, so as to enable interaction between the control center of the express distribution center and the device. FIG. 16 is a schematic diagram of a display interface of the display 340 according to an embodiment of the present application. The shooting range of the video capture device 310 covers the areas corresponding to a plurality of objects, such as parcels to be distributed, operation consoles, and equipment cabinets, where the lower right corner of the equipment cabinet is partially occluded. The processor 320 automatically determines that the cabinet door of the equipment cabinet is open and that no operator is taking articles from or depositing articles into the equipment cabinet. The display interface indicates that the equipment cabinet is currently in the open state and displays a warning icon, so as to remind the monitoring personnel or operators of the control center to handle the situation in time.
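One way the warning overlay on the display interface could be rendered is sketched below (OpenCV drawing calls; the banner position, text, and colors are purely illustrative assumptions and not a prescribed interface design):

import cv2

def render_warning_frame(frame):
    """Overlay a warning banner on the frame shown at the control center."""
    annotated = frame.copy()
    # Filled red banner in the top-left corner of the image frame.
    cv2.rectangle(annotated, (10, 10), (360, 60), (0, 0, 255), thickness=-1)
    cv2.putText(annotated, "CABINET DOOR OPEN - NO OPERATION", (20, 45),
                cv2.FONT_HERSHEY_SIMPLEX, 0.6, (255, 255, 255), 2)
    return annotated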
While the present application is described in terms of various aspects, including exemplary embodiments, the principles of the invention should not be limited to the disclosed embodiments; they are intended to cover various modifications, equivalents, and alternatives falling within the spirit and scope of the invention as defined by the appended claims.

Claims (10)

1. A safety monitoring method suitable for an equipment cabinet, wherein the equipment cabinet comprises a cabinet body and a cabinet door mounted on the cabinet body, and the safety monitoring method comprises the following steps:
acquiring first video information of an area corresponding to the equipment cabinet in real time, wherein the first video information comprises at least partial image information of the cabinet body and at least partial image information of the cabinet door; and
inputting the first video information into an identification model, and determining the state of the equipment cabinet, wherein the state of the equipment cabinet comprises the cabinet door being closed or the cabinet door being open.
2. The safety monitoring method according to claim 1, wherein the identification model is a network model based on a CenterNet algorithm, and inputting the first video information into the identification model and determining the state of the equipment cabinet comprises:
parsing the first video information into a plurality of frames of images;
inputting the plurality of frames of images into the network model based on the CenterNet algorithm, and determining a center feature and a corner feature of the equipment cabinet; and
determining the state of the equipment cabinet according to the center feature and the corner feature of the equipment cabinet.
3. The safety monitoring method according to claim 2, wherein the network model based on the CenterNet algorithm comprises a center pooling module configured to perform:
determining a maximum value in the horizontal direction for the equipment cabinet according to at least one of the plurality of frames of images;
determining a maximum value in the vertical direction for the equipment cabinet according to at least one of the plurality of frames of images; and
determining the center feature of the equipment cabinet according to the maximum value in the horizontal direction and the maximum value in the vertical direction.
4. The safety monitoring method according to claim 3, wherein the network model based on the CenterNet algorithm further comprises a cascade corner pooling module, and the cascade corner pooling module is configured to determine a top-left corner feature or a bottom-right corner feature of the equipment cabinet.
5. The safety monitoring method according to any one of claims 1 to 4, wherein the equipment cabinet is placed in an express distribution center, and the safety monitoring method further comprises:
generating operation judgment information according to the first video information, wherein the operation judgment information comprises whether there is an operation of taking articles from the equipment cabinet or an operation of storing articles into the equipment cabinet.
6. The safety monitoring method according to claim 5, further comprising:
generating prompt information when the state of the equipment cabinet is that the cabinet door is open and the operation judgment information indicates that there is neither an operation of taking articles from the equipment cabinet nor an operation of storing articles into the equipment cabinet.
7. A safety monitoring system suitable for an equipment cabinet, wherein the equipment cabinet comprises a cabinet body and a cabinet door mounted on the cabinet body, and the safety monitoring system comprises:
a video capture device arranged towards the equipment cabinet and configured to acquire first video information of an area corresponding to the equipment cabinet in real time, wherein the first video information comprises at least part of the cabinet body image information and at least part of the cabinet door image information; and
a processor connected to the video capture device and configured to receive the first video information and determine the state of the equipment cabinet according to the first video information, wherein the state of the equipment cabinet comprises the cabinet door being closed or the cabinet door being open.
8. The safety monitoring system of claim 7, wherein determining the status of the equipment cabinet from the first video information comprises:
parsing the first video information into a plurality of frames of images;
determining a center feature and a corner feature of the equipment cabinet from at least one of the plurality of frames of images; and
determining the state of the equipment cabinet according to the center feature and the corner feature of the equipment cabinet.
9. The safety monitoring system of claim 7, wherein the processor is further configured to perform:
generating operation judgment information according to the first video information, wherein the operation judgment information comprises whether there is an operation of taking articles from the equipment cabinet or an operation of storing articles into the equipment cabinet; and
generating a prompt instruction when the state of the equipment cabinet is that the cabinet door is open and the operation judgment information indicates that there is neither an operation of taking articles from the equipment cabinet nor an operation of storing articles into the equipment cabinet.
10. The safety monitoring system of claim 9, wherein the equipment cabinet is placed in an express distribution center, the safety monitoring system further comprising:
an alarm device connected to the processor and configured to generate alarm information in response to the prompt instruction.
CN202110193401.3A 2021-02-20 2021-02-20 Safety monitoring method and system suitable for equipment cabinet Pending CN112989957A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110193401.3A CN112989957A (en) 2021-02-20 2021-02-20 Safety monitoring method and system suitable for equipment cabinet

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202110193401.3A CN112989957A (en) 2021-02-20 2021-02-20 Safety monitoring method and system suitable for equipment cabinet

Publications (1)

Publication Number Publication Date
CN112989957A true CN112989957A (en) 2021-06-18

Family

ID=76394134

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110193401.3A Pending CN112989957A (en) 2021-02-20 2021-02-20 Safety monitoring method and system suitable for equipment cabinet

Country Status (1)

Country Link
CN (1) CN112989957A (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113642558A (en) * 2021-08-16 2021-11-12 云南电网有限责任公司电力科学研究院 X-ray image identification method and device for strain clamp crimping defects

Similar Documents

Publication Publication Date Title
CN107808139B (en) Real-time monitoring threat analysis method and system based on deep learning
TW202207077A (en) Text area positioning method and device
US20220301286A1 (en) Method and apparatus for identifying display scene, device and storage medium
CN108389230A (en) Refrigerator capacity automatic testing method, system, equipment and storage medium
CN112613569A (en) Image recognition method, and training method and device of image classification model
CN111538852B (en) Multimedia resource processing method, device, storage medium and equipment
CN112206541A (en) Game plug-in identification method and device, storage medium and computer equipment
CN112861998A (en) Neural network model construction method, safety channel abnormity monitoring method and system
CN112820071A (en) Behavior identification method and device
CN114494776A (en) Model training method, device, equipment and storage medium
CN111444802A (en) Face recognition method and device and intelligent terminal
CN114581732A (en) Image processing and model training method, device, equipment and storage medium
CN112989957A (en) Safety monitoring method and system suitable for equipment cabinet
CN112529836A (en) High-voltage line defect detection method and device, storage medium and electronic equipment
CN110516094A (en) De-weight method, device, electronic equipment and the storage medium of class interest point data
CN114255377A (en) Differential commodity detection and classification method for intelligent container
CN114417029A (en) Model training method and device, electronic equipment and storage medium
CN114511064A (en) Neural network model interpretation method and device, electronic equipment and storage medium
CN112925942A (en) Data searching method, device, equipment and storage medium
CN113378836A (en) Image recognition method, apparatus, device, medium, and program product
Cong et al. Towards enforcing social distancing regulations with occlusion-aware crowd detection
CN112801078A (en) Point of interest (POI) matching method and device, electronic equipment and storage medium
CN111814865A (en) Image identification method, device, equipment and storage medium
CN110532304A (en) Data processing method and device, computer readable storage medium and electronic equipment
CN109034067A (en) Commodity image reproduction detection method, system, equipment and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination