CN112689120A - Monitoring method and device - Google Patents

Monitoring method and device

Info

Publication number
CN112689120A
CN112689120A
Authority
CN
China
Prior art keywords
monitoring
target
detection
features
monitored
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201910944726.3A
Other languages
Chinese (zh)
Inventor
卢家霖 (Lu Jialin)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Guangzhou Huiruisitong Technology Co Ltd
Original Assignee
Guangzhou Huiruisitong Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Guangzhou Huiruisitong Technology Co Ltd filed Critical Guangzhou Huiruisitong Technology Co Ltd
Priority to CN201910944726.3A
Publication of CN112689120A

Landscapes

  • Alarm Systems (AREA)
  • Closed-Circuit Television Systems (AREA)
  • Image Analysis (AREA)

Abstract

The application provides a monitoring method and a monitoring device, belonging to the technical field of video monitoring. The method comprises the following steps: acquiring monitoring videos collected by a plurality of monitoring devices; identifying, for each monitoring video, the monitoring features of the detection objects it contains; determining, according to the monitoring features of the target monitoring object corresponding to each user terminal and the monitoring features of each detection object in the monitoring videos, whether there is a target detection object that satisfies a preset similarity condition with respect to the target monitoring object; and, if so, generating early warning information corresponding to the target detection object and sending it to the user terminal corresponding to the target monitoring object, so that the user terminal displays the early warning information. With this method and device, early warning information can be generated separately for the different monitoring requirements of different users, so that each user monitors according to its own early warning information, improving both user experience and monitoring effectiveness.

Description

Monitoring method and device
Technical Field
The present application relates to the field of video monitoring technologies, and in particular, to a monitoring method and apparatus.
Background
With the development of electronic information technology, video monitoring is applied more and more widely and provides great help in monitoring target personnel. In particular, monitoring equipment often needs to be installed in public areas to be monitored, where it collects monitoring videos and transmits them to users.
In practical applications, the monitoring video of one monitoring device may be provided to a plurality of users, yet the target monitoring object each user needs to monitor differs. For example, a personnel rescue organization needs to search the monitoring video for missing persons, while a public safety agency needs to search it for offenders. A monitoring method suitable for multiple users is therefore needed.
Disclosure of Invention
An object of the embodiments of the present application is to provide a monitoring method and apparatus, so that early warning information is generated separately for the different monitoring requirements of different users, each user can monitor according to its own early warning information, and user experience and monitoring effectiveness are improved. The specific technical scheme is as follows:
in a first aspect, a monitoring method is provided, the method including:
acquiring monitoring videos acquired by a plurality of monitoring devices;
identifying the monitoring characteristics of a detection object contained in each monitoring video;
determining whether a target detection object meeting a preset similarity condition compared with the target monitoring object exists according to the monitoring characteristics of the target monitoring object corresponding to each user terminal and the monitoring characteristics corresponding to each detection object in the monitoring video;
if the target detection object exists, generating early warning information corresponding to the target detection object, and sending the early warning information to a user terminal corresponding to the target monitoring object, so that the user terminal displays the early warning information.
Optionally, the monitoring features comprise at least face features and external features, and the external features comprise one or more of clothing features, accessory features and behavior features.
Optionally, the identifying, for each monitoring video, the monitoring feature of the detection object included in the monitoring video includes:
extracting a monitoring image contained in each monitoring video;
for each monitoring image, extracting a portrait picture of a detection object from the monitoring image through a preset object detection algorithm, and extracting a face picture of the detection object through a preset face detection algorithm;
and extracting the external features of the detection object from the human image picture, and extracting the human face features of the detection object from the human face picture.
Optionally, the determining, according to the monitoring characteristics of the target monitoring object corresponding to each user terminal and the monitoring characteristics corresponding to each detection object in the monitored video, whether there is a target detection object that satisfies a preset similarity condition compared with the target monitoring object includes:
acquiring monitoring characteristics of target monitoring objects corresponding to user terminals;
for each target monitoring object, calculating the similarity between the face features of all detection objects in the monitoring video and the face features of the target monitoring object, and comparing the external features of all detection objects with the external features of the target monitoring object;
and if the detection objects with the similarity larger than a preset similarity threshold or with the external characteristics the same as the external characteristics of the target monitoring object exist in the detection objects, determining the detection objects as the target detection objects meeting the preset similarity condition compared with the target monitoring object.
Optionally, the method further includes:
receiving a monitoring object setting instruction sent by a user terminal, wherein the monitoring object setting instruction comprises an object identifier of at least one object to be monitored selected from a monitoring object library;
and setting the object to be monitored as a target monitoring object corresponding to the user terminal according to the monitoring object setting instruction.
Optionally, the method further includes:
acquiring picture information of an object to be monitored;
extracting the monitoring characteristics of the object to be monitored from the picture information, and determining the object identification of the object to be monitored;
and storing the picture information of the object to be monitored, the monitoring characteristics of the object to be monitored and the object identification so as to establish a monitored object library.
In a second aspect, there is provided a monitoring device, the device comprising:
the first acquisition module is used for acquiring monitoring videos acquired by a plurality of monitoring devices;
the identification module is used for identifying the monitoring characteristics of the detection object contained in each monitoring video;
the first determining module is used for determining whether a target detection object meeting a preset similarity condition compared with the target monitoring object exists according to the monitoring characteristics of the target monitoring object corresponding to each user terminal and the monitoring characteristics corresponding to each detection object in the monitoring video;
and the sending module is used for generating early warning information corresponding to the target detection object and sending the early warning information to a user terminal corresponding to the target monitoring object if the target detection object exists, so that the user terminal displays the early warning information.
Optionally, the monitoring features comprise at least face features and external features, and the external features comprise one or more of clothing features, accessory features and behavior features.
Optionally, the identification module is specifically configured to:
extracting a monitoring image contained in each monitoring video;
for each monitoring image, extracting a portrait picture of a detection object from the monitoring image through a preset object detection algorithm, and extracting a face picture of the detection object through a preset face detection algorithm;
and extracting the external features of the detection object from the human image picture, and extracting the human face features of the detection object from the human face picture.
Optionally, the first determining module is specifically configured to:
acquiring monitoring characteristics of target monitoring objects corresponding to user terminals;
for each target monitoring object, calculating the similarity between the face features of all detection objects in the monitoring video and the face features of the target monitoring object, and comparing the external features of all detection objects with the external features of the target monitoring object;
and if the detection objects with the similarity larger than a preset similarity threshold or with the external characteristics the same as the external characteristics of the target monitoring object exist in the detection objects, determining the detection objects as the target detection objects meeting the preset similarity condition compared with the target monitoring object.
Optionally, the apparatus further comprises:
the monitoring system comprises a receiving module, a monitoring object setting module and a monitoring object selecting module, wherein the receiving module is used for receiving a monitoring object setting instruction sent by a user terminal, and the monitoring object setting instruction comprises an object identifier of at least one object to be monitored selected from a monitoring object library;
and the setting module is used for setting the object to be monitored as a target monitoring object corresponding to the user terminal according to the monitoring object setting instruction.
Optionally, the apparatus further comprises:
the second acquisition module is used for acquiring the picture information of the object to be monitored;
the second determining module is used for extracting the monitoring characteristics of the object to be monitored from the picture information and determining the object identifier of the object to be monitored;
and the storage module is used for storing the picture information of the object to be monitored, the monitoring characteristics of the object to be monitored and the object identification so as to establish a monitored object library.
In a third aspect, a server is provided, which includes a processor, a communication interface, a memory and a communication bus, wherein the processor, the communication interface and the memory complete communication with each other through the communication bus;
a memory for storing a computer program;
a processor for implementing the method steps of any of the first aspect when executing a program stored in the memory.
In a fourth aspect, a computer-readable storage medium is provided, having stored thereon a computer program which, when executed by a processor, implements the method steps of any one of the first aspect.
In a fifth aspect, there is provided a computer program product containing instructions which, when run on a computer, cause the computer to perform any of the monitoring methods described above.
The embodiment of the application has the following beneficial effects:
the embodiment of the application provides a monitoring method and a monitoring device, which can acquire monitoring videos acquired by a plurality of monitoring devices, identify monitoring characteristics of detection objects contained in each monitoring video, determine whether target detection objects meeting preset similarity conditions exist or not compared with the target monitoring objects according to the monitoring characteristics of the target monitoring objects corresponding to user terminals and the monitoring characteristics corresponding to the detection objects in the monitoring videos, and generate early warning information corresponding to the target detection objects if the target detection objects exist, and send the early warning information to the user terminals corresponding to the target monitoring objects so that the user terminals can display the early warning information. Based on the scheme, when one path of monitoring video is provided for a plurality of users for use, the early warning information corresponding to each user can be generated according to different monitoring requirements of different users, so that each user can monitor according to the early warning information of the user, and the user experience and the monitoring effect are improved.
Of course, not all advantages described above need to be achieved at the same time in the practice of any one product or method of the present application.
Drawings
In order to more clearly illustrate the embodiments of the present application or the technical solutions in the prior art, the drawings needed in the description of the embodiments or the prior art will be briefly introduced below. Obviously, those skilled in the art can obtain other drawings from these drawings without inventive effort.
Fig. 1 is a flowchart of a monitoring method according to an embodiment of the present application;
fig. 2 is a schematic diagram of device interaction of a monitoring method according to an embodiment of the present application;
fig. 3 is a schematic structural diagram of a monitoring device according to an embodiment of the present disclosure;
fig. 4 is a schematic structural diagram of another monitoring device provided in an embodiment of the present application;
fig. 5 is a schematic structural diagram of another monitoring device provided in an embodiment of the present application;
fig. 6 is a schematic structural diagram of a server according to an embodiment of the present application.
Detailed Description
The technical solutions in the embodiments of the present application will be described clearly and completely below with reference to the drawings in the embodiments. Obviously, the described embodiments are only a part of the embodiments of the present application, not all of them. All other embodiments obtained by those skilled in the art based on the embodiments of the present application without creative effort shall fall within the protection scope of the present application.
The embodiment of the application provides a monitoring method, which can be applied to a server, wherein the server can be a background server in a monitoring system. The server can be connected with each monitoring device and receives the monitoring video sent by each monitoring device. The server can also be connected with the monitoring terminal of the user so as to push the monitoring video to the monitoring terminal of the user.
A monitoring method provided in an embodiment of the present application will be described in detail below with reference to specific embodiments, as shown in fig. 1, the specific steps are as follows:
step 101, acquiring monitoring videos acquired by a plurality of monitoring devices.
In the embodiment of the application, a plurality of monitoring devices can be arranged in the area to be monitored, and different monitoring devices can monitor different monitoring areas respectively. After the monitoring equipment operates, the monitoring video of the monitoring area can be collected in real time, so that simultaneous monitoring of multiple paths of videos is realized. After the monitoring devices collect the monitoring videos, the monitoring videos can be sent to the server at the background, and the server can receive the monitoring videos sent by the monitoring devices.
Step 102, identifying the monitoring characteristics of the detection object contained in each monitoring video.
In the embodiment of the application, an algorithm for identifying the monitoring features of detection objects may be stored in the server in advance. The monitoring features comprise at least face features and external features, and the external features comprise one or more of clothing features (such as clothes color and style), accessory features (such as whether a hat or backpack is worn) and behavior features (such as whether preset actions occur). The external features may also include other features, which may be set by the user; the embodiments of the present application are not limited in this respect.
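As a sketch, the monitoring features described above could be grouped into a simple structure; the field names and types below are illustrative assumptions, not definitions from the patent:

```python
from dataclasses import dataclass, field

@dataclass
class MonitoringFeatures:
    # Face feature vector produced by a face feature extraction algorithm
    face: list
    # External features: clothing (e.g. colour/style), accessories
    # (e.g. {"hat": True, "backpack": False}), and observed preset actions
    clothing: str = ""
    accessories: dict = field(default_factory=dict)
    behaviors: list = field(default_factory=list)
```

The same structure can hold both the features extracted from a monitoring video and the features stored in the monitored object library, which simplifies the later matching step.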
After receiving the monitoring videos, the server can identify the detection objects contained in the monitoring videos and the monitoring characteristics of the detection objects according to the monitoring images contained in the monitoring videos and a preset identification algorithm for each monitoring video. For different types of monitoring features, different recognition algorithms can be adopted for recognition, and detailed description will be given later.
Optionally, the identification process may specifically include the following steps.
Step one, aiming at each monitoring video, extracting a monitoring image contained in the monitoring video within a preset time length.
In this embodiment of the application, the server may periodically calculate the display priority of each monitoring video. Accordingly, for each monitoring video, after receiving its monitoring images, the server may extract the monitoring images contained in the video within a preset time length and use them for this calculation. Once a display-priority calculation finishes, the calculation is performed again when the next period arrives, so that the display device can adjust the display mode of the monitoring video in time. The server can extract the monitoring images from the monitoring video through OpenCV.
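A minimal sketch of the extraction step: the index arithmetic below picks one frame per sampling interval over the preset time window. The function name and the one-frame-per-second default are assumptions; the patent does not fix a sampling rate.

```python
def sample_frame_indices(fps, duration_s, interval_s=1.0):
    """Indices of the frames to extract from a monitoring video:
    one frame per `interval_s` seconds over a window of `duration_s`
    seconds (the "preset time length")."""
    step = max(1, round(fps * interval_s))
    total = int(fps * duration_s)
    return list(range(0, total, step))

# With OpenCV (as the description suggests), each index i would then
# be read roughly as:
#   cap = cv2.VideoCapture(path)
#   cap.set(cv2.CAP_PROP_POS_FRAMES, i); ok, frame = cap.read()
```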
And secondly, extracting a portrait picture of the detected object from each monitoring image through a preset object detection algorithm and extracting a face picture of the detected object through a preset face detection algorithm aiming at each monitoring image.
In this embodiment of the application, after the server obtains the monitoring images of a certain monitoring video (which may be called a target monitoring video) within the preset time length, for each monitoring image it may identify the contour of a detection object contained in the image through a preset object detection algorithm, determine a candidate region frame corresponding to the detection object according to that contour, and then extract the image contained in the candidate region frame to obtain a portrait picture of the detection object, that is, a picture containing the whole detection object. The object detection algorithm may be any existing algorithm with portrait detection capability, such as the YOLOv3 framework or a deep learning neural network; the deep learning neural network may specifically be Faster R-CNN (a convolutional neural network that speeds up detection with a Region Proposal Network), R-CNN, and the like.
After the server obtains the portrait picture, the face picture of the detection object can be further extracted from the portrait picture through a preset face detection algorithm. The face detection algorithm may be any existing algorithm with face detection capability, such as the SSD (Single Shot MultiBox Detector) algorithm, a machine learning algorithm, or a deep learning neural network; the deep learning neural network may specifically be Faster R-CNN, and the like. Moreover, the server may allocate the same identifier to the portrait picture and the face picture belonging to the same detection object, and this identifier may serve as the object identifier of the detection object.
In addition, after the server extracts the human image picture and the human face picture, the pictures can be further screened. Specifically, for each picture, the server may determine whether the picture meets a preset definition condition and whether the confidence of the picture is greater than a preset threshold. And if the picture meets the preset definition condition and the confidence coefficient of the picture is greater than or equal to the preset threshold value, the picture is indicated to be an available picture, and the picture is stored. And if the picture does not meet the preset definition condition or the confidence coefficient of the picture is smaller than the preset threshold value, the picture is judged to be an unavailable picture, and the picture is discarded.
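The screening step above can be sketched as a simple predicate. The sharpness measure (variance of the Laplacian is a common choice for a definition check) and both threshold values are illustrative assumptions; the patent only requires "a preset definition condition" and "a preset threshold":

```python
def keep_picture(sharpness, confidence,
                 sharpness_min=100.0, confidence_min=0.8):
    """Return True if an extracted portrait/face picture is usable:
    it meets the definition (sharpness) condition AND the detector's
    confidence is at least the preset threshold; otherwise it is
    discarded as unavailable."""
    return sharpness >= sharpness_min and confidence >= confidence_min
```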
Optionally, in order to share the server pressure, an intelligent terminal (such as an intelligent box) for performing edge calculation may be further disposed in the monitoring system, and the intelligent terminal may be connected to the monitoring device and the server, respectively. In the scene, the processing in the first step and the second step can be completed by the intelligent terminal, and the intelligent terminal can report the screened portrait picture, the face picture and the object identifier to the server so as to enable the server to perform subsequent processing. The server can determine which path of the received portrait picture, the received face picture and the received object identifier belong to the monitoring video through an interface with the intelligent terminal or the identifier of each monitoring video sent by the intelligent terminal, so as to obtain the corresponding relation between the portrait picture, the face picture, the object identifier and the monitoring video.
And step three, extracting the external features of the detection object from the human image picture, and extracting the human face features of the detection object from the human face picture.
In the embodiment of the application, the server can extract the external features of the detection object from the portrait picture through a preset external feature extraction algorithm. The external feature extraction algorithm may be a ResNet50 network model, a machine learning algorithm, a deep learning neural network, and the like, and the deep learning neural network may be fast-RCNN, and the like, which is not limited in the embodiment of the present application.
The server can likewise extract the face features of the detection object from the face picture through a preset face feature extraction algorithm. The face feature extraction algorithm may be the MobileFaceNet algorithm, a machine learning algorithm, or a deep learning neural network; the deep learning neural network may specifically be Faster R-CNN, and the like, which is not limited in the embodiments of the present application.
Step 103, determining whether there is a target detection object meeting a preset similarity condition compared with the target monitoring object according to the monitoring characteristics of the target monitoring object corresponding to each user terminal and the monitoring characteristics corresponding to each detection object in the monitoring video.
In the embodiment of the present application, a monitoring object library may be stored in the server, and the monitoring object library may include a large amount of information of an object to be monitored, and specifically may include a correspondence between picture information, monitoring characteristics, and an object identifier of the object to be monitored. Each user can select a target monitoring object to be monitored from the monitoring object library through the user terminal, and the server stores the corresponding relation between each user terminal and the target monitoring object. The server may generate a target monitoring object set according to the target monitoring object corresponding to each user terminal, and may acquire the monitoring characteristics of each target monitoring object included in the target monitoring object set. The process of setting the target monitoring object by the user and the process of establishing the monitoring object library will be described in detail later.
For each monitoring video, after the server identifies the monitoring features of the detection objects contained in the monitoring video, the identified monitoring features can be matched with the monitoring features of each target monitoring object in the target monitoring object set, so that whether a target detection object meeting a preset similarity condition compared with the target monitoring object exists or not is determined.
Alternatively, the specific determination process may include the following steps.
Step one, acquiring monitoring characteristics of target monitoring objects corresponding to user terminals.
In this embodiment, the server may obtain, from the monitored object library, the monitoring characteristics of each target monitored object included in the target monitored object set.
And secondly, calculating the similarity between the face features of all the detection objects in the monitoring video and the face features of the target monitoring object aiming at each target monitoring object, and comparing the external features of all the detection objects with the external features of the target monitoring object.
In this embodiment of the application, after the server identifies the face features and external features of each detection object in the monitoring video, it can, for each target monitoring object, calculate the similarity between the face features of each detection object and the face features of the target monitoring object, and then judge whether the similarity is greater than the preset threshold. Any existing similarity measure may be applied here, such as cosine similarity or Euclidean distance; the embodiments of the present application are not limited in this respect. Similarly, for each identified external feature, the server may judge whether it is the same as the corresponding external feature of the target monitoring object.
And step three, if the detection objects with the similarity larger than a preset similarity threshold or with the external characteristics the same as those of the target monitoring object exist in the detection objects, determining the detection objects as the target detection objects meeting the preset similarity condition compared with the target monitoring object.
In the embodiment of the application, if the similarity between the face feature of a certain detection object and the face feature of the target monitoring object is greater than or equal to the preset threshold, or the external feature of the detection object is the same as the external feature of the target monitoring object, it may be determined that the detection object is a target detection object which satisfies the preset similarity condition compared with the target monitoring object.
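Steps two and three amount to the following check, sketched with cosine similarity (one of the measures the description names). The dictionary keys and the 0.75 threshold are assumptions for illustration:

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity between two face feature vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm

def matches_target(detection, target, threshold=0.75):
    """A detection object is a target detection object if its face
    similarity reaches the preset threshold OR its external features
    are the same as those of the target monitoring object."""
    face_ok = cosine_similarity(detection["face"], target["face"]) >= threshold
    external_ok = detection["external"] == target["external"]
    return face_ok or external_ok
```

Note the disjunction: either condition alone suffices, which matches the "or" in the determination step above.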
Therefore, for each target monitoring object, the server can determine whether a target detection object meeting a preset similarity condition exists in each monitoring video compared with the target monitoring object, so as to meet the monitoring and early warning requirements of each user.
And 104, if the target detection object exists, generating early warning information corresponding to the target detection object, and sending the early warning information to a user terminal corresponding to the target monitoring object so that the user terminal displays the early warning information.
In the embodiment of the application, for each target monitoring object, if the server determines that the target detection object corresponding to the target monitoring object exists, the early warning information corresponding to the target detection object may be generated. The early warning information may include picture information of the target detection object, such as a face picture and/or a portrait picture. In addition, the early warning information may further include other information, such as an object identifier corresponding to the target detection object, an identifier of the surveillance video including the target detection object, geographic location information of the monitoring device that collects the surveillance video including the target detection object, and the like, and the specific content may be set according to an actual requirement, which is not limited in the embodiment of the present application.
The server can send the generated early warning information to the user terminal corresponding to the target monitoring object, so that the user terminal displays the early warning information, and the user is prompted. Based on the scheme, when one path of monitoring video is provided for a plurality of users for use, the early warning information corresponding to each user can be generated according to different monitoring requirements of different users, so that each user can monitor according to the early warning information of the user, and the user experience and the monitoring effect are improved.
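The early warning information described above might be assembled as follows; all field names are illustrative assumptions, and the patent leaves the exact contents to be set according to actual requirements:

```python
def build_warning(detection, target, camera):
    """Assemble early warning information for a target detection object:
    its pictures plus context identifying where it was seen."""
    return {
        "target_id": target["object_id"],        # which monitored object matched
        "detection_id": detection["object_id"],
        "face_picture": detection.get("face_picture"),
        "portrait_picture": detection.get("portrait_picture"),
        "video_id": camera["video_id"],          # which monitoring video
        "camera_location": camera["location"],   # where the device is installed
    }
```

The server would then look up every user terminal bound to `target` and push this payload to each for display.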
Optionally, an embodiment of the present application further provides a processing procedure for setting a target monitoring object, which specifically includes: receiving a monitoring object setting instruction sent by a user terminal, wherein the monitoring object setting instruction comprises an object identifier of at least one object to be monitored selected from a monitoring object library; and setting the object to be monitored as a target monitoring object corresponding to the user terminal according to the monitoring object setting instruction.
In the embodiment of the application, the user terminal may obtain the object to be monitored (such as an object identifier, picture information, a monitoring feature, and the like of the monitored object) in the monitored object library from the server, and display the object to be monitored. The user can select the target monitoring object to be monitored in the displayed objects to be monitored. The user terminal may send a monitoring object setting instruction to the server, where the monitoring object setting instruction includes an object identifier of at least one object to be monitored selected by the user. The server can receive a monitoring object setting instruction sent by the user terminal, and then set an object to be monitored selected by a user as a target monitoring object corresponding to the user terminal according to the monitoring object setting instruction. In addition, the user may also set the monitoring characteristics of the target monitoring object that needs to be monitored this time (for example, select the monitoring face characteristics and/or the external characteristics, or set specific values of the external characteristics), and correspondingly, the server may receive the monitoring object setting instruction carrying the monitoring characteristics of the target monitoring object, and may store the monitoring characteristics that need to be monitored this time for the target monitoring object.
Optionally, the process of establishing the monitoring object library may include: acquiring picture information of an object to be monitored; extracting the monitoring characteristics of the object to be monitored from the picture information, and determining the object identification of the object to be monitored; and storing the picture information of the object to be monitored, the monitoring characteristics of the object to be monitored and the object identification so as to establish a monitored object library.
In the embodiment of the application, the server can acquire the picture information of the object to be monitored. The picture information of the object to be monitored may be uploaded by each user, may be obtained from other databases, or may be set by a technician. For each object to be monitored, the server may extract the monitoring feature of the object to be monitored from the picture information of the object to be monitored, and the specific processing procedure is similar to the processing procedure in step 102, and is not described herein again. The server may also determine an object identifier of the object to be monitored, for example, a name of the object to be monitored may be used as the object identifier of the object to be monitored, or an object identifier may be allocated to the object to be monitored. Then, the server may store the picture information of the object to be monitored, the monitoring characteristics of the object to be monitored, and the object identifier to establish a monitored object library.
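The library-building step above can be sketched as code. The feature extractor below is a stand-in assumption; the patent does not specify a concrete extraction algorithm, and the identifier scheme is likewise illustrative.

```python
# Sketch of establishing the monitored object library: extract a
# monitoring feature from each picture, determine an object identifier,
# and store picture, feature, and identifier together.

def extract_face_feature(picture):
    # Stand-in: a real system would run a face recognition model here.
    return [sum(picture), len(picture)]

def build_library(pictures_by_name):
    library = {}
    for index, (name, picture) in enumerate(pictures_by_name.items(), 1):
        object_id = "OBJ%03d" % index  # allocate an identifier
        library[object_id] = {
            "name": name,             # the name could also serve as the id
            "picture": picture,
            "feature": extract_face_feature(picture),
        }
    return library

library = build_library({"S6": [3, 1, 4], "L99": [2, 7]})
```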
In the embodiment of the application, monitoring videos collected by a plurality of monitoring devices can be obtained, and for each monitoring video, the monitoring features of the detection objects contained in it are identified. Then, according to the monitoring features of the target monitoring object corresponding to each user terminal and the monitoring features corresponding to each detection object in the monitoring video, it is determined whether a target detection object meeting a preset similarity condition exists. If so, early warning information corresponding to the target detection object is generated and sent to the user terminal corresponding to the target monitoring object, so that the user terminal can display it. Based on this scheme, when one channel of monitoring video is provided to a plurality of users, early warning information corresponding to each user can be generated according to the different monitoring requirements of different users, so that each user can monitor according to his or her own early warning information, improving user experience and the monitoring effect.
The embodiment of the application also provides an example of a monitoring method, and the specific content is as follows.
A monitoring object library is added to the server in advance, comprising class A monitoring objects S1-S100 and class B monitoring objects L1-L100.
Sixteen monitoring devices C1-C16 are arranged at different positions in a certain area for monitoring, and the smart box is enabled to extract pedestrian pictures and face pictures and upload them to the server.
A class A user is a user monitoring class A monitoring objects; the target monitoring objects this time are S5-S10, and face monitoring is required.
A class B user is a user monitoring class B monitoring objects; the target monitoring objects this time are L99-L100, both face monitoring and coat color monitoring are required, and the coat color is blue.
Assuming that the monitoring object S6 and the monitoring object L99 both appear in the monitoring video of the monitoring device C7, the smart box uploads the face picture I1 of S6, the face picture I2 of L99, the face pictures I3 to I5 of other detection objects, and the portrait pictures I6 to I10 to the server.
According to each user's configuration, the server can take the faces of S5-S10 and L99-L100 as base libraries and compare them with the face features of I1-I5 respectively, determining the highest-similarity comparison results: the similarity between I1 and S6 is 73%, and the similarity between I2 and L99 is 91%.
The server can also extract external features of the portrait pictures I6-I10, and the obtained jacket colors are as follows in sequence: red, yellow, green, blue, black.
The server pushes a first early warning result for S6 to the user terminal A of the class A user to remind the class A user to pay attention, and pushes a second early warning result, indicating that L99 was matched and that the jacket color of I9 is the same as the configured blue, to the user terminal B of the class B user to remind the class B user to pay attention. The interaction process can be as shown in FIG. 2.
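The comparison step of this example can be replayed as code. The similarity values and jacket colors are copied from the example above; the best-match selection and data layout are illustrative assumptions, since the patent does not fix these details.

```python
# Replay of the example's comparison step: per user class, take that
# class's target objects as a base library, find the highest-similarity
# face match among uploaded face pictures, and separately match the
# configured jacket color among portrait pictures.

similarities = {  # (face picture, base library object) -> similarity
    ("I1", "S6"): 0.73,
    ("I2", "L99"): 0.91,
}
jacket_colors = {"I6": "red", "I7": "yellow", "I8": "green",
                 "I9": "blue", "I10": "black"}

def best_match(base_objects):
    best = None
    for (picture, obj), sim in similarities.items():
        if obj in base_objects and (best is None or sim > best[2]):
            best = (picture, obj, sim)
    return best

# Class A: base library S5-S10, face monitoring only -> first warning.
warning_a = best_match({"S5", "S6", "S7", "S8", "S9", "S10"})
# Class B: base library L99-L100, face plus blue jacket -> second warning.
warning_b_face = best_match({"L99", "L100"})
warning_b_coat = [p for p, c in jacket_colors.items() if c == "blue"]
```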
Based on the same technical concept, an embodiment of the present application further provides a monitoring apparatus, as shown in fig. 3, the apparatus includes:
a first obtaining module 310, configured to obtain monitoring videos collected by a plurality of monitoring devices;
the identification module 320 is configured to identify, for each monitoring video, a monitoring feature of a detection object included in the monitoring video;
a first determining module 330, configured to determine whether there is a target detection object that meets a preset similarity condition compared with a target monitoring object according to the monitoring characteristics of the target monitoring object corresponding to each user terminal and the monitoring characteristics corresponding to each detection object in the monitored video;
the sending module 340 is configured to generate early warning information corresponding to the target detection object if the target detection object exists, and send the early warning information to a user terminal corresponding to the target monitoring object, so that the user terminal displays the early warning information.
Optionally, the monitoring features at least comprise face features and external features, and the external features at least comprise one or more of clothing features, accessory features and behavior features.
Optionally, the identifying module 320 is specifically configured to:
extracting a monitoring image contained in each monitoring video;
for each monitoring image, extracting a portrait picture of a detection object from the monitoring image through a preset object detection algorithm, and extracting a face picture of the detection object through a preset face detection algorithm;
and extracting external features of the detection object from the human image picture, and extracting human face features of the detection object from the human face picture.
Optionally, the first determining module 330 is specifically configured to:
acquiring monitoring characteristics of target monitoring objects corresponding to user terminals;
for each target monitoring object, calculating the similarity between the face features of all detection objects in the monitoring video and the face features of the target monitoring object, and comparing the external features of all detection objects with the external features of the target monitoring object;
and if, among the detection objects, there is a detection object whose similarity is greater than a preset similarity threshold or whose external features are the same as those of the target monitoring object, determining that detection object as a target detection object meeting the preset similarity condition compared with the target monitoring object.
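The condition checked by the first determining module can be sketched as a small predicate. The threshold value and the shape of the external features are assumptions for illustration; the patent only calls for a preset threshold.

```python
# A detection object qualifies as a target detection object if its face
# similarity exceeds a preset threshold, OR its external features equal
# the target monitoring object's. Threshold value is assumed.

SIMILARITY_THRESHOLD = 0.8  # "preset similarity threshold" (assumed value)

def is_target_detection(face_similarity, external_features,
                        target_external_features):
    return (face_similarity > SIMILARITY_THRESHOLD
            or external_features == target_external_features)
```

Note the disjunction: a blue-jacketed pedestrian matches a blue-jacket target even when the face similarity is low, which is what the class B example above relies on.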
Optionally, as shown in fig. 4, the apparatus further includes:
a receiving module 350, configured to receive a monitored object setting instruction sent by a user terminal, where the monitored object setting instruction includes an object identifier of at least one object to be monitored selected from a monitored object library;
the setting module 360 is configured to set the object to be monitored as a target monitoring object corresponding to the user terminal according to the monitoring object setting instruction.
Optionally, as shown in fig. 5, the apparatus further includes:
a second obtaining module 370, configured to obtain picture information of an object to be monitored;
the second determining module 380 is configured to extract the monitoring feature of the object to be monitored from the picture information, and determine an object identifier of the object to be monitored;
the storage module 390 is configured to store picture information of an object to be monitored, monitoring characteristics of the object to be monitored, and an object identifier, so as to establish a monitored object library.
In the embodiment of the application, monitoring videos collected by a plurality of monitoring devices can be obtained, and for each monitoring video, the monitoring features of the detection objects contained in it are identified. Then, according to the monitoring features of the target monitoring object corresponding to each user terminal and the monitoring features corresponding to each detection object in the monitoring video, it is determined whether a target detection object meeting a preset similarity condition exists. If so, early warning information corresponding to the target detection object is generated and sent to the user terminal corresponding to the target monitoring object, so that the user terminal can display it. Based on this scheme, when one channel of monitoring video is provided to a plurality of users, early warning information corresponding to each user can be generated according to the different monitoring requirements of different users, so that each user can monitor according to his or her own early warning information, improving user experience and the monitoring effect.
Based on the same technical concept, the embodiment of the present invention further provides a server, as shown in fig. 6, including a processor 601, a communication interface 602, a memory 603 and a communication bus 604, wherein the processor 601, the communication interface 602 and the memory 603 communicate with one another through the communication bus 604;
a memory 603 for storing a computer program;
the processor 601 is configured to implement the following steps when executing the program stored in the memory 603:
acquiring monitoring videos acquired by a plurality of monitoring devices;
identifying the monitoring characteristics of a detection object contained in each monitoring video;
determining whether a target detection object meeting a preset similarity condition compared with the target monitoring object exists according to the monitoring characteristics of the target monitoring object corresponding to each user terminal and the monitoring characteristics corresponding to each detection object in the monitoring video;
if the target detection object exists, generating early warning information corresponding to the target detection object, and sending the early warning information to a user terminal corresponding to the target monitoring object, so that the user terminal displays the early warning information.
Optionally, the monitoring features at least comprise face features and external features, and the external features at least comprise one or more of clothing features, accessory features and behavior features.
Optionally, the identifying, for each monitoring video, the monitoring feature of the detection object included in the monitoring video includes:
extracting a monitoring image contained in each monitoring video;
for each monitoring image, extracting a portrait picture of a detection object from the monitoring image through a preset object detection algorithm, and extracting a face picture of the detection object through a preset face detection algorithm;
and extracting the external features of the detection object from the human image picture, and extracting the human face features of the detection object from the human face picture.
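The two-stage extraction above (object detection yields the portrait picture, face detection yields the face picture, then external features come from the portrait and face features from the face) can be sketched as a pipeline. The toy "detectors" below are stand-ins (assumptions); the patent presets but does not name concrete detection algorithms.

```python
# Sketch of the extraction pipeline with stand-in detectors. Frames are
# modeled as flat lists purely for illustration.

def detect_person(frame):
    # Stand-in object detection: treat the whole frame as the portrait.
    return frame

def detect_face(frame):
    # Stand-in face detection: treat the first half as the face crop.
    return frame[: len(frame) // 2]

def external_features(portrait):
    return {"coat_color": portrait[-1]}  # pretend the last value is color

def face_features(face):
    return list(face)

def identify(frames):
    results = []
    for frame in frames:
        portrait = detect_person(frame)
        face = detect_face(frame)
        results.append({"external": external_features(portrait),
                        "face": face_features(face)})
    return results

detections = identify([["a", "b", "c", "blue"]])
```

In a deployed system the stand-ins would be replaced by real pedestrian and face detectors; the surrounding control flow stays the same.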
Optionally, the determining, according to the monitoring characteristics of the target monitoring object corresponding to each user terminal and the monitoring characteristics corresponding to each detection object in the monitored video, whether there is a target detection object that satisfies a preset similarity condition compared with the target monitoring object includes:
acquiring monitoring characteristics of target monitoring objects corresponding to user terminals;
for each target monitoring object, calculating the similarity between the face features of all detection objects in the monitoring video and the face features of the target monitoring object, and comparing the external features of all detection objects with the external features of the target monitoring object;
and if, among the detection objects, there is a detection object whose similarity is greater than a preset similarity threshold or whose external features are the same as those of the target monitoring object, determining that detection object as a target detection object meeting the preset similarity condition compared with the target monitoring object.
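The patent does not fix how the face similarity itself is computed. Cosine similarity over face feature vectors is one common choice and is sketched here as an assumption; the feature vectors are toy values.

```python
import math

# Cosine similarity over face feature vectors: 1.0 for identical
# directions, 0.0 for orthogonal ones.

def cosine_similarity(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

target_feature = [1.0, 0.0]
detection_features = {"I1": [0.9, 0.1], "I2": [0.0, 1.0]}
scores = {pic: cosine_similarity(target_feature, feat)
          for pic, feat in detection_features.items()}
```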
Optionally, the method further includes:
receiving a monitoring object setting instruction sent by a user terminal, wherein the monitoring object setting instruction comprises an object identifier of at least one object to be monitored selected from a monitoring object library;
and setting the object to be monitored as a target monitoring object corresponding to the user terminal according to the monitoring object setting instruction.
Optionally, the method further includes:
acquiring picture information of an object to be monitored;
extracting the monitoring characteristics of the object to be monitored from the picture information, and determining the object identification of the object to be monitored;
and storing the picture information of the object to be monitored, the monitoring characteristics of the object to be monitored and the object identification so as to establish a monitored object library.
The communication bus mentioned in the above server may be a Peripheral Component Interconnect (PCI) bus, an Extended Industry Standard Architecture (EISA) bus, or the like. The communication bus may be divided into an address bus, a data bus, a control bus, etc. For ease of illustration, only one thick line is shown, but this does not mean that there is only one bus or one type of bus.
The communication interface is used for communication between the server and other devices.
The Memory may include a Random Access Memory (RAM) or a Non-Volatile Memory (NVM), such as at least one disk Memory. Optionally, the memory may also be at least one memory device located remotely from the processor.
The processor may be a general-purpose processor, including a Central Processing Unit (CPU), a Network Processor (NP), and the like; it may also be a Digital Signal Processor (DSP), an Application Specific Integrated Circuit (ASIC), a Field Programmable Gate Array (FPGA) or other programmable logic device, a discrete gate or transistor logic device, or discrete hardware components.
In a further embodiment of the present invention, a computer-readable storage medium is further provided, in which a computer program is stored, and the computer program, when executed by a processor, implements the steps of any of the above monitoring methods.
In a further embodiment provided by the present invention, there is also provided a computer program product containing instructions which, when run on a computer, cause the computer to perform any of the monitoring methods of the above embodiments.
In the above embodiments, the implementation may be wholly or partially realized by software, hardware, firmware, or any combination thereof. When implemented in software, it may be realized in whole or in part in the form of a computer program product. The computer program product includes one or more computer instructions. When the computer instructions are loaded and executed on a computer, the processes or functions described in accordance with the embodiments of the invention are produced in whole or in part. The computer may be a general-purpose computer, a special-purpose computer, a computer network, or another programmable device. The computer instructions may be stored in a computer-readable storage medium or transmitted from one computer-readable storage medium to another; for example, they may be transmitted from one website, computer, server, or data center to another by wired (e.g., coaxial cable, optical fiber, Digital Subscriber Line (DSL)) or wireless (e.g., infrared, radio, microwave) means. The computer-readable storage medium can be any available medium that can be accessed by a computer, or a data storage device, such as a server or data center, that integrates one or more available media. The available medium may be a magnetic medium (e.g., floppy disk, hard disk, magnetic tape), an optical medium (e.g., DVD), or a semiconductor medium (e.g., Solid State Disk (SSD)).
It is noted that, in this document, relational terms such as "first" and "second" may be used solely to distinguish one entity or action from another, without necessarily requiring or implying any actual such relationship or order between such entities or actions. Also, the terms "comprises", "comprising", or any other variation thereof are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element preceded by the phrase "comprising a …" does not exclude the presence of other identical elements in the process, method, article, or apparatus that comprises the element.
The above description is merely exemplary of the present application and is presented to enable those skilled in the art to understand and practice the present application. Various modifications to these embodiments will be readily apparent to those skilled in the art, and the generic principles defined herein may be applied to other embodiments without departing from the spirit or scope of the application. Thus, the present application is not intended to be limited to the embodiments shown herein but is to be accorded the widest scope consistent with the principles and novel features disclosed herein.

Claims (10)

1. A method of monitoring, the method comprising:
acquiring monitoring videos acquired by a plurality of monitoring devices;
identifying the monitoring characteristics of a detection object contained in each monitoring video;
determining whether a target detection object meeting a preset similarity condition compared with the target monitoring object exists according to the monitoring characteristics of the target monitoring object corresponding to each user terminal and the monitoring characteristics corresponding to each detection object in the monitoring video;
if the target detection object exists, generating early warning information corresponding to the target detection object, and sending the early warning information to a user terminal corresponding to the target monitoring object, so that the user terminal displays the early warning information.
2. The method of claim 1, wherein the monitoring features comprise at least facial features and external features, the external features comprising at least one or more of clothing features, accessory features, and behavioral features.
3. The method according to claim 1 or 2, wherein for each monitoring video, identifying the monitoring characteristics of the detection object contained in the monitoring video comprises:
extracting a monitoring image contained in each monitoring video;
for each monitoring image, extracting a portrait picture of a detection object from the monitoring image through a preset object detection algorithm, and extracting a face picture of the detection object through a preset face detection algorithm;
and extracting the external features of the detection object from the human image picture, and extracting the human face features of the detection object from the human face picture.
4. The method according to claim 2, wherein the determining whether there is a target detection object that satisfies a preset similarity condition compared with the target monitoring object according to the monitoring characteristics of the target monitoring object corresponding to each user terminal and the monitoring characteristics corresponding to each detection object in the monitoring video includes:
acquiring monitoring characteristics of target monitoring objects corresponding to user terminals;
for each target monitoring object, calculating the similarity between the face features of all detection objects in the monitoring video and the face features of the target monitoring object, and comparing the external features of all detection objects with the external features of the target monitoring object;
and if, among the detection objects, there is a detection object whose similarity is greater than a preset similarity threshold or whose external features are the same as those of the target monitoring object, determining that detection object as a target detection object meeting the preset similarity condition compared with the target monitoring object.
5. The method of claim 1, further comprising:
receiving a monitoring object setting instruction sent by a user terminal, wherein the monitoring object setting instruction comprises an object identifier of at least one object to be monitored selected from a monitoring object library;
and setting the object to be monitored as a target monitoring object corresponding to the user terminal according to the monitoring object setting instruction.
6. The method of claim 5, further comprising:
acquiring picture information of an object to be monitored;
extracting the monitoring characteristics of the object to be monitored from the picture information, and determining the object identification of the object to be monitored;
and storing the picture information of the object to be monitored, the monitoring characteristics of the object to be monitored and the object identification so as to establish a monitored object library.
7. A monitoring device, the device comprising:
the first acquisition module is used for acquiring monitoring videos acquired by a plurality of monitoring devices;
the identification module is used for identifying the monitoring characteristics of the detection object contained in each monitoring video;
the first determining module is used for determining whether a target detection object meeting a preset similarity condition compared with the target monitoring object exists according to the monitoring characteristics of the target monitoring object corresponding to each user terminal and the monitoring characteristics corresponding to each detection object in the monitoring video;
and the sending module is used for generating early warning information corresponding to the target detection object and sending the early warning information to a user terminal corresponding to the target monitoring object if the target detection object exists, so that the user terminal displays the early warning information.
8. The apparatus according to claim 7, wherein the identification module is specifically configured to:
extracting a monitoring image contained in each monitoring video;
for each monitoring image, extracting a portrait picture of a detection object from the monitoring image through a preset object detection algorithm, and extracting a face picture of the detection object through a preset face detection algorithm;
and extracting the external features of the detection object from the human image picture, and extracting the human face features of the detection object from the human face picture.
9. A server, characterized by comprising a processor, a communication interface, a memory, and a communication bus, wherein the processor, the communication interface, and the memory communicate with one another through the communication bus;
a memory for storing a computer program;
a processor for implementing the method steps of any of claims 1-6 when executing a program stored in the memory.
10. A computer-readable storage medium, characterized in that a computer program is stored in the computer-readable storage medium, which computer program, when being executed by a processor, carries out the method steps of any one of claims 1 to 6.
CN201910944726.3A 2019-09-30 2019-09-30 Monitoring method and device Pending CN112689120A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910944726.3A CN112689120A (en) 2019-09-30 2019-09-30 Monitoring method and device


Publications (1)

Publication Number Publication Date
CN112689120A true CN112689120A (en) 2021-04-20

Family

ID=75444434

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910944726.3A Pending CN112689120A (en) 2019-09-30 2019-09-30 Monitoring method and device

Country Status (1)

Country Link
CN (1) CN112689120A (en)


Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109583278A (en) * 2017-09-29 2019-04-05 杭州海康威视数字技术股份有限公司 Method, apparatus, system and the computer equipment of recognition of face alarm
CN110163146A (en) * 2019-05-21 2019-08-23 银河水滴科技(北京)有限公司 A kind of monitoring method and device based on characteristics of human body
CN110245268A (en) * 2019-06-26 2019-09-17 银河水滴科技(北京)有限公司 A kind of route determination, the method and device of displaying
CN110245722A (en) * 2019-06-26 2019-09-17 银河水滴科技(北京)有限公司 A kind of image-recognizing method and device based on biological characteristic
CN110287890A (en) * 2019-06-26 2019-09-27 银河水滴科技(北京)有限公司 A kind of recognition methods and device based on gait feature and pedestrian's weight identification feature


Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113055743A (en) * 2021-03-10 2021-06-29 珠海安士佳电子有限公司 Method and system for intelligently pushing video
CN115273395A (en) * 2022-05-31 2022-11-01 歌尔股份有限公司 Monitoring method, device, equipment, system and storage medium
CN115273395B (en) * 2022-05-31 2024-03-12 歌尔股份有限公司 Monitoring method, device, equipment, system and storage medium


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination