CN106709424A - Optimized surveillance video storage system and equipment - Google Patents

Optimized surveillance video storage system and equipment

Info

Publication number
CN106709424A
CN106709424A CN201611035754.6A
Authority
CN
China
Prior art keywords
face
video
feature vector
image frame
optimal
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201611035754.6A
Other languages
Chinese (zh)
Other versions
CN106709424B (en)
Inventor
王海增
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Guangdong Zhongke Human Intelligence Technology Co ltd
Original Assignee
Beijing Zhongke Tianyun Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Zhongke Tianyun Technology Co Ltd filed Critical Beijing Zhongke Tianyun Technology Co Ltd
Priority to CN201611035754.6A priority Critical patent/CN106709424B/en
Publication of CN106709424A publication Critical patent/CN106709424A/en
Application granted granted Critical
Publication of CN106709424B publication Critical patent/CN106709424B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V 40/10 Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V 40/16 Human faces, e.g. facial parts, sketches or expressions
    • G06V 40/161 Detection; Localisation; Normalisation
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 16/00 Information retrieval; Database structures therefor; File system structures therefor
    • G06F 16/70 Information retrieval; Database structures therefor; File system structures therefor of video data
    • G06F 16/78 Retrieval characterised by using metadata, e.g. metadata not derived from the content or metadata generated manually
    • G06F 16/783 Retrieval characterised by using metadata, e.g. metadata not derived from the content or metadata generated manually, using metadata automatically derived from the content
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V 40/10 Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V 40/16 Human faces, e.g. facial parts, sketches or expressions
    • G06V 40/168 Feature extraction; Face representation

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Oral & Maxillofacial Surgery (AREA)
  • Health & Medical Sciences (AREA)
  • Physics & Mathematics (AREA)
  • General Health & Medical Sciences (AREA)
  • Library & Information Science (AREA)
  • Human Computer Interaction (AREA)
  • General Engineering & Computer Science (AREA)
  • Databases & Information Systems (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Data Mining & Analysis (AREA)
  • Television Signal Processing For Recording (AREA)
  • Closed-Circuit Television Systems (AREA)
  • Information Retrieval, Db Structures And Fs Structures Therefor (AREA)

Abstract

The invention provides an optimized surveillance video storage system and equipment, comprising a video acquisition front end, a face recognition module, a primary real-time storage device, a central cloud storage device and a search module. The video acquisition front end is a high-definition camera responsible for capturing, in real time, the people-flow video at the entrance of a passage. The face recognition module performs face detection, feature extraction and face comparison on the surveillance video frame by frame, obtaining for each image frame a mark indicating whether it contains a face, and the best-quality face picture within each set of images containing the same face. The primary real-time storage device stores the video segments containing faces in real time, and the central cloud storage device stores the obtained best-quality face pictures in a database. During retrieval, the user inputs a picture and obtains the corresponding pictures from the central database and the corresponding video segments from the primary real-time storage device. The optimized surveillance video storage system and equipment guarantee that the stored pictures retain evidentiary value while greatly saving storage space.

Description

Optimized monitoring video storage system and equipment
Technical Field
The invention belongs to the field of artificial intelligence, and particularly relates to an optimized surveillance video storage system and related equipment.
Background
Usually, at locations such as the entrances and exits of office premises, analysis of the faces in video is performed through video monitoring and recording, or through dedicated face-and-ID comparison equipment such as a visitor registration terminal.
In the conventional approach, if an office unit needs to find out exactly which people entered the office and when, the current means is to query the recordings of a video monitoring system. However, because the video files of a monitoring system are very large, usually only about one month of footage of the people coming and going can be retained, and determining on which day a given person entered the office requires manually searching through a large amount of video. If specific information such as time and place is not known, the workload becomes so large that it is almost impossible to determine when exactly the person entered the office.
The present system uses advanced artificial intelligence face recognition technology to automatically capture and retain, as evidence, face photos of the people who enter, and can reach a historical storage capacity of hundreds of thousands to millions of face photo records. Such face record capacities typically support retention times for an entrance of up to several months and even 1 to 2 years, and by customizing the storage capacity of the device, longer retention times of several years can be achieved. Storing a large number of face records facilitates accurate management of the people at an entrance and exit, and provides effective photographic evidence when disputes or major cases occur.
Disclosure of Invention
Currently, in general video monitoring applications the main objective is to monitor the people who enter and exit, and a conventional video storage scheme is generally adopted, in which the complete monitoring video is stored to facilitate user queries or investigation and evidence collection. This approach occupies a large amount of storage space, so the monitoring equipment can typically support video storage for only one to two months at most, and searching is time-consuming and labour-intensive, which brings great inconvenience to users.
In the invention, the monitoring video is analysed in real time to detect whether a face is present, and only the video segments in which a face is detected are stored locally; in the central cloud storage, a separate storage space is established for each distinct face and only the image frames meeting preset conditions are retained, which greatly saves storage space. At the same time, a face-based search engine is established, which facilitates user queries and greatly reduces query time.
In order to solve the above technical problem, the present invention provides a new system, which includes:
a video acquisition front end, equipped with a standard high-definition camera; one or more cameras monitor the locations requiring people-flow monitoring in real time, and the captured video is temporarily stored in the device memory.
The face recognition module is used for carrying out face detection on the video temporarily stored at the front end of the video acquisition frame by frame, and marking the face detection result of each frame in a detection mode of a deep convolutional neural network learning algorithm or other algorithms with the correct detection rate higher than 95%; if the number of correctly detected faces in the detection result of the same image frame is more than 1, recording each face detected in the image frame, extracting a video segment of the face from the video frame marked with the correctly detected face, and simultaneously transmitting the video segment to a primary real-time storage device.
From the video segments in which faces are correctly detected, face features are extracted for each face in each image frame of the segment; the feature extraction method is a general knowledge-based characterization method or a characterization method based on algebraic features or statistical learning, yielding a feature vector that characterizes each face.
Within each video segment, starting from the first image frame containing a face, the feature vector of each face in the current image frame is matched one by one against each face in the previous image frame; faces meeting the similarity requirement are regarded as the same face, otherwise a face is regarded as newly appearing. The feature vectors belonging to the same face form a set and are marked accordingly, and a new feature vector set is established for each new face. This continues until the last image frame in the video segment, or until the maximum number of consecutive image frames the system can process is reached.
Then, within each feature vector set belonging to the same face, each feature vector is correlated with every other feature vector in the set and the correlation results are summed to obtain the matching value of that feature vector within the set. The largest matching value computed over the set is the optimal face matching value, the face feature vector corresponding to it is the optimal face feature vector, and the image frame corresponding to that feature vector is recorded as the optimal face image frame. Alternatively, the largest face in the face image set corresponding to the feature set is chosen as the optimal face, and the image frame containing it is the optimal face image frame.
The face feature vector corresponding to the optimal face image frame is compared with the face feature vector library in the central cloud storage. If the similarity between the face feature vector and some face feature vector in the compared database is greater than a preset threshold, the two faces are regarded as belonging to the same person; otherwise the compared feature vectors are regarded as belonging to different faces. If several face feature vectors in the database exceed the threshold, the face feature vector is assigned to the database face with the highest similarity; the image frame and the corresponding feature vector are then stored in that face's space in the central cloud storage database. Otherwise, the central cloud storage is instructed to create a new storage space and a unique identifier for the face.
If the number of the optimal faces in the optimal image frame exceeds 1, comparing each optimal face feature vector with a face feature vector library in central cloud storage, if matched face feature vectors exist, storing the image frame and the corresponding feature vectors in a storage space of the face, and otherwise, establishing a new storage space and a unique identifier for the face.
The primary real-time storage device extracts the video segments containing faces from the video frames marked as containing correctly detected faces when the video is stored. The starting image frame of each video segment is the first image frame in which a face is detected, or, according to actual requirements, the segment starts N1 frames earlier; the end of each video segment is the last frame in which a face is detected. The monitoring date and time of each video segment are recorded, and the segments are then merged, in chronological order, into preset files.
The central cloud storage device establishes a face feature vector library and other basic information according to actual requirements, or establishes the basic index information for face storage in a system-defined manner;
it then judges, from the result of comparing the extracted face feature vector, whether the face already exists in the database; if so, the face feature vector and the optimal image frame of the face are stored, and the generation date and time of the image frame and its feature vector are recorded;
if not, a new unique index identifier and a new storage space are established for the face according to system-defined rules, and the optimal image frame corresponding to the face, its generation date and time, and its feature vector are stored.
For retrieval, the system detects the face in an image input by the user using the same method as the face detection above, extracts the face feature vector, compares it with the face feature vectors stored in the primary real-time storage device and the central cloud storage device, finds all face video segments or images matching the feature vector, and returns them to the search front end. If no suitable result is found, the user either abandons the search, provides a new image for the query, or enters a date and time period to search in order to obtain a corresponding feedback result.
During a search, the video segments of the primary real-time storage device are queried first by default; if a segment meets the conditions, the result is fed back to the user from the primary storage, and the image data of the central cloud storage device is queried afterwards.
The scheme of the invention at least has the following beneficial effects:
It effectively solves the problems that current video monitoring stores a huge amount of data and is difficult to query. The system provided by the invention greatly increases the proportion of useful data in the storage of general-purpose applications, makes searching simple and convenient, and greatly simplifies the video storage and query process for general application scenarios.
A large number of faces are stored through records, so that accurate management of personnel at an entrance and an exit is facilitated, and effective photo evidences are provided when disputes and major cases occur.
Drawings
The accompanying drawings, which are included to provide a further understanding of the invention and are incorporated in and constitute a part of this application, illustrate embodiment(s) of the invention and together with the description serve to explain the invention without limiting the invention. In the drawings:
FIG. 1 is a schematic diagram of a basic architecture of a system method according to an embodiment of the present invention;
FIG. 2 is a flow chart of the process steps of an embodiment of the present invention;
FIG. 3 shows a process flow diagram for step 2 of an embodiment of the present invention;
Detailed Description
The following description and the drawings sufficiently illustrate specific embodiments of the invention to enable those skilled in the art to practice them. Other embodiments may incorporate structural, logical, procedural, and other changes. The examples merely typify possible variations. Individual components and functions are optional unless explicitly required, and the sequence of operations may vary. Portions and features of some embodiments may be included in or substituted for those of others. The scope of embodiments of the invention encompasses the full ambit of the claims, as well as all available equivalents of the claims. Embodiments of the invention may be referred to herein, individually or collectively, by the term "invention" merely for convenience and without intending to voluntarily limit the scope of this application to any single invention or inventive concept if more than one is in fact disclosed.
As shown in fig. 1, the embodiment of the present invention includes:
The video acquisition front end 101 is equipped with a standard high-definition camera; one or more cameras monitor the locations requiring people-flow monitoring in real time, and the captured video is temporarily stored in the device memory.
In the above embodiments of the present invention, the method further includes:
The face recognition module 102 performs face detection frame by frame on the video temporarily stored at the video acquisition front end. The detection method is a deep convolutional neural network algorithm (for example, one with a depth of 9 layers) or another algorithm with a correct detection rate above 95%, such as a face recognition technique based on a Gaussian process. The face detection result of each frame is recorded. If at least one face is correctly detected in an image frame, each face detected in that frame is recorded, the video segment containing the face is extracted from the video frames marked as containing correctly detected faces, and the segment is simultaneously transmitted to the primary real-time storage device.
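As a concrete illustration of this marking pass, the sketch below assumes a generic detect_faces(frame) callable standing in for the deep-CNN (or other high-accuracy) detector; the function name, types and return format are assumptions of the sketch, not APIs from the patent.

```python
from typing import Any, Callable, List, Sequence

Face = Any  # whatever the detector returns for one face, e.g. a bounding box


def mark_frames(frames: Sequence[Any],
                detect_faces: Callable[[Any], List[Face]]) -> List[List[Face]]:
    """Run the detector frame by frame and record each frame's detection result.

    A frame whose list is empty contains no face; frames with one or more faces
    belong to a face-bearing video segment that is forwarded to primary storage.
    """
    marks = []
    for frame in frames:
        marks.append(detect_faces(frame))  # list of detected faces, possibly empty
    return marks
```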
Further, from the video segment in which the presence of the face is correctly detected, face feature extraction is performed on each face in each image frame in the video segment, and the feature extraction method is either a general knowledge-based characterization method or an algebraic feature-based or statistical learning-based characterization method, so as to obtain a feature vector characterizing each face, and the feature vector is recorded asWherein f isiIs the ith floating point element of the feature vector, and N is the total number of elements of the feature vector.
Further, within each video segment, starting from the first image frame containing a face, the feature vector of each face in the current image frame is matched one by one against each face in the previous image frame; faces meeting the similarity requirement are regarded as the same face, otherwise a face is regarded as newly appearing. The feature vectors belonging to the same face form a set and are marked accordingly, and a new feature vector set is established for each new face. This continues until the last image frame in the video segment, or until the maximum number of consecutive image frames the system can process is reached.
For example, suppose the first frame contains two faces, denoted $F_{1,1}$ and $F_{1,2}$, with corresponding feature vectors $V_{1,1}$ and $V_{1,2}$, and the second frame also contains two faces, denoted $F_{2,1}$ and $F_{2,2}$, with corresponding feature vectors $V_{2,1}$ and $V_{2,2}$. The pairwise matching degrees of the four faces are computed as
$$\Pi_{i,j} = \mathrm{corr}(V_{2,i}, V_{1,j}), \quad i, j \in \{1, 2\},$$
where $\mathrm{corr}(\cdot,\cdot)$ is a standard correlation function applied to two feature vectors of the same length.
If $\Pi_{1,1} \ge \beta$, where $\beta$ is the similarity threshold, then $F_{1,1}$ and $F_{2,1}$ are regarded as the same face; otherwise $F_{1,1}$ and $F_{2,1}$ belong to different faces.
If $\Pi_{2,1} < \beta$ and $\Pi_{2,2} < \beta$, then $F_{2,2}$ is regarded as a newly appearing face.
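A minimal sketch of this frame-to-frame matching in Python follows. The normalized inner product used for $\mathrm{corr}$ and the threshold value $\beta = 0.8$ are illustrative assumptions; the text only requires a standard correlation function and a similarity threshold.

```python
import numpy as np


def corr(v1, v2) -> float:
    """One possible 'standard correlation function': the normalized inner
    product of two equal-length feature vectors (an assumption; the patent
    does not fix the exact formula)."""
    v1, v2 = np.asarray(v1, dtype=float), np.asarray(v2, dtype=float)
    return float(np.dot(v1, v2) / (np.linalg.norm(v1) * np.linalg.norm(v2)))


def assign_faces(prev_vectors, curr_vectors, beta=0.8):
    """Match each face of the current frame against every face of the previous
    frame; return, for each current face, the index of the matched previous
    face, or None when it is a newly appearing face."""
    assignments = []
    for v_curr in curr_vectors:
        sims = [corr(v_curr, v_prev) for v_prev in prev_vectors]
        best = int(np.argmax(sims)) if sims else None
        if best is not None and sims[best] >= beta:   # similarity threshold beta
            assignments.append(best)                   # same face as prev_vectors[best]
        else:
            assignments.append(None)                   # new face: start a new vector set
    return assignments
```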
Furthermore, within the feature vector set belonging to the same face, each feature vector is correlated with every other feature vector in the set, giving a similarity metric vector for that feature vector; summing all elements of the similarity metric vector gives the matching value of that feature vector within the set. The largest matching value computed over the set is the optimal face matching value, the face feature vector that attains it is the optimal face feature vector, and the image frame corresponding to that feature vector is recorded as the optimal face image frame. Alternatively, the largest face in the face image set corresponding to the feature set is chosen as the optimal face, and the image frame containing it is the optimal face image frame.
For example, let the set of feature vectors belonging to the same face be $S = \{v_1, v_2, \dots, v_M\}$, where $v_i$ is the $i$-th element of the set and $M$ is the total number of elements. The similarity metric vector of $v_i$ is
$$u_i = \big(\mathrm{corr}(v_i, v_1), \dots, \mathrm{corr}(v_i, v_{i-1}), \mathrm{corr}(v_i, v_{i+1}), \dots, \mathrm{corr}(v_i, v_M)\big),$$
and its matching value is $m_i = \sum_{j \ne i} \mathrm{corr}(v_i, v_j)$. The optimal matching value is then $m_{\mathrm{opt}} = \max(m_1, m_2, \dots, m_M)$, where $\max(\cdot)$ returns the maximum element of a vector; the feature vector attaining $m_{\mathrm{opt}}$ is the optimal face, and the image frame containing it is the optimal face image frame.
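The selection of the optimal face can be sketched as follows; the normalized inner product is again assumed for the correlation, which the text does not mandate.

```python
import numpy as np


def optimal_face(vectors):
    """Given the feature vectors of one face set (one per image frame), return
    (index of the optimal face, its matching value).

    The matching value of a vector is the sum of its correlations with every
    other vector in the set; the optimal face attains the largest value."""
    vs = [np.asarray(v, dtype=float) for v in vectors]
    vs = [v / np.linalg.norm(v) for v in vs]          # normalize once up front
    matching_values = []
    for i, vi in enumerate(vs):
        matching_values.append(
            sum(float(np.dot(vi, vj)) for j, vj in enumerate(vs) if j != i))
    best = int(np.argmax(matching_values))
    return best, matching_values[best]
```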
Further, the face feature vector corresponding to the optimal face image frame is compared with the face feature vector library in the central cloud storage. The comparison computes the correlation $\Pi$ of the feature vectors using the same correlation function as above. If the correlation between the face feature vector and some face feature vector in the compared database exceeds a preset threshold, the two faces are regarded as belonging to the same person; otherwise the compared feature vectors are regarded as belonging to different faces. If several face feature vectors in the database exceed the threshold, the face feature vector is assigned to the database face with the highest similarity. The image frame and its corresponding feature vector are then stored in that face's space in the central cloud storage database; otherwise, the central cloud storage is instructed to create a new storage space and a unique identifier for the face. For example, each storage space comprises a unique space identifier, a unique ID for each image, the feature vector corresponding to each image, the generation time of each image, and personal information for the face such as name, age and gender.
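The compare-then-store step might look like the following sketch. The dictionary-based database, the UUID identifiers, the 0.8 threshold and the record fields are assumptions made for illustration only.

```python
import uuid

import numpy as np


def corr(v1, v2) -> float:
    """Normalized inner product, as in the matching sketch above (assumed)."""
    v1, v2 = np.asarray(v1, dtype=float), np.asarray(v2, dtype=float)
    return float(np.dot(v1, v2) / (np.linalg.norm(v1) * np.linalg.norm(v2)))


def store_optimal_face(db, face_vector, image_frame, timestamp, threshold=0.8):
    """db maps space_id -> list of records {'vector', 'image', 'time'}.

    Append to the best-matching existing space when its similarity exceeds the
    threshold; otherwise create a new space with a fresh unique identifier."""
    best_id, best_sim = None, -1.0
    for space_id, records in db.items():
        for rec in records:
            sim = corr(face_vector, rec["vector"])
            if sim > best_sim:
                best_id, best_sim = space_id, sim
    if best_sim <= threshold:                 # no sufficiently similar face: new space
        best_id = str(uuid.uuid4())           # unique identifier for the new face
        db[best_id] = []
    db[best_id].append({"vector": face_vector, "image": image_frame, "time": timestamp})
    return best_id
```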
Further, if the number of the optimal faces in the optimal image frame exceeds 1, comparing each optimal face feature vector with a face feature vector library in central cloud storage, if matched face feature vectors exist, storing the image frame and the corresponding feature vectors in a storage space of the face, and otherwise, establishing a new storage space and a unique identifier for the face.
In the above embodiments of the present invention, the method further includes:
the primary real-time storage device 103 is used for extracting video segments of the human face from video frames marked with correct detected human faces when the videos are stored; the starting image frame of each video segment is the first image frame of which the existence of a human face is detected, or is stored in advance by N1 frames according to actual requirements, such as N1-5; and the end of each video segment is the last frame with the detected human face, the monitoring date and time of each video segment are recorded, and then the video segments are combined into a preset file one by one according to the time sequence, for example, the file is stored according to the day segmentation.
In the above embodiments of the present invention, the method further includes:
the central cloud storage device 104 establishes a feature library of the human face and other basic information according to actual needs, such as: setting a white list or a black list, and inputting basic information such as face photos, names, sexes, ages and the like of people in the white list and the black list; or establishing basic index information stored by the human face according to a system self-defining mode.
Further, whether the face is the existing face in the database is judged according to the extracted face feature vector comparison result, if yes, the face feature vector and the optimal image frame of the face are stored, and the generation date and time of the image frame and the feature vector of the image frame are recorded.
If not, establishing a new unique index identification and a new storage space for the face according to rules defined by the system, and storing the optimal image frame corresponding to the face, the generation date and time of the optimal image frame and the feature vector of the optimal image frame.
In the above embodiments of the present invention, the method further includes:
The search module 105 receives from the user a query image containing a face. The system performs face detection on the user's image using the same method as the face detection described above, then extracts the face feature vector, compares it with the face feature vectors stored in the primary real-time storage device and the central cloud storage device, finds all face video segments or images matching the feature vector, and returns them to the search front end. For example, the feedback result comprises all matching face images together with their unique image IDs, feature vectors, feature vector IDs and image generation dates and times, and all video files containing a matching face together with each video file ID and the start and end times at which the face appears in the file. If no suitable result is found, the user either abandons the search, provides a new image for the query, or enters a date and time period to search in order to obtain a corresponding feedback result.
Furthermore, during a search the video segments of the primary real-time storage device are queried first by default; if a segment meets the conditions, the result is fed back to the user from the primary storage, and the image data of the central cloud storage device is queried afterwards.
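The two-tier retrieval order can be sketched as follows. It assumes each stored primary segment carries a representative face vector and that the cloud database has the dictionary shape used in the earlier sketches; both are simplifications for illustration.

```python
import numpy as np


def corr(a, b) -> float:
    """Normalized inner product, as in the earlier sketches (assumed)."""
    a, b = np.asarray(a, dtype=float), np.asarray(b, dtype=float)
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))


def search(query_image, detect_faces, extract_feature,
           primary_store, cloud_db, threshold=0.8):
    """Query the primary real-time store first, then the central cloud store."""
    faces = detect_faces(query_image)
    if not faces:
        return {"videos": [], "images": []}           # no face in the query image
    qv = extract_feature(faces[0])                    # feature vector of the query face

    # 1) primary real-time storage: matching video segments
    videos = [seg for seg in primary_store
              if corr(qv, seg["vector"]) > threshold]

    # 2) central cloud storage: matching optimal face images
    images = [rec for records in cloud_db.values() for rec in records
              if corr(qv, rec["vector"]) > threshold]
    return {"videos": videos, "images": images}
```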
The following describes a specific operation flow of the above functional modules or devices with reference to fig. 2 and fig. 3:
Step one: video capture and temporary storage. A standard high-definition camera is configured at the acquisition front end; one or more cameras monitor the locations requiring people-flow monitoring in real time, and the captured video is temporarily stored in the device memory.
Step two: video face detection, feature extraction and comparison. Face detection is performed frame by frame on the video temporarily stored at the video acquisition front end, and the face detection result of each frame is recorded. If at least one face is correctly detected in an image frame, each face detected in that frame is recorded, the video segment containing the face is extracted from the video frames marked as containing correctly detected faces, and the segment is transmitted to the primary real-time storage device 103.
At the same time, from the video segments in which faces were correctly detected, face features are extracted for each face in each image frame of the segment, yielding a feature vector that characterizes each face, recorded as $V = (f_1, f_2, \dots, f_N)$, where $f_i$ is the $i$-th floating-point element of the feature vector and $N$ is the total number of elements.
Within each video segment, starting from the first image frame containing a face, the feature vector of each face in the current image frame is matched one by one against each face in the previous image frame; faces meeting the similarity requirement are regarded as the same face, otherwise a face is regarded as newly appearing. The feature vectors belonging to the same face form a set and are marked accordingly, and a new feature vector set is established for each new face. This continues until the last image frame in the video segment, or until the maximum number of consecutive image frames the system can process is reached.
Within the feature vector set belonging to the same face, each feature vector is correlated with every other feature vector in the set, giving a similarity metric vector for that feature vector; summing all elements of the similarity metric vector gives the matching value of that feature vector within the set. The largest matching value computed over the set is the optimal face matching value, the face feature vector that attains it is the optimal face feature vector, and the image frame corresponding to that feature vector is recorded as the optimal face image frame. Alternatively, the largest face in the face image set corresponding to the feature set is chosen as the optimal face, and the image frame containing it is the optimal face image frame.
The face feature vector corresponding to the optimal face image frame is compared with the face feature vector library in the central cloud storage; the comparison computes the correlation $\Pi$ of the feature vectors using the same correlation function as above. If the correlation between the face feature vector and some face feature vector in the compared database exceeds a preset threshold, the two faces are regarded as belonging to the same person; otherwise the compared feature vectors are regarded as belonging to different faces. If several face feature vectors in the database exceed the threshold, the face feature vector is assigned to the database face with the highest similarity. The image frame and its corresponding feature vector are then stored in the central cloud storage database; otherwise, the central cloud storage is instructed to create a new storage space and a unique identifier for the face.
If the number of the optimal faces in the optimal image frame exceeds 1, comparing each optimal face feature vector with a face feature vector library in central cloud storage, if matched face feature vectors exist, storing the image frame and the corresponding feature vectors in a storage space of the face, and otherwise, establishing a new storage space and a unique identifier for the face. The processing flow is shown in fig. 3.
Step three: real-time storage of the video segments. When the video is stored, the video segments containing faces are extracted from the video frames marked as containing correctly detected faces. The starting image frame of each video segment is the first image frame in which a face is detected, or, according to actual requirements, the segment starts N1 frames earlier; the end of each video segment is the last frame in which a face is detected. The monitoring date and time of each video segment are recorded, and the segments are then merged, in chronological order, into preset files, for example one file per day.
Step four: cloud storage of the matched images and features. A face feature library and other basic information are established according to actual requirements, for example: setting up a white list or black list and entering basic information such as face photos, names, genders and ages for the people on those lists; or the basic index information for face storage is established in a system-defined manner.
And then judging whether the face is the existing face in the database according to the extracted face feature vector comparison result, if so, storing the face feature vector and the optimal image frame of the face, and recording the generation date and time of the image frame and the feature vector of the image frame.
If not, establishing a new unique index identification and a new storage space for the face according to rules defined by the system, and storing the optimal image frame corresponding to the face, the generation date and time of the optimal image frame and the feature vector of the optimal image frame.
Step five: entering a search condition and obtaining the search result. Face detection is performed on an image input by the user, using the same method as the face detection in the embodiment above; the face feature vector is then extracted and compared with the face feature vectors stored in the primary real-time storage device and the central cloud storage device, and all matching face video segments or images are found and returned to the search front end. For example, the feedback result comprises all matching face images together with their unique image IDs, feature vectors, feature vector IDs and image generation dates and times, and all video files containing a matching face together with each video file ID and the start and end times at which the face appears in the file. If no suitable result is found, the user either abandons the search, provides a new image for the query, or enters a date and time period to search in order to obtain a corresponding feedback result.
During a search, the video segments of the primary real-time storage device are queried first by default; if a segment meets the conditions, the result is fed back to the user from the primary storage, and the image data of the central cloud storage device is queried afterwards.
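Tying steps one to four together, the following sketch reuses the helpers sketched earlier (mark_frames, build_segments, optimal_face, store_optimal_face). For brevity it treats all faces found in one segment as a single face track; the per-face grouping of step two would use the assign_faces logic shown above. All names and parameters are illustrative.

```python
def process_stream(frames, detect_faces, extract_feature, primary_store, cloud_db,
                   frame_time, n1=5, beta=0.8):
    """End-to-end sketch of steps one to four under the simplifications above."""
    marks = mark_frames(frames, detect_faces)                   # step 2: detect and mark
    segments = build_segments([bool(m) for m in marks], n1)     # step 3: cut face segments
    for start, end in segments:
        # step 3: primary real-time storage of the segment with its date and time
        primary_store.append({"start": frame_time(start), "end": frame_time(end),
                              "frames": frames[start:end + 1]})
        # step 2/4: feature extraction, optimal face selection, cloud storage
        pairs = [(i, extract_feature(face))
                 for i in range(start, end + 1) for face in marks[i]]
        if pairs:
            vectors = [v for _, v in pairs]
            best, _ = optimal_face(vectors)
            frame_idx = pairs[best][0]
            store_optimal_face(cloud_db, vectors[best], frames[frame_idx],
                               frame_time(frame_idx), threshold=beta)
```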
While the foregoing is directed to the preferred embodiment of the present invention, it will be understood by those skilled in the art that various changes and modifications may be made without departing from the spirit and scope of the invention as defined in the appended claims.

Claims (10)

1. An optimized surveillance video storage system and device, comprising:
a video acquisition front end, a face recognition module, a primary real-time storage device, a central cloud storage device and a search module;
the video acquisition front end is responsible for acquiring the people-flow video at the entrance of a passage in real time and temporarily storing the acquired video in the memory of the acquisition front end equipment;
the face recognition module performs face detection, feature extraction and face comparison on the temporarily stored video frame by frame, marks whether each frame of the image contains a face, stores the optimal face picture meeting the system's predefined conditions in the central cloud storage device, and stores the continuous video segments containing faces in the primary real-time storage device;
when a user needs to retrieve, the user inputs any photo of the searched person; the search engine performs face detection and feature extraction on the photo and compares the result with the face features in the cloud storage device database or the primary real-time storage device, so as to obtain the corresponding photos and related videos.
2. The method and system of claim 1, including:
the face recognition module is used for carrying out face detection on the video temporarily stored at the front end of the video acquisition frame by frame, and marking the face detection result of each frame in a detection mode of a deep convolutional neural network learning algorithm or other algorithms with the correct detection rate higher than 95%; if the number of correctly detected faces in the detection result of the same image frame is more than 1, recording each face detected in the image frame, extracting a video segment of the face from the video frame marked with the correctly detected face, and simultaneously transmitting the video segment to a primary real-time storage device.
3. The method of claim 2, further comprising:
and performing face feature extraction on each face in each image frame in the video segmentation from the video segmentation in which the face is correctly detected, wherein the feature extraction method is a general knowledge-based characterization method or an algebraic feature-based or statistical learning-based characterization method to obtain a feature vector for characterizing each face.
Matching the feature vector of each face in each image frame with each face in the previous image frame one by one from the first image frame with the face, and considering the face meeting the similarity requirement as the same face, or considering the face as a new face; forming a set of feature vectors belonging to the same face, marking the feature vectors correspondingly, and establishing a new feature vector set for a new face; until the last image frame in the video segment, or to the maximum continuous image frame processing capability of the present invention.
4. The method of claim 2, comprising:
in a feature vector set belonging to the same face, performing correlation operation on each feature vector and other feature vectors except the feature vector and summing correlation operation results to obtain a matching value of the feature vector in the set, wherein the maximum matching value obtained by calculation in the feature vector set is the optimal face matching value, the face feature vector corresponding to the maximum matching value is the optimal face feature vector, and an image frame corresponding to the feature vector is recorded as an optimal face image frame. Or finding a largest face from the face image set corresponding to the face feature set as an optimal face, and obtaining an image frame corresponding to the optimal face as an optimal face image frame.
5. The method of claim 2, further characterized by comprising:
comparing the face feature vector corresponding to the optimal face image frame with a face feature vector library in central cloud storage, if the similarity between the face feature vector and a certain face feature vector in a compared database is greater than the similarity, determining that the two faces belong to the same person, otherwise, determining that the compared feature vectors belong to different faces; if a plurality of face features are higher than the similarity, the face feature vector and the face feature vector with the highest similarity in the database belong to the same face; and then storing the image frame and the corresponding feature vector in a central cloud storage database, and otherwise, instructing the central cloud storage to create a new storage space and a unique identifier for the face.
6. The method of claim 2, further characterized by comprising:
if the number of the optimal faces in the optimal image frame exceeds 1, comparing each optimal face feature vector with a face feature vector library in central cloud storage, if matched face feature vectors exist, storing the image frame and the corresponding feature vectors in a storage space of the face, and otherwise, establishing a new storage space and a unique identifier for the face.
7. The method and system of claim 1, further characterized by comprising:
the first-stage real-time storage equipment is used for extracting video segments of the human face from video frames marked with the correct detected human face when the video is stored; the starting image frame of each video segment is the first frame image of which the existence of a human face is detected, or is stored in advance by N1 frames according to actual requirements; and the end of each video segment is the last frame with the detected human face, the monitoring date and time of each video segment are recorded, and then the video segments are combined into a preset file one by one according to the time sequence.
8. The method and system of claim 1, further characterized by comprising:
the central cloud storage device establishes a face feature library and other basic information according to actual requirements or establishes basic index information stored by the face according to a system self-defining mode.
9. The method and system of claim 8, further characterized by comprising:
then judging whether the face is the existing face in the database according to the extracted face feature vector comparison result, if so, storing the face feature vector and the optimal image frame of the face, and recording the generation date and time of the image frame and the feature vector thereof;
if not, establishing a new unique index identification and a new storage space for the face according to rules defined by the system, and storing the optimal image frame corresponding to the face, the generation date and time of the optimal image frame and the feature vector of the optimal image frame.
10. The method and system of claim 1, including:
the system detects the human face by adopting the same method as the human face detection according to the image input by the user, extracts the characteristic vector of the human face, compares the human face characteristic with the human face image characteristic vector stored in the primary real-time storage device and the central cloud storage device, finds all human face video segments or images matched with the human face characteristic vector, and feeds the human face video segments or images back to the front searching end. If no appropriate result is available, the user either abandons the search, provides a new image for inquiry, or inputs the date and time period for searching to obtain a corresponding feedback result.
During searching, the video segment of the primary real-time storage device is firstly inquired by default, and if the video segment meets the conditions, the result is fed back to the user from the primary storage center. And then querying the image data of the central cloud storage device.
CN201611035754.6A 2016-11-19 2016-11-19 Optimized monitoring video storage system Active CN106709424B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201611035754.6A CN106709424B (en) 2016-11-19 2016-11-19 Optimized monitoring video storage system

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201611035754.6A CN106709424B (en) 2016-11-19 2016-11-19 Optimized monitoring video storage system

Publications (2)

Publication Number Publication Date
CN106709424A true CN106709424A (en) 2017-05-24
CN106709424B CN106709424B (en) 2022-11-11

Family

ID=58940179

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201611035754.6A Active CN106709424B (en) 2016-11-19 2016-11-19 Optimized monitoring video storage system

Country Status (1)

Country Link
CN (1) CN106709424B (en)

Cited By (22)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108733819A (en) * 2018-05-22 2018-11-02 深圳云天励飞技术有限公司 A kind of personnel's archives method for building up and device
CN108875512A (en) * 2017-12-05 2018-11-23 北京旷视科技有限公司 Face identification method, device, system, storage medium and electronic equipment
CN108881816A (en) * 2017-10-12 2018-11-23 北京旷视科技有限公司 Generation method, device and the computer storage medium of video file
CN109241378A (en) * 2018-08-29 2019-01-18 北京旷视科技有限公司 Archives method for building up, device, equipment and storage medium
CN109241299A (en) * 2018-09-25 2019-01-18 腾讯科技(深圳)有限公司 Multimedia resource searching method, device, storage medium and equipment
CN109815371A (en) * 2019-01-30 2019-05-28 无锡华云数据技术服务有限公司 Video monitoring data stores access system
CN110263704A (en) * 2019-06-18 2019-09-20 浙江宇视科技有限公司 Human face data acquisition method, device, server, video collector and medium
CN110267007A (en) * 2019-06-28 2019-09-20 Oppo广东移动通信有限公司 Image processing method, device, server and storage medium
CN110266953A (en) * 2019-06-28 2019-09-20 Oppo广东移动通信有限公司 Image processing method, device, server and storage medium
CN110866491A (en) * 2019-11-13 2020-03-06 腾讯科技(深圳)有限公司 Target retrieval method, device, computer readable storage medium and computer equipment
CN111295875A (en) * 2018-05-07 2020-06-16 合刃科技(武汉)有限公司 Shooting control method and electronic device
CN111385440A (en) * 2018-12-27 2020-07-07 芜湖潜思智能科技有限公司 Monitoring camera with face recording and inquiring functions
CN111414517A (en) * 2020-03-26 2020-07-14 成都市喜爱科技有限公司 Video face analysis method and device and server
CN111723242A (en) * 2020-05-21 2020-09-29 深圳信息职业技术学院 Customer portrait drawing method, customer portrait drawing device, terminal equipment and medium
CN111831742A (en) * 2020-05-25 2020-10-27 佛山市华全电气照明有限公司 Smart city video positioning method and system
CN112597961A (en) * 2020-12-30 2021-04-02 上海大学 Interest target extraction method and system based on big data
CN112989973A (en) * 2021-03-02 2021-06-18 上海汉时信息科技有限公司 Abnormal behavior detection reminding method and system
CN113098642A (en) * 2021-04-22 2021-07-09 浙江万里学院 Logistics management method based on Beidou satellite positioning technology
CN113343842A (en) * 2021-06-04 2021-09-03 中山大学 Self-adaptive face recognition method
CN113779309A (en) * 2021-09-01 2021-12-10 杭州视洞科技有限公司 Video screening method based on face recognition
CN114333119A (en) * 2021-12-31 2022-04-12 上海商汤临港智能科技有限公司 Vehicle unlocking method, vehicle management method, terminal, vehicle unlocking system, vehicle unlocking device, and storage medium
CN117111858A (en) * 2023-10-23 2023-11-24 深圳萨尔浒网络科技有限公司 Computer file data matching system

Citations (21)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20040183951A1 (en) * 2003-03-06 2004-09-23 Lee Hyeok-Beom Image-detectable monitoring system and method for using the same
JP2007300456A (en) * 2006-05-01 2007-11-15 Nippon Telegr & Teleph Corp <Ntt> Video monitoring system and its index generation method
CN101299812A (en) * 2008-06-25 2008-11-05 北京中星微电子有限公司 Method, system for analyzing, storing video as well as method, system for searching video
CN101505412A (en) * 2009-03-11 2009-08-12 上海伯里春贸易有限公司 Human face collecting identification monitoring system and method for hotel, office building and public place for entertainment
CN101546377A (en) * 2009-04-28 2009-09-30 上海银晨智能识别科技有限公司 Face image capture system and method
CN101763388A (en) * 2008-12-25 2010-06-30 北京中星微电子有限公司 Method for searching video, system therefor and device therefor as well as video storing method and system thereof
CN102170563A (en) * 2011-03-24 2011-08-31 杭州海康威视软件有限公司 Intelligent person capture system and person monitoring management method
CN102609729A (en) * 2012-02-14 2012-07-25 中国船舶重工集团公司第七二六研究所 Method and system for recognizing faces shot by multiple cameras
CN102665054A (en) * 2012-05-10 2012-09-12 江苏友上科技实业有限公司 Network video recorder system for fast face retrieval
CN102799877A (en) * 2012-09-11 2012-11-28 上海中原电子技术工程有限公司 Method and system for screening face images
CN102929903A (en) * 2012-07-04 2013-02-13 北京中盾安全技术开发公司 Rapid video retrieval method based on layered structuralized description of video information
CN103258028A (en) * 2013-05-08 2013-08-21 林凡 Video hierarchical and partitioned storage system based on content features
CN103530652A (en) * 2013-10-23 2014-01-22 北京中视广信科技有限公司 Face clustering based video categorization method and retrieval method as well as systems thereof
CN104239881A (en) * 2013-06-08 2014-12-24 杭州海康威视数字技术股份有限公司 Method and system for automatically finding and registering target in surveillance video
CN204156985U (en) * 2014-08-29 2015-02-11 上海智达商投资管理合伙企业(有限合伙) There is high-definition camera equipment and the supervisory control system of face recognition comparison function
CN204859420U (en) * 2015-08-19 2015-12-09 江苏联禹智能工程有限公司 Intelligent building video monitor system
CN105243160A (en) * 2015-10-28 2016-01-13 西安美林数据技术股份有限公司 Mass data-based distributed video processing system
CN105357475A (en) * 2015-10-28 2016-02-24 小米科技有限责任公司 Video playing method and device
CN105373626A (en) * 2015-12-09 2016-03-02 深圳融合永道科技有限公司 Distributed face recognition track search system and method
CN205566477U (en) * 2016-02-17 2016-09-07 成都理工大学 Low -power consumption people face watch -dog
CN106060442A (en) * 2016-05-20 2016-10-26 浙江宇视科技有限公司 Video storage method, device and system

Patent Citations (21)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20040183951A1 (en) * 2003-03-06 2004-09-23 Lee Hyeok-Beom Image-detectable monitoring system and method for using the same
JP2007300456A (en) * 2006-05-01 2007-11-15 Nippon Telegr & Teleph Corp <Ntt> Video monitoring system and its index generation method
CN101299812A (en) * 2008-06-25 2008-11-05 北京中星微电子有限公司 Method, system for analyzing, storing video as well as method, system for searching video
CN101763388A (en) * 2008-12-25 2010-06-30 北京中星微电子有限公司 Method for searching video, system therefor and device therefor as well as video storing method and system thereof
CN101505412A (en) * 2009-03-11 2009-08-12 上海伯里春贸易有限公司 Human face collecting identification monitoring system and method for hotel, office building and public place for entertainment
CN101546377A (en) * 2009-04-28 2009-09-30 上海银晨智能识别科技有限公司 Face image capture system and method
CN102170563A (en) * 2011-03-24 2011-08-31 杭州海康威视软件有限公司 Intelligent person capture system and person monitoring management method
CN102609729A (en) * 2012-02-14 2012-07-25 中国船舶重工集团公司第七二六研究所 Method and system for recognizing faces shot by multiple cameras
CN102665054A (en) * 2012-05-10 2012-09-12 江苏友上科技实业有限公司 Network video recorder system for fast face retrieval
CN102929903A (en) * 2012-07-04 2013-02-13 北京中盾安全技术开发公司 Rapid video retrieval method based on layered structuralized description of video information
CN102799877A (en) * 2012-09-11 2012-11-28 上海中原电子技术工程有限公司 Method and system for screening face images
CN103258028A (en) * 2013-05-08 2013-08-21 林凡 Video hierarchical and partitioned storage system based on content features
CN104239881A (en) * 2013-06-08 2014-12-24 杭州海康威视数字技术股份有限公司 Method and system for automatically finding and registering target in surveillance video
CN103530652A (en) * 2013-10-23 2014-01-22 北京中视广信科技有限公司 Face clustering based video categorization method and retrieval method as well as systems thereof
CN204156985U (en) * 2014-08-29 2015-02-11 上海智达商投资管理合伙企业(有限合伙) There is high-definition camera equipment and the supervisory control system of face recognition comparison function
CN204859420U (en) * 2015-08-19 2015-12-09 江苏联禹智能工程有限公司 Intelligent building video monitor system
CN105243160A (en) * 2015-10-28 2016-01-13 西安美林数据技术股份有限公司 Mass data-based distributed video processing system
CN105357475A (en) * 2015-10-28 2016-02-24 小米科技有限责任公司 Video playing method and device
CN105373626A (en) * 2015-12-09 2016-03-02 深圳融合永道科技有限公司 Distributed face recognition track search system and method
CN205566477U (en) * 2016-02-17 2016-09-07 成都理工大学 Low -power consumption people face watch -dog
CN106060442A (en) * 2016-05-20 2016-10-26 浙江宇视科技有限公司 Video storage method, device and system

Non-Patent Citations (4)

* Cited by examiner, † Cited by third party
Title
MANEESH UPMANYU 等: "Efficient privacy preserving video surveillance", 《2009 IEEE 12TH INTERNATIONAL CONFERENCE ON COMPUTER VISION》 *
YAN LI 等: "Face Video Retrieval with Image Query via Hashing across Euclidean Space and Riemannian Manifold", 《CVPR 2015》 *
无 (Anonymous): "Intelligent face detection and storage with instant global Internet access: a review of the Hangzhou Zhinuo ZN-DVR6104 series embedded face-detection DVR", 《中国公共安全》 (CHINA PUBLIC SECURITY) *
杨强 (YANG QIANG) et al.: "Extraction of 'person' semantics in video retrieval technology", 《苏州科技学院学报(工程技术版)》 (JOURNAL OF SUZHOU UNIVERSITY OF SCIENCE AND TECHNOLOGY, ENGINEERING AND TECHNOLOGY EDITION) *

Cited By (29)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108881816A (en) * 2017-10-12 2018-11-23 北京旷视科技有限公司 Generation method, device and the computer storage medium of video file
CN108875512A (en) * 2017-12-05 2018-11-23 北京旷视科技有限公司 Face identification method, device, system, storage medium and electronic equipment
CN111295875A (en) * 2018-05-07 2020-06-16 合刃科技(武汉)有限公司 Shooting control method and electronic device
CN108733819B (en) * 2018-05-22 2021-07-06 深圳云天励飞技术有限公司 Personnel archive establishing method and device
CN108733819A (en) * 2018-05-22 2018-11-02 深圳云天励飞技术有限公司 A kind of personnel's archives method for building up and device
CN109241378A (en) * 2018-08-29 2019-01-18 北京旷视科技有限公司 Archives method for building up, device, equipment and storage medium
CN109241299A (en) * 2018-09-25 2019-01-18 腾讯科技(深圳)有限公司 Multimedia resource searching method, device, storage medium and equipment
CN111385440A (en) * 2018-12-27 2020-07-07 芜湖潜思智能科技有限公司 Monitoring camera with face recording and inquiring functions
CN109815371A (en) * 2019-01-30 2019-05-28 无锡华云数据技术服务有限公司 Video monitoring data stores access system
CN110263704A (en) * 2019-06-18 2019-09-20 浙江宇视科技有限公司 Human face data acquisition method, device, server, video collector and medium
CN110263704B (en) * 2019-06-18 2021-07-20 浙江宇视科技有限公司 Face data acquisition method, device, server, video acquisition device and medium
CN110266953A (en) * 2019-06-28 2019-09-20 Oppo广东移动通信有限公司 Image processing method, device, server and storage medium
CN110266953B (en) * 2019-06-28 2021-05-07 Oppo广东移动通信有限公司 Image processing method, image processing apparatus, server, and storage medium
CN110267007A (en) * 2019-06-28 2019-09-20 Oppo广东移动通信有限公司 Image processing method, device, server and storage medium
CN110866491A (en) * 2019-11-13 2020-03-06 腾讯科技(深圳)有限公司 Target retrieval method, device, computer readable storage medium and computer equipment
CN110866491B (en) * 2019-11-13 2023-11-24 腾讯科技(深圳)有限公司 Target retrieval method, apparatus, computer-readable storage medium, and computer device
CN111414517A (en) * 2020-03-26 2020-07-14 成都市喜爱科技有限公司 Video face analysis method and device and server
CN111414517B (en) * 2020-03-26 2023-05-19 成都市喜爱科技有限公司 Video face analysis method, device and server
CN111723242A (en) * 2020-05-21 2020-09-29 深圳信息职业技术学院 Customer portrait drawing method, customer portrait drawing device, terminal equipment and medium
CN111831742A (en) * 2020-05-25 2020-10-27 佛山市华全电气照明有限公司 Smart city video positioning method and system
CN112597961A (en) * 2020-12-30 2021-04-02 上海大学 Interest target extraction method and system based on big data
CN112989973A (en) * 2021-03-02 2021-06-18 上海汉时信息科技有限公司 Abnormal behavior detection reminding method and system
CN113098642A (en) * 2021-04-22 2021-07-09 浙江万里学院 Logistics management method based on Beidou satellite positioning technology
CN113098642B (en) * 2021-04-22 2023-05-09 浙江万里学院 Logistics management method based on Beidou satellite positioning technology
CN113343842A (en) * 2021-06-04 2021-09-03 中山大学 Self-adaptive face recognition method
CN113779309A (en) * 2021-09-01 2021-12-10 杭州视洞科技有限公司 Video screening method based on face recognition
CN114333119A (en) * 2021-12-31 2022-04-12 上海商汤临港智能科技有限公司 Vehicle unlocking method, vehicle management method, terminal, vehicle unlocking system, vehicle unlocking device, and storage medium
CN117111858A (en) * 2023-10-23 2023-11-24 深圳萨尔浒网络科技有限公司 Computer file data matching system
CN117111858B (en) * 2023-10-23 2024-04-02 深圳萨尔浒网络科技有限公司 Computer file data matching system

Also Published As

Publication number Publication date
CN106709424B (en) 2022-11-11

Similar Documents

Publication Publication Date Title
CN106709424B (en) Optimized monitoring video storage system
JP6854881B2 (en) Face image matching system and face image search system
US9141184B2 (en) Person detection system
CN111444848A (en) Specific scene model upgrading method and system based on federal learning
CN101300588A (en) Determining a particular person from a collection
CN110532970B (en) Age and gender attribute analysis method, system, equipment and medium for 2D images of human faces
US8565493B2 (en) Method, system, and computer-readable recording medium for recognizing face of person included in digital data by using feature data
CN106709468A (en) City region surveillance system and device
CN111222373B (en) Personnel behavior analysis method and device and electronic equipment
CN110598551B (en) Method, device, equipment and medium for improving pedestrian identity recognition efficiency
US8837787B2 (en) System and method for associating a photo with a data structure node
CN109635146A (en) A kind of target query method and system based on characteristics of image
KR20190093799A (en) Real-time missing person recognition system using cctv and method thereof
WO2019083509A1 (en) Person segmentations for background replacements
CN113065615A (en) Scenario-based edge analysis algorithm issuing method and device and storage medium
CN112949346A (en) Feature library updating method and device, inference server and storage medium
CN114519863A (en) Human body weight recognition method, human body weight recognition apparatus, computer device, and medium
CN110543584A (en) method, device, processing server and storage medium for establishing face index
US11763613B2 (en) Automatic creation and management of digital identity profiles for access control
JP6795243B1 (en) Nose print matching device and method and program
WO2020155486A1 (en) Facial recognition optimization method and apparatus, computer device and storage medium
US9286707B1 (en) Removing transient objects to synthesize an unobstructed image
CN109359689A (en) A kind of data identification method and device
CN112883213B (en) Picture archiving method and device and electronic equipment
WO2023093241A1 (en) Pedestrian re-identification method and apparatus, and storage medium

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
TA01 Transfer of patent application right
TA01 Transfer of patent application right

Effective date of registration: 20180702

Address after: 519085 the E3 unit of the second level nursery of A plant, No. 1, Port Road, Tangjia Bay, Zhuhai high tech Zone, Guangdong.

Applicant after: Guangdong Zhongke human intelligence technology Co.,Ltd.

Address before: No. 1, No. 19 Building, jasmine garden, A-1416, northwest of Haidian District, Beijing.

Applicant before: BEIJING ZHONGKE AIR-WORLD TECHNOLOGY CO.,LTD.

SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
GR01 Patent grant