CN112132022A - Face snapshot framework, face snapshot method, device, equipment and storage medium - Google Patents


Info

Publication number
CN112132022A
Authority
CN
China
Prior art keywords
snapshot
face
video stream
server
service instance
Prior art date
Legal status
Granted
Application number
CN202011004382.7A
Other languages
Chinese (zh)
Other versions
CN112132022B (en)
Inventor
丁伟
李影
张国辉
宋晨
Current Assignee
Ping An Technology Shenzhen Co Ltd
Original Assignee
Ping An Technology Shenzhen Co Ltd
Priority date
Filing date
Publication date
Application filed by Ping An Technology Shenzhen Co Ltd filed Critical Ping An Technology Shenzhen Co Ltd
Priority to CN202011004382.7A priority Critical patent/CN112132022B/en
Priority to PCT/CN2020/135513 priority patent/WO2021159842A1/en
Publication of CN112132022A publication Critical patent/CN112132022A/en
Application granted granted Critical
Publication of CN112132022B publication Critical patent/CN112132022B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00: Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10: Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16: Human faces, e.g. facial parts, sketches or expressions
    • G06V40/161: Detection; Localisation; Normalisation
    • G06V40/166: Detection; Localisation; Normalisation using acquisition arrangements
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F9/00: Arrangements for program control, e.g. control units
    • G06F9/06: Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F9/46: Multiprogramming arrangements
    • G06F9/48: Program initiating; Program switching, e.g. by interrupt
    • G06F9/4806: Task transfer initiation or dispatching
    • G06F9/4843: Task transfer initiation or dispatching by program, e.g. task dispatcher, supervisor, operating system
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F9/00: Arrangements for program control, e.g. control units
    • G06F9/06: Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F9/46: Multiprogramming arrangements
    • G06F9/50: Allocation of resources, e.g. of the central processing unit [CPU]
    • G06F9/5083: Techniques for rebalancing the load in a distributed system
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00: Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10: Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16: Human faces, e.g. facial parts, sketches or expressions
    • G06V40/161: Detection; Localisation; Normalisation
    • Y: GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02: TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02D: CLIMATE CHANGE MITIGATION TECHNOLOGIES IN INFORMATION AND COMMUNICATION TECHNOLOGIES [ICT], I.E. INFORMATION AND COMMUNICATION TECHNOLOGIES AIMING AT THE REDUCTION OF THEIR OWN ENERGY USE
    • Y02D10/00: Energy efficient computing, e.g. low power processors, power management or thermal management

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Software Systems (AREA)
  • Health & Medical Sciences (AREA)
  • General Health & Medical Sciences (AREA)
  • Oral & Maxillofacial Surgery (AREA)
  • Human Computer Interaction (AREA)
  • Multimedia (AREA)
  • General Engineering & Computer Science (AREA)
  • Studio Devices (AREA)

Abstract

The application belongs to the technical field of monitoring and provides a face snapshot framework comprising a server, video stream snapshot service instances communicatively connected to the server, and camera devices communicatively connected to the service instances. The number of video stream snapshot service instances is deployed dynamically according to the number of display cards in the server, and a service instance controls its camera devices to perform real-time face snapshot operations upon receiving a face snapshot task request sent by the server. The framework has a clear division of functions, strong maintainability, and strong functional extensibility: one server can support the deployment of multiple video stream snapshot service instances and manage them flexibly, one service instance can control multiple camera devices, and real-time snapshot of many concurrent video streams is supported. The application also provides a face snapshot method, apparatus, device, and storage medium based on this face snapshot framework.

Description

Face snapshot framework, face snapshot method, device, equipment and storage medium
Technical Field
The present application relates to the field of monitoring technologies, and in particular, to a face snapshot architecture, a face snapshot method, an apparatus, a device, and a storage medium.
Background
In the technical field of monitoring, places with heavy foot traffic are generally managed through people-flow statistics, face snapshot, and similar means. Current background management systems for face snapshot are typically designed with the functions related to face snapshot, such as camera management, real-time video stream decoding, face detection, and snapshot data pushing, implemented in one independent module. In practical application, this architecture limits the number of video stream channels a GPU server can snapshot in real time, so snapshots may fail to keep up with real time. It also suffers from a disordered architecture, heavy coupling between modules, weak functional extensibility, and high maintenance cost.
Disclosure of Invention
In view of this, the present application provides a face snapshot framework and a face snapshot method, apparatus, device, and storage medium. The framework has a clear division of functions, strong maintainability, and strong functional extensibility; it supports distributed cluster deployment and dynamic adjustment of snapshot tasks; and it realizes real-time face snapshot from multiple camera channels, supporting a high number of concurrent video streams.
A first aspect of an embodiment of the present application provides a face snapshot framework, including: the system comprises a server, a video stream snapshot service instance in communication connection with the server and camera equipment in communication connection with the video stream snapshot service instance;
the number of the video stream snapshot service instances is dynamically deployed according to the number of the display cards in the server, and the video stream snapshot service instances control the camera equipment to carry out real-time face snapshot operation by receiving the face snapshot task request sent by the server.
With reference to the first aspect, in a first possible implementation manner of the first aspect, if a plurality of servers are configured in the face snapshot framework, the plurality of servers are deployed in a distributed cluster in the face snapshot framework.
With reference to the first aspect, in a second possible implementation manner of the first aspect, a face snapshot algorithm library is further configured in the face snapshot framework, wherein at least one of a face frame detection algorithm, a feature point detection algorithm, a face tracking algorithm, a face quality detection algorithm, and a face feature extraction algorithm is stored in the face snapshot algorithm library, and each algorithm is configured with a corresponding API interface for engineering calls.
With reference to the first aspect, the first possible implementation manner of the first aspect, or the second possible implementation manner of the first aspect, in a third possible implementation manner of the first aspect, a snapshot policy configuration mechanism is further configured in the face snapshot framework, and the processing manner by which the video stream snapshot service instance executes face snapshot is configured based on a plurality of snapshot policies preset in the snapshot policy configuration mechanism.
A second aspect of the embodiments of the present application provides a face snapshot method, where the face snapshot method includes:
a server in a face snapshot framework receives a face snapshot task request sent by a control center, wherein the face snapshot task request contains camera equipment information corresponding to a current snapshot task;
the server acquires load information of a video stream snapshot service instance in a face snapshot framework, wherein the video stream snapshot service instance is used for controlling a camera device to execute a face snapshot task, and sends the camera device information to the video stream snapshot service instance according to the load information so as to balance the load of the video stream snapshot service instance;
and the video stream snapshot service instance connects to the camera device according to the camera device information, controls the camera device to snapshot faces, and feeds the snapshot data back to the control center through the server.
With reference to the second aspect, in a first possible implementation manner of the second aspect, the method further includes:
identifying the snapshot strategy requirement of the current snapshot task from the face snapshot task request;
and calling a snapshot strategy matched with the snapshot strategy requirement from a preset snapshot strategy configuration mechanism, and configuring the snapshot strategy to the camera equipment pointed by the camera equipment information so as to enable the camera equipment to perform face snapshot according to the snapshot strategy.
With reference to the second aspect, in a second possible implementation manner of the second aspect, the sending, according to the load information, the image capturing apparatus information to the video stream snapshot service instance to load balance the video stream snapshot service instance includes:
identifying the number of video stream snapshot service instances currently running in a face snapshot framework and the number of camera devices currently controlled by the video stream snapshot service instances;
calculating the number of the camera devices correspondingly connected with each video stream snapshot service instance by combining the number of the video stream snapshot service instances, the number of the camera devices currently controlled by the video stream snapshot service instances and the number of the camera devices corresponding to the current snapshot task;
and according to the calculated number of the camera devices correspondingly connected with each video stream snapshot service instance, distributing the camera devices to the video stream snapshot service instances and sending corresponding camera device information.
A third aspect of an embodiment of the present application provides a face capture device, including:
the system comprises a receiving module, a face snapshot task processing module and a face snapshot processing module, wherein the receiving module is used for receiving a face snapshot task request sent by a control center through a server in a face snapshot framework, and the face snapshot task request contains camera equipment information corresponding to a current snapshot task;
the processing module is used for acquiring load information of a video stream snapshot service instance in a face snapshot framework, wherein the video stream snapshot service instance is used for controlling a camera device to execute a face snapshot task, and sending the camera device information to the video stream snapshot service instance according to the load information so as to balance the load of the video stream snapshot service instance;
and the execution module is used for connecting the camera device through the video stream snapshot service instance according to the camera device information, controlling the camera device to carry out face snapshot, and feeding the snapshot data back to the control center through the server.
A fourth aspect of the embodiments of the present application provides a computer device, which includes a memory, a processor, and a computer program stored in the memory and executable on the processor, where the processor implements the steps of the face snapshot method provided in the second aspect when executing the computer program.
A fifth aspect of embodiments of the present application provides a computer-readable storage medium, which stores a computer program, and when the computer program is executed by a processor, the computer program implements the steps of the face snapshot method provided by the second aspect.
The face snapshot framework, the snapshot method and the snapshot device provided by the embodiment of the application have the following beneficial effects:
the face snapshot framework comprises a server, a video stream snapshot service instance and camera equipment, wherein the server is in communication connection with the video stream snapshot service instance, the video stream snapshot service instance is in communication connection with the camera equipment, and the framework function is clear. The number of the video stream snapshot service instances is dynamically deployed according to the number of the display cards in the server, the video stream snapshot service instances control the plurality of the camera devices to carry out real-time face snapshot operation by receiving the face snapshot task requests of the server, the fact that one server supports deployment of the plurality of the video stream snapshot service instances is achieved, the plurality of the video stream snapshot service instances are flexibly managed, one video stream snapshot service instance can flexibly control the plurality of the camera devices, the number of the plurality of video stream paths is supported, maintainability is high, and expandability is high.
Drawings
In order to illustrate the technical solutions in the embodiments of the present application more clearly, the drawings needed in the description of the embodiments or the prior art are briefly introduced below. Obviously, the drawings in the following description show only some embodiments of the present application; those skilled in the art can obtain other drawings from these drawings without creative effort.
Fig. 1 is a schematic structural diagram of a face snapshot architecture according to a first embodiment of the present application;
fig. 2 is a flowchart illustrating an implementation of a face snapshot method according to a second embodiment of the present application;
fig. 3 is a flowchart illustrating an implementation of a face snapshot method according to a third embodiment of the present application;
fig. 4 is a flowchart illustrating an implementation of a face snapshot method according to a fourth embodiment of the present application;
fig. 5 is a block diagram of a face capture device according to a fifth embodiment of the present application;
fig. 6 is a block diagram of a computer device according to a sixth embodiment of the present application.
Detailed Description
In order to make the objects, technical solutions and advantages of the present application more apparent, the present application is described in further detail below with reference to the accompanying drawings and embodiments. It should be understood that the specific embodiments described herein are merely illustrative of the present application and are not intended to limit the present application.
Referring to fig. 1, fig. 1 is a schematic structural diagram of a face snapshot architecture according to a first embodiment of the present application. The face snapshot framework is mainly applied to a background system. As shown in fig. 1, the face snapshot framework provided in the first embodiment includes a server, video stream snapshot service instances communicatively connected to the server, and camera devices communicatively connected to the video stream snapshot service instances. The number of video stream snapshot service instances is deployed dynamically according to the number of display cards in the server, and a service instance controls its camera devices to perform real-time face snapshot operations upon receiving a face snapshot task request sent by the server.
In this embodiment, the number of display cards characterizes the server's GPU resources; a server may be configured with, for example, 1, 2, or 4 display cards. In a specific implementation of this embodiment, the deployment of video stream snapshot service instances in each server is dynamic: a server with multiple display cards may deploy fewer service instances than display cards, or exactly as many service instances as display cards. The server manages all video stream snapshot service instances it has deployed, all camera devices, and the associations between camera devices and service instances, and receives snapshot requests from the control center. It can be understood that the association between a camera device and a video stream snapshot service instance is dynamic and configurable: in different snapshot tasks, the same camera device may be associated with different service instances, and the same service instance may be associated with one or several camera devices. The association can be configured according to the current server load or the operator's task requirements, which gives strong flexibility. A video stream snapshot service instance connects to its camera devices through protocol communication and is responsible for performing GPU decoding and face detection on the camera devices associated with it and for pushing snapshot data to the server.
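The dynamic deployment rule described above, at most one video stream snapshot service instance per display card, can be sketched as follows. The GPU discovery command (`nvidia-smi -L`) and the function names are assumptions for illustration; the patent only states that the instance count is bounded by the number of display cards.

```python
import subprocess


def detect_gpu_count() -> int:
    """Query the number of installed display cards (GPUs).

    Uses `nvidia-smi -L` when available and falls back to 0; the exact
    discovery mechanism is an assumption, not specified by the patent.
    """
    try:
        out = subprocess.run(["nvidia-smi", "-L"], capture_output=True,
                             text=True, check=True).stdout
        return len([ln for ln in out.splitlines() if ln.startswith("GPU")])
    except (OSError, subprocess.CalledProcessError):
        return 0


def plan_instance_count(requested: int, gpu_count: int) -> int:
    """Deploy at most one snapshot service instance per display card:
    fewer instances than cards is allowed, more is not."""
    return max(0, min(requested, gpu_count))
```

For example, on a 4-card server a request for 8 instances would be capped at 4, while a request for 2 instances deploys only 2.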
Therefore, one video stream snapshot service instance can connect to and manage multiple camera channels, performing real-time face snapshot on several camera devices simultaneously and supporting a high number of concurrent video streams with strong maintainability and extensibility. The server and its video stream snapshot service instances communicate through shared memory: within the same server, when any video stream snapshot service instance pushes snapshot data to the server, the other service instances in that server can synchronously access the pushed snapshot data, so any two service instances can share and exchange data with each other with high communication efficiency.
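The shared-memory push between a service instance and the server can be sketched with Python's standard `multiprocessing.shared_memory` module. The 4-byte length header layout, the segment naming, and the function names are illustrative assumptions; the patent does not specify the actual memory layout.

```python
from multiprocessing import shared_memory


def push_snapshot(segment_name: str, payload: bytes) -> str:
    """Instance side: write snapshot bytes into a named shared segment.

    Assumed layout: a 4-byte big-endian length header followed by the
    raw payload (e.g. an encoded face image).
    """
    shm = shared_memory.SharedMemory(name=segment_name, create=True,
                                     size=4 + len(payload))
    shm.buf[:4] = len(payload).to_bytes(4, "big")
    shm.buf[4:4 + len(payload)] = payload
    shm.close()  # keep the segment alive; the server will unlink it
    return segment_name


def pull_snapshot(segment_name: str) -> bytes:
    """Server side: read the pushed snapshot and release the segment."""
    shm = shared_memory.SharedMemory(name=segment_name)
    n = int.from_bytes(bytes(shm.buf[:4]), "big")
    data = bytes(shm.buf[4:4 + n])
    shm.close()
    shm.unlink()
    return data
```

In the framework this avoids copying decoded frames through sockets between processes on the same server, which is why the text highlights its communication efficiency.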
The face snapshot framework provided by the embodiment comprises a server, a video stream snapshot service instance in communication connection with the server, and a camera device in communication connection with the video stream snapshot service instance; the number of the video stream snapshot service instances is dynamically deployed according to the number of the display cards in the server, and the video stream snapshot service instances control the camera equipment to carry out real-time face snapshot operation by receiving the face snapshot task request sent by the server. The architecture has clear functions, strong maintainability and strong function expansibility, and can realize that one server supports the deployment of a plurality of video stream snapshot service instances, flexibly manages the plurality of video stream snapshot service instances, controls a plurality of camera devices by one video stream snapshot service instance and supports the real-time snapshot of the number of multiple video streams.
In some embodiments of the application, when the face snapshot framework contains a plurality of servers, the servers are deployed as a distributed cluster so that snapshot tasks can be adjusted dynamically between the servers in the framework. In this embodiment, the face snapshot architecture exchanges data with the external control center through an exchange using message queues. The distributed cluster deployment works as follows: when one server publishes the face snapshot tasks it is running to a designated exchange through a RabbitMQ publish message queue, the other servers can synchronously obtain the snapshot task load of every server in the framework by subscribing to the exchange's data. If the snapshot task on one server stops, the load across the servers deployed in the framework is rebalanced by dynamically adjusting snapshot tasks while the server cluster runs.
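The load information exchanged over the message queue can be illustrated by the messages themselves. The JSON field names below are assumptions, and the actual transport (a RabbitMQ exchange, per the text) is deliberately out of scope; only the report format and the peer-selection logic are sketched.

```python
import json
import time


def make_load_report(server_id: str, tasks_running: int,
                     cameras_controlled: int) -> str:
    """Message a server would publish to the cluster exchange so that
    peers can observe its current snapshot task load."""
    return json.dumps({"server": server_id,
                       "tasks": tasks_running,
                       "cameras": cameras_controlled,
                       "ts": time.time()})


def least_loaded(reports: list) -> str:
    """Given the latest load report from every server, pick the server
    that should absorb a reassigned snapshot task (fewest cameras,
    ties broken by fewest tasks)."""
    parsed = [json.loads(r) for r in reports]
    return min(parsed, key=lambda r: (r["cameras"], r["tasks"]))["server"]
```

A server whose task stops would stop publishing it; peers then recompute the least-loaded target and migrate the task there.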
In some embodiments of the present application, a face snapshot algorithm library may also be configured in the face snapshot framework, storing algorithms including but not limited to: at least one of a face frame detection algorithm, a feature point detection algorithm, a face tracking algorithm, a face quality detection algorithm, and a face feature extraction algorithm. The face frame detection algorithm adopts a YOLOv3 detection model, which improves detection speed while preserving accuracy through a lightweight backbone network and a feature pyramid detection network. The feature point detection algorithm and the face feature extraction algorithm adopt a lightweight ShuffleNet model, which greatly reduces the model's computation while retaining accuracy through 1 × 1 pointwise group convolution and channel shuffle of the feature maps obtained after the group convolution. The face tracking algorithm predicts the tracking state from the face frames and feature point offset distances of consecutive frames, introduces an adaptive target tracking window, and resolves target adhesion and overlap in multi-face tracking through sequential tracking. Furthermore, a Kalman filter is introduced to predict the target, improving the tracking speed and accuracy of the face tracking algorithm. Face quality detection algorithms include, but are not limited to, brightness detection, angle detection, blur detection, pupil distance detection, and occlusion detection.
The bottom layer of the face snapshot algorithm library exposes the original discrete algorithm interfaces; through encapsulation, each algorithm stored in the library is given a corresponding API interface for engineering calls. When a video stream snapshot service instance performs face detection on the camera devices associated with it, it can call the face snapshot algorithm library, which handles GPU/CPU scheduling management behind those APIs and accelerates detection using CUDA parallel computing, giving the video stream snapshot service instance highly available, high-throughput data processing capacity.
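The idea of wrapping each algorithm behind a named engineering API can be sketched as a small registry. The class, method names, and the stubbed detector below are illustrative assumptions; a real deployment would register the YOLOv3-style detector, the ShuffleNet feature nets, and so on.

```python
from typing import Callable, Dict


class FaceAlgorithmLibrary:
    """Toy registry mirroring the patent's algorithm library: each
    algorithm is reachable only through a named API entry point, so
    service instances never touch the raw model code directly."""

    def __init__(self) -> None:
        self._apis: Dict[str, Callable] = {}

    def register(self, name: str):
        """Decorator that publishes a function as a library API."""
        def deco(fn: Callable) -> Callable:
            self._apis[name] = fn
            return fn
        return deco

    def call(self, name: str, *args, **kwargs):
        if name not in self._apis:
            raise KeyError(f"no such algorithm API: {name}")
        return self._apis[name](*args, **kwargs)


lib = FaceAlgorithmLibrary()


@lib.register("face_frame_detection")
def detect_faces(frame):
    # Stub: a real implementation would run the YOLOv3-style detector
    # on the frame and return one bounding box per detected face.
    return [(10, 10, 50, 50)]
```

A service instance would then call `lib.call("face_frame_detection", frame)` rather than importing model code, which is what makes the library's GPU/CPU scheduling transparent to callers.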
In some embodiments of the application, the snapshot result of a video stream snapshot service instance is affected in different snapshot tasks by device factors, such as the mounting height, position, and model of the camera device, and by environmental factors, such as brightness, angle, blur, pupil distance, and occlusion. In this embodiment, the face snapshot framework is configured with a snapshot policy configuration mechanism in which a plurality of snapshot policies are preset, including but not limited to: brightness, angle, blur, pupil distance, occlusion, and snapshot deduplication policies for face snapshot images. Based on the snapshot policy configuration mechanism, the currently applicable processing mode for executing face snapshot can be configured for each video stream snapshot service instance. For example, under a precise snapshot deduplication policy, the face image with the best quality can be selected and pushed from a configured number of consecutive frames in the video stream, where the number of consecutive frames may be a system default or customized by the operator. Alternatively, a time window for selecting the best-quality face image can be set: the window represents a time period, the best-quality face image is selected for pushing from all image frames in the video stream that fall within that period, and the window length may likewise be a system default or customized by the operator.
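The time-window variant of the deduplication policy described above can be sketched as a pure function. The `(timestamp, quality_score, frame_id)` tuple layout is an assumption for illustration; the patent only says the best-quality face image per time period is pushed.

```python
def best_frame_per_window(frames, window_seconds):
    """Deduplication policy sketch: bucket frames by time window and
    keep only the highest-quality face image per window for pushing.

    `frames` is an iterable of (timestamp, quality_score, frame_id);
    returns the kept frame ids in window order.
    """
    best = {}
    for ts, quality, frame_id in frames:
        window = int(ts // window_seconds)
        if window not in best or quality > best[window][0]:
            best[window] = (quality, frame_id)
    return [frame_id for _, frame_id in (best[w] for w in sorted(best))]
```

With a 1-second window, frames at t = 0.1 and t = 0.4 compete against each other while a frame at t = 1.2 is pushed separately, so each passerby yields roughly one image per window instead of one per frame.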
In this way, the snapshot policy of each video stream snapshot service instance is flexibly configured through the snapshot policy configuration mechanism, and face images can be processed under different snapshot policies within the face snapshot framework. In actual use, each instance's snapshot policy can be dynamically adjusted according to the number of snapshot tasks, the GPU hardware utilization, and/or the system load.
Referring to fig. 2, fig. 2 is a flowchart illustrating an implementation of a face capture method according to a second embodiment of the present application. The details are as follows:
step S11: a server in a face snapshot framework receives a face snapshot task request sent by a control center, wherein the face snapshot task request contains camera equipment information corresponding to a current snapshot task;
step S12: the server acquires load information of a video stream snapshot service instance in a face snapshot framework, wherein the video stream snapshot service instance is used for controlling a camera device to execute a face snapshot task, and sends the camera device information to the video stream snapshot service instance according to the load information so as to balance the load of the video stream snapshot service instance;
step S13: and the video stream snapshot server instance is connected with the camera equipment according to the camera equipment information, controls the camera equipment to snapshot the human face, and feeds snapshot data back to the control center through the server.
In this embodiment, based on the face snapshot framework provided in the first embodiment, the server in the framework receives the face snapshot task request sent by the control center and parses it to obtain the camera device information for the current snapshot task contained in the request. The camera device information includes, but is not limited to, the number of camera devices and the ID information of the camera devices. Having obtained these, the server monitors the load information of the video stream snapshot service instances currently configured in the framework to control camera devices for face snapshot tasks, and sends camera device information to the service instances based on that load information so as to balance the load of the service instances configured in the server. After a video stream snapshot service instance obtains the camera device information for the current snapshot task, it connects to the corresponding camera devices according to that information, controls the connected camera devices to perform face snapshot, and feeds the resulting snapshot data back to the control center through the server, completing the full face snapshot flow based on the face snapshot framework.
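The first step, parsing the control center's request into camera device information, can be sketched as follows. The JSON field names (`camera_count`, `camera_ids`) are assumptions for illustration; the patent only states that the request carries the number and ID information of the camera devices.

```python
import json


def parse_snapshot_request(raw: str):
    """Parse a face snapshot task request from the control center and
    return the camera device IDs for the current snapshot task.

    Raises ValueError when the declared count and the ID list disagree,
    a basic sanity check before any instance is assigned work.
    """
    req = json.loads(raw)
    count = int(req["camera_count"])
    ids = list(req["camera_ids"])
    if count != len(ids):
        raise ValueError("camera count does not match ID list")
    return ids
```

The returned IDs are what the server later distributes to service instances according to their load.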
In this embodiment, after each video stream snapshot service instance receives the ID information of the camera devices corresponding to the current snapshot task, it connects to the corresponding camera devices through protocol communication according to that ID information, performs GPU decoding on the connected camera devices, and then performs face snapshot processing to acquire snapshot data. The acquired snapshot data can then be pushed through shared memory to the communicatively connected server, and after the server obtains the snapshot data, it feeds the data back to the control center through a RabbitMQ publish message queue.
In some embodiments of the present application, please refer to fig. 3, and fig. 3 is a flowchart illustrating an implementation of a face snapshot method according to a third embodiment of the present application. The details are as follows:
step S21: identifying the number of video stream snapshot service instances currently running in the face snapshot framework and the number of camera devices each instance currently controls;
step S22: calculating the number of camera devices to be connected to each video stream snapshot service instance, based on the number of instances, the number of camera devices they currently control, and the number of camera devices required by the current snapshot task;
step S23: allocating camera devices to the video stream snapshot service instances according to the calculated numbers and sending the corresponding camera device information.
In this embodiment, to achieve load balancing across the video stream snapshot service instances in the face snapshot framework, the server first identifies the number of instances currently running and the number of camera devices each instance currently controls. Combining these figures with the number of camera devices required by the current snapshot task, it calculates how many camera devices should be connected to each instance. The calculation follows a load-balancing principle: after the camera devices of the current task are allocated among the instances, the loads of the instances should remain balanced. For example, suppose three video stream snapshot service instances A, B, and C run in the framework, each able to control 10 camera devices at full load. If instance A currently controls 5 camera devices, instance B controls 5, and instance C controls 3, and the current face snapshot task requires 5 camera devices, then load balancing allocates 1 camera device of the current task to instance A, 1 to instance B, and 3 to instance C.
The ID information of the camera devices for the current snapshot task is then sent to instances A, B, and C according to this allocation. In some specific embodiments, if every instance has the same full-load capacity but the devices of the current task cannot be split so that all instances end up with an equal number, the system may sort the instances and allocate devices in order, so that the difference between the instance connected to the most camera devices and the instance connected to the fewest is at most 1. In some specific embodiments, if the instances have different full-load capacities, devices may be allocated according to the ratio of each instance's connected devices to its full-load capacity.
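A minimal sketch of this load-balancing allocation, assuming a greedy rule that always assigns the next camera device to the instance with the lowest ratio of connected devices to full-load capacity (the patent describes the balancing principle but does not fix an exact algorithm). On the A/B/C example above, this rule reproduces the 1/1/3 split:

```python
def allocate_cameras(loads: dict, capacity: dict, new_cameras: int) -> dict:
    """Greedily assign each new camera to the instance with the lowest
    load ratio (connected devices / full-load capacity) that still has
    spare capacity. Returns how many new cameras each instance gets."""
    allocation = {name: 0 for name in loads}
    current = dict(loads)
    for _ in range(new_cameras):
        candidates = [n for n in current if current[n] < capacity[n]]
        if not candidates:
            raise RuntimeError("all instances are at full load")
        target = min(candidates, key=lambda n: current[n] / capacity[n])
        current[target] += 1
        allocation[target] += 1
    return allocation

loads = {"A": 5, "B": 5, "C": 3}
capacity = {"A": 10, "B": 10, "C": 10}
result = allocate_cameras(loads, capacity, 5)
print(result)  # → {'A': 1, 'B': 1, 'C': 3}
```

The ratio-based tie-breaking also covers the case of instances with differing full-load capacities, matching the proportional allocation mentioned above.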
Referring to fig. 4, fig. 4 is a flowchart of an implementation of a face snapshot method according to a fourth embodiment of the present application. The details are as follows:
step S31: identifying the snapshot strategy requirement of the current snapshot task from the face snapshot task request;
step S32: and calling a snapshot strategy matched with the snapshot strategy requirement from a preset snapshot strategy configuration mechanism, and configuring the snapshot strategy to the camera equipment pointed by the camera equipment information so as to enable the camera equipment to perform face snapshot according to the snapshot strategy.
In this embodiment, based on the face snapshot framework provided in the first embodiment, an operator may set policy requirements for the current snapshot task when sending the snapshot task request from the control center, such as the brightness, angle, sharpness, and pupil distance of the snapshot image, occlusion requirements, and the deduplication mode, so that snapshot policies can be configured specifically for different snapshot tasks. After receiving the face snapshot task request sent by the control center based on the operator's policy settings, the server parses the request to identify the snapshot policy requirements of the current task. These requirements are then passed to the snapshot policy configuration mechanism preset in the face snapshot framework, which configures the matching snapshot policy for each video stream snapshot service instance that has received camera device information for the current task. For example, if a sub-task requires a snapshot image brightness of 60% for camera device X1, and instance A controls camera device X1 for that sub-task, then instance A's snapshot policy for X1 is configured with a brightness requirement of 60%. In this way, the snapshot policy of each video stream snapshot service instance is configured flexibly through the snapshot policy configuration mechanism, meeting the snapshot requirements of different snapshot tasks.
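The policy lookup and per-camera configuration can be sketched as follows. The policy fields and preset names are assumptions for illustration; the patent lists brightness, angle, sharpness, pupil distance, occlusion, and deduplication as examples of configurable requirements but does not define a data model:

```python
from dataclasses import dataclass

@dataclass
class SnapshotPolicy:
    # Illustrative subset of the requirements named in the text.
    min_brightness: float = 0.0
    min_sharpness: float = 0.0
    dedup: bool = False

# Hypothetical presets held by the snapshot policy configuration mechanism.
PRESET_POLICIES = {
    "default": SnapshotPolicy(),
    "high_quality": SnapshotPolicy(min_brightness=0.6, min_sharpness=0.8, dedup=True),
}

def configure_policy(requirement: str, camera_policies: dict, camera_ids: list) -> None:
    """Look up the preset matching the task's policy requirement and
    attach it to every camera device named in the task's device info."""
    policy = PRESET_POLICIES[requirement]
    for cam in camera_ids:
        camera_policies[cam] = policy

# E.g. the X1 example above: configure a 60% brightness requirement.
policies = {}
configure_policy("high_quality", policies, ["X1"])
```

A service instance controlling X1 would then consult `policies["X1"]` when filtering snapshot frames.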
Referring to fig. 5, fig. 5 is a block diagram of a face snapshot device according to a fifth embodiment of the present application. The device in this embodiment includes modules for performing the steps of the method embodiments described above; for details, refer to those embodiments. For convenience of explanation, only the portions related to this embodiment are shown. As shown in fig. 5, the face snapshot device includes: a receiving module 51, a processing module 52, and an execution module 53. Wherein:
the receiving module 51 is configured to receive the face snapshot task request sent by the control center, where the request includes the camera device information for the current snapshot task. The processing module 52 is configured to acquire the load information of each video stream snapshot service instance in the face snapshot framework and, according to that load information, distribute the camera device information for the current task among the instances in a load-balanced manner, so that each instance connects to camera devices according to the received information and performs face snapshot processing on them. The execution module 53 is configured to push the snapshot data obtained by each instance's face snapshot processing to the server communicatively connected to that instance, and to feed the snapshot data back to the control center through the server.
The modules of the face snapshot device correspond one-to-one with the steps of the face snapshot method, and details are not repeated here.
Referring to fig. 6, fig. 6 is a block diagram of a computer device according to a sixth embodiment of the present application. As shown in fig. 6, the computer device 6 of this embodiment includes: a processor 61, a memory 62, and a computer program 63, such as a program for the face snapshot method, stored in the memory 62 and executable on the processor 61. The processor 61 implements the steps in the face snapshot method embodiments described above when executing the computer program 63. Alternatively, the processor 61 implements the functions of the modules in the face snapshot device embodiment when executing the computer program 63. For details, refer to the descriptions of the relevant embodiments, which are not repeated here.
Illustratively, the computer program 63 may be divided into one or more modules (units), which are stored in the memory 62 and executed by the processor 61 to implement the present application. The one or more modules may be a series of computer program instruction segments capable of performing specific functions, used to describe the execution of the computer program 63 in the computer device 6. For example, the computer program 63 may be divided into a receiving module, a processing module, and an execution module, each with the specific functions described above.
The computer device may include, but is not limited to, the processor 61 and the memory 62. Those skilled in the art will appreciate that fig. 6 is merely an example of the computer device 6 and does not limit it; the device may include more or fewer components than shown, combine some components, or use different components. For example, the computer device may also include input/output devices, network access devices, buses, and the like.
The processor 61 may be a Central Processing Unit (CPU), another general-purpose processor, a Digital Signal Processor (DSP), an Application-Specific Integrated Circuit (ASIC), a Field-Programmable Gate Array (FPGA) or other programmable logic device, a discrete gate or transistor logic device, a discrete hardware component, or the like. A general-purpose processor may be a microprocessor, or the processor may be any conventional processor.
The memory 62 may be an internal storage unit of the computer device 6, such as a hard disk or memory of the computer device 6. The memory 62 may also be an external storage device of the computer device 6, such as a plug-in hard disk, a Smart Media Card (SMC), a Secure Digital (SD) card, or a flash card provided on the computer device 6. Further, the memory 62 may include both an internal storage unit and an external storage device of the computer device 6. The memory 62 is used for storing the computer program and other programs and data required by the computer device, and may also be used to temporarily store data that has been output or is to be output.
The above-mentioned embodiments are only used for illustrating the technical solutions of the present application, and not for limiting the same; although the present application has been described in detail with reference to the foregoing embodiments, it should be understood by those of ordinary skill in the art that: the technical solutions described in the foregoing embodiments may still be modified, or some technical features may be equivalently replaced; such modifications and substitutions do not substantially depart from the spirit and scope of the embodiments of the present application and are intended to be included within the scope of the present application.

Claims (10)

1. A face snapshot framework is characterized by comprising a server, a video stream snapshot service instance in communication connection with the server, and a camera device in communication connection with the video stream snapshot service instance;
the number of the video stream snapshot service instances is dynamically deployed according to the number of graphics cards in the server, and the video stream snapshot service instances control the camera equipment to carry out real-time face snapshot operation by receiving the face snapshot task request sent by the server.
2. The face snapshot architecture of claim 1, wherein if a plurality of the servers are configured in the face snapshot architecture, the plurality of the servers are deployed in a distributed cluster in the face snapshot architecture.
3. The face snapshot framework of claim 1, wherein a face snapshot algorithm library is further configured in the face snapshot framework, wherein at least one of a face frame detection algorithm, a feature point detection algorithm, a face tracking algorithm, a face quality detection algorithm, and a face feature extraction algorithm is stored in the face snapshot algorithm library, and each algorithm is configured with a corresponding API interface for engineering call.
4. The face snapshot framework of any one of claims 1-3, wherein a snapshot policy configuration mechanism is further configured in the face snapshot framework, and a processing manner for executing face snapshot is configured for the video stream snapshot service instance based on a plurality of preset snapshot policies in the snapshot policy configuration mechanism.
5. A face snapshot method is characterized by comprising the following steps:
a server in a face snapshot framework receives a face snapshot task request sent by a control center, wherein the face snapshot task request contains camera equipment information corresponding to a current snapshot task;
the server acquires load information of a video stream snapshot service instance in a face snapshot framework, wherein the video stream snapshot service instance is used for controlling a camera device to execute a face snapshot task, and sends the camera device information to the video stream snapshot service instance according to the load information so as to balance the load of the video stream snapshot service instance;
and the video stream snapshot service instance is connected with the camera equipment according to the camera equipment information, controls the camera equipment to snapshot the human face, and feeds snapshot data back to the control center through the server.
6. The face snapshot method of claim 5, further comprising:
identifying the snapshot strategy requirement of the current snapshot task from the face snapshot task request;
and calling a snapshot strategy matched with the snapshot strategy requirement from a preset snapshot strategy configuration mechanism, and configuring the snapshot strategy to the camera equipment pointed by the camera equipment information so as to enable the camera equipment to perform face snapshot according to the snapshot strategy.
7. The face snapshot method according to claim 5, wherein the sending the camera device information to the video stream snapshot service instance according to the load information to load balance the video stream snapshot service instance comprises:
identifying the number of video stream snapshot service instances currently running in a face snapshot framework and the number of camera devices currently controlled by the video stream snapshot service instances;
calculating the number of the camera devices correspondingly connected with each video stream snapshot service instance by combining the number of the video stream snapshot service instances, the number of the camera devices currently controlled by the video stream snapshot service instances and the number of the camera devices corresponding to the current snapshot task;
and according to the calculated number of the camera devices correspondingly connected with each video stream snapshot service instance, distributing the camera devices to the video stream snapshot service instances and sending corresponding camera device information.
8. A face capture device, comprising:
the system comprises a receiving module, a face snapshot task processing module and a face snapshot processing module, wherein the receiving module is used for receiving a face snapshot task request sent by a control center through a server in a face snapshot framework, and the face snapshot task request contains camera equipment information corresponding to a current snapshot task;
the processing module is used for acquiring load information of a video stream snapshot service instance in a face snapshot framework, wherein the video stream snapshot service instance is used for controlling a camera device to execute a face snapshot task, and sending the camera device information to the video stream snapshot service instance according to the load information so as to balance the load of the video stream snapshot service instance;
and the execution module is used for connecting the camera equipment through the video stream snapshot service instance according to the camera equipment information, controlling the camera equipment to carry out face snapshot and feeding snapshot data back to the control center through the server.
9. A computer device comprising a memory, a processor and a computer program stored in the memory and executable on the processor, characterized in that the processor implements the steps of the method according to any of claims 5 to 7 when executing the computer program.
10. A computer-readable storage medium, in which a computer program is stored which, when being executed by a processor, carries out the steps of the method according to any one of claims 5 to 7.
CN202011004382.7A 2020-09-22 2020-09-22 Face snapshot architecture and face snapshot method, device, equipment and storage medium thereof Active CN112132022B (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
CN202011004382.7A CN112132022B (en) 2020-09-22 2020-09-22 Face snapshot architecture and face snapshot method, device, equipment and storage medium thereof
PCT/CN2020/135513 WO2021159842A1 (en) 2020-09-22 2020-12-11 Face capture architecture, face capture method and apparatus, device, and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202011004382.7A CN112132022B (en) 2020-09-22 2020-09-22 Face snapshot architecture and face snapshot method, device, equipment and storage medium thereof

Publications (2)

Publication Number Publication Date
CN112132022A true CN112132022A (en) 2020-12-25
CN112132022B CN112132022B (en) 2023-09-29

Family

ID=73842470

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202011004382.7A Active CN112132022B (en) 2020-09-22 2020-09-22 Face snapshot architecture and face snapshot method, device, equipment and storage medium thereof

Country Status (2)

Country Link
CN (1) CN112132022B (en)
WO (1) WO2021159842A1 (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113822177A (en) * 2021-09-06 2021-12-21 苏州中科先进技术研究院有限公司 Pet face key point detection method, device, storage medium and equipment

Families Citing this family (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113822216A (en) * 2021-09-29 2021-12-21 上海商汤智能科技有限公司 Event detection method, device, system, electronic equipment and storage medium
CN114666555B (en) * 2022-05-23 2023-03-24 创意信息技术股份有限公司 Edge gateway front-end system
CN116915786B (en) * 2023-09-13 2023-12-01 杭州立方控股股份有限公司 License plate recognition and vehicle management system with cooperation of multiple servers

Citations (12)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20090085740A1 (en) * 2007-09-27 2009-04-02 Thierry Etienne Klein Method and apparatus for controlling video streams
WO2017031886A1 (en) * 2015-08-26 2017-03-02 北京奇虎科技有限公司 Method for obtaining picture by means of remote control, and server
CN106650589A (en) * 2016-09-30 2017-05-10 北京旷视科技有限公司 Real-time face recognition system and method
CN208271202U (en) * 2018-06-05 2018-12-21 珠海芯桥科技有限公司 A kind of screen monitor system based on recognition of face
CN109447048A (en) * 2018-12-25 2019-03-08 苏州闪驰数控系统集成有限公司 A kind of artificial intelligence early warning system
CN109815839A (en) * 2018-12-29 2019-05-28 深圳云天励飞技术有限公司 Identification method and related products of wandering person under microservice architecture
CN109919069A (en) * 2019-02-27 2019-06-21 浙江浩腾电子科技股份有限公司 Oversize vehicle analysis system based on deep learning
WO2019127273A1 (en) * 2017-12-28 2019-07-04 深圳市锐明技术股份有限公司 Multi-person face detection method, apparatus, server, system, and storage medium
CN110457135A (en) * 2019-08-09 2019-11-15 重庆紫光华山智安科技有限公司 A kind of method of resource regulating method, device and shared GPU video memory
CN110798702A (en) * 2019-10-15 2020-02-14 平安科技(深圳)有限公司 Video decoding method, device, equipment and computer readable storage medium
WO2020094091A1 (en) * 2018-11-07 2020-05-14 杭州海康威视数字技术股份有限公司 Image capturing method, monitoring camera, and monitoring system
CN111385540A (en) * 2020-04-17 2020-07-07 深圳市市政设计研究院有限公司 Wisdom municipal infrastructure management system based on video stream analysis technique

Family Cites Families (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105827976A (en) * 2016-04-26 2016-08-03 北京博瑞空间科技发展有限公司 GPU (graphics processing unit)-based video acquisition and processing device and system
WO2019229213A1 (en) * 2018-06-01 2019-12-05 Canon Kabushiki Kaisha A load balancing method for video decoding in a system providing hardware and software decoding resources
CN110414305A (en) * 2019-04-23 2019-11-05 苏州闪驰数控系统集成有限公司 Artificial intelligence convolutional neural networks face identification system


Non-Patent Citations (5)

* Cited by examiner, † Cited by third party
Title
孙乔 等: "一种基于分布式服务器集群的可扩展负载均衡策略技术", 《运营技术广角》, no. 9, 31 December 2017 (2017-12-31), pages 2017264-1
徐琛杰 等: "面向微服务系统的运行时部署优化", 《计算机应用与软件》, vol. 35, no. 10, pages 138-145
高迪 等: "面向公共安全数据处理的浪涌模型研究应用", 《计算机科学》, vol. 44, no. 6, 30 June 2017 (2017-06-30), pages 342-347


Also Published As

Publication number Publication date
CN112132022B (en) 2023-09-29
WO2021159842A1 (en) 2021-08-19

Similar Documents

Publication Publication Date Title
CN112132022B (en) Face snapshot architecture and face snapshot method, device, equipment and storage medium thereof
US20190052532A1 (en) Cross layer signaling for network resource scaling
CN106713944A (en) Method and apparatus for processing streaming data task
US10979492B2 (en) Methods and systems for load balancing
CN107067365A (en) The embedded real-time video stream processing system of distribution and method based on deep learning
EP2359565B1 (en) Reassembling streaming data across multiple packetized communication channels
US9002969B2 (en) Distributed multimedia server system, multimedia information distribution method, and computer product
US20210209411A1 (en) Method for adjusting resource of intelligent analysis device and apparatus
CN102301664B (en) Method and device for dispatching streams of multicore processor
CN112116636A (en) Target analysis method, device, system, node equipment and storage medium
CN109697122A (en) Task processing method, equipment and computer storage medium
CN108668138A (en) A kind of method for downloading video and user terminal
CN109698850B (en) Processing method and system
WO2021063026A1 (en) Inference service networking method and apparatus
Rachuri et al. Decentralized modular architecture for live video analytics at the edge
Peng et al. Tangram: High-resolution video analytics on serverless platform with slo-aware batching
CN115941907A (en) RTP data packet sending method, system, electronic equipment and storage medium
CN112236755B (en) Memory access method and device
US10877800B2 (en) Method, apparatus and computer-readable medium for application scheduling
US20140327781A1 (en) Method for video surveillance, a related system, a related surveillance server, and a related surveillance camera
CN113596325B (en) Method and device for capturing images, electronic equipment and storage medium
Afrah et al. Hive: A distributed system for vision processing
US9467661B2 (en) Method of operating camera, camera, and surveillance system having the same
CN113656150A (en) Deep learning computing power virtualization system
CN111461958A (en) System and method for controlling real-time detection and optimization processing of rapid multi-path data streams

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant