CN110688510B - Face background image acquisition method and system - Google Patents

Face background image acquisition method and system

Info

Publication number: CN110688510B
Application number: CN201810634805.XA
Authority: CN (China)
Prior art keywords: image, face, storage, video image, video
Legal status: Active (granted)
Other languages: Chinese (zh)
Other versions: CN110688510A
Inventors: 杨春燕, 汤利波, 谢会斌
Assignee (current and original): Zhejiang Uniview Technologies Co Ltd
Priority/filing date: 2018-06-20
Publication of CN110688510A: 2020-01-14
Publication of CN110688510B (grant): 2022-06-14


Abstract

The embodiment of the invention provides a method and a system for acquiring a face background image. In the method, the image capturing device sends the face image to an analysis server and sends the video image to a storage server. The analysis server analyzes the face image to obtain face data, stores the face data in a database, and forwards the face image to the storage server. The storage server encodes and stores the video image and returns the storage information of the face image and the video image to the analysis server. After receiving a face image to be queried from the client, the analysis server identifies it to obtain the storage information of the corresponding face image and video image. The client then acquires, from the storage server side according to the storage information, the video image that contains the background of the face image to be queried. This scheme avoids burdening the analysis server with a large volume of video images and improves the efficiency of face background retrieval.

Description

Face background image acquisition method and system
Technical Field
The invention relates to the technical field of computer vision, in particular to a method and a system for acquiring a face background image.
Background
Face recognition is a biometric technology that performs identification based on the facial feature information of a person. It covers a series of related technologies: acquiring images or video streams containing human faces with a camera, automatically detecting and tracking the faces in those images, and further recognizing the detected faces. After a face is recognized, a face cutout (a small cropped face image) and a face background picture are generally kept; for ordinary face retrieval and comparison, only the face cutout needs to be presented. Sometimes, however, it is necessary to know the time, place and surrounding situation of the face, so the background picture of the face must also be retrievable.
Disclosure of Invention
In view of the above, the present invention provides a method and a system for obtaining a face background image to solve the above problem.
The invention provides a face background image acquisition method, which is applied to a face background image acquisition system, the system comprises an analysis server, a storage server and a camera device which are in communication connection, the analysis server and the storage server can also establish communication connection with a client, and the method comprises the following steps:
the camera equipment sends a face image extracted from the collected video image to the analysis server and sends the video image to the storage server, wherein the video image comprises a face background image;
The analysis server analyzes and processes the face image to obtain corresponding face data, stores the face data in a database, and sends the face image to the storage server;
the storage server encodes the received video image, stores the face image and the encoded video image, and returns the storage combination information of the face image and the video image to the analysis server;
when receiving a face image to be inquired sent by a client, the analysis server obtains storage combination information of the face image and feeds the storage combination information back to the client so that the client sends the storage combination information to the storage server;
and the storage server searches the face image corresponding to the face image to be inquired and the video image corresponding to the face image according to the storage combination information, and returns the face image and the video image to the client.
Optionally, the step of sending, by the image capturing device, the face image extracted from the acquired video image to the analysis server includes:
the camera equipment extracts a face image from the acquired video image and obtains a reference frame image of the video image;
And acquiring the time stamp information of the video image and the reference frame image, and sending the face image carrying the time stamp information to the analysis server.
Optionally, the step of obtaining timestamp information of the video image and the reference frame image, and sending the face image carrying the timestamp information to the analysis server includes:
acquiring a first time stamp of the reference frame image, a second time stamp of the video image and an offset of the video image relative to the reference frame image;
and forming time stamp information by the first time stamp, the second time stamp and the offset, and sending the face image carrying the time stamp information to the analysis server.
Optionally, the step of the storage server encoding the received video image, storing the face image and the encoded video image, and returning the storage combination information of the face image and the video image to the analysis server includes:
the storage server obtains the position information of the video image where the face image is located in the video frame according to the timestamp information carried by the face image obtained from the analysis server;
Obtaining a reference frame image of the video image according to the position information and the timestamp information, obtaining a video image where the face image is located according to the reference frame image, and storing the video image and the face image;
and sending the storage combination information of the video image and the face image to the analysis server.
Optionally, the step of obtaining a reference frame image of the video image according to the position information and the timestamp information, and obtaining a video image where the face image is located according to the reference frame image includes:
obtaining a reference frame image of the video image according to the position information and the offset between the video image and the reference frame image in the timestamp information;
and coding the reference frame image according to the reference frame image and the offset to obtain a video image where the face image is located.
Optionally, the step of sending the storage combination information of the video image and the face image to the analysis server includes:
and the storage server splices the storage position of the video image and the storage position of the face image into storage combination information in a URL format and sends the storage combination information to the analysis server.
Another preferred embodiment of the present invention provides a face background image acquiring system, the system includes an analysis server, a storage server and a camera device, the analysis server and the storage server are in communication connection, the analysis server and the storage server can also establish communication connection with a client, the analysis server includes a processing module and a storage combination information acquiring module, the storage server includes a storage module and a video image searching module, the camera device includes a sending module:
the sending module is used for sending a face image extracted from the collected video image to the analysis server and sending the video image to the storage server, wherein the video image comprises a face background image;
the processing module is used for analyzing and processing the face image to obtain corresponding face data, storing the face data in a database and sending the face image to the storage server;
the storage module is used for coding the received video image, storing the face image and the coded video image and returning the storage combination information of the face image and the video image to the analysis server;
The storage combination information acquisition module is used for acquiring the storage combination information of the face image when receiving the face image to be inquired sent by the client and feeding the storage combination information back to the client so that the client sends the storage combination information to the storage server;
the video image searching module is used for searching the face image corresponding to the face image to be inquired and the video image corresponding to the face image according to the storage combination information, and returning the face image and the video image to the client.
Optionally, the sending module includes an extracting unit and a timestamp information obtaining unit;
the extraction unit is used for extracting a face image from the acquired video image and obtaining a reference frame image of the video image;
the time stamp information acquisition unit is used for acquiring the time stamp information of the video image and the reference frame image and sending the face image carrying the time stamp information to the analysis server.
Optionally, the timestamp information acquiring unit includes an offset information acquiring subunit and a sending subunit;
the offset information acquisition subunit is used for acquiring a first time stamp of the reference frame image, a second time stamp of the video image and an offset of the video image relative to the reference frame image;
And the sending subunit is configured to combine the first timestamp, the second timestamp, and the offset into timestamp information, and send the face image carrying the timestamp information to the analysis server.
Optionally, the storage module includes a location information obtaining unit, a storage unit, and a sending unit;
the position information acquisition unit is used for acquiring the position information of a video image in which the face image is positioned in a video frame according to the timestamp information carried by the face image acquired from the analysis server;
the storage unit is used for obtaining a reference frame image of the video image according to the position information and the timestamp information, obtaining a video image where the face image is located according to the reference frame image, and storing the video image and the face image;
the sending unit is used for sending the storage combination information of the video image and the face image to the analysis server.
According to the method and system for acquiring a face background image provided above, the storage information of the face image and of the video image in which it is located is kept on the analysis server, while the face image and the video image themselves are stored on the storage server. Therefore, when the analysis server receives a face image to be queried from the client, it can analyze that image, find the matching face data among the stored face data, and look up the corresponding face image and the storage information of the video image in which it is located. The client can then obtain, from the storage server side according to the storage information, the video image corresponding to the face image to be queried, and thereby obtain its face background image. Because only the storage information is kept on the analysis server while the images themselves are kept on the storage server, the analysis server is not burdened with a large volume of video images, and the efficiency of face background retrieval is improved.
In order to make the aforementioned and other objects, features and advantages of the present invention comprehensible, preferred embodiments accompanied with figures are described in detail below.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present invention, the drawings required in the embodiments will be briefly described below, it should be understood that the following drawings only illustrate some embodiments of the present invention and therefore should not be considered as limiting the scope, and those skilled in the art can also obtain other related drawings based on the drawings without inventive efforts.
Fig. 1 is a schematic view of an application scenario of a method for obtaining a face background image according to a preferred embodiment of the present invention.
Fig. 2 is a flowchart of a method for obtaining a face background image according to a preferred embodiment of the present invention.
Fig. 3 is a flowchart of the substeps of step S110 in fig. 2.
Fig. 4 is a flowchart of sub-steps of step S130 in fig. 2.
Fig. 5 is a functional block diagram of a face background image acquisition system according to a preferred embodiment of the present invention.
Fig. 6 is a functional block diagram of a sending module according to a preferred embodiment of the present invention.
Fig. 7 is a functional block diagram of a timestamp information acquiring unit according to a preferred embodiment of the present invention.
Fig. 8 is a functional block diagram of a memory module according to a preferred embodiment of the present invention.
Reference numerals: 100-analysis server; 110-processing module; 120-storage combination information acquisition module; 200-storage server; 210-storage module; 211-location information acquisition unit; 212-storage unit; 213-sending unit; 220-video image searching module; 300-image pickup apparatus; 310-sending module; 311-extraction unit; 312-timestamp information acquisition unit; 3121-offset information acquisition subunit; 3122-sending subunit.
Detailed Description
The inventor finds that, in the existing way of storing the face background, a face snapshot camera generally extracts a small face image (thumbnail) and then sends both the large video image (containing the background) and the face thumbnail to a face analysis server; after extracting face features from the thumbnail, the face analysis server stores the thumbnail and the large video image on a storage device such as a storage server. Meanwhile, the analysis data of the face, such as structured and unstructured feature information, is stored in a database. In addition, the face snapshot camera also serves as a video camera for video monitoring, so the related video recordings are stored in the storage server as well.
In this mode, the face snapshot camera pushes both the large image and the small image to the face analysis server. Such cameras are generally installed at front-end sites such as subways and stations, where bandwidth is very limited, so this mode increases the upload bandwidth consumed by the snapshot camera. Moreover, the face analysis server does not process the large face background image at all: it merely passes into and out of the face analysis server, occupying its bandwidth, and when multiple face snapshot cameras all push images to the same server, the bandwidth consumed becomes very large.
Based on these findings, the embodiment of the invention provides a face background image acquisition scheme: the face image in a video image captured by the image capturing device is sent to the analysis server, while the video image itself is sent to the storage server. After processing the face image, the analysis server forwards it to the storage server, and the storage server returns the storage information of the face image and the video image to the analysis server. When the client sends a face image to be queried, the analysis server analyzes it and provides the corresponding storage location; the storage server then uses that storage location to find the video image containing the face and returns it to the client. In this way, the analysis server is not burdened with storing a large volume of video images, and face retrieval efficiency is improved.
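Purely as an illustration of this division of labour, the toy sketch below reduces the two servers to in-memory objects; all class and method names are assumptions made for this sketch, and feature extraction and matching are collapsed into dictionary lookups rather than the patent's actual processing:

```python
from dataclasses import dataclass, field
from typing import Dict, Tuple

@dataclass
class StorageServer:
    blobs: Dict[str, Tuple[bytes, bytes]] = field(default_factory=dict)

    def store(self, face_img: bytes, video_img: bytes) -> str:
        info = f"url://record/{len(self.blobs)}"   # placeholder storage combination info
        self.blobs[info] = (face_img, video_img)
        return info                                # returned to the analysis server

    def fetch(self, info: str) -> Tuple[bytes, bytes]:
        return self.blobs[info]

@dataclass
class AnalysisServer:
    db: Dict[bytes, str] = field(default_factory=dict)

    def ingest(self, face_img: bytes, video_img: bytes, storage: StorageServer) -> None:
        # A real analysis server extracts face features here; this sketch
        # simply keys the record by the raw face-image bytes.
        self.db[face_img] = storage.store(face_img, video_img)

    def query(self, face_img: bytes) -> str:
        return self.db[face_img]                   # a real system matches face features

storage, analysis = StorageServer(), AnalysisServer()

# Camera side: the small face cutout goes to the analysis server, the full
# video frame (which contains the background) goes to the storage server.
face, frame = b"face-cutout", b"video-frame-with-background"
analysis.ingest(face, frame, storage)

# Client side: send a query face image, receive the storage combination info,
# then pull the background-bearing video frame straight from the storage server.
_, background_frame = storage.fetch(analysis.query(face))
```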
The technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are only a part of the embodiments of the present invention, and not all of the embodiments. The components of embodiments of the present invention generally described and illustrated in the figures herein may be arranged and designed in a wide variety of different configurations. Thus, the following detailed description of the embodiments of the present invention, presented in the figures, is not intended to limit the scope of the invention, as claimed, but is merely representative of selected embodiments of the invention. All other embodiments, which can be derived by a person skilled in the art from the embodiments of the present invention without making any creative effort, shall fall within the protection scope of the present invention.
It should be noted that like reference numbers and letters refer to like items in the following figures; thus, once an item is defined in one figure, it need not be further defined or explained in subsequent figures. Meanwhile, in the description of the present invention, unless otherwise explicitly specified or limited, the terms "mounted," "disposed," and "connected" are to be construed broadly: a connection may be a fixed connection, a detachable connection, or an integral connection; it may be mechanical or electrical; it may be direct, indirect through an intermediate medium, or internal to the two elements. The specific meanings of the above terms in the present invention can be understood by those skilled in the art according to the specific circumstances.
Fig. 1 is a schematic view of an application scenario of a method for obtaining a face background image according to an embodiment of the present invention. The scene includes a face background image acquisition system, which includes an analysis server 100, a storage server 200, and a camera device 300. The analysis server 100, the storage server 200, and the image pickup apparatus 300 are communicatively connected to each other via a network to perform communication or interaction. In this embodiment, the image pickup apparatus 300 may include a plurality of image pickup apparatuses 300, and the plurality of image pickup apparatuses 300 are respectively connected to the analysis server 100 and the storage server 200 in communication. In this embodiment, the image capturing apparatus 300 may be a terminal apparatus having an image capturing function, such as a camera or a video camera. The analysis server 100 is a server that analyzes and processes a face image, and the storage server 200 is a storage device, such as an IPSAN device.
Fig. 2 is a flowchart of a face background image obtaining method applied to the face background image obtaining system according to an embodiment of the present invention. It should be noted that the method provided by the present invention is not limited by the specific sequence shown in fig. 2 and described below. The respective steps shown in fig. 2 will be described in detail below.
In step S110, the image capturing apparatus 300 sends a face image extracted from the acquired video image to the analysis server 100, and sends the video image to the storage server 200, where the video image includes a face background image.
In this embodiment, the image capturing apparatus 300 captures video image data and determines whether the captured data contains a human face. If it does, the apparatus may select the video image of the best quality that contains the face, for example the frame with the best definition or the best face angle. The image pickup apparatus 300 may then determine the position and size of the face in that video image, extract a face image from it, send the extracted face image to the analysis server 100, and send the video image to the storage server 200. The video image contains the background image of the face image.
In this embodiment, in order to facilitate subsequent storage and restoration of the face image and the video image by the storage server 200, optionally, referring to fig. 3, step S110 may include the following sub-steps:
In step S111, the image capturing apparatus 300 extracts a face image from the captured video image, and obtains a reference frame image of the video image.
Step S112, obtaining timestamp information of the video image and the reference frame image, and sending the face image carrying the timestamp information to the analysis server 100.
Video data is very large, and transmitting and storing it places a heavy burden on the network and on storage. Therefore, to facilitate transmission and storage, video data is generally compressed before transmission and restored at the receiving end, thereby reducing the amount of video data to be carried. This involves the I frames and P frames of the video compression standards.
An I frame, also called an intra-coded frame, is an independent frame carrying all of its own information: it can be decoded without reference to any other image and can be simply understood as a static picture. The first frame of a video sequence is always an I frame, since it serves as the key frame. A P frame, also called an inter-frame predictive coded frame, must be encoded with reference to a previous frame; it represents the difference between the current picture and the previous frame (which may be an I frame or a P frame). When decoding, the difference defined by the P frame is superimposed on the previously buffered picture to produce the final picture. P frames generally occupy fewer data bits than I frames, but because of their complex dependencies on the preceding P and I reference frames they are more sensitive to transmission errors.
In this embodiment, assuming that the current video image is a P frame, the reference frame image of that video image can be obtained, and a first timestamp of the reference frame image and a second timestamp of the video image are recorded. An offset of the video image relative to its reference frame image is also obtained, where the offset counts from the I frame at the start of the GOP; for example, if the GOP length is 25 and the face snapshot camera has selected the 10th P frame, the offset is 10. The first timestamp, the second timestamp and the offset together form the timestamp information, which the image pickup apparatus 300 carries when sending the face image to the analysis server 100.
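As a concrete illustration of the example above, the timestamp information can be pictured as a small record carried alongside the face image; the field and helper names below are assumptions made for this sketch, not the patent's wire format:

```python
from dataclasses import dataclass

@dataclass
class TimestampInfo:
    ref_frame_ts: int   # first timestamp: the I frame (reference frame) of the GOP
    frame_ts: int       # second timestamp: the video frame the face was taken from
    offset: int         # how many frames the video frame lies after the I frame

def build_timestamp_info(i_frame_ts: int, p_frame_ts: int, frame_index_in_gop: int) -> TimestampInfo:
    return TimestampInfo(ref_frame_ts=i_frame_ts, frame_ts=p_frame_ts, offset=frame_index_in_gop)

# GOP of 25 frames (25 fps, 40 ms per frame) starting at t = 1000 ms; the face
# was snapped from the 10th P frame, so the offset is 10:
info = build_timestamp_info(i_frame_ts=1000, p_frame_ts=1000 + 10 * 40, frame_index_in_gop=10)
```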
Step S120, the analysis server 100 analyzes the face image to obtain corresponding face data, stores the face data in a database, and sends the face image to the storage server 200.
In this embodiment, after receiving the face image, the analysis server 100 performs feature extraction on it and converts it into face data in binary form. Optionally, feature data helpful for face classification may be obtained from the shape description of the facial organs and the distance characteristics between them. A human face is composed of parts such as the eyes, nose, mouth and chin, and the geometric description of these parts and of the structural relations among them can serve as important features for recognizing the face. The face data obtained in this embodiment includes structured data and semi-structured data: the structured data may include visualizable attributes such as gender or short/long hair, while the semi-structured data may include measurements such as the interpupillary distance, the distance between the eyes and the nose, and the position of the eyes on the face.
The analysis server 100 stores the face data obtained by the analysis in a database, and sends the face image with time stamp information to the storage server 200.
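For illustration, the per-face record kept by the analysis server might be sketched as follows; every field name here is an assumption, not the patent's database schema:

```python
from dataclasses import dataclass
from typing import Tuple

@dataclass
class FaceRecord:
    # structured data: directly visualizable attributes
    gender: str
    hair_length: str                      # e.g. "short" or "long"
    # semi-structured data: geometric relations between facial organs
    interpupillary_distance: float
    eye_to_nose_distance: float
    eye_position: Tuple[float, float]     # position of the eyes within the face region
    # filled in once the storage server acknowledges storage of the images (step S130)
    storage_info: str = ""
```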
In step S130, the storage server 200 encodes the received video image, stores the face image and the encoded video image, and returns the storage combination information of the face image and the video image to the analysis server 100.
After receiving the face image sent by the analysis server 100, the storage server 200 stores it. The storage server then parses the timestamp information carried by the face image so that, using this timestamp information and the reference frame image of the video image in which the face is located, the video image can be restored into a clear picture.
Referring to fig. 4, in the present embodiment, the step S130 may include three substeps, namely step S131, step S132 and step S133.
In step S131, the storage server 200 obtains the position information of the video image where the face image is located in the video frame according to the timestamp information carried by the face image obtained from the analysis server 100.
Step S132, obtaining a reference frame image of the video image according to the position information and the timestamp information, obtaining a video image where the face image is located according to the reference frame image, and storing the video image and the face image.
Step S133, sending the storage combination information of the video image and the face image to the analysis server 100.
When video data is stored, a later P frame is stored with reference to the preceding frames, so a P frame by itself is not a clear, complete image and cannot be extracted and used directly. It must first be decoded according to its reference relationship with the preceding frames and then re-encoded into a complete, clear image before being stored to disk.
In this embodiment, the storage server 200 obtains the position information of the video image where the face image is located in the video frame according to the timestamp information carried by the face image obtained from the analysis server 100. And obtaining a reference frame image of the video image according to the position information and the timestamp information. Optionally, the reference frame image of the video image is obtained according to the position information and the offset between the video image and the reference frame image thereof in the timestamp information. In this way, the video image can be re-encoded by using the reference frame image and combining the offset between the video image and the reference frame image, so as to obtain a clear video image.
The storage server 200 generally adopts a block-structured storage manner and stores audio/video data packets using a two-level index. Each file starts with a 16K SUPER DATA block, which mainly contains a version number and a file marker. The SUPER DATA block is followed by a 64K primary index block, which in turn is followed by a series of 256M data blocks used to store the data; the primary index block is used to retrieve those 256M data blocks. Each 256M block begins with a secondary index block followed by a series of I-frame group blocks, and the secondary index block is used to retrieve the I-frame group blocks behind it. If the remaining space of a 256M block is not enough to hold an I-frame group during storage, the system moves on to the next 256M block and fills the unused space of the current block with zeros. An I-frame group data block stores all the packets of that I-frame group. The index in the file is recorded by time, and the smallest recording unit is one second.
Since the image index information carried by the analysis server 100 is not identical to the index information used for the video stored on the storage server 200, a conversion is required. The storage server 200 first reads the carried timestamp information, locates the corresponding I-frame group structure according to that timestamp, then uses the offset to determine which frame within the I-frame group is needed, finds that frame together with all of its preceding reference frames, decodes them, and re-encodes the result to obtain a clear video frame corresponding to that frame.
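A sketch of that conversion is given below, with the second-granularity index and the codec reduced to injected placeholders; none of these names are the storage server's actual API:

```python
from typing import Callable, Dict, List, Optional

def restore_background_frame(
    ref_frame_ts_ms: int,
    offset: int,
    index: Dict[int, List[bytes]],                   # second -> packets of one I-frame group
    decode_frame: Callable[[bytes, Optional[object]], object],
    encode_as_keyframe: Callable[[object], bytes],
) -> bytes:
    gop_packets = index[ref_frame_ts_ms // 1000]     # locate the I-frame group by time

    picture = None
    for packet in gop_packets[: offset + 1]:
        # the I frame decodes on its own; every P frame needs the previous picture
        picture = decode_frame(packet, picture)

    # a P frame alone is not a complete image, so re-encode the decoded picture
    # as an intra-coded (key) frame before storing it
    return encode_as_keyframe(picture)
```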
In this embodiment, after obtaining the clear and complete video image, the storage server 200 stores the video image and the face image, and sends their storage combination information to the analysis server 100. The analysis server 100 thus holds, for the face images captured by the image capturing apparatus 300 over a period of time, both the face data and the storage information describing where the face image and the video image in which it is located are kept on the storage server 200. When a face image later needs to be retrieved to obtain its background image, the images can be fetched from the storage server 200.
Optionally, the storage server 200 assembles the storage location of the video image and the storage location of the face image into storage combination information in a URL format, and sends the storage combination information to the analysis server 100. The URL-format storage combination information may look like:
http://IPSANIpAddr:port/record#UsrCode/YYYY/MM/DD/HH/pic.jpg?dev=ndcode&fid=...
where fid describes where the picture is located on disk and is composed of the picture block ID + resid + sliceid + sliceops + len: resid is the ID of the resource, sliceid is the ID of the allocated storage block, sliceops is the offset of the picture within the storage block, and len is the length (size) of the picture file.
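Purely for illustration, such a URL could be assembled as in the sketch below; the parameter names, the example values and the '-' separator inside fid are assumptions, since the description does not fix them:

```python
from datetime import datetime

def build_storage_url(host: str, port: int, user_code: str, when: datetime,
                      pic_name: str, dev_code: str, block_id: int, res_id: int,
                      slice_id: int, slice_offset: int, length: int) -> str:
    # fid components follow the order given above: block ID + resid + sliceid + sliceops + len
    fid = f"{block_id}-{res_id}-{slice_id}-{slice_offset}-{length}"
    path = when.strftime(f"/record#{user_code}/%Y/%m/%d/%H/{pic_name}")
    return f"http://{host}:{port}{path}?dev={dev_code}&fid={fid}"

# Example (all values invented):
url = build_storage_url("192.0.2.10", 8080, "cam01", datetime(2018, 6, 20, 9),
                        "pic.jpg", "ndcode", block_id=3, res_id=17,
                        slice_id=42, slice_offset=1048576, length=204800)
```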
In step S140, when receiving the face image to be queried sent by the client, the analysis server 100 obtains storage combination information of the face image, and feeds the storage combination information back to the client, so that the client sends the storage combination information to the storage server 200.
The user can input a face image, namely the face image to be queried, through the client; this image may be taken with a mobile phone or a camera, or obtained through other channels. The client sends the face image to be queried to the analysis server 100, which performs feature extraction on it to obtain the feature values of the face image to be queried. As described above, the analysis server 100 stores many sets of face data, so it can compare the obtained feature values with the face data in the database to find the face data matching the face image to be queried. The analysis server 100 also stores, for many sets of face images, the storage combination information of the video images in which they are located; optionally, the analysis server 100 may obtain the face image corresponding to the matched face data, together with the storage locations of that face image and of the video image in which it is located on the storage server 200.
The analysis server 100 returns the storage combination information to the client. The client can obtain the video image containing the background image corresponding to the face image to be queried from the storage server 200 according to the storage combination information.
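The lookup on the analysis server can be sketched as a nearest-neighbour search over the stored face data; the feature representation, distance metric and threshold below are assumptions for illustration, not the patent's matching algorithm:

```python
import math
from typing import Dict, List, Optional

def find_storage_info(query_features: List[float],
                      face_db: Dict[str, List[float]],       # face_id -> stored feature vector
                      storage_info: Dict[str, str],          # face_id -> storage combination info
                      threshold: float = 0.6) -> Optional[str]:
    best_id, best_dist = None, math.inf
    for face_id, features in face_db.items():
        dist = math.dist(query_features, features)           # Euclidean distance between features
        if dist < best_dist:
            best_id, best_dist = face_id, dist
    if best_id is None or best_dist > threshold:
        return None                                          # no stored face matches the query
    return storage_info[best_id]                             # fed back to the client, then to the storage server
```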
Step S150, the storage server 200 finds the face image corresponding to the face image to be queried and the video image corresponding to the face image according to the storage combination information, and returns the face image and the video image to the client.
After receiving the storage combination information sent by the client, the storage server 200 can find the face image corresponding to the face image to be queried and the video image in which it is located, and return them to the client. The client thus obtains the background image of the face image to be queried.
In addition, in this embodiment, the above face background image acquisition scheme requires the storage server 200 to decode and re-encode the background frame of the face image and then store an extra copy, which consumes CPU, storage space and other resources of the storage server 200. Therefore, as a further option, when the client searches for the face background picture, a video segment of one GOP length at that moment can be returned directly; with a normal configuration the GOP length is 25, so roughly one second of video is played during retrieval. Compared with the previous scheme, this saves CPU and other resources of the back-end storage device and also saves storage space.
Optionally, after capturing and extracting the face image, the front-end device sends the face image, together with the timestamp of the I-frame group in which it is located, to the analysis server 100. After analyzing the face image, the analysis server 100 transmits the device ID of the image pickup device 300 and the face image to the storage server 200. The storage server 200 receives and stores the face image and returns the URL-format storage information to the analysis server 100.
The analysis server 100 stores the storage information of the face image, the timestamp of the I-frame group in which the face image is located, and the remaining face data in a database. When the analysis server 100 receives an image to be queried from a client, it identifies the image to obtain the face data matching the face image to be queried and returns to the client the I-frame group timestamp of the corresponding face image. The client then obtains the I-frame group of the face image to be queried from the storage server 200 according to that timestamp, so that the roughly one-second video segment corresponding to the I-frame group, which contains the face image to be queried, can be played on the client to show the background of the face image to be queried.
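This variant amounts to returning one GOP untouched. A minimal sketch, with the storage server's time index and the function name assumed for illustration:

```python
from typing import Dict, List

def fetch_background_clip(gop_timestamp: int,
                          gop_index: Dict[int, List[bytes]]) -> List[bytes]:
    # No decoding or re-encoding on the storage server: the group starts with an
    # I frame, so the client can decode and play the returned packets directly,
    # seeing the face together with its background.
    return gop_index[gop_timestamp]

# Client side (gop_timestamp was returned by the analysis server for the
# matched face; gop_index stands in for the storage server's time index):
# clip = fetch_background_clip(gop_timestamp, gop_index)
# play(clip)   # roughly one second of video showing the face and its surroundings
```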
Referring to fig. 5, a system for acquiring a face background image according to an embodiment of the present invention includes an analysis server 100, a storage server 200, and a camera 300, where the analysis server 100 and the storage server 200 are in communication connection, and may also establish communication connection with a client. The analysis server 100 includes a processing module 110 and a storage composition information acquisition module 120, the storage server 200 includes a storage module 210 and a video image search module 220, and the image capturing apparatus 300 includes a transmission module 310.
The sending module 310 is configured to send a face image extracted from the acquired video image to the analysis server 100, and send the video image to the storage server 200, where the video image includes a face background image. The sending module 310 may be configured to execute step S110 shown in fig. 2, and the detailed description of step S110 may be referred to for a specific operation method.
The processing module 110 is configured to analyze and process the received face image to obtain corresponding face data, store the face data in a database, and send the face image to the storage server 200. The processing module 110 may be configured to execute step S120 shown in fig. 2, and the detailed description of step S120 may be referred to for a specific operation method.
The storage module 210 is configured to encode the received video image, store the face image and the encoded video image, and return the storage combination information of the face image and the video image to the analysis server 100. The storage module 210 may be configured to execute step S130 shown in fig. 2, and the detailed description of step S130 may be referred to for a specific operation method.
The storage combination information obtaining module 120 is configured to, when receiving a face image to be queried sent by a client, obtain storage combination information of the face image, and feed the storage combination information back to the client, so that the client sends the storage combination information to the storage server 200. The storage combination information acquiring module 120 may be configured to execute step S140 shown in fig. 2, and the detailed description of step S140 may be referred to for a specific operation method.
The video image searching module 220 is configured to search, according to the storage combination information, a face image corresponding to the face image to be searched and a video image corresponding to the face image, and return the face image and the video image to the client. The video image searching module 220 may be configured to execute step S150 shown in fig. 2, and the detailed description of step S150 may be referred to in the specific operation method.
Optionally, referring to fig. 6, in this embodiment, the sending module 310 includes an extracting unit 311 and a timestamp information obtaining unit 312.
The extracting unit 311 is configured to extract a face image from the acquired video image, and obtain a reference frame image of the video image. The extracting unit 311 may be configured to perform step S111 shown in fig. 3, and a detailed description of the step S111 may be referred to for a specific operation method.
The timestamp information obtaining unit 312 is configured to obtain timestamp information of the video image and the reference frame image, and send the face image carrying the timestamp information to the analysis server 100. The timestamp information obtaining unit 312 may be configured to execute step S112 shown in fig. 3, and the detailed description of step S112 may be referred to for a specific operation method.
Optionally, referring to fig. 7, in this embodiment, the timestamp information acquiring unit 312 includes an offset information acquiring sub-unit 3121 and a transmitting sub-unit 3122.
The offset information obtaining subunit 3121 is configured to obtain a first timestamp of the reference frame image, a second timestamp of the video image, and an offset amount of the video image with respect to the reference frame image.
The sending subunit 3122 is configured to combine the first timestamp, the second timestamp, and the offset into timestamp information, and send the face image carrying the timestamp information to the analysis server 100.
Optionally, referring to fig. 8, in the present embodiment, the storage module 210 includes a location information obtaining unit 211, a storage unit 212, and a sending unit 213.
The position information acquiring unit 211 is configured to acquire position information of a video image in which the face image is located in a video frame according to timestamp information carried by the face image acquired from the analysis server 100. The location information acquiring unit 211 may be configured to execute step S131 shown in fig. 4, and a detailed description of the step S131 may be referred to for a specific operation method.
The storage unit 212 is configured to obtain a reference frame image of the video image according to the position information and the timestamp information, obtain a video image where the face image is located according to the reference frame image, and store the video image and the face image. The storage unit 212 may be configured to execute step S132 shown in fig. 4, and the detailed description of step S132 may be referred to for a specific operation method.
The sending unit 213 is configured to send the stored combination information of the video image and the face image to the analysis server 100. The sending unit 213 may be configured to execute step S133 shown in fig. 4, and the detailed description of step S133 may be referred to for a specific operation method.
It is clear to those skilled in the art that, for convenience and brevity of description, the specific working process of the apparatus described above may refer to the corresponding process in the foregoing method, and will not be described in too much detail herein.
In summary, in the method and system for acquiring a face background image according to the embodiments of the present invention, the storage information of the face image and of the video image in which it is located is kept on the analysis server 100, while the face image and the video image themselves are stored on the storage server 200. Therefore, when the analysis server 100 receives a face image to be queried from the client, it can analyze that image, find the matching face data among the stored face data, and look up the corresponding face image and the storage information of the video image in which it is located. The client can then obtain, from the storage server 200 side according to the storage information, the video image corresponding to the face image to be queried, and thereby obtain the face background image. Because only the storage information is kept on the analysis server 100 while the face image and its video image are kept on the storage server 200, the analysis server 100 is not burdened with a large volume of video images, and the efficiency of face background retrieval is improved.
In the embodiments provided in the present application, it should be understood that the disclosed apparatus and method may be implemented in other manners. The apparatus embodiments described above are merely illustrative and, for example, the flowchart and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of apparatus, methods and computer program products according to embodiments of the present invention. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of code, which comprises one or more executable instructions for implementing the specified logical function(s). It should also be noted that, in some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams and/or flowchart illustration, and combinations of blocks in the block diagrams and/or flowchart illustration, can be implemented by special purpose hardware-based systems which perform the specified functions or acts, or combinations of special purpose hardware and computer instructions.
It should be noted that, in this document, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising an … …" does not exclude the presence of other identical elements in the process, method, article, or apparatus that comprises the element.
The above description is only a preferred embodiment of the present invention and is not intended to limit the present invention, and various modifications and changes may be made by those skilled in the art. Any modification, equivalent replacement, or improvement made within the spirit and principle of the present invention should be included in the protection scope of the present invention.

Claims (10)

1. A face background image acquisition method is characterized in that the method is applied to a face background image acquisition system, the system comprises an analysis server, a storage server and a camera device which are in communication connection, the analysis server and the storage server can also establish communication connection with a client, and the method comprises the following steps:
the camera equipment sends a face image extracted from the collected video image to the analysis server and sends the video image to the storage server, wherein the video image comprises a face background image;
the analysis server analyzes and processes the face image to obtain corresponding face data, stores the face data in a database and sends the face image to the storage server;
the storage server encodes the received video image, stores the face image and the encoded video image, and returns the storage combination information of the face image and the video image to the analysis server;
when receiving a face image to be inquired sent by a client, the analysis server obtains storage combination information of the face image and feeds the storage combination information back to the client so that the client sends the storage combination information to the storage server;
And the storage server searches the face image corresponding to the face image to be inquired and the video image corresponding to the face image according to the storage combination information, and returns the face image and the video image to the client.
2. The method for acquiring the face background image according to claim 1, wherein the step of sending the face image extracted from the acquired video image to the analysis server by the camera device comprises:
the camera equipment extracts a face image from the acquired video image and obtains a reference frame image of the video image;
and acquiring the time stamp information of the video image and the reference frame image, and sending the face image carrying the time stamp information to the analysis server.
3. The method for acquiring the face background image according to claim 2, wherein the step of obtaining the time stamp information of the video image and the reference frame image and sending the face image carrying the time stamp information to the analysis server comprises:
acquiring a first time stamp of the reference frame image, a second time stamp of the video image and an offset of the video image relative to the reference frame image;
And forming timestamp information by the first timestamp, the second timestamp and the offset, and sending the face image carrying the timestamp information to the analysis server.
4. The method for obtaining the face background image according to claim 2, wherein the step of the storage server encoding the received video image, storing the face image and the encoded video image, and returning the storage combination information of the face image and the video image to the analysis server comprises:
the storage server obtains the position information of the video image where the face image is located in the video frame according to the timestamp information carried by the face image obtained from the analysis server;
obtaining a reference frame image of the video image according to the position information and the timestamp information, obtaining a video image where the face image is located according to the reference frame image, and storing the video image and the face image;
and sending the storage combination information of the video image and the face image to the analysis server.
5. The method for obtaining the face background image according to claim 4, wherein the step of obtaining a reference frame image of the video image according to the position information and the timestamp information, and obtaining a video image where the face image is located according to the reference frame image comprises:
Obtaining a reference frame image of the video image according to the position information and the offset between the video image and the reference frame image in the timestamp information;
and coding the reference frame image according to the reference frame image and the offset to obtain a video image where the face image is located.
6. The method for acquiring the face background image according to claim 4, wherein the step of sending the storage combination information of the video image and the face image to the analysis server includes:
and the storage server splices the storage position of the video image and the storage position of the face image into storage combination information in a URL format and sends the storage combination information to the analysis server.
7. The system for acquiring the face background image is characterized by comprising an analysis server, a storage server and a camera device which are in communication connection, wherein the analysis server and the storage server can also be in communication connection with a client, the analysis server comprises a processing module and a storage combination information acquisition module, the storage server comprises a storage module and a video image searching module, and the camera device comprises a sending module:
The sending module is used for sending a face image extracted from the collected video image to the analysis server and sending the video image to the storage server, wherein the video image comprises a face background image;
the processing module is used for analyzing and processing the face image to obtain corresponding face data, storing the face data in a database and sending the face image to the storage server;
the storage module is used for coding the received video image, storing the face image and the coded video image and returning the storage combination information of the face image and the video image to the analysis server;
the storage combination information acquisition module is used for acquiring the storage combination information of the face image when receiving the face image to be inquired sent by the client and feeding the storage combination information back to the client so that the client sends the storage combination information to the storage server;
the video image searching module is used for searching the face image corresponding to the face image to be inquired and the video image corresponding to the face image according to the storage combination information, and returning the face image and the video image to the client.
8. The face background image acquisition system according to claim 7, wherein the transmission module includes an extraction unit and a time stamp information acquisition unit;
the extraction unit is used for extracting a face image from the acquired video image and obtaining a reference frame image of the video image;
the time stamp information acquisition unit is used for acquiring the time stamp information of the video image and the reference frame image and sending the face image carrying the time stamp information to the analysis server.
9. The face background image acquisition system according to claim 8, wherein the time stamp information acquisition unit includes an offset information acquisition subunit and a transmission subunit;
the offset information acquisition subunit is used for acquiring a first time stamp of the reference frame image, a second time stamp of the video image and an offset of the video image relative to the reference frame image;
and the sending subunit is configured to combine the first timestamp, the second timestamp, and the offset into timestamp information, and send the face image carrying the timestamp information to the analysis server.
10. The system for acquiring a face background image according to claim 8, wherein the storage module comprises a position information acquisition unit, a storage unit and a transmission unit;
The position information acquisition unit is used for acquiring the position information of the video image in which the face image is positioned in the video frame according to the timestamp information carried by the face image acquired from the analysis server;
the storage unit is used for obtaining a reference frame image of the video image according to the position information and the timestamp information, obtaining a video image where the face image is located according to the reference frame image, and storing the video image and the face image;
the sending unit is used for sending the storage combination information of the video image and the face image to the analysis server.
CN201810634805.XA 2018-06-20 2018-06-20 Face background image acquisition method and system Active CN110688510B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201810634805.XA CN110688510B (en) 2018-06-20 2018-06-20 Face background image acquisition method and system

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201810634805.XA CN110688510B (en) 2018-06-20 2018-06-20 Face background image acquisition method and system

Publications (2)

Publication Number Publication Date
CN110688510A CN110688510A (en) 2020-01-14
CN110688510B (en) 2022-06-14

Family

ID=69106223

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201810634805.XA Active CN110688510B (en) 2018-06-20 2018-06-20 Face background image acquisition method and system

Country Status (1)

Country Link
CN (1) CN110688510B (en)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113255399A (en) * 2020-02-10 2021-08-13 北京地平线机器人技术研发有限公司 Target matching method and system, server, cloud, storage medium and equipment
CN113190707B (en) * 2021-05-24 2023-04-07 浙江大华技术股份有限公司 Face library management system, method and device, storage equipment and storage medium


Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US7847815B2 (en) * 2006-10-11 2010-12-07 Cisco Technology, Inc. Interaction based on facial recognition of conference participants

Patent Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101030365A (en) * 2007-04-10 2007-09-05 北京中星微电子有限公司 Digital image storage displaying method and device
CN106781168A (en) * 2011-05-24 2017-05-31 韩华泰科株式会社 Monitoring system
CN103581626A (en) * 2013-11-04 2014-02-12 浙江宇视科技有限公司 Video monitoring system and video storage information recording method
CN106445315A (en) * 2016-09-08 2017-02-22 乐视控股(北京)有限公司 Picture query method and apparatus
KR101775650B1 (en) * 2016-12-29 2017-09-07 주식회사 포커스에이치엔에스 A facial recognition management system using portable terminal
CN106878676A (en) * 2017-01-13 2017-06-20 吉林工商学院 A kind of storage method for intelligent monitoring video data
CN106980844A (en) * 2017-04-06 2017-07-25 武汉神目信息技术有限公司 A kind of character relation digging system and method based on face identification system
CN107798093A (en) * 2017-10-25 2018-03-13 成都尽知致远科技有限公司 Image search method

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
Face database generation based on text–video correlation; Dan Zeng et al.; Neurocomputing; 2016-05-13; pp. 240-249 *
Research on face recognition for surveillance videos in a supercomputing cloud environment (超算云环境下监控视频的人脸识别研究); Zou Jiang (邹江); China Masters' Theses Full-text Database, Information Science and Technology; 2018-04-15; Vol. 2018, No. 04; p. I138-3205 *

Also Published As

Publication number Publication date
CN110688510A (en) 2020-01-14

Similar Documents

Publication Publication Date Title
JP6445716B2 (en) Entity-based temporal segmentation of video streams
KR101703931B1 (en) Surveillance system
CN105264892B (en) Video compress is adjusted for high frame per second and variable frame rate capture
US10582222B2 (en) Robust packet loss handling in recording real-time video
KR102050780B1 (en) Method and Server Apparatus for Delivering Content Based on Content-aware Using Neural Network
US20110102593A1 (en) Method and apparatus for operating a video system
CN101287089B (en) Image capturing apparatus, image processing apparatus and control methods thereof
CN110688510B (en) Face background image acquisition method and system
JP6686541B2 (en) Information processing system
KR102150847B1 (en) Image processing apparatus and image processing method
KR101087194B1 (en) Encoding System and Method of Moving Picture
CN114079820A (en) Interval shooting video generation centered on an event/object of interest input on a camera device by means of a neural network
CN105979189A (en) Video signal processing and storing method and video signal processing and storing system
CN108932254A (en) A kind of detection method of similar video, equipment, system and storage medium
JP2011087090A (en) Image processing method, image processing apparatus, and imaging system
US8340176B2 (en) Device and method for grouping of images and spanning tree for video compression
US11095901B2 (en) Object manipulation video conference compression
CN114125371A (en) Video capture at camera devices by reducing the bit rate of the video by means of neural network input to save bandwidth at intelligent intervals
KR101838792B1 (en) Method and apparatus for sharing user's feeling about contents
TWI586176B (en) Method and system for video synopsis from compressed video images
CN115379233B (en) Big data video information analysis method and system
CN112714336B (en) Video segmentation method and device, electronic equipment and computer readable storage medium
US20190215573A1 (en) Method and device for acquiring and playing video data
CN114268730A (en) Image storage method and device, computer equipment and storage medium
CN115297323B (en) RPA flow automation method and system

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant