CN111275012A - Advertisement screen passenger flow volume statistical system and method based on face recognition - Google Patents

Advertisement screen passenger flow volume statistical system and method based on face recognition

Info

Publication number
CN111275012A
CN111275012A (application CN202010125487.1A)
Authority
CN
China
Prior art keywords
face
tracking
picture
passenger flow
screen
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202010125487.1A
Other languages
Chinese (zh)
Inventor
冯希宁
Current Assignee
Xinghong Cluster Co Ltd
Original Assignee
Xinghong Cluster Co Ltd
Priority date
Filing date
Publication date
Application filed by Xinghong Cluster Co Ltd filed Critical Xinghong Cluster Co Ltd
Priority to CN202010125487.1A
Publication of CN111275012A
Legal status: Pending

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16 Human faces, e.g. facial parts, sketches or expressions
    • G06V40/161 Detection; Localisation; Normalisation
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/04 Architecture, e.g. interconnection topology
    • G06N3/045 Combinations of networks
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06Q INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q30/00 Commerce
    • G06Q30/02 Marketing; Price estimation or determination; Fundraising
    • G06Q30/0241 Advertisements
    • G06Q30/0242 Determining effectiveness of advertisements
    • G06Q30/0245 Surveys
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16 Human faces, e.g. facial parts, sketches or expressions
    • G06V40/168 Feature extraction; Face representation
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16 Human faces, e.g. facial parts, sketches or expressions
    • G06V40/172 Classification, e.g. identification
    • G06V40/173 Classification, e.g. identification face re-identification, e.g. recognising unknown faces across different face tracks

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Health & Medical Sciences (AREA)
  • General Physics & Mathematics (AREA)
  • Oral & Maxillofacial Surgery (AREA)
  • General Health & Medical Sciences (AREA)
  • Multimedia (AREA)
  • Business, Economics & Management (AREA)
  • Human Computer Interaction (AREA)
  • Accounting & Taxation (AREA)
  • Development Economics (AREA)
  • Finance (AREA)
  • Strategic Management (AREA)
  • Data Mining & Analysis (AREA)
  • Biophysics (AREA)
  • Software Systems (AREA)
  • General Engineering & Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Computing Systems (AREA)
  • Molecular Biology (AREA)
  • Evolutionary Computation (AREA)
  • Computational Linguistics (AREA)
  • Mathematical Physics (AREA)
  • Biomedical Technology (AREA)
  • Entrepreneurship & Innovation (AREA)
  • Artificial Intelligence (AREA)
  • Game Theory and Decision Science (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Economics (AREA)
  • Marketing (AREA)
  • General Business, Economics & Management (AREA)
  • Image Analysis (AREA)
  • Image Processing (AREA)

Abstract

The invention provides an advertisement screen passenger flow volume statistical system and method based on face recognition. The system comprises a screen body: a display screen and a camera assembly are arranged on the surface of the screen body, and a control chip, a memory, a power supply chip, a graphics processor, a network chip and a face recognition unit are arranged inside it. The face recognition unit comprises a face detection module, a face tracking module, a face key point extraction module, a face alignment module and a face feature point extraction module. The invention not only counts passenger flow while detecting faces, but also measures advertisement attention from how long each tracked face watches the advertisement screen, providing important guidance for advertisement placement and value extension.

Description

Advertisement screen passenger flow volume statistical system and method based on face recognition
Technical Field
The invention relates to the technical field of face recognition, and in particular to an advertisement screen passenger flow volume statistical system and method based on face recognition.
Background
In recent years, rising living standards have driven rapid growth in the number of large urban merchants, and competition among them is fierce. To better understand their operating conditions, merchants therefore often have staff manually count passenger flow and survey advertisement attention in their stores; such manual statistics, however, carry large errors, consume manpower, and cannot accurately reflect the actual passenger flow or advertisement attention.
At present, face recognition accuracy on mobile terminals can meet application requirements, and the memory occupied by lightweight face recognition networks and their model parameters keeps shrinking, which makes real-time face recognition analysis on intelligent terminal hardware feasible.
Therefore, to improve the accuracy and efficiency of passenger flow statistics, people counting products based on face recognition technology have come to market. There are two main approaches:
1. Capture images with a camera and transmit the video or face images back to a server for face recognition analysis. This approach is costly: the camera collects face images, the structured data is uploaded to a back-end face recognition server for identification and analysis, and passenger flow is counted there. Deployment is complex, performance is limited by network bandwidth, and the recognition server is expensive.
2. Perform face recognition analysis directly on a low-specification intelligent terminal. This approach, however, sacrifices accuracy to achieve real-time performance, yielding poor results.
Disclosure of Invention
In view of the above problems, an object of the present invention is to provide an advertisement screen passenger flow volume statistical system and method based on face recognition that not only counts passenger flow while detecting faces, but also measures advertisement attention from how long each tracked face watches the advertisement screen, providing important guidance for advertisement placement and value extension.
To achieve this object, the invention is realized by the following technical scheme. An advertisement screen passenger flow volume statistical system based on face recognition comprises: a screen body, wherein a display screen and a camera assembly are arranged on the surface of the screen body, and a control chip, a memory, a power supply chip, a graphics processor, a network chip and a face recognition unit are arranged inside the screen body; the control chip is data-connected respectively to the memory, the power supply chip, the graphics processor, the network chip, the face recognition unit, the display screen and the camera assembly;
the face recognition unit includes:
the face detection module, which detects faces in the pictures shot by the camera assembly and marks their position information;
the face tracking module, which tracks face positions;
the face key point extraction module, which extracts and displays the core key point information of each face;
the face alignment module, which aligns the extracted key points to a preset template in preparation for feature point extraction;
and the face feature point extraction module, which produces a 128-dimensional feature vector for each face picture, compares the similarity of the faces in two pictures and returns a similarity score.
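The module above returns a similarity score for two 128-dimensional feature vectors, but the patent does not name the metric. A minimal sketch assuming cosine similarity (`similarity_score` is a hypothetical name, not the SDK's API):

```python
import math

def similarity_score(feat_a, feat_b):
    """Cosine similarity between two face feature vectors.

    The patent does not specify the comparison metric; cosine
    similarity is a common choice for face embeddings (assumption).
    Returns a value in [-1, 1]; 1 means identical direction.
    """
    dot = sum(a * b for a, b in zip(feat_a, feat_b))
    norm_a = math.sqrt(sum(a * a for a in feat_a))
    norm_b = math.sqrt(sum(b * b for b in feat_b))
    if norm_a == 0 or norm_b == 0:
        return 0.0  # degenerate vector: no meaningful similarity
    return dot / (norm_a * norm_b)
```

In practice the 128-dimensional vectors would come from the feature point extraction network; here any numeric sequences work.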
Further, the face detection module, the face key point extraction module and the face feature point extraction module all adopt a convolutional neural network in a deep learning model.
Further, the control chip adopts an RK3288 CPU, and the power supply chip adopts an ACT8846 power supply IC.
Further, the graphics processor adopts a Mali-T760 MP4 graphics processor, and the network chip adopts an RTL8211E gigabit Ethernet card.
Correspondingly, the invention also discloses an advertisement screen passenger flow volume statistical method based on face recognition, which comprises the following steps:
s1: capturing image data through a camera assembly on a screen body and generating a face picture;
s2: calling a multiTracker interface of a face software development kit to perform face recognition on the face picture;
s3: carrying out face tracking on the face picture, and removing faces which are not successfully tracked in the face picture;
s4: carrying out face detection on the successfully tracked face;
s5: adding the detected new face to a tracking queue, and extracting the face features;
s6: linearly searching a preset face cache for the cached face features closest to the extracted face features;
s7: judging, against a preset feature similarity threshold, whether the extracted face features belong to a new face; if so, adding the corresponding face data to the cache and persisting it to the face library; if not, encapsulating a face data structure;
s8: and updating the face data list.
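Steps s5 to s8 can be sketched as a toy cache update. The feature vectors, the cosine metric and the 0.9 default threshold are illustrative assumptions; the patent specifies neither the metric nor the threshold value:

```python
import math

def cosine(a, b):
    """Cosine similarity between two feature vectors (metric assumed)."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a)) or 1.0
    nb = math.sqrt(sum(x * x for x in b)) or 1.0
    return dot / (na * nb)

def update_face_cache(detected_feats, cache, threshold=0.9):
    """Toy version of steps s5-s8.

    For each newly detected face feature, linearly search the cache
    for the closest cached feature (s6); if the best similarity falls
    below the threshold the face is judged new and added to the
    cache (s7). Returns the updated face list (s8).
    """
    for feat in detected_feats:
        # s6: linear search over the whole cache for the nearest entry
        best = max((cosine(feat, c) for c in cache), default=-1.0)
        if best < threshold:  # s7: no close match, so this face is new
            cache.append(feat)
    return cache
```

A real system would also persist new entries to the face library; this sketch keeps everything in memory.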
Further, the step S3 includes the following steps:
s31: initializing tracking on the face picture, scaling the face picture proportionally, and setting a timing detection interval variable;
s32: executing the face detection procedure, specifically using a multi-task face detection network to detect faces in the face picture in sequence and generate candidate face vectors;
s33: adding all the candidate face vectors to the tracked face vectors, emptying the candidate face vectors, and running an optical flow tracking algorithm based on feature points of the face detection region over the tracked face vectors to make a preliminary prediction of each face frame region;
s34: locating the face picture against the preliminarily predicted face frame region and judging whether the region is a face frame; if not, face tracking has failed and the corresponding face vector is removed from the tracked face vectors; if so, face tracking has succeeded and the method proceeds to the next step.
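The preliminary prediction in s33 can be illustrated with a simplified stand-in for the optical flow step: shift a tracked face frame by the median displacement of its feature points. A real implementation would track the points with pyramidal Lucas-Kanade optical flow on the image itself; `shift_box` and its input format are hypothetical:

```python
import statistics

def shift_box(box, point_pairs):
    """Move a face frame by the median displacement of its feature
    points, a simplified stand-in for the optical flow tracking in
    step s33. `box` is (x1, y1, x2, y2); `point_pairs` is a list of
    ((old_x, old_y), (new_x, new_y)) tracked feature points.
    The median makes the shift robust to a few badly tracked points.
    """
    dx = statistics.median(nx - ox for (ox, _oy), (nx, _ny) in point_pairs)
    dy = statistics.median(ny - oy for (_ox, oy), (_nx, ny) in point_pairs)
    x1, y1, x2, y2 = box
    return (x1 + dx, y1 + dy, x2 + dx, y2 + dy)
```

The shifted box would then be re-checked by the O-Net confidence test in s34.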
Further, the step S4 specifically includes:
if the face tracking time exceeds the timing detection interval variable, scaling the face picture proportionally, setting masks over all faces in the tracked face vectors, and executing face detection.
Further, the face detection includes:
Faces are detected in sequence by the P-Net, R-Net, O-Net and L-Net networks of a multi-task face detection network: the face picture first passes through P-Net to generate predicted bounding boxes; the original image and the P-Net bounding boxes are then input to R-Net, which yields corrected bounding boxes; the face picture and the R-Net bounding boxes next pass through the O-Net network to obtain corrected bounding boxes and face key points; finally, the O-Net bounding boxes are sent to the L-Net network to obtain accurate face frame regions, and all detected face frame regions and face key points are added to the candidate face vectors.
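The four-stage hand-off can be sketched as a generic cascade in which each stage receives the original image plus the previous stage's bounding boxes and returns refined boxes. The stage callables here are stand-ins for the real P-Net/R-Net/O-Net/L-Net convolutional networks, so this only illustrates the data flow, not the detection itself:

```python
def mtcnn_cascade(image, stages):
    """Run a P-Net -> R-Net -> O-Net -> L-Net style refinement chain.

    Each stage is a callable taking (image, boxes) and returning
    refined boxes; P-Net, the first stage, starts from the raw image
    with no prior boxes. The stages are caller-supplied stubs here,
    not actual networks (assumption for illustration).
    """
    boxes = []  # first stage proposes boxes from the raw image
    for stage in stages:
        boxes = stage(image, boxes)
    return boxes
```

With real networks, each stage would also filter boxes by its own confidence score before passing them on.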
Further, the step S34 specifically includes:
The face picture and the preliminarily predicted face frame region are located through the O-Net network, and the confidence returned by O-Net decides whether the region is a face frame: if the confidence is below a preset threshold, the region is not a face frame, face tracking has failed, and the corresponding face vector is removed from the tracked face vectors; otherwise face tracking has succeeded and the method proceeds to the next step.
Further, the face data structure includes:
an identity number, a tracking number and a face frame region, the face frame region specifically comprising the top-left and bottom-right corner coordinates of the corresponding face frame.
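The face data structure maps naturally onto a small record type. The field names below follow the personId/trackId/rect naming used in embodiment three; the type choices are assumptions:

```python
from dataclasses import dataclass

@dataclass
class FaceData:
    """The face data structure named in the patent: an identity
    number, a tracking number, and a face frame region given by its
    top-left and bottom-right corners ([x1, y1, x2, y2])."""
    person_id: int  # identity number (personId)
    track_id: int   # tracking number (trackId)
    x1: int         # top-left corner of the face frame
    y1: int
    x2: int         # bottom-right corner of the face frame
    y2: int
```

In the patent this structure is built in C++ and returned to the JAVA layer; a dataclass is just a convenient single-language illustration.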
Compared with the prior art, the invention has the beneficial effects that:
1. The face detection module, the face key point extraction module and the face feature point extraction module all adopt convolutional neural networks from deep learning. C++ optimization reduces the parameter computation in the face detection and face feature point extraction models, and face key points are extracted inside the face detection module itself, cutting computation steps and raising computation speed.
2. Consecutive video frames carry much redundant information. To avoid running face detection and face feature point extraction on every frame, face tracking is added to reduce how often those two steps execute; this saves hardware resources and frees computing power for more accurate face detection and face feature point extraction.
3. The invention not only counts passenger flow while detecting faces, but also measures advertisement attention from how long each tracked face watches the advertisement screen, providing important guidance for advertisement placement and value extension.
4. The advertisement screen is simple to install, and its face-recognition-based statistics of passenger flow and advertisement attention are accurate, making it well suited to precisely targeted advertising, merchant brand promotion, user consumption conversion and similar fields.
Therefore, compared with the prior art, the invention has prominent substantive features and represents notable progress, and the beneficial effects of its implementation are also evident.
Drawings
To more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the drawings needed in their description are briefly introduced below. Obviously, the drawings described below show only embodiments of the present invention; those skilled in the art can obtain other drawings from them without creative effort.
Fig. 1 is a system configuration diagram according to a first embodiment of the present invention.
FIG. 2 is a flow chart of a method according to a second embodiment of the present invention.
FIG. 3 is a flow chart of a method according to a third embodiment of the present invention.
Detailed Description
The following description of the embodiments of the present invention will be made with reference to the accompanying drawings.
The first embodiment is as follows:
As shown in fig. 1, an advertisement screen passenger flow volume statistical system based on face recognition includes: a screen body, wherein a display screen and a camera assembly are arranged on the surface of the screen body, and a control chip, a memory, a power supply chip, a graphics processor, a network chip and a face recognition unit are arranged inside the screen body; the control chip is data-connected respectively to the memory, the power supply chip, the graphics processor, the network chip, the face recognition unit, the display screen and the camera assembly.
The control chip adopts a Rockchip RK3288: a quad-core Cortex-A17 CPU clocked at 1.8 GHz. Memory is dual-channel DDR3, 2 GB standard; storage is 8 GB standard. The power supply chip is an ACT8846. The graphics processor is a Mali-T760 MP4, embedded high-performance 2D acceleration hardware supporting OpenGL ES 1.1/2.0/3.0, OpenVG 1.1, OpenCL and DirectX 11, with 4K H.264 and 10-bit H.265 video decoding plus 1080P multi-format video decoding. Ethernet uses an RTL8211E gigabit Ethernet PHY.
The system implements passenger flow counting and advertisement viewing statistics with face recognition technology, and achieves good results without uploading image data to a server for processing. The face recognition unit includes:
the face detection module, which detects faces in the pictures shot by the camera assembly and marks their position information;
the face tracking module, which tracks face positions;
the face key point extraction module, which extracts and displays the core key point information of each face;
the face alignment module, which aligns the extracted key points to a preset template in preparation for feature point extraction;
and the face feature point extraction module, which produces a 128-dimensional feature vector for each face picture, compares the similarity of the faces in two pictures and returns a similarity score.
The face detection module, the face key point extraction module and the face characteristic point extraction module all adopt a convolution neural network in a deep learning model.
The bottom-layer algorithms of the five modules are all implemented in C++ and called from the Java layer, achieving high-speed, high-precision computation in a low-compute environment.
Example two:
correspondingly, as shown in fig. 2, the invention also discloses an advertisement screen passenger flow volume statistical method based on face recognition, which comprises the following steps:
s1: and capturing image data through a camera assembly on the screen body and generating a face picture.
S2: and calling a multiTracker interface of a face software development kit to perform face recognition on the face picture.
S3: and carrying out face tracking on the face picture, and removing faces which are not successfully tracked in the face picture.
S4: and carrying out face detection on the successfully tracked face.
S5: and adding the detected new face to a tracking queue, and extracting the face features.
S6: and performing linear searching on the extracted face features in a preset face cache to calculate the face features closest to the features.
S7: judging whether the extracted face features are new faces or not according to a preset feature similarity threshold, if so, adding corresponding face data to a face library for caching and persisting the face data to the face library; and if not, packaging the face data structure.
S8: and updating the face data list.
With this method, once the face data list is obtained the number of faces can be counted, and the viewing duration and viewing period of the passenger flow can be derived from when faces are added to or removed from tracking. Corresponding analysis follows: for example, advertisement attention per time period can be obtained, and attention can later be broken down by face attributes (such as gender and age) for each advertisement type, enabling precisely targeted advertisement placement.
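The per-period attention statistic described above can be sketched as a simple aggregation over tracking lifetimes. The event format (track_id, start, end) in epoch seconds is an assumption; the patent only says that durations come from faces being added to and removed from tracking:

```python
def attention_by_period(track_events, period_seconds=3600):
    """Aggregate total viewing duration per time period from
    face-tracking lifetimes, as suggested for per-period advertisement
    attention. Each event is (track_id, start_ts, end_ts) in seconds
    (assumed format); periods default to one-hour buckets keyed by
    the event's start time.
    """
    totals = {}
    for _track_id, start, end in track_events:
        period = int(start // period_seconds)  # bucket by start time
        totals[period] = totals.get(period, 0) + (end - start)
    return totals
```

Grouping the same events by gender or age attributes would give the per-demographic attention figures the patent mentions.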
Example three:
as shown in fig. 3, the invention also discloses an advertisement screen passenger flow volume statistical method based on face recognition, which comprises the following steps:
step 1: and capturing image data through a camera carried by the advertising screen on the JAVA layer.
Step 2: a multitrack interface of a face SDK (Software Development Kit) is called in a JAVA layer to perform face recognition.
Step 3: during tracking initialization, scale the picture and set a timed detection interval Detection_Interval, then execute the face detection procedure. The P-Net, R-Net, O-Net and L-Net networks of an MTCNN (multi-task cascaded face detection network) detect faces in sequence: the original picture passes through P-Net to generate predicted bounding boxes; the original image and the P-Net bounding boxes are input to R-Net, which yields corrected bounding boxes; the original picture and the R-Net bounding boxes then pass through the O-Net network to obtain corrected bounding boxes and face key points; finally the O-Net bounding boxes are sent to the L-Net network to obtain accurate face frame regions, and all detected face frame regions and face key points are added to the candidate face vectors.
Step 4: add all candidate face vectors to the tracked face vectors, empty the candidate face vectors, and run an optical flow tracking algorithm based on feature points of the face detection region over the tracked face vectors to make a preliminary prediction of each face frame region.
and 5: similar to the step 3, the original picture and the primary face frame region obtained in the step 4 are used for carrying out more fine positioning on the face frame through an O-Net network, and finally, whether the frame is the face frame is judged according to the confidence coefficient returned by the O-Net network, if the frame is not the face frame and is smaller than the threshold (0.9), the face tracking fails, and the face is removed from the tracked face vector trackingFaces; otherwise go to step 6;
Step 6: if the face tracking time exceeds Detection_Interval, scale the picture, set masks over all faces in the trackingFaces vector, and execute the face detection procedure. This step effectively avoids repeated detection of the same faces, reducing computation and increasing detection speed.
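Step 6 combines a timed gate with masking of already-tracked regions. A minimal pure-Python sketch of both pieces follows; the function names are hypothetical, and a real implementation would mask the image buffer itself rather than a list of booleans:

```python
def should_run_detection(last_detection_ts, now_ts, detection_interval):
    """Step 6's gate: run the full detector only when more than
    Detection_Interval has elapsed since the last detection; in
    between, the cheaper optical flow tracker keeps faces current."""
    return (now_ts - last_detection_ts) > detection_interval

def mask_tracked_regions(width, height, tracked_boxes):
    """Build a row-major boolean mask (True = already covered by a
    tracked face, so the detector may skip it). A pure-Python
    stand-in for the mask set over the trackingFaces vector."""
    mask = [[False] * width for _ in range(height)]
    for x1, y1, x2, y2 in tracked_boxes:
        for y in range(max(0, y1), min(height, y2)):
            for x in range(max(0, x1), min(width, x2)):
                mask[y][x] = True
    return mask
```

Skipping masked pixels is what prevents the repeated detection of faces the tracker already follows.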
Step 7: add each newly detected face to the tracking queue and extract its face features.
Step 8: linearly search the face database cache for the most similar face features and judge from the feature similarity threshold whether the face is new. If it is, add its face data to the cache and persist it to the database; if not, encapsulate a face data structure containing personId (identity number), trackId (tracking number) and rect (the face frame region [x1, y1, x2, y2], i.e. the top-left and bottom-right corner coordinates of the face frame), and return it to the JAVA layer, which uses it to build the face data list.
After acquiring the face data list, the JAVA layer can count the number of faces, derive the viewing duration and viewing period of the passenger flow from when faces are added to or removed from tracking, and run corresponding analysis: for example, it can obtain advertisement attention per time period and later break attention down by face attributes (such as gender and age) for each advertisement type, enabling precisely targeted advertisement placement.
Those skilled in the art will readily appreciate that the techniques of the embodiments of the present invention may be implemented as software plus a required general-purpose hardware platform. Based on this understanding, the technical solutions in the embodiments may be embodied as a software product stored in a storage medium such as a USB disk, a removable hard disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk or an optical disk, the storage medium holding program code with instructions that enable a computer terminal (a personal computer, a server, a network terminal or the like) to perform all or part of the steps of the method in the embodiments. The same and similar parts of the various embodiments in this specification may be referred to one another. In particular, the terminal embodiment, being basically similar to the method embodiment, is described relatively simply; for relevant points, refer to the description of the method embodiment.
In the embodiments provided by the present invention, it should be understood that the disclosed systems and methods can be implemented in other ways. For example, the system embodiments described above are merely illustrative: the division into units is only a logical functional division, and other divisions are possible in practice; multiple units or components may be combined or integrated into another system, and some features may be omitted or not executed. In addition, the mutual or direct couplings or communication connections shown or discussed may be indirect couplings or communication connections through interfaces, systems or units, and may be electrical, mechanical or in other forms.
The units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the units can be selected according to actual needs to achieve the purpose of the solution of the embodiment.
In addition, functional modules in the embodiments of the present invention may be integrated into one processing unit, or each module may exist alone physically, or two or more modules are integrated into one unit.
Similarly, each processing unit in the embodiments of the present invention may be integrated into one functional module, each processing unit may exist alone physically, or two or more processing units may be integrated into one functional module.
The invention has been further described above with reference to the accompanying drawings and specific embodiments. It should be understood that these examples are for illustration only and are not intended to limit the scope of the present invention. Further, various changes or modifications may be made by those skilled in the art after reading the teaching of the invention, and such equivalents likewise fall within the scope of the present application.

Claims (10)

1. An advertisement screen passenger flow volume statistical system based on face recognition, characterized by comprising: a screen body, wherein a display screen and a camera assembly are arranged on the surface of the screen body, and a control chip, a memory, a power supply chip, a graphics processor, a network chip and a face recognition unit are arranged inside the screen body; the control chip is data-connected respectively to the memory, the power supply chip, the graphics processor, the network chip, the face recognition unit, the display screen and the camera assembly;
the face recognition unit includes:
the face detection module, which detects faces in the pictures shot by the camera assembly and marks their position information;
the face tracking module, which tracks face positions;
the face key point extraction module, which extracts and displays the core key point information of each face;
the face alignment module, which aligns the extracted key points to a preset template in preparation for feature point extraction;
and the face feature point extraction module, which produces a 128-dimensional feature vector for each face picture, compares the similarity of the faces in two pictures and returns a similarity score.
2. The advertising screen passenger flow volume statistical system based on face recognition according to claim 1, wherein the face detection module, the face key point extraction module and the face feature point extraction module all adopt a convolutional neural network in a deep learning model.
3. The advertisement screen passenger flow volume statistical system based on face recognition according to claim 1, wherein the control chip adopts an RK3288 CPU, and the power supply chip adopts an ACT8846 power supply IC.
4. The advertisement screen passenger flow volume statistical system based on face recognition according to claim 1, wherein the graphics processor is a Mali-T760 MP4 graphics processor, and the network chip is an RTL8211E gigabit Ethernet card.
5. An advertisement screen passenger flow volume statistical method based on face recognition is characterized by comprising the following steps:
s1: capturing image data through a camera assembly on a screen body and generating a face picture;
s2: calling a multiTracker interface of a face software development kit to perform face recognition on the face picture;
s3: carrying out face tracking on the face picture, and removing faces which are not successfully tracked in the face picture;
s4: carrying out face detection on the successfully tracked face;
s5: adding the detected new face to a tracking queue, and extracting the face features;
s6: linearly searching a preset face cache for the cached face features closest to the extracted face features;
s7: judging, against a preset feature similarity threshold, whether the extracted face features belong to a new face; if so, adding the corresponding face data to the cache and persisting it to the face library; if not, encapsulating a face data structure;
s8: and updating the face data list.
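Steps S6 through S8 above can be sketched as a linear search over the face cache followed by the new-face decision; the threshold value and all names are illustrative assumptions, not taken from the patent:

```python
import math

SIMILARITY_THRESHOLD = 0.7  # assumed value; the claim only says "preset threshold"

def nearest_face(feature, face_cache):
    """S6: linear search for the cached feature closest to `feature`
    (cosine similarity assumed as the distance measure)."""
    best_id, best_score = None, -1.0
    for face_id, cached in face_cache.items():
        dot = sum(a * b for a, b in zip(feature, cached))
        norm = (math.sqrt(sum(a * a for a in feature))
                * math.sqrt(sum(b * b for b in cached)))
        score = dot / norm
        if score > best_score:
            best_id, best_score = face_id, score
    return best_id, best_score

def process_feature(feature, face_cache, face_list):
    """S7/S8: decide new-vs-known face, update the cache and the data list.
    Persistence to the face library is left out of this sketch."""
    face_id, score = nearest_face(feature, face_cache)
    if face_id is None or score < SIMILARITY_THRESHOLD:
        face_id = len(face_cache)      # new identity number
        face_cache[face_id] = feature  # S7: cache (and, in full, persist)
    face_list.append(face_id)          # S8: update the face data list
    return face_id
```

A feature close to a cached one reuses its identity number; anything below the threshold is registered as a new face.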
6. The advertisement screen passenger flow statistical method based on face recognition according to claim 5, wherein the step S3 comprises the following steps:
s31: tracking and initializing the face picture, scaling the face picture according to a proportion, and setting a timing detection interval variable;
s32: executing a face detection process, specifically, adopting a multitask face detection network to sequentially detect faces of the face pictures to generate candidate face vectors;
s33: adding all the candidate face vectors to the tracked face vectors, emptying the candidate face vectors, and executing, on the tracked face vectors, an optical flow tracking algorithm based on the feature points of the face detection area to make a preliminary prediction of the face frame region;
s34: locating the face picture and the preliminarily predicted face frame region, and judging whether the face frame region is a face frame; if not, the face tracking fails, and the corresponding face vector is removed from the tracked face vectors; if so, the face tracking succeeds, and the method proceeds to the next step.
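The tracking loop of steps S31 through S34 might be sketched as follows, with `detect` and `verify` standing in for the multitask detection network and the face-frame check; the optical flow prediction step (S33) is omitted, and all names and defaults are assumptions:

```python
class FaceTracker:
    """Minimal sketch of the S31-S34 loop: scale setup, periodic detection,
    candidate merging, and pruning of faces that fail verification."""

    def __init__(self, detect, verify, scale=0.5, detect_interval=10):
        self.detect = detect                    # S32: detection on the scaled frame
        self.verify = verify                    # S34: is this region still a face frame?
        self.scale = scale                      # S31: proportional scaling factor
        self.detect_interval = detect_interval  # S31: timing detection interval variable
        self.tracked = []                       # the tracked face vectors
        self.frame_no = 0

    def step(self, frame):
        self.frame_no += 1
        if self.frame_no % self.detect_interval == 1:
            candidates = self.detect(frame, self.scale)  # S32: candidate face vectors
            self.tracked.extend(candidates)              # S33: merge, then discard
        # S34: drop every tracked box the verifier no longer confirms as a face
        self.tracked = [box for box in self.tracked if self.verify(frame, box)]
        return self.tracked
```

Plugging in a real detector and an O-Net-style verifier would turn this skeleton into the claimed pipeline; here both are simple callables so the control flow can be followed in isolation.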
7. The advertisement screen passenger flow volume statistical method based on face recognition according to claim 6, wherein the step S4 specifically comprises:
and if the face tracking time is longer than the timing detection interval variable, scaling the face picture proportionally, setting masks for all faces in the tracked face vectors, and executing face detection.
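The masking of already-tracked faces before a timed re-detection could look like this minimal sketch, which treats the frame as a mutable 2-D pixel grid; the representation and the fill value are assumptions:

```python
def mask_tracked_regions(frame, tracked_boxes, fill=0):
    """Before a timed re-detection, blank out already-tracked face regions so
    the detector only proposes new faces. Boxes are (x1, y1, x2, y2) with the
    usual exclusive lower-right convention (assumed)."""
    for (x1, y1, x2, y2) in tracked_boxes:
        for y in range(y1, y2):
            for x in range(x1, x2):
                frame[y][x] = fill
    return frame
```

In a real system this would be a vectorized operation on the image buffer; the nested loops are kept only to make the masking explicit.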
8. The advertisement screen passenger flow statistical method based on face recognition according to claim 5, wherein the face detection comprises:
sequentially detecting faces by adopting the P-Net, R-Net, O-Net and L-Net networks of a multitask face detection network: the face picture first passes through the P-Net network to generate predicted bounding boxes; the original image and the bounding boxes generated by the P-Net network are then input to the R-Net network, which outputs corrected bounding boxes; the face picture and the bounding boxes obtained from the R-Net network then pass through the O-Net network to obtain corrected bounding boxes and face key points; finally, the bounding boxes obtained from the O-Net network are sent to the L-Net network to obtain accurate face frame regions, and the detected face frame regions and face key points are all added to the candidate face vectors.
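The four-stage cascade above can be sketched as a chain of callables, each standing in for one network; the pass-through signatures are assumptions made only to show the data flow:

```python
def mtcnn_cascade(image, p_net, r_net, o_net, l_net):
    """Sketch of the P-Net -> R-Net -> O-Net -> L-Net cascade of claim 8.
    Each argument is a callable standing in for the real network."""
    boxes = p_net(image)                    # predicted bounding boxes
    boxes = r_net(image, boxes)             # corrected bounding boxes
    boxes, keypoints = o_net(image, boxes)  # corrected boxes + face key points
    boxes = l_net(image, boxes)             # accurate face frame regions
    # both the face frame regions and the key points join the candidate vector
    return list(zip(boxes, keypoints))
```

Each stage refines the previous stage's proposals on the same input image, which is why the image is passed alongside the boxes at every step after P-Net.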
9. The advertisement screen passenger flow volume statistical method based on face recognition according to claim 6 or 8, wherein the step S34 specifically comprises:
locating the face picture and the preliminarily predicted face frame region through the O-Net network, and judging whether the region is a face frame according to the confidence returned by the O-Net network; if the confidence is smaller than a preset threshold, the region is judged not to be a face frame, the face tracking fails, and the corresponding face vector is removed from the tracked face vectors; otherwise, the face tracking succeeds, and the method proceeds to the next step.
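The confidence check of claim 9 reduces to a threshold filter over the tracked faces; the threshold value is an assumption, since the claim only says "preset threshold":

```python
O_NET_CONFIDENCE_THRESHOLD = 0.9  # assumed; not specified in the claim

def filter_by_confidence(tracked, confidences,
                         threshold=O_NET_CONFIDENCE_THRESHOLD):
    """Claim 9: drop every tracked face vector whose O-Net confidence falls
    below the preset threshold; the rest continue to the next step."""
    return [box for box, conf in zip(tracked, confidences) if conf >= threshold]
```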
10. The advertisement screen passenger flow volume statistical method based on face recognition according to claim 5, wherein the face data structure comprises:
an identity number, a tracking number and a face frame region, wherein the face frame region specifically comprises the upper left corner coordinate and the lower right corner coordinate of the corresponding face frame.
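A minimal sketch of this face data structure in Python; the field names are assumptions, since the claim only names the three components:

```python
from dataclasses import dataclass
from typing import Tuple

@dataclass
class FaceRecord:
    """Face data structure of claim 10: identity number, tracking number,
    and a face frame given by its corner coordinates."""
    identity_id: int
    tracking_id: int
    top_left: Tuple[int, int]      # (x1, y1) of the face frame
    bottom_right: Tuple[int, int]  # (x2, y2) of the face frame
```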
CN202010125487.1A 2020-02-27 2020-02-27 Advertisement screen passenger flow volume statistical system and method based on face recognition Pending CN111275012A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010125487.1A CN111275012A (en) 2020-02-27 2020-02-27 Advertisement screen passenger flow volume statistical system and method based on face recognition

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202010125487.1A CN111275012A (en) 2020-02-27 2020-02-27 Advertisement screen passenger flow volume statistical system and method based on face recognition

Publications (1)

Publication Number Publication Date
CN111275012A true CN111275012A (en) 2020-06-12

Family

ID=70999244

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010125487.1A Pending CN111275012A (en) 2020-02-27 2020-02-27 Advertisement screen passenger flow volume statistical system and method based on face recognition

Country Status (1)

Country Link
CN (1) CN111275012A (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112380925A (en) * 2020-10-27 2021-02-19 北京城建智控科技有限公司 Data processing method, device, equipment and storage medium
CN113378765A (en) * 2021-06-25 2021-09-10 四川启睿克科技有限公司 Intelligent statistical method and device for advertisement attention crowd and computer readable storage medium


Similar Documents

Publication Publication Date Title
Dvornik et al. On the importance of visual context for data augmentation in scene understanding
US10580179B2 (en) Method and apparatus for processing video image and electronic device
WO2022089360A1 (en) Face detection neural network and training method, face detection method, and storage medium
CN108229322B (en) Video-based face recognition method and device, electronic equipment and storage medium
CN110610510B (en) Target tracking method and device, electronic equipment and storage medium
JP2023521952A (en) 3D Human Body Posture Estimation Method and Apparatus, Computer Device, and Computer Program
US20110129118A1 (en) Systems and methods for tracking natural planar shapes for augmented reality applications
US20220351390A1 (en) Method for generating motion capture data, electronic device and storage medium
CN111754541A (en) Target tracking method, device, equipment and readable storage medium
US20220415072A1 (en) Image processing method, text recognition method and apparatus
CN104731964A (en) Face abstracting method and video abstracting method based on face recognition and devices thereof
Li et al. Depthwise nonlocal module for fast salient object detection using a single thread
Huynh-The et al. Hierarchical topic modeling with pose-transition feature for action recognition using 3D skeleton data
US11681409B2 (en) Systems and methods for augmented or mixed reality writing
CN111680678A (en) Target area identification method, device, equipment and readable storage medium
CN114758362A (en) Clothing changing pedestrian re-identification method based on semantic perception attention and visual masking
CN111275012A (en) Advertisement screen passenger flow volume statistical system and method based on face recognition
CN114723888A (en) Three-dimensional hair model generation method, device, equipment, storage medium and product
CN113223125B (en) Face driving method, device, equipment and medium for virtual image
CN113642481A (en) Recognition method, training method, device, electronic equipment and storage medium
CN113255501A (en) Method, apparatus, medium, and program product for generating form recognition model
CN113177432A (en) Head pose estimation method, system, device and medium based on multi-scale lightweight network
CN109241942B (en) Image processing method and device, face recognition equipment and storage medium
Cao Face recognition robot system based on intelligent machine vision image recognition
CN113139539B (en) Method and device for detecting characters of arbitrary-shaped scene with asymptotic regression boundary

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination