CN114708554A - Intelligent library people flow monitoring method and device based on face detection


Info

Publication number
CN114708554A
Authority
CN
China
Prior art keywords
face detection, image, library, face, real
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202210379735.4A
Other languages
Chinese (zh)
Inventor
颜晓红
王天宇
郝学元
张诚超
王伟杰
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Nanjing University of Posts and Telecommunications
Original Assignee
Nanjing University of Posts and Telecommunications
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Nanjing University of Posts and Telecommunications filed Critical Nanjing University of Posts and Telecommunications
Priority to CN202210379735.4A priority Critical patent/CN114708554A/en
Publication of CN114708554A publication Critical patent/CN114708554A/en
Pending legal-status Critical Current

Classifications

    • G PHYSICS
        • G06 COMPUTING; CALCULATING OR COUNTING
            • G06F ELECTRIC DIGITAL DATA PROCESSING
                • G06F 16/00 Information retrieval; Database structures therefor; File system structures therefor
                    • G06F 16/9537 Spatial or temporal dependent retrieval, e.g. spatiotemporal queries
                    • G06F 16/9538 Presentation of query results
            • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
                • G06N 3/00 Computing arrangements based on biological models
                    • G06N 3/045 Combinations of networks
                    • G06N 3/08 Learning methods
            • G06Q INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES
                • G06Q 10/02 Reservations, e.g. for tickets, services or events
                • G06Q 50/26 Government or public services
            • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
                • G06T 7/73 Determining position or orientation of objects or cameras using feature-based methods
                • G06T 2207/20081 Training; Learning
                • G06T 2207/20084 Artificial neural networks [ANN]
                • G06T 2207/30242 Counting objects in image
        • G08 SIGNALLING
            • G08B SIGNALLING OR CALLING SYSTEMS; ORDER TELEGRAPHS; ALARM SYSTEMS
                • G08B 5/221 Local indication of seats occupied in a facility, e.g. in a theatre

Abstract

The invention discloses a method and a device for monitoring people flow in a smart library based on face detection. The system comprises an image acquisition module, a face detection module, an intelligent interconnection module and a real-time display module. The face detection module is based on an MTCNN face detection model, to which BN layers are added to accelerate network convergence. Depthwise separable convolution is introduced, which greatly reduces the parameter count of the original network and improves inference speed, so that the trained model can be deployed on embedded devices for real-time face detection. Soft-NMS is used to retain the face detection boxes with higher confidence, improving detection accuracy. The invention greatly improves the efficiency and safety of library management during an epidemic, and also improves readers' comfort, making library management more intelligent and convenient.

Description

Intelligent library people flow monitoring method and device based on face detection
Technical Field
The invention belongs to the technical field of computer vision, and in particular provides a method and a device for monitoring people flow in a smart library based on face detection.
Background
At present, many cities have libraries of their own, which provide citizens with places to read and study. However, manual library management is inefficient; in particular during an epidemic, library crowds are dense and highly mobile, and under these conditions reasonably arranging the flow of people through the library, so as to improve the efficiency and quality of personnel management, is especially important. With the development of artificial intelligence, more and more libraries are applying emerging technologies in their development, with good results. The invention aims to apply computer vision, a branch of artificial intelligence, to library people flow monitoring and personnel management, and designs a computer-vision-based smart library people flow monitoring system that improves the library's service efficiency and makes it safer, more efficient and faster.
CN112418064A discloses a real-time automatic detection method for the number of people in a library reading room, comprising the following steps: step 1) cameras collect real-time image information of each area of the library and send it to a background host; step 2) the background host analyzes each image, identifies human-body key points and obtains a rectangular human-body detection box; step 3) human-body detection boxes outside the image selection frame are discarded to obtain the effective number of people in each image; step 4) the counts are summed according to the grouping of the cameras to obtain the number of people and the number of empty seats in each reading room; step 5) the occupancy information is published in real time. That technology does not optimize the face recognition algorithm, suffers from large deviations, and responds inefficiently.
Disclosure of Invention
The invention aims to provide an intelligent management method for monitoring library people flow which, based on deep-learning computer vision, feeds real-time images captured by a camera into a trained neural network model to realize offline real-time face detection, designs a "game" algorithm to display the remaining study-room seats in real time, and combines this with an online seat reservation system to realize more convenient and faster intelligent management.
The technical scheme of the invention is as follows:
in a first aspect, the present invention provides a method for monitoring a flow of people in a library, including:
step 1, image acquisition. A camera captures real-time pictures of the study room; the pictures undergo a series of preprocessing steps, and kernels of different sizes slide over the pictures to obtain a series of initial face detection boxes, forming an image pyramid;
step 2, a face region in the image is detected by a pre-trained improved MTCNN, where the improved MTCNN model introduces depthwise separable convolution, reducing the number of network layers and parameters so that recognition accuracy is preserved while inference speed improves, enabling real-time recognition on embedded devices;
step 3, face detection boxes with higher confidence are retained by an improved Soft-NMS algorithm;
step 4, to reduce false and missed face detections in image recognition, a "game" algorithm is designed; it collects several frames per second and compares each real-time image with the previous frame, and because most images captured in the study room are static and change little each time, a majority-vote ("big number") rule determines the final number of face detection boxes;
step 5, the face detection boxes finally determined in step 4 are assigned distinct ID values by position; the final boxes are counted, and the coordinate information of each ID is recorded to compute the seat usage time. Combined with the number of seats reserved online, this yields the real-time face count and seat usage, and the remaining seats are displayed on an LED digital display screen in real time;
and step 6, the recognized real-time face count is sent to the library personnel management platform, realizing real-time management of people in the library study room.
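The patent describes the majority-vote ("big number") rule of step 4 only in prose; a minimal sketch of one plausible reading, choosing the most frequent per-frame face count (the function name and the tie-breaking rule here are illustrative assumptions, not taken from the disclosure), might look like this:

```python
from collections import Counter

def stable_face_count(per_frame_counts):
    """Majority vote over the face counts detected in several
    consecutive frames; ties resolve to the most recent count."""
    if not per_frame_counts:
        return 0
    tally = Counter(per_frame_counts)
    best = max(tally.values())
    # prefer the latest count among the most frequent ones
    for count in reversed(per_frame_counts):
        if tally[count] == best:
            return count
```

With, say, five frames per second, a single misdetection in one frame is outvoted by the four agreeing frames, which matches the patent's rationale that study-room scenes change little between frames.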
In a second aspect, the present invention provides a library people flow rate monitoring device, comprising:
the image acquisition module is used for acquiring, in real time, images captured by a camera in the library study room;
the detection module is used for coarsely locating face regions in the image content and performing face detection on the images acquired in real time in the study room through a pre-trained MTCNN face detection model to obtain face detection boxes; the MTCNN face detection algorithm introduces depthwise separable convolution, reducing the network's layer and parameter counts to achieve high accuracy at a faster detection speed, and retains the higher-confidence face detection boxes through Soft-NMS;
and the information interaction module is used for being combined with the online study room reservation module and displaying the real-time seat allowance of the study room of the library through data integration.
In a third aspect, the present invention provides an electronic device comprising: the system comprises at least one camera, a processor, a memory and an LED electronic display screen;
the camera is used for acquiring a real-time image of a study room of the library;
the processor is used for running the face detection model and the algorithms included in the people flow monitoring method described in the first aspect;
the memory is used for storing pre-trained model parameters and image information;
the LED electronic display screen is used for displaying the number of face boxes detected in real time and the remaining seats of the study room, configured as required.
The invention has the following beneficial effects: (1) the invention provides a method, a device and electronic equipment for monitoring library people flow based on computer vision, so that the flow of people in a crowded public place such as a library can be controlled conveniently when needed; (2) the invention obtains real-time image information of the library study room through a camera and detects faces in the area with a pre-trained MTCNN-based face detection neural network to obtain initial face detection boxes; the network introduces depthwise separable convolution, and by reducing the number of network layers and parameters it greatly improves the inference speed of face detection while keeping a high recognition accuracy; (3) combined with the Internet's online seat reservation system, no manual management is needed: the library's people flow is controlled intelligently, which improves safety as well as the efficiency of library personnel management, greatly enhances readers' reading comfort, and truly realizes intelligent people flow monitoring and management for a smart library.
Drawings
Fig. 1 is a schematic flowchart illustrating a method for monitoring a flow of people in a smart library according to an embodiment of the disclosure;
FIG. 2 is a schematic diagram of an improved MTCNN neural network model P-net network structure provided in an embodiment of the present disclosure;
FIG. 3 is a schematic diagram of an improved MTCNN neural network model R-net network structure provided by an embodiment of the present disclosure;
FIG. 4 is a schematic diagram of an improved MTCNN neural network model O-net network structure provided by an embodiment of the present disclosure;
fig. 5 is a schematic structural diagram of a smart library people flow monitoring device according to a second embodiment of the disclosure;
fig. 6 is a schematic structural diagram of an electronic device according to a third embodiment of the present disclosure.
Detailed Description
The present invention will be further described with reference to the following detailed description, so that its technical means, creative features, objectives and effects are easily understood.
Example 1
The invention discloses an intelligent management method for monitoring library people flow, aimed at the problems of complex management operations and high labor cost in traditional library flow management, and at controlling the flow of people in crowded places during an epidemic. Finally, a data display module is designed: the detected number of faces is displayed on a screen in real time, the remaining seats of the library are calculated, and when the remainder falls below a specified threshold, a "seats full" prompt is given at the library entrance. The face detection and face attribute recognition modules provided by the method are implemented in Python. The method controls the real-time people flow of the library more intelligently, greatly reduces labor cost, and makes management during an epidemic safer and more convenient.
Fig. 1 is a schematic flow chart of people flow monitoring in a smart library according to an embodiment of the present disclosure. As shown in fig. 1, an example of the present disclosure provides a method for monitoring people flow in a smart library, including:
the image acquisition module in the embodiment comprises a camera image acquisition module and an image preprocessing module. Real-time study room photos are collected through a camera, and data are written in and read through the DDR 3. Carrying out a series of preprocessing processes on the picture, wherein the preprocessing processes comprise the step of cutting the image; and performing image enhancement on the cut image, and increasing the image contrast by adopting a point operation algorithm of a spatial domain. Namely, directly calculating pixel points on the image:
Y(x,y)=F(x,y)*X(x,y)
where X and Y represent the original image and the transformed image and F is the transfer function.
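As an illustration of such a point operation (the gain value and the clipping to the 8-bit range are assumptions for the sketch, not values given in this disclosure), a simple contrast stretch can be written as:

```python
import numpy as np

def point_transform(image, gain=1.5, bias=0.0):
    """Per-pixel (point) operation Y(x, y) = F * X(x, y) + bias,
    clipped back to the 8-bit range; gain > 1 raises contrast."""
    out = image.astype(np.float32) * gain + bias
    return np.clip(out, 0, 255).astype(np.uint8)
```

Because the transform touches each pixel independently, it is cheap enough to run on every captured frame before detection.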
In this embodiment, the library's people flow is mainly concentrated in the study room, and images taken of the study room are essentially static, so flow monitoring can be concentrated on that space.
The face detection process in this embodiment sends the preprocessed image obtained in the previous step into the modified MTCNN face detection neural network. The input picture is rotated and scaled at different ratios to generate an image pyramid, which is passed into the first-level network of the MTCNN, the P-net. The P-net detects faces without considering confidence, generating a large number of candidate boxes relative to the original input image on the basis of the pyramid images, and passes the coordinate information of those detection-box regions to the next-level network, the R-net, for face recognition. The R-net scores the confidence of the many detection boxes containing face images handed over by the P-net, deletes boxes below a preset confidence threshold, computes the overlap between target boxes via the IoU, removes the duplicate boxes passed on by the P-net with soft non-maximum suppression (Soft-NMS), and crops and scales the remaining candidate boxes before feeding them into the next-level O-net. Because the R-net has already deleted the low-confidence boxes, the O-net applies Soft-NMS again to the higher-scoring face candidates, computes each target's position in the original image from the transmitted region coordinates, and marks it there with a rectangular box, which is the final detected face box. In this embodiment, depthwise separable convolution is incorporated into the MTCNN-based face detection model: as shown in Figs. 2-4, the original convolution layers in the P-net, R-net and O-net are replaced by depthwise separable convolutions.
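The disclosure does not give the pyramid construction in code. A common MTCNN-style computation of the pyramid scales, assuming the usual 12×12 P-net input, a 20-pixel minimum detectable face and a 0.709 scale factor (all typical MTCNN defaults, not values stated in this patent), is:

```python
def pyramid_scales(min_side, min_face=20, factor=0.709, net_input=12):
    """Scales at which the image is resized so that faces of
    min_face pixels and larger map onto the P-net's input size."""
    scales = []
    scale = net_input / min_face
    side = min_side * scale
    while side >= net_input:       # stop once the image is smaller than the net input
        scales.append(scale)
        scale *= factor
        side = min_side * scale
    return scales
```

Each scale produces one level of the pyramid; the P-net then slides over every level so faces of different sizes all reach it at roughly 12×12 pixels.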
Depthwise separable convolution differs from standard convolution in that it splits the multi-channel feature map from the previous layer into single-channel feature maps, convolves each of them separately, and finally combines the convolution results across channels. For a standard convolution with kernel size D_F, C_in input channels, C_out output channels and an H_out × W_out output feature map, the parameter count is:
N_p = D_F · D_F · C_in · C_out
and the computation is:
N_c = H_out · W_out · C_out · D_F · D_F · C_in
For the depthwise separable convolution, the DepthWise stage has parameters:
N_p,dw = D_F · D_F · C_in
and computation:
N_c,dw = D_F · D_F · C_in · H_out · W_out
The PointWise stage has parameters:
N_p,pw = C_in · C_out
and computation:
N_c,pw = C_in · C_out · H_out · W_out
The total parameter count of the depthwise separable convolution is:
N_p' = D_F · D_F · C_in + C_in · C_out
and the total computation is:
N_c' = (D_F · D_F · C_in + C_in · C_out) · H_out · W_out
The parameter ratio relative to the standard convolution is:
N_p' / N_p = 1/C_out + 1/(D_F · D_F)
and the ratio of computation is the same:
N_c' / N_c = 1/C_out + 1/(D_F · D_F)
It follows that for 3×3 kernels the depthwise separable convolution needs only about 1/9 + 1/C_out of the computation of a standard convolution, so its inference is many times faster.
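The counts above can be checked numerically. The following sketch (the layer sizes are arbitrary examples, not taken from the patent's networks) evaluates both convolutions and confirms the ratio 1/C_out + 1/D_F²:

```python
def conv_costs(df, cin, cout, hout, wout):
    """Parameter and multiply-accumulate counts for a standard
    convolution vs. a depthwise separable one (depthwise + 1x1)."""
    std_params = df * df * cin * cout
    std_flops = hout * wout * cout * df * df * cin
    dsc_params = df * df * cin + cin * cout              # depthwise + pointwise
    dsc_flops = (df * df * cin + cin * cout) * hout * wout
    return std_params, std_flops, dsc_params, dsc_flops
```

For a 3×3 layer with 32 input and 64 output channels, the separable variant uses about 12.7% of the parameters and computation, in line with 1/64 + 1/9.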
In this embodiment, as shown in figs. 2 to 4, a BN layer is added between the convolution layer and the ReLU layer in the three cascaded MTCNN networks. The BN layer accelerates network convergence by making the feature map produced by the convolution layer follow a distribution with mean 0 and variance 1. For a mini-batch {x_1, ..., x_m}, the BN layer computes:
μ = (1/m) Σ_i x_i,  σ² = (1/m) Σ_i (x_i − μ)²
x̂_i = (x_i − μ) / √(σ² + ε),  y_i = γ · x̂_i + β
where ε is a small constant for numerical stability and γ, β are learned scale and shift parameters.
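A minimal NumPy rendering of these BN equations (with γ and β fixed to their identity defaults for illustration; a trained layer would learn them):

```python
import numpy as np

def batch_norm(x, gamma=1.0, beta=0.0, eps=1e-5):
    """Normalize a batch of activations to zero mean / unit
    variance per feature, then apply the learned scale and shift."""
    mu = x.mean(axis=0)
    var = x.var(axis=0)
    x_hat = (x - mu) / np.sqrt(var + eps)
    return gamma * x_hat + beta
```

After normalization each feature column has mean ≈ 0 and variance ≈ 1, which is exactly the distribution the text says the BN layer enforces on the convolution output.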
in this embodiment, for the missing detection phenomenon generated when the face detection frames overlap, a flexible non-maximum suppression method is used to remove the overlapped face detection frames. The detection frame with the IOU value larger than the preset threshold value in the original model is directly deleted, and the IOU is easily deleted by mistake due to overlarge condition when two different faces are very close. The confidence scores of candidate boxes that overlap the detected box are reduced by attenuation. The greater the IOU values of the candidate and detection boxes, the greater the degree of reduction in the confidence score. When the IOU value exceeds a preset threshold N, its score decays linearly. The calculation formula is as follows:
Figure BDA0003592236910000082
furthermore, a detection error caused by a sudden change generated when the IOU value is greater than N is prevented by adopting Gaussian weighted optimization, and the formula is as follows:
Figure BDA0003592236910000083
where D is the set of final detection boxes.
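A compact sketch of Gaussian Soft-NMS as described above (the σ value, the score threshold, and the box format are illustrative assumptions):

```python
import numpy as np

def iou(a, b):
    """Intersection over union of two boxes (x1, y1, x2, y2)."""
    x1, y1 = max(a[0], b[0]), max(a[1], b[1])
    x2, y2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, x2 - x1) * max(0, y2 - y1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter)

def soft_nms(boxes, scores, sigma=0.5, score_thresh=0.3):
    """Gaussian Soft-NMS: instead of deleting overlapping boxes
    outright, decay their scores by exp(-iou^2 / sigma) and keep
    whatever stays above the score threshold."""
    boxes = [list(b) for b in boxes]
    scores = list(scores)
    keep = []
    while boxes:
        m = max(range(len(scores)), key=scores.__getitem__)
        best_box, best_score = boxes.pop(m), scores.pop(m)
        keep.append((best_box, best_score))
        for i in range(len(boxes)):
            scores[i] *= np.exp(-iou(best_box, boxes[i]) ** 2 / sigma)
        # drop boxes whose decayed score fell below the threshold
        boxes = [b for b, s in zip(boxes, scores) if s >= score_thresh]
        scores = [s for s in scores if s >= score_thresh]
    return keep
```

A duplicate box overlapping the winner heavily is decayed below the threshold and removed, while a distant face (IoU ≈ 0) keeps its score, which is the behavior the text contrasts with hard NMS.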
In this embodiment, as shown in fig. 1, the coordinate information of each finally determined face detection box is returned, for example the upper-left corner (x_a, y_a), and each region is assigned a distinct ID value to distinguish the face boxes. When a face is detected at a seat, an ID is assigned and the initial time T_i is recorded. In this embodiment, the usage time of the seat is determined from the degree of coordinate change, computed as:
T_c = T_o − T_i
The coordinate-change-degree algorithm judges whether the reader has left the seat from the magnitude of the change when the coordinates of the face box with the same initial coordinates differ from the previously detected frame, for example via the displacement:
α = √((x − x_a)² + (y − y_a)²)
of the current corner (x, y) from the initial corner. When α exceeds a certain threshold, the reader is judged to have left the seat, and the time T_o at that moment is recorded.
Comparative example 2
In this comparative example, conventional scenic-spot people flow detection technology is described for contrast. Taking image-processing-based scenic-spot flow detection as an example: because crowds in scenic spots are highly mobile, the exact coordinate position of each person is hard to determine, and the flow is usually judged from a crowd-density heat map obtained with an image regression algorithm. Conventional target detection algorithms such as Faster R-CNN, however, have network models with many layers that are hard to deploy on embedded devices; their large parameter counts and slow network inference make the real-time detection required of mobile devices difficult to achieve. The disclosed smart-library people flow monitoring system based on face detection instead exploits the special scene of a library study room, where people are relatively static, and judges whether a reader enters or leaves the region by comparing face detection box coordinates in two consecutive frames.
The entry/exit judgment specifically comprises the following steps: first, a target area is set, such as the area from the first row to the last row of the library study room, and its position coordinates are stored; second, whether a new face appears in the current frame is judged by comparing the real-time image with the previous frame, mainly the coordinate information of the detected face boxes. When the coordinates of a face box with a new ID change from outside the set area in the previous frame to inside it in the current frame, the total face count is incremented by one. Comparing consecutive frames reduces the time consumed by repeated detection and greatly improves the overall detection efficiency of the system.
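The two-step entry judgment can be sketched as a comparison of matched face boxes between frames (matching boxes by ID across frames is assumed to exist already; the helper names are illustrative):

```python
def center(box):
    """Centre point of a box (x1, y1, x2, y2)."""
    x1, y1, x2, y2 = box
    return ((x1 + x2) / 2, (y1 + y2) / 2)

def inside(point, region):
    """Whether a point lies inside a rectangular region."""
    (px, py), (x1, y1, x2, y2) = point, region
    return x1 <= px <= x2 and y1 <= py <= y2

def count_entries(prev_boxes, curr_boxes, region):
    """Count face boxes (matched by ID) whose centre moved from
    outside the study-room region into it between two frames."""
    entries = 0
    for face_id, curr in curr_boxes.items():
        prev = prev_boxes.get(face_id)
        if prev is None:
            continue
        if not inside(center(prev), region) and inside(center(curr), region):
            entries += 1
    return entries
```

Boxes that stay inside the region contribute nothing, so only genuine crossings of the boundary increment the count.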
Example 2
For the smart library people flow monitoring system of the above embodiment, fig. 5 shows a schematic structural diagram of the system provided by this embodiment. Referring to fig. 5, the smart library people flow monitoring device comprises: a camera image acquisition module, an image preprocessing module, an MTCNN face detection module, an intelligent interconnection module (comprising a data integration module, a library personnel management system and an online seat reservation module) and a display module.
The camera image acquisition module is used for acquiring images acquired by a camera of a library study room in real time;
the MTCNN face detection module is used for roughly detecting a face area according to the image picture content and carrying out face detection on the image acquired in real time in the library study room through a pre-trained MTCNN face detection model to obtain a face detection frame; the MTCNN face detection algorithm introduces deep separable convolution, reduces the number of layers and parameters of a network, achieves higher accuracy and higher detection speed, and retains a face detection frame with higher confidence coefficient through Soft-NMS;
the intelligent interconnection module is used for synchronizing data in a library personnel management system and an online seat reservation system in real time;
the data integration module is used for combining the number of the identified face detection frames with the number of persons making an appointment on line, counting the sum of seats already used and seat numbers already made an appointment, simultaneously storing coordinate information of the face detection frames, and recording the use time of the seats through the transformation degree of coordinates;
and the display module is used for displaying the seat allowance, calculating the seat allowance of the study room of the library according to the set open seat amount of the library through the intelligent interconnection module, and displaying the seat allowance on the LED electronic display screen in real time.
Example 3
Fig. 6 is a schematic structural diagram of an electronic device provided in the third embodiment of the present disclosure. As shown in fig. 6, the electronic device of this embodiment includes a camera, a processor, a memory, and an LED electronic display screen.
The camera is used for acquiring a real-time image of a study room of the library;
the processor is used for operating the face detection model and taking the real-time image acquired by the camera as input image information;
the memory is used for storing pre-trained model parameters and image information;
the LED electronic display screen is used for detecting the number of the face frames detected in real time and the surplus seat of the study room set according to requirements.
Matters not described in detail in the present invention are applicable to the prior art.
The above embodiments are merely illustrative of the technical ideas and features of the present invention, and the purpose thereof is to enable those skilled in the art to understand the contents of the present invention and implement the present invention, and not to limit the protection scope of the present invention. All equivalent changes and modifications made according to the spirit of the present invention should be covered within the protection scope of the present invention.

Claims (8)

1. A smart library people flow monitoring method based on face detection, characterized by comprising the following steps: acquiring real-time study room images through a camera, sliding kernels of different sizes over the images as preprocessing, performing face detection through a pre-trained MTCNN (multi-task cascaded convolutional neural network), retaining the high-confidence face detection boxes, determining and counting the real-time number of people in the study room according to the detected face detection boxes, displaying the real-time remaining seats of the study room on an electronic screen, and prompting readers to go to a study room with remaining seats when the number falls below a preset threshold;
the face detection comprises: sending an image into the MTCNN face detection neural network; rotating and scaling the input image to generate an image pyramid; passing the image pyramid into the first-level network of the MTCNN, the P-net, which generates target candidate boxes by detecting faces without considering confidence and passes them into the next-level network, the R-net, for face recognition; the R-net scores the confidence of the detection boxes containing face images handed over by the P-net, deletes the face detection boxes below a preset confidence threshold, computes the overlap between target boxes via the IoU, removes the duplicate boxes passed on by the P-net with soft non-maximum suppression (Soft-NMS), and crops and scales the remaining face detection boxes before inputting them into the next-level O-net; the O-net removes duplicate face detection boxes again with Soft-NMS, computes the position of each target region in the original image from the transmitted region coordinates, and marks it there with a rectangular box, which is the final detected face detection box.
2. The intelligent library people flow monitoring method based on face detection as claimed in claim 1, wherein: the preprocessing process comprises the steps of cutting an image; and performing image enhancement on the cut image, increasing the image contrast by adopting a point operation algorithm of a spatial domain, and performing operation on pixel points on the image.
3. The smart library people flow monitoring method based on face detection as claimed in claim 1, wherein the MTCNN is optimized by adding a BN (batch normalization) layer between the convolution layer and the ReLU layer in each of the three cascaded MTCNN networks to accelerate network convergence, and by introducing depthwise separable convolution to reduce the parameter and computation cost.
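The parameter saving from depthwise separable convolution can be shown with a simple count. This is a generic illustration (channel sizes chosen arbitrarily), not the MTCNN's actual layer dimensions.

```python
def standard_conv_params(c_in, c_out, k):
    """Weight count of a standard k x k convolution (bias ignored)."""
    return c_in * c_out * k * k

def separable_conv_params(c_in, c_out, k):
    """Depthwise k x k convolution (one filter per input channel) followed
    by a 1 x 1 pointwise convolution that mixes the channels."""
    return c_in * k * k + c_in * c_out

standard = standard_conv_params(32, 64, 3)    # 32 * 64 * 9 = 18432
separable = separable_conv_params(32, 64, 3)  # 32 * 9 + 32 * 64 = 2336
```

For a 3x3 kernel the separable form needs roughly 1/k^2 + 1/c_out of the standard weights, here a reduction of about 8x, which is why it suits a real-time monitoring pipeline.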
4. The smart library people flow monitoring method based on face detection as claimed in claim 1, wherein the detection system calculates the usage time of a seat through face detection frame coordinate positioning: when a face appears at a seat, an ID value is assigned and the start time is recorded; compared with the image detected in the previous frame, when the coordinates of the face frame with the same initial coordinates change by more than a certain threshold, the reader is judged to have left the seat, and the time at that moment is recorded so as to calculate the seat usage duration.
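A hypothetical sketch of this seat-timing logic. The `SeatTracker` class, its matching-by-centre-distance rule, and the `threshold` value are illustrative assumptions; the patent only specifies ID assignment, start-time recording, and a coordinate-change threshold.

```python
import math

class SeatTracker:
    """Illustrative seat-usage timer: assign an ID and a start time when a
    face first appears, and treat a frame-to-frame centre shift larger than
    `threshold` pixels as the reader leaving the seat."""

    def __init__(self, threshold=80.0):
        self.threshold = threshold
        self.seats = {}      # seat_id -> (last_centre, start_time)
        self.next_id = 0

    @staticmethod
    def _centre(box):
        x1, y1, x2, y2 = box
        return ((x1 + x2) / 2.0, (y1 + y2) / 2.0)

    def update(self, box, now):
        """Match a detected face box to an existing seat by centre distance,
        or open a new seat record; return the seat ID."""
        c = self._centre(box)
        for seat_id, (last_c, start) in self.seats.items():
            if math.dist(last_c, c) <= self.threshold:
                self.seats[seat_id] = (c, start)
                return seat_id
        seat_id = self.next_id
        self.next_id += 1
        self.seats[seat_id] = (c, now)
        return seat_id

    def leave(self, seat_id, now):
        """Close the seat record and return the usage duration."""
        _, start = self.seats.pop(seat_id)
        return now - start
```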
5. The smart library people flow monitoring method based on face detection as claimed in claim 1, wherein the seat allowance is judged by a coordinate-change algorithm: whether a reader enters the study room or leaves a seat is determined according to the degree of coordinate change of the face detection frame between two consecutive frames.
6. The smart library people flow monitoring method based on face detection as claimed in claim 1, further comprising sending the real-time people flow information to the library access control platform, and temporarily stopping readers from entering when the number of people reaches a set value.
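The seat-allowance display of claim 1 and the access-control gate of claim 6 reduce to two small rules; the function names below are hypothetical, chosen only to illustrate the thresholds involved.

```python
def seat_allowance(total_seats, face_count):
    """Remaining seats shown on the electronic screen (never negative)."""
    return max(total_seats - face_count, 0)

def entry_allowed(face_count, capacity):
    """Access-control rule: temporarily pause entry once the real-time
    face count reaches the configured capacity."""
    return face_count < capacity
```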
7. The smart library people flow monitoring method based on face detection as claimed in claim 1, further comprising combining the real-time people flow detection of the library with online study room seat reservation, and displaying the seat allowance online according to the real-time detected number of people, for readers to check and reserve seats.
8. A smart library people flow monitoring device based on face detection, characterized by comprising: an image acquisition module, a detection module, an information interaction module, a camera, a processor, a memory and an electronic display screen;
the image acquisition module is used for acquiring, in real time, images captured by the camera in the library study room;
the detection module is used for detecting face areas in the image content, performing face detection on the images acquired in real time in the library study room through the pre-trained MTCNN face detection model to obtain face detection frames; the MTCNN face detection algorithm introduces depthwise separable convolution to reduce the number of network layers and parameters, and retains face detection frames with higher confidence through Soft-NMS;
the information interaction module is used for being combined with the online study room reservation module and displaying the real-time seat allowance of the study room of the library through data integration;
the camera is used for acquiring a real-time image of a study room of the library;
the processor is used for operating a face detection model and an algorithm;
the memory is used for storing pre-trained model parameters and image information;
the electronic display screen is used for displaying the number of face frames detected in real time and the remaining seat allowance of the study room, which can be configured as required.
CN202210379735.4A 2022-04-12 2022-04-12 Intelligent library people flow monitoring method and device based on face detection Pending CN114708554A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210379735.4A CN114708554A (en) 2022-04-12 2022-04-12 Intelligent library people flow monitoring method and device based on face detection

Publications (1)

Publication Number Publication Date
CN114708554A true CN114708554A (en) 2022-07-05

Family

ID=82174703

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210379735.4A Pending CN114708554A (en) 2022-04-12 2022-04-12 Intelligent library people flow monitoring method and device based on face detection

Country Status (1)

Country Link
CN (1) CN114708554A (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116895047A (en) * 2023-07-24 2023-10-17 北京全景优图科技有限公司 Rapid people flow monitoring method and system
CN116895047B (en) * 2023-07-24 2024-01-30 北京全景优图科技有限公司 Rapid people flow monitoring method and system

Similar Documents

Publication Publication Date Title
WO2021208275A1 (en) Traffic video background modelling method and system
Hsieh et al. A real time hand gesture recognition system using motion history image
WO2018188453A1 (en) Method for determining human face area, storage medium, and computer device
US9355432B1 (en) Method and system for automatically cropping images
US9552622B2 (en) Method and system for automatically cropping images
CN106845383A (en) People's head inspecting method and device
CN109101602A (en) Image encrypting algorithm training method, image search method, equipment and storage medium
CN107330371A (en) Acquisition methods, device and the storage device of the countenance of 3D facial models
CN108764085A (en) Based on the people counting method for generating confrontation network
CN112906545B (en) Real-time action recognition method and system for multi-person scene
US20220129682A1 (en) Machine-learning model, methods and systems for removal of unwanted people from photographs
CN111062263B (en) Method, apparatus, computer apparatus and storage medium for hand gesture estimation
CN110210474A (en) Object detection method and device, equipment and storage medium
CN103902958A (en) Method for face recognition
CN110348381A (en) A kind of video behavior recognition methods based on deep learning
CN112733802B (en) Image occlusion detection method and device, electronic equipment and storage medium
CN111080746B (en) Image processing method, device, electronic equipment and storage medium
CN112836625A (en) Face living body detection method and device and electronic equipment
CN113850136A (en) Yolov5 and BCNN-based vehicle orientation identification method and system
CN114708554A (en) Intelligent library people flow monitoring method and device based on face detection
CN106529441A (en) Fuzzy boundary fragmentation-based depth motion map human body action recognition method
WO2022095818A1 (en) Methods and systems for crowd motion summarization via tracklet based human localization
CN113065379B (en) Image detection method and device integrating image quality and electronic equipment
CN115346169B (en) Method and system for detecting sleep post behaviors
CN109447016A (en) A kind of demographic method and system between adding paper money based on structure light

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination