CN109902628B - Library seat management system based on the visual Internet of Things - Google Patents


Info

Publication number
CN109902628B
CN109902628B (application CN201910150021.4A)
Authority
CN
China
Prior art keywords
reader
seat
image
library
management system
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201910150021.4A
Other languages
Chinese (zh)
Other versions
CN109902628A (en)
Inventor
伍冯洁
肖颖
梁梓慧
黄文恺
陈伟涛
谭成威
林佳翰
李锦韬
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Guangzhou University
Original Assignee
Guangzhou University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Guangzhou University
Priority to CN201910150021.4A
Publication of CN109902628A
Application granted
Publication of CN109902628B
Legal status: Active
Anticipated expiration

Abstract

The invention discloses a library seat management system based on the visual Internet of Things, which comprises: a reader identity authentication module, which authenticates the identity of the reader at a seat through face recognition; a reader real-time posture detection module, which detects the reader's posture and judges whether the reader has left the seat; and an overtime reminding module, which starts timing when the reader leaves the seat and sends a reminder to the reader and/or an administrator once the time away exceeds a threshold. The system can automatically identify whether a reader has left a seat, automatically start timing upon detecting the departure, and, according to preset conditions, remind the reader to return to the seat or notify a staff member to reclaim it, thereby addressing the unreasonable seat allocation and management of current libraries and the pain point of long-term seat occupation.

Description

Library seat management system based on the visual Internet of Things
Technical Field
The invention relates to the technical field of visual Internet of things, in particular to a library seat management system based on image processing and real-time posture detection.
Background
Reading rooms and self-study rooms are dedicated places for readers to consult documents and study in a library. With the rising education level in China, the audience of higher education keeps growing, the number of people searching for materials and studying in libraries increases day by day, and reading-room seats have become a scarce learning resource. In colleges, the allocation of library reading seats has long been a pain point. "Malicious seat occupation" occurs frequently, which not only wastes precious learning resources but also easily leads to incidents such as loss of personal property and friction between seat occupiers and other readers.
In view of the above problems, it is of great practical significance to combine multiple technologies to solve the optimization problem of library seat resource allocation.
Face recognition is an identity authentication technology that extracts facial features in real time through a camera and compares them with data in a face database to determine identity. It requires few deployed terminals and offers high recognition accuracy, high recognition efficiency, and low cost, so it is well suited to settings with a heavy identification workload but no requirement for absolute accuracy, such as public places.
Real-time posture detection collects the actions of monitored persons through a camera and analyzes, by algorithm, whether those actions are acceptable; the technology is well suited to detecting specific behaviors in public places.
Disclosure of Invention
The invention aims to overcome the defects of the prior art by providing a library seat management system based on the visual Internet of Things, solving the problems that current library seat allocation and management are unreasonable and that seats are occupied for long periods. Cameras in the library collect readers' face images for face recognition, realizing contactless identity authentication: the reader's identity is obtained and at the same time matched with the seat where the reader sits, establishing the reader's right to use the seat until it is reclaimed. While the reader reads, the system detects the reading state in real time through the cameras; the whole process is completed automatically by the system's algorithms without manual monitoring. The system can automatically identify whether the reader has left the seat, automatically start timing upon detecting the departure, and, according to preset conditions, remind the reader to return or notify staff to reclaim the seat. Communication between the system and readers, and between the system and staff, takes place over the local area network and the mobile network, in the form of web-page reminders and SMS messages.
The purpose of the invention is realized by the following technical scheme:
a library seat management system based on a visual Internet of things comprises:
A high-definition camera is arranged in the library reading area; a reader's face image is acquired through the camera; the collected face image is preprocessed; feature values of the reader's face image are extracted;
and the extracted face image feature data are compared with the templates in the system's face database; when the similarity exceeds a certain threshold, the reader's identity information is output, completing contactless reader identity authentication.
The system performs real-time computation and analysis on the video stream collected by the camera to detect the reading state of a reader, including temporary departure from the seat (leaving without picking up belongings).
The system performs real-time computation and analysis on the video stream collected by the camera to detect the reading state of a reader, including returning to the seat.
The system performs real-time computation and analysis on the video stream collected by the camera to detect the reading state of readers, including alarming behavior or violent conflict between readers.
The system performs real-time computation and analysis on the video stream collected by the camera to detect the reading state of a reader, including other violations of the library's reading regulations.
The system performs real-time computation and analysis on the video stream collected by the camera to detect the reading state of a reader, including the end of seat use (the reader picks up belongings and performs related actions when leaving); the reader is then unbound from the seat and the system reclaims the seat.
When the reader leaves the seat, the system automatically counts the time.
The system's timeout threshold can be customized, and the system can automatically and reasonably allocate usage time to each reader according to the time limit for which the reader applied to use the seat.
If a reader is away from the seat for too long, the system reminds the reader through web messages and SMS messages.
If the reader remains away, the system issues a seat-reclaim warning through web messages and SMS messages.
If the reader still does not return, the system notifies the library administrator through web messages and SMS messages to reclaim the seat.
The readers, the system and the library manager communicate in real time through the local area network and the mobile network.
When a staff member reclaims a seat, the reader's personal belongings are stored by a dedicated storage device in a designated area.
After the seat is reclaimed, the original reader is notified by web message or SMS that the personal belongings have been collected.
Compared with the prior art, the invention has the following advantages and beneficial effects:
1. Unique reader identity authentication is completed, and the reader and the seat are matched one-to-one during the reading period; when the reader leaves briefly (to visit the washroom, fetch a book from a shelf, ask a question, and so on), the seat cannot be occupied by others.
2. The reader's reading state can be detected in real time: events such as a reader taking or leaving a seat are all visible to the system. This greatly improves the utilization of library seat resources, improves the reading and study experience, and solves the pain point of seat hogging. Timeout reminders are timely, and readers' original belongings are kept safe after seats are reclaimed, greatly reducing the risk of lost property.
3. Little additional equipment is required: a hundred-square-meter reading space containing dozens of seats needs only four cameras, so development and maintenance costs are extremely low and the system is easy to popularize across society.
Drawings
FIG. 1 is a schematic diagram of a library management system according to an embodiment.
Fig. 2 is a flowchart of an embodiment of identity authentication based on image processing.
FIG. 3 is a flow chart of real-time posture detection.
FIG. 4 is a diagram of single-person pose estimation according to an embodiment.
FIG. 5 is a diagram illustrating multi-person pose estimation according to an embodiment.
FIG. 6 is a flowchart illustrating an embodiment of managing system message alerts and processing.
Fig. 7 is a schematic diagram of camera partitioning (only one camera is illustrated, monitoring the n seats within a certain radius rather than a single row of readers).
Detailed Description
The present invention will be described in further detail with reference to examples and drawings, but the present invention is not limited thereto.
Example 1
A library seat management system based on image processing and real-time posture detection comprises: an identity authentication system based on face recognition; a wireless communication module based on the wireless local area network and mobile data; a real-time posture detection module for the reader's reading state; and a timeout reminding and background data management system. One application scenario is described as follows:
In a library reading room, four cameras are mounted at the four corners of the ceiling, and a number of reader seats are distributed in the room. The seats may be placed irregularly or arranged in columns. Each seat is numbered according to its position in the library, and related information such as seat coordinates and corresponding numbers is entered into the library's seat management system in advance.
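To illustrate, a minimal sketch of how the pre-registered seat information (numbers, coordinates, governing camera) might be organized; the `Seat` structure and field names are assumptions, not the patent's actual schema:

```python
from dataclasses import dataclass

@dataclass
class Seat:
    number: str      # seat number printed on the desk, e.g. "A-07"
    x: float         # seat coordinates in the reading-room floor plan
    y: float
    camera_id: int   # which of the 4 ceiling cameras governs this seat

# Seat coordinates and numbers are entered into the system in advance.
seat_registry = {
    "A-07": Seat("A-07", 3.2, 5.8, camera_id=1),
    "A-08": Seat("A-08", 4.0, 5.8, camera_id=1),
    "B-01": Seat("B-01", 9.5, 2.1, camera_id=3),
}

def seats_governed_by(camera_id: int) -> list[Seat]:
    """Return the seat set governed by one camera (cf. fig. 7)."""
    return [s for s in seat_registry.values() if s.camera_id == camera_id]
```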
Before the reading room opens its access to readers, readers need to register by having their face images collected.
A registered reader enters the reading room and arrives at a seat. Any mobile terminal of the reader, such as a mobile phone, tablet, or laptop, joins the library's local area network; the reader opens the library management system software on the terminal, inputs the seat number, and then, following the software prompts, looks directly at the corresponding camera and holds still for 1-2 seconds so that the camera can capture a clear face image. The real-time video data stream collected by the camera is uploaded to the control center.
Several frames of face images are selected from the real-time video data stream collected by the camera as comparison material for face recognition, completing contactless reader identity authentication, after which the reader can start reading or studying. The camera continues to record video in the reading room.
The system performs real-time computation and analysis on the video stream collected by the camera to detect the reader's reading state, including:
1. Temporary departure from the seat (leaving without picking up belongings).
2. Returning to the seat.
3. End of seat use (the reader picks up belongings and performs related actions when leaving).
4. Alarming behavior or violent conflict between readers.
5. Other violations of the library's reading regulations.
If the reader has finished using the seat, the reader is unbound from the seat and the system reclaims it.
When the reader leaves the seat, the system automatically counts the time.
If a reader is away from the seat for too long (a temporary departure that exceeds the limit), the system reminds the reader via web and SMS messages, issues a seat-reclaim warning, and notifies the library administrator to reclaim the seat.
When a staff member reclaims the seat, the reader's personal belongings are stored by a dedicated storage device in a designated area. After the seat is reclaimed, the original reader is notified by web message or SMS that the belongings have been collected.
The system's timeout thresholds can be customized, and the system can automatically and reasonably allocate usage time to each reader according to the time limit for which the reader applied. In this example, a reminder is sent at 15 minutes, a warning at 30 minutes, and the seat is reclaimed at 1 hour.
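A minimal sketch of the escalation logic of this embodiment (15-minute reminder, 30-minute warning, 1-hour reclaim); the function names and the stage bookkeeping are illustrative assumptions:

```python
# Hypothetical escalation thresholds from this embodiment (seconds).
REMIND_AT, WARN_AT, RECLAIM_AT = 15 * 60, 30 * 60, 60 * 60

def notify_reader(msg: str) -> None:
    print("to reader:", msg)   # stands in for web message / SMS delivery

def notify_admin(msg: str) -> None:
    print("to admin:", msg)

def check_timeout(away_seconds: float, notified: set) -> set:
    """Escalate once per stage; 'notified' records stages already sent."""
    if away_seconds >= RECLAIM_AT and "reclaim" not in notified:
        notify_admin("reclaim seat")            # staff reclaims the seat
        notified.add("reclaim")
    elif away_seconds >= WARN_AT and "warn" not in notified:
        notify_reader("seat-reclaim warning")   # web message + SMS
        notified.add("warn")
    elif away_seconds >= REMIND_AT and "remind" not in notified:
        notify_reader("please return to your seat")
        notified.add("remind")
    return notified
```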
If alarming behavior, violent conflict, or other violations of the library's reading regulations occur among readers, the system reminds staff via web and SMS messages to come and handle the situation.
As shown in fig. 1, the library seat management system based on image processing and real-time posture detection specifically comprises: a library administrator client, reader interaction terminals, cameras, a wireless communication system, and a control center.
The library administrator client, the reader interaction terminals, the cameras, and the control center communicate in real time through the wireless communication system.
The wireless communication system includes a wireless local area network and mobile data communication, through which readers communicate with the management system. A library usually has its own local area network, and a mobile phone, tablet, laptop, or similar device can serve as the contact terminal: when the reader's terminal has joined the library's local area network, the reader and the management system communicate over the WLAN; otherwise, they communicate over the mobile network.
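A sketch of that routing choice (web message over the WLAN when the reader's terminal is connected, SMS over the mobile network otherwise); all names here are illustrative:

```python
def push_web_message(terminal_id, message):
    print(f"[web] -> {terminal_id}: {message}")  # stand-in for LAN push

def send_sms(phone, message):
    print(f"[sms] -> {phone}: {message}")        # stand-in for SMS gateway

def send_notice(reader, message):
    """Route a notice over the WLAN if possible, else the mobile network."""
    if reader.get("on_library_lan"):
        push_web_message(reader["terminal_id"], message)
    else:
        send_sms(reader["phone"], message)

send_notice({"on_library_lan": True, "terminal_id": "T-12"}, "Seat A-07 bound")
```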
As shown in fig. 2, the process of confirming the identity of a reader through face recognition includes:
Face image preprocessing: directly captured face images often cannot be used as-is because of interference from factors such as ambient light and random events. Preprocessing such as gray-level correction and noise filtering yields face images better suited to feature value extraction.
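A minimal OpenCV sketch of this preprocessing step; the specific operators (histogram equalization for gray-level correction, Gaussian blur for noise filtering) are assumptions, since the text only names the categories:

```python
import cv2

def preprocess_face(bgr_image):
    """Gray-level correction and noise filtering before feature extraction."""
    gray = cv2.cvtColor(bgr_image, cv2.COLOR_BGR2GRAY)
    corrected = cv2.equalizeHist(gray)                  # gray-level correction
    denoised = cv2.GaussianBlur(corrected, (3, 3), 0)   # noise filtering
    return denoised
```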
Face feature extraction: features commonly used for face recognition include face image transform-coefficient features, visual features, pixel statistics features, and face image algebraic features. Face feature extraction is in fact the process of modeling facial features. This embodiment adopts a Gabor feature algorithm to extract face feature values; it provides a multi-scale, multi-orientation detailed description of the face image and is strong at describing detail and local structure. When a face image selected from the real-time video data stream does not yield enough of the required feature points, acquisition may fail; success and failure during acquisition are prompted accordingly, for example a re-acquisition prompt, until enough of the required feature points are obtained.
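A sketch of multi-scale, multi-orientation Gabor filtering with OpenCV; the kernel sizes and filter parameters are illustrative, not taken from the patent:

```python
import cv2
import numpy as np

def gabor_features(gray_face):
    """Filter the face with a small Gabor bank and pool the responses."""
    features = []
    for ksize in (7, 11):                             # two scales
        for theta in np.arange(0, np.pi, np.pi / 4):  # four orientations
            kernel = cv2.getGaborKernel(
                (ksize, ksize), sigma=2.0, theta=theta,
                lambd=6.0, gamma=0.5)
            response = cv2.filter2D(gray_face, cv2.CV_32F, kernel)
            features.extend([response.mean(), response.std()])
    return np.array(features)
```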
Feature matching and identity authentication: the extracted face image feature data are searched and matched against the data in the system's face database; when the similarity between the face image of the reader to be authenticated and a face template pre-stored in the database exceeds a preset threshold, the matching result is output and the reader's identity is finally determined. Various mature face recognition algorithms exist, such as recognition based on facial feature points, recognition based on the whole face image, and recognition based on neural networks; this embodiment performs reader identity authentication with a recognition algorithm based on facial feature points.
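A minimal sketch of the threshold-based matching step using cosine similarity against stored templates; the similarity measure, threshold value, and database layout are assumptions:

```python
import numpy as np

SIM_THRESHOLD = 0.8  # assumed; the patent leaves the threshold configurable

def authenticate(query_vec, face_db):
    """face_db: dict mapping reader_id -> stored template feature vector."""
    best_id, best_sim = None, -1.0
    for reader_id, template in face_db.items():
        sim = float(np.dot(query_vec, template) /
                    (np.linalg.norm(query_vec) * np.linalg.norm(template)))
        if sim > best_sim:
            best_id, best_sim = reader_id, sim
    # Identity is output only when similarity exceeds the preset threshold.
    return best_id if best_sim >= SIM_THRESHOLD else None
```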
The camera can accurately locate a reader and "knows" the positions of the set of seats it governs. When a reader sits in one of those seats and looks at the camera for identification, the system automatically establishes the binding between the reader and the seat and sends the related binding information to the reader. Voice and light prompts accompany the acquisition process: the mobile terminal shows message prompts, and the camera gives acousto-optic prompts (adjustable to the specific environment). In a library environment, face image acquisition, successful acquisition, and re-acquisition are indicated by, for example, steady brightness, a single flash, and a double flash of the camera light; after a face image is successfully acquired, the backend additionally sends a message to the reader's mobile terminal.
The library's management system needs to obtain the following states of a reader: whether the reader has left the seat; whether the reader has returned to the seat; and whether there is excessive behavior between readers. The steps for obtaining the reader state include:
S3-1, obtaining digital video information from the camera and performing image preprocessing.
S3-2, performing moving-image processing on the acquired video information to obtain each moving target region, as follows:
Convert the continuous surveillance video frames $k{-}1$, $k$, $k{+}1$ into gray images $f_{k-1}(x,y)$, $f_k(x,y)$, $f_{k+1}(x,y)$, and detect the binary image of the moving object in frame $k$ by the symmetric difference method, according to the relative change between the moving object and the background image, to obtain the contour of the moving object. The calculation formulas are:

$$d_{(k-1,k)}(x,y)=\big|f_k(x,y)-f_{k-1}(x,y)\big|$$
$$d_{(k,k+1)}(x,y)=\big|f_{k+1}(x,y)-f_k(x,y)\big|$$
$$b_k(x,y)=b_{(k-1,k)}(x,y)\cap b_{(k,k+1)}(x,y)$$

where $d_{(k-1,k)}(x,y)$ and $d_{(k,k+1)}(x,y)$ are the gray difference images of two adjacent frames, $b_{(k-1,k)}(x,y)$ and $b_{(k,k+1)}(x,y)$ are the binarized versions of those difference images, and $b_k(x,y)$ is the binary image of the moving object in frame $k$.
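The symmetric (three-frame) difference above maps directly onto a few array operations; a sketch in OpenCV, with an assumed binarization threshold:

```python
import cv2

DIFF_THRESHOLD = 25  # assumed binarization threshold

def moving_object_mask(f_prev, f_k, f_next):
    """Binary image b_k of the moving object in frame k (grayscale inputs)."""
    d1 = cv2.absdiff(f_k, f_prev)    # d_(k-1,k)
    d2 = cv2.absdiff(f_next, f_k)    # d_(k,k+1)
    _, b1 = cv2.threshold(d1, DIFF_THRESHOLD, 255, cv2.THRESH_BINARY)
    _, b2 = cv2.threshold(d2, DIFF_THRESHOLD, 255, cv2.THRESH_BINARY)
    return cv2.bitwise_and(b1, b2)   # b_k = b_(k-1,k) AND b_(k,k+1)
```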
S3-3, refining the obtained binary image with an edge detection algorithm to obtain corrected contour images. Whether the reader has left the seat or returned to it can be determined from the multi-frame contour images. From a feature set over multiple frames, for example how many frames it takes a certain reader to move from position A to position B, the reader's movement speed can be computed and the intensity of the monitored person's actions judged. Readers' motion features can also be extracted through optical flow analysis to judge whether there is excessive behavior.
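A sketch of judging action intensity from dense optical flow (here Farnebäck's method, one possible choice); the intensity threshold is an assumption:

```python
import cv2
import numpy as np

INTENSITY_THRESHOLD = 8.0  # assumed mean-magnitude threshold (pixels/frame)

def action_intensity(gray_prev, gray_next):
    """Mean optical-flow magnitude as a crude measure of action intensity."""
    flow = cv2.calcOpticalFlowFarneback(
        gray_prev, gray_next, None,
        pyr_scale=0.5, levels=3, winsize=15,
        iterations=3, poly_n=5, poly_sigma=1.2, flags=0)
    magnitude = np.linalg.norm(flow, axis=2)  # per-pixel flow magnitude
    return float(magnitude.mean())

def is_overexcited(gray_prev, gray_next) -> bool:
    return action_intensity(gray_prev, gray_next) > INTENSITY_THRESHOLD
```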
The above steps are carried out by the video intelligent processing unit of the control center. The unit performs image preprocessing, moving-object detection, and correction on the incoming video, analyzes the optical-flow intensity of each moving object with the optical flow method, and extracts features from the optical-flow information of each moving object; the posture analysis system builds a behavior-pattern judgment model from the extracted features and judges the reading state of a reader by comparison against predetermined thresholds (such as optical-flow intensity, speed, and direction).
Real-time detection of the reader's reading state comprises single-person pose estimation and multi-person pose estimation.
Single-person pose estimation:
the gesture recognition is considered as a structured prediction problem (structured prediction). Suppose that
Figure BDA0001981262850000071
For the set of all joint positions (u, v) in the picture, then ≥>
Figure BDA0001981262850000072
Representing the pixel location of the joint point p. The human body posture estimation aims to be as follows: identifying P person body joint point position Y = (Y) in picture 1 ,...,Y P ). This estimator consists of a multi-clas predictor sequence, as in fig. 4:
wherein g is t (. Cndot.) is the classifier model to be trained to predict the location of the individual human joint points in each layer.
For all T e {1,. Eta., T }, the classifier g t Confidence value of each joint point position of output
Figure BDA0001981262850000073
The confidence values are all based on the feature x extracted from a certain point of the image z ∈R d And Y of classifier output in previous layer P The domain space content information is classified. Wherein:
Figure BDA0001981262850000074
Figure BDA0001981262850000075
when stage t =1
Figure BDA0001981262850000076
Remember at each position of the picture z = (u, v) T All confidence scores for the joint position p are
Figure BDA0001981262850000077
Where w is the width of the picture and h is the height of the picture, then:
Figure BDA0001981262850000081
when stage t > 1, the classifier needs to predict the confidence value based on two inputs:
1. picture feature x consistent with above z ∈R d
2. Spatial content information output by a classifier in a previous layer
Figure BDA0001981262850000082
Since pose estimation often needs to refer to surrounding image information and may be affected by occlusion, the characteristics of a CNN (convolutional neural network) can be introduced so that later stages have a larger receptive field and thus take the surrounding information into account.
The flow of the whole algorithm can be summarized as follows:
1. Identify all persons appearing in the image and obtain the joint points of each person by regression;
2. Remove the influence of other people according to the center map;
3. Obtain the final result by repeated prediction.
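To make the stage-wise structure concrete, a minimal PyTorch-style sketch: stage 1 predicts confidence maps from image features alone, and each later stage re-predicts from the image features concatenated with the previous stage's maps. Layer sizes and the joint count are illustrative, not the patent's architecture:

```python
import torch
import torch.nn as nn

P = 14  # number of body joints (illustrative)

class Stage(nn.Module):
    """One classifier g_t: features (+ previous belief maps) -> belief maps."""
    def __init__(self, in_channels):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(in_channels, 64, 7, padding=3), nn.ReLU(),
            nn.Conv2d(64, P + 1, 1))  # P joints + background channel

    def forward(self, x):
        return self.net(x)

class PoseMachine(nn.Module):
    def __init__(self, feat_channels=32, stages=3):
        super().__init__()
        self.g1 = Stage(feat_channels)
        self.gt = nn.ModuleList(
            Stage(feat_channels + P + 1) for _ in range(stages - 1))

    def forward(self, features):
        beliefs = self.g1(features)                       # stage t = 1
        for g in self.gt:                                 # stages t > 1
            beliefs = g(torch.cat([features, beliefs], dim=1))
        return beliefs                                    # (N, P+1, h, w)
```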
Multi-person pose estimation
Multi-person pose estimation builds on single-person pose estimation, as shown in fig. 5. The overall flow of the model is as follows:
1. Read a picture of size $w \times h$;
2. Compute an image feature $F$ of the same size $w \times h$ through the first 10 layers of a VGG-19 network;
3. Introduce two branches of different convolutional neural networks:
a keypoint confidence network $S = (S_1, S_2, \ldots, S_J)$, where $J$ denotes the $J$ parts of the human body and $S_j \in \mathbb{R}^{w \times h}$;
a keypoint affinity vector field $L = (L_1, L_2, \ldots, L_C)$, where $C$ is the number of limb connections and $L_c \in \mathbb{R}^{w \times h \times 2}$;
4. Cluster the keypoints to obtain the skeletons.
$S$ is the confidence network and $L$ is the affinity vector field network; at each stage $t$ both are refined from the image feature $F$ and the previous stage's outputs:

$$S^t = \rho^t\big(F, S^{t-1}, L^{t-1}\big), \quad \forall t \ge 2$$
$$L^t = \phi^t\big(F, S^{t-1}, L^{t-1}\big), \quad \forall t \ge 2$$

The loss function of the whole model is the summed squared error between the ground-truth ($*$) and predicted values of the two convolutional networks:

$$f_S^t = \sum_{j=1}^{J} \sum_{\mathbf{p}} W(\mathbf{p}) \cdot \big\lVert S_j^t(\mathbf{p}) - S_j^{*}(\mathbf{p}) \big\rVert_2^2$$
$$f_L^t = \sum_{c=1}^{C} \sum_{\mathbf{p}} W(\mathbf{p}) \cdot \big\lVert L_c^t(\mathbf{p}) - L_c^{*}(\mathbf{p}) \big\rVert_2^2$$
$$f = \sum_{t=1}^{T} \big( f_S^t + f_L^t \big)$$

where $W(\mathbf{p})$ is a binary mask that is $0$ at unlabeled pixels $\mathbf{p}$.
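A numpy sketch of the staged loss above, summing the weighted squared errors of the confidence maps S and affinity fields L over all stages; shapes follow the definitions above, with W the per-pixel weight mask:

```python
import numpy as np

def stage_loss(S_pred, S_true, L_pred, L_true, W):
    """S: (J, h, w); L: (C, h, w, 2); W: (h, w) binary weight mask."""
    f_S = np.sum(W * np.sum((S_pred - S_true) ** 2, axis=0))
    f_L = np.sum(W * np.sum((L_pred - L_true) ** 2, axis=(0, 3)))
    return f_S + f_L

def total_loss(stages, W):
    """stages: list of (S_pred, S_true, L_pred, L_true) tuples, one per t."""
    return sum(stage_loss(Sp, St, Lp, Lt, W) for Sp, St, Lp, Lt in stages)
```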
In the foregoing process, a set of discrete candidate positions for the keypoints is obtained from the confidence network; because the picture may contain multiple human bodies, or incorrect keypoints, each keypoint may have many different candidate positions, and a score must be computed for the candidate keypoints.
Suppose the model obtains the set of all candidate keypoints

$$\mathcal{D}_J = \big\{\, d_j^m : j \in \{1,\ldots,J\},\ m \in \{1,\ldots,N_j\} \,\big\}$$

where $N_j$ is the number of candidate positions of keypoint $j$ and $d_j^m \in \mathbb{R}^2$ is the coordinate of the $m$-th candidate pixel of keypoint $j$.
The goal is to connect the keypoints belonging to the same person into limbs (torso connections); to this end a variable is defined to measure whether two candidate points belong to one person:

$$z_{j_1 j_2}^{mn} \in \{0, 1\}$$

namely, $z_{j_1 j_2}^{mn} = 1$ if candidates $d_{j_1}^m$ and $d_{j_2}^n$ are connected and $0$ otherwise, with $\mathcal{Z} = \{ z_{j_1 j_2}^{mn} \}$. For two different keypoints $j_1, j_2$, their corresponding candidate keypoint sets are $\mathcal{D}_{j_1}$ and $\mathcal{D}_{j_2}$.
The correct keypoint connections can be found by solving a linear assignment program:

$$\max_{\mathcal{Z}_c} E_c = \max_{\mathcal{Z}_c} \sum_{m \in \mathcal{D}_{j_1}} \sum_{n \in \mathcal{D}_{j_2}} E_{mn} \cdot z_{j_1 j_2}^{mn}$$

s.t.

$$\forall m \in \mathcal{D}_{j_1}: \ \sum_{n \in \mathcal{D}_{j_2}} z_{j_1 j_2}^{mn} \le 1$$
$$\forall n \in \mathcal{D}_{j_2}: \ \sum_{m \in \mathcal{D}_{j_1}} z_{j_1 j_2}^{mn} \le 1$$

where $E_c$, the weight corresponding to limb $c$, represents the total connection affinity between the two keypoints related to limb $c$, $\mathcal{Z}_c$ is the subset of $\mathcal{Z}$ corresponding to limb $c$, and $E_{mn}$ is the affinity of the candidate pair. The final problem can be seen as:

$$\max_{\mathcal{Z}} E = \sum_{c=1}^{C} \max_{\mathcal{Z}_c} E_c$$
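The per-limb problem above is a maximum-weight bipartite matching; a sketch using `scipy.optimize.linear_sum_assignment` on the affinity matrix E (negated, since SciPy minimizes). Note that the original multi-person pose work uses a greedy relaxation; the Hungarian solver here is one valid way to satisfy the constraints:

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def match_limb(E):
    """E[m, n]: affinity between candidate m of joint j1 and n of joint j2.
    Returns the connections (m, n) with z^{mn}_{j1 j2} = 1."""
    rows, cols = linear_sum_assignment(-E)   # maximize total affinity
    # Each candidate is used at most once, enforcing the two <= 1 constraints;
    # keep only positive-affinity pairs.
    return [(int(m), int(n)) for m, n in zip(rows, cols) if E[m, n] > 0]

# Illustrative affinities between 3 neck candidates and 2 shoulder candidates.
E = np.array([[0.90, 0.10],
              [0.20, 0.80],
              [0.05, 0.10]])
print(match_limb(E))   # [(0, 0), (1, 1)]
```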
The control center of the seat management system mainly undertakes the computation and analysis of face images and video stream data; since monitoring is performed in real time, the data volume is large, and the control center must therefore have strong computing power.
The above embodiments are preferred embodiments of the present invention, but the present invention is not limited to the above embodiments, and any other changes, modifications, substitutions, combinations, and simplifications which do not depart from the spirit and principle of the present invention should be construed as equivalents thereof, and all such modifications are intended to be included in the scope of the present invention.

Claims (5)

1. A library seat management system based on a visual Internet of things, comprising:
a reader identity authentication module: realizing identity authentication of readers at their seats through face recognition;
a reader real-time posture detection module: detecting the posture of a reader and judging whether the reader has left the seat;
an overtime reminding module: starting timing when the reader leaves the seat and, when the time away exceeds a threshold, sending a prompt to the reader and/or an administrator;
the reader real-time posture detection module can also detect the specific posture of the reader and judge whether the reader performs actions that violate the library's regulations; if so, the library administrator is notified;
detecting the specific posture of the reader includes single-person posture detection:
gesture recognition is considered as a structured prediction problem: suppose $\mathcal{Z} \subset \mathbb{R}^2$ is the set of all joint positions $(u,v)$ in the picture; then $Y_p \in \mathcal{Z}$ represents the pixel position of the joint point $p$; the object of human pose estimation is to identify the $P$ body joint positions $Y = (Y_1, \ldots, Y_P)$ in the picture; this estimator consists of a sequence of multi-class predictors;
$g_t(\cdot)$ is the classifier model to be trained for predicting the locations of the individual joint points in each stage; for all $t \in \{1, \ldots, T\}$, the classifier $g_t(\cdot)$ outputs the confidence value $b_t^p(Y_p = z)$ of each joint position; the confidence values are computed from the feature $x_z \in \mathbb{R}^d$ extracted at a certain point $z$ of the image and from the neighborhood spatial context of the $Y_p$ output by the classifier in the previous stage, wherein, at stage $t = 1$:
$$g_1(x_z) \rightarrow \{\, b_1^p(Y_p = z) \,\}_{p \in \{0,\ldots,P\}}$$
writing $\mathbf{b}_t^p \in \mathbb{R}^{w \times h}$ for all confidence scores of the joint position $p$ at each picture position $z = (u,v)^{T}$, where $w$ is the width of the picture and $h$ is its height, then:
$$\mathbf{b}_t^p[u,v] = b_t^p(Y_p = z)$$
when stage $t > 1$, the classifier predicts the confidence value based on two inputs:
1) the picture feature $x_z \in \mathbb{R}^d$, consistent with the above;
2) the spatial context information output by the classifier in the previous stage:
$$g_t\big(x_z,\ \psi_t(z, \mathbf{b}_{t-1})\big) \rightarrow \{\, b_t^p(Y_p = z) \,\}_{p \in \{0,\ldots,P\}}$$
in order to take the image information around the human body and occlusions into account, the characteristics of a CNN convolutional neural network are introduced, so that later stages have a larger receptive field and thus consider the surrounding information;
the whole process can be summarized as follows: identify all persons appearing in the image and regress the joint points of each person; remove the influence of other people according to the center map; obtain the final result by repeated prediction.
2. The library seat management system of claim 1, wherein a high-definition camera is provided in the library seating area; a reader's face image is acquired through the camera; the collected face image is preprocessed; feature values of the reader's face image are extracted; and the extracted face image feature data are compared with templates in a pre-collected reader face database, the reader identity information being output when the similarity exceeds a threshold, completing reader identity authentication.
3. The library seat management system of claim 1, wherein the step of determining whether the reader is away from the seat comprises:
S3-1, obtaining digital video information from a camera and performing image preprocessing;
S3-2, performing moving-image processing on the acquired video information to obtain each moving target region, as follows:
converting the continuous surveillance video frames $k{-}1$, $k$, $k{+}1$ into gray images $f_{k-1}(x,y)$, $f_k(x,y)$, $f_{k+1}(x,y)$, and detecting the binary image of the moving object in frame $k$ by the symmetric difference method according to the relative change between the moving object and the background image, obtaining the contour of the moving object, with the calculation formulas:
$$d_{(k-1,k)}(x,y)=\big|f_k(x,y)-f_{k-1}(x,y)\big|$$
$$d_{(k,k+1)}(x,y)=\big|f_{k+1}(x,y)-f_k(x,y)\big|$$
$$b_k(x,y)=b_{(k-1,k)}(x,y)\cap b_{(k,k+1)}(x,y)$$
wherein $d_{(k-1,k)}(x,y)$ and $d_{(k,k+1)}(x,y)$ are the gray difference images of two adjacent frames, $b_{(k-1,k)}(x,y)$ and $b_{(k,k+1)}(x,y)$ are the binary images of those difference images, and $b_k(x,y)$ is the binary image of the moving object in frame $k$;
S3-3, correcting the obtained binary image with an edge detection algorithm to obtain corrected contour images; whether the reader has left the seat or returned to it can be determined from the multi-frame contour images.
4. The library seat management system of claim 1, wherein detecting the specific posture of the reader comprises multi-person posture detection, the multi-person pose estimation being based on single-person pose estimation as follows:
1) Read a picture of size $w \times h$;
2) Compute an image feature $F$ of the same size $w \times h$ through the first 10 layers of a VGG-19 network;
3) Introduce two branches of different convolutional neural networks:
a keypoint confidence network $S = (S_1, S_2, \ldots, S_J)$, where $J$ denotes the $J$ parts of the human body and $S_j \in \mathbb{R}^{w \times h}$;
a keypoint affinity vector field $L = (L_1, L_2, \ldots, L_C)$, where $L_c \in \mathbb{R}^{w \times h \times 2}$;
4) Cluster the keypoints to obtain the skeletons;
$S$ is the confidence network and $L$ is the affinity vector field network; at each stage $t$:
$$S^t = \rho^t\big(F, S^{t-1}, L^{t-1}\big), \quad \forall t \ge 2$$
$$L^t = \phi^t\big(F, S^{t-1}, L^{t-1}\big), \quad \forall t \ge 2$$
the loss function of the whole model is the summed squared error between the true and predicted values of the two convolutional networks:
$$f_S^t = \sum_{j=1}^{J} \sum_{\mathbf{p}} W(\mathbf{p}) \cdot \big\lVert S_j^t(\mathbf{p}) - S_j^{*}(\mathbf{p}) \big\rVert_2^2$$
$$f_L^t = \sum_{c=1}^{C} \sum_{\mathbf{p}} W(\mathbf{p}) \cdot \big\lVert L_c^t(\mathbf{p}) - L_c^{*}(\mathbf{p}) \big\rVert_2^2$$
$$f = \sum_{t=1}^{T} \big( f_S^t + f_L^t \big)$$
in the foregoing process, a set of discrete candidate positions of the keypoints is obtained from the confidence value network, and a score is calculated for the candidate keypoints;
suppose the model obtains the set of all candidate keypoints
$$\mathcal{D}_J = \big\{\, d_j^m : j \in \{1,\ldots,J\},\ m \in \{1,\ldots,N_j\} \,\big\}$$
wherein $N_j$ is the number of candidate positions of keypoint $j$ and $d_j^m \in \mathbb{R}^2$ is the coordinate of the $m$-th candidate pixel of keypoint $j$;
the goal is to connect the keypoints belonging to the same person into limbs, for which a variable is defined to measure whether two candidate points belong to one person:
$$z_{j_1 j_2}^{mn} \in \{0,1\}$$
namely, $z_{j_1 j_2}^{mn} = 1$ if candidates $d_{j_1}^m$ and $d_{j_2}^n$ are connected and $0$ otherwise, with $\mathcal{Z} = \{ z_{j_1 j_2}^{mn} \}$; for two different keypoints $j_1, j_2$, the corresponding candidate keypoint sets are $\mathcal{D}_{j_1}$ and $\mathcal{D}_{j_2}$;
the correct keypoints are found by solving the linear assignment program:
$$\max_{\mathcal{Z}_c} E_c = \max_{\mathcal{Z}_c} \sum_{m \in \mathcal{D}_{j_1}} \sum_{n \in \mathcal{D}_{j_2}} E_{mn} \cdot z_{j_1 j_2}^{mn}$$
$$\text{s.t.}\quad \forall m \in \mathcal{D}_{j_1}: \sum_{n \in \mathcal{D}_{j_2}} z_{j_1 j_2}^{mn} \le 1, \qquad \forall n \in \mathcal{D}_{j_2}: \sum_{m \in \mathcal{D}_{j_1}} z_{j_1 j_2}^{mn} \le 1$$
wherein $E_c$, the weight corresponding to limb $c$, represents the total connection affinity between the two keypoints related to limb $c$, $\mathcal{Z}_c$ is the subset of $\mathcal{Z}$ corresponding to limb $c$, and $E_{mn}$ is the affinity of the candidate pair; the final problem can be seen as:
$$\max_{\mathcal{Z}} E = \sum_{c=1}^{C} \max_{\mathcal{Z}_c} E_c$$
5. The library seat management system of claim 1, wherein the readers and the library administrator receive reminders through web messages and text messages.
CN201910150021.4A 2019-02-28 2019-02-28 Library seat management system based on the visual Internet of Things Active CN109902628B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910150021.4A CN109902628B (en) Library seat management system based on the visual Internet of Things


Publications (2)

Publication Number Publication Date
CN109902628A CN109902628A (en) 2019-06-18
CN109902628B true CN109902628B (en) 2023-04-07

Family

ID=66945880

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910150021.4A Active CN109902628B (en) Library seat management system based on the visual Internet of Things

Country Status (1)

Country Link
CN (1) CN109902628B (en)

Families Citing this family (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110889393A (en) * 2019-12-10 2020-03-17 上海芯翌智能科技有限公司 Human body posture estimation method and device
CN111611850A (en) * 2020-04-09 2020-09-01 吴子华 Seat use state analysis processing method, system and storage medium
CN112001347B (en) * 2020-08-31 2023-07-21 重庆科技学院 Action recognition method based on human skeleton morphology and detection target
CN115757941A (en) * 2020-11-02 2023-03-07 乐恒冬 Communication method based on shared information
CN113034714A (en) * 2021-03-19 2021-06-25 广东电网有限责任公司 Business transaction timeout statistical method, device, equipment and storage medium
CN113392776B (en) * 2021-06-17 2022-07-12 深圳日海物联技术有限公司 Seat leaving behavior detection method and storage device combining seat information and machine vision
CN113343870B (en) * 2021-06-17 2024-02-23 南京金盾公共安全技术研究院有限公司 Identification enabling method based on Android system mobile equipment
CN113674480A (en) * 2021-07-26 2021-11-19 重庆生产力促进中心 Malicious tampering preventing method based on self-service terminal of government center
CN113516112B (en) * 2021-09-14 2021-11-30 长沙鹏阳信息技术有限公司 Clustering-based method for automatically identifying and numbering regularly arranged objects
CN115174647B (en) * 2022-07-29 2023-10-31 北京印刷学院 Internet of things self-learning seat management system and method based on time rewarding mechanism
CN117479372B (en) * 2023-12-26 2024-03-08 永林电子股份有限公司 Library intelligent light control system

Family Cites Families (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US8688087B2 (en) * 2010-12-17 2014-04-01 Telecommunication Systems, Inc. N-dimensional affinity confluencer
CN102387345B (en) * 2011-09-09 2014-08-06 浙江工业大学 Safety monitoring system based on omnidirectional vision for old people living alone
CN102495997B (en) * 2011-10-30 2014-05-14 南京师范大学 Reading room intelligent management system based on video detection and GIS (geographic information system) image visualization
CN104301697A (en) * 2014-07-15 2015-01-21 广州大学 Automatic public place violence incident detection system and method thereof

Also Published As

Publication number Publication date
CN109902628A (en) 2019-06-18


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant