CN113963378A - Neural network-based elevator taking personnel identification and elevator control method and device - Google Patents
- Publication number: CN113963378A
- Application number: CN202110107093.8A
- Authority: CN (China)
- Prior art keywords: human body, elevator, image, neural network, personnel
- Legal status: Pending (the legal status is an assumption and is not a legal conclusion; Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed)
Classifications
- G06F18/22: Pattern recognition; analysing; matching criteria, e.g. proximity measures
- G06F21/32: Security arrangements; user authentication using biometric data, e.g. fingerprints, iris scans or voiceprints
- G06N3/045: Neural networks; architecture; combinations of networks
- G06N3/08: Neural networks; learning methods
Abstract
The invention discloses a neural-network-based method and device for identifying elevator passengers and controlling an elevator. The method comprises: collecting an elevator scene image and a fingerprint image of the person calling the elevator; performing face detection on the scene image and fingerprint feature detection on the fingerprint image; decomposing each detected face image into a low-frequency subband image, a horizontal edge subband image and a vertical edge subband image; identifying each passenger's identity features from the decomposed images, and comparing the detected fingerprint features against stored fingerprints; when identity recognition and/or fingerprint recognition fails, judging the passenger and/or the caller to be a specific (unrecognized) person and storing that person's face image and/or fingerprint image in a database; and when the fingerprint comparison succeeds, activating the elevator lift keys to switch the elevator from a locked state to a usable state. The invention improves elevator safety.
Description
Technical Field
The invention relates to the technical field of elevator safety, and in particular to a neural-network-based method and device for identifying elevator passengers and controlling an elevator.
Background
With the development of science and technology, buildings have become increasingly intelligent: automation has advanced greatly, and higher requirements are placed on safety protection systems. In high-rise buildings the elevator plays an obvious role, and its safety, like that of the access control system, cannot be ignored. At present there are several ways to secure elevator use; for example, passengers swipe a card to be carried to a designated floor. However, cards are easily lost, forgotten, cracked, or inconvenient to carry, so card-based systems cannot meet the requirement for high elevator safety.
Disclosure of Invention
The invention provides a neural-network-based method and device for identifying elevator passengers and controlling an elevator, aiming to quickly and efficiently identify specific (unrecognized) persons and improve the safety of elevator use.
To this end, the invention adopts the following technical scheme:
a neural-network-based elevator passenger identification and elevator control method, comprising the following steps:
1) collecting an elevator scene image and a fingerprint image of the elevator-calling person;
2) performing face detection on the scene image and fingerprint feature detection on the fingerprint image;
3) decomposing the detected face image of each passenger into a low-frequency subband image, a horizontal edge subband image, a vertical edge subband image and a high-frequency subband image;
4) inputting the low-frequency subband image into a first neural network model to extract the passenger's identity features; inputting the output of the first model together with the horizontal edge subband image into a second neural network model to extract the passenger's face-pose features; inputting the output of the first model together with the vertical edge subband image into a third neural network model to extract the passenger's facial-expression features; comparing the detected fingerprint features against the fingerprints stored in a fingerprint feature library and outputting the comparison result;
5) when identity recognition fails, judging the passenger to be a specific person; when fingerprint comparison fails, judging the caller to be a specific person;
6) when identity recognition and/or fingerprint comparison fails, generating an abnormality alarm signal pushed to the elevator manager and storing the face image and/or fingerprint image of the specific person in a database; after a successful fingerprint comparison, the elevator control system activates the lift keys to switch the elevator from a locked state to a usable state.
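The decision logic of steps 5) and 6) can be sketched as follows. This is a minimal illustration only: the function name, message strings, and boolean interface are hypothetical and not taken from the patent.

```python
# Sketch of the step-5)/6) decision logic: identification failures flag a
# "specific person" (unrecognized individual) for alerting and storage, and
# only a successful fingerprint comparison unlocks the elevator.
def control_elevator(identity_ok: bool, fingerprint_ok: bool):
    """Return (alerts, elevator_unlocked) for one recognition cycle."""
    alerts = []
    if not identity_ok:
        alerts.append("passenger flagged as specific person: store face image")
    if not fingerprint_ok:
        alerts.append("caller flagged as specific person: store fingerprint image")
    # The lift keys are activated only after a successful fingerprint match.
    return alerts, fingerprint_ok

alerts, unlocked = control_elevator(identity_ok=False, fingerprint_ok=True)
# The unrecognized passenger is reported, yet the verified caller may still
# operate the elevator.
```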
Preferably, in step 2), human body feature detection is first performed on the scene image, and whether a passenger is a specific person is judged from the detected body features; only when this cannot be decided from the body features is face detection on the scene image started. The body feature detection step specifically comprises:
2.1) detecting human bodies in the scene image with human body detection boxes;
2.2) cropping each body region selected by a detection box into a body image and storing it;
2.3) inputting each cropped body image into a feature extraction network to extract each passenger's body features;
2.4) when the body features of a given passenger extracted from the current frame cannot meet the body feature comparison condition, extracting the body features of all passengers from a preceding frame of the current frame;
2.5) matching the given passenger's current-frame body features against all passengers' body features extracted from the preceding frame, taking the successfully matched preceding-frame features as that passenger's current-frame body features, and comparing them against the pedestrian features stored in a passenger ReID library.
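Steps 2.1) to 2.3) amount to cropping each detection box out of the scene frame so a feature network can be run per passenger. The sketch below assumes an (x, y, w, h) box format, which the patent does not specify.

```python
import numpy as np

# Hedged sketch of steps 2.1)-2.3): crop each human detection box out of the
# scene frame, clamping boxes to the frame borders, to get per-passenger
# body images ready for the feature extraction network.
def crop_detections(frame: np.ndarray, boxes):
    """Crop (x, y, w, h) boxes, clamped to the frame, into per-person images."""
    h_img, w_img = frame.shape[:2]
    crops = []
    for x, y, w, h in boxes:
        x0, y0 = max(0, x), max(0, y)
        x1, y1 = min(w_img, x + w), min(h_img, y + h)
        crops.append(frame[y0:y1, x0:x1])
    return crops

frame = np.zeros((480, 640, 3), dtype=np.uint8)   # stand-in video frame
crops = crop_detections(frame, [(10, 20, 50, 100), (600, 400, 100, 100)])
# The second box extends past the frame edge and is clamped to 40x80 pixels.
```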
Preferably, the method for judging from body features whether a passenger is a specific person specifically comprises:
2.51) converting the given passenger's current-frame body features into a first body feature vector, and converting the body features of all passengers extracted from the preceding frame into corresponding second body feature vectors;
2.52) computing the inner product of the first feature vector with each second feature vector;
2.53) judging whether any of the inner product values exceeds a preset threshold:
if so, taking the body features corresponding to the second feature vector with the largest inner product value as the successfully matched features;
if not, the matching fails and the passenger is directly judged to be a specific person.
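Steps 2.51) to 2.53) can be sketched with plain vector arithmetic. The 0.8 threshold below is illustrative only; the patent leaves the threshold value unspecified.

```python
import numpy as np

# Sketch of steps 2.51)-2.53): match one passenger's current-frame feature
# vector against all preceding-frame vectors by inner product; accept the
# largest score only if it exceeds a preset threshold.
def match_by_inner_product(query: np.ndarray, candidates, threshold: float = 0.8):
    """Return the index of the best candidate above threshold, or None."""
    scores = [float(np.dot(query, c)) for c in candidates]
    best = int(np.argmax(scores))
    if scores[best] > threshold:
        return best          # successfully matched body features
    return None              # matching failed: flag as a specific person

# With unit-normalized vectors the inner product is a cosine similarity.
q = np.array([1.0, 0.0])
cands = [np.array([0.9, 0.436]), np.array([0.0, 1.0])]
idx = match_by_inner_product(q, cands)   # 0.9 > 0.8, so index 0 matches
```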
Preferably, in step 3), the low-frequency, horizontal edge and vertical edge subband images are obtained by wavelet decomposition of the face image.
Preferably, the method for training the first neural network model specifically comprises:
4.1) performing wavelet decomposition on the face images of different persons in a face image library to obtain the low-frequency subband image of each face image;
4.2) identity-coding each low-frequency subband image;
4.3) taking each identity-coded low-frequency subband image together with its associated original face image as training samples, and training a radial basis function (RBF) neural network to obtain the first neural network model.
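The RBF training of step 4.3) can be sketched in a few lines. Note the assumptions: the hidden-layer centers, the Gaussian width, and the one-hot identity coding are all choices made here for illustration; the patent does not specify the network's parameters.

```python
import numpy as np

# Minimal RBF-network sketch, standing in for the model the patent trains on
# identity-coded low-frequency subband images. Inputs are flattened subband
# images; outputs are one-hot identity codes (an assumed coding scheme).
def rbf_features(X, centers, gamma=1.0):
    """Gaussian RBF activations of samples X against the hidden-layer centers."""
    d2 = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

def train_rbf(X, Y, centers, gamma=1.0):
    """Solve the hidden-to-output weights by linear least squares."""
    Phi = rbf_features(X, centers, gamma)
    W, *_ = np.linalg.lstsq(Phi, Y, rcond=None)
    return W

# Two toy "subband images" per identity, identity-coded one-hot.
X = np.array([[0.0, 0.0], [0.1, 0.0], [1.0, 1.0], [0.9, 1.0]])
Y = np.array([[1, 0], [1, 0], [0, 1], [0, 1]], dtype=float)
W = train_rbf(X, Y, centers=X)            # centers = training samples
pred = rbf_features(X, X) @ W             # recover the identity codes
```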
Preferably, in step 6), the face image stored in the database is reconstructed from the outputs of the first, second and third neural network models.
The invention also provides a neural-network-based elevator passenger identification and elevator control device, comprising:
an elevator scene image acquisition module for collecting elevator scene images;
a fingerprint image acquisition module for collecting the fingerprint image of the elevator-calling person;
a face detection module, connected to the scene image acquisition module, for detecting faces in the scene image and cropping and storing each detected face region as a face image;
a face image decomposition module, connected to the face detection module, for decomposing each passenger's face image into a low-frequency subband image, a horizontal edge subband image, a vertical edge subband image and a high-frequency subband image;
a face feature extraction module, connected to the face image decomposition module, which takes the low-frequency subband image as the input of a first neural network model to extract the passenger's identity features; takes the horizontal edge subband image together with the first model's output as the input of a second neural network model to extract the passenger's face-pose features; and takes the vertical edge subband image together with the first model's output as the input of a third neural network model to extract the passenger's facial-expression features;
a fingerprint feature extraction module, connected to the fingerprint image acquisition module, for extracting fingerprint features from the fingerprint image;
a fingerprint feature comparison module, connected to the fingerprint feature extraction module, for comparing the extracted fingerprint features against the fingerprints stored in a fingerprint feature library and outputting the comparison result;
a specific person judgment module, connected to the face feature extraction module and the fingerprint feature comparison module, for judging a passenger and/or the caller to be a specific person when identity features cannot be extracted and/or the fingerprint comparison fails;
a specific person image reconstruction module, connected to the specific person judgment module and the face feature extraction module, for reconstructing the feature maps output by the first, second and third neural network models into a reconstructed face image of the specific person;
a specific person data storage module, connected to the specific person judgment module, the image reconstruction module and the fingerprint image acquisition module, for storing the reconstructed face image and/or fingerprint image of each person judged to be a specific person in a database;
and an elevator control module, connected to the fingerprint feature comparison module, for activating the lift keys after a successful fingerprint comparison to switch the elevator from a locked state to a usable state.
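The fingerprint feature comparison module above can be illustrated as follows. The patent does not specify a matching algorithm, so this stand-in treats extracted features as a set of quantized minutiae points and scores the overlap between the probe and each enrolled template; all names and the 0.6 threshold are assumptions.

```python
# Hedged sketch of a fingerprint feature comparison: probe and templates are
# sets of quantized minutiae coordinates, scored by Jaccard overlap.
def fingerprint_match(probe: set, library: dict, min_overlap: float = 0.6):
    """Return the id of the best-overlapping enrolled template, or None."""
    best_id, best_score = None, 0.0
    for person_id, template in library.items():
        if not template:
            continue
        score = len(probe & template) / len(probe | template)  # Jaccard overlap
        if score > best_score:
            best_id, best_score = person_id, score
    return best_id if best_score >= min_overlap else None

library = {"caller_1": {(10, 12), (30, 44), (52, 8)},
           "caller_2": {(5, 5), (7, 9), (60, 61)}}
probe = {(10, 12), (30, 44), (52, 8), (99, 99)}
matched = fingerprint_match(probe, library)   # overlap 3/4 = 0.75 with caller_1
```

A `None` result corresponds to the comparison failure that flags the caller as a specific person and keeps the elevator locked.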
Preferably, the device further comprises:
a human body detection module, connected to the elevator scene image acquisition module, for detecting human bodies in the scene image and cropping and storing each detected body region as a body image;
a body feature detection module, connected to the human body detection module, for detecting the body features in each passenger's body image;
a body feature comparison module, connected to the body feature detection module and a passenger ReID library, for comparing each passenger's detected body features against the pedestrian features stored in the ReID library;
the specific person judgment module is also connected to the body feature comparison module and judges any passenger whose body feature comparison fails to be a specific person;
the specific person data storage module is also connected to the human body detection module and stores the body image of any person judged to be a specific person in the database.
Preferably, the body feature detection module specifically comprises:
a human body detection unit for detecting human bodies in the scene image with detection boxes;
a body image cropping unit, connected to the detection unit, for cropping and storing each boxed body region as a body image;
a body feature extraction unit, connected to the cropping unit, for inputting each cropped body image into a feature extraction network to extract each passenger's body features;
a comparison condition judgment unit, connected to the feature extraction unit, for judging whether the body features extracted from the current frame meet the body feature comparison condition;
and a body feature matching unit, connected to the comparison condition judgment unit and the feature extraction unit, which, when a given passenger's current-frame body features cannot meet the comparison condition, matches those features against the body features of all passengers extracted from a preceding frame and takes the successfully matched preceding-frame features as that passenger's body features.
Preferably, the body feature matching unit specifically comprises:
a feature conversion subunit for converting the given passenger's current-frame body features into a first body feature vector and converting the body features of all passengers extracted from the preceding frame into corresponding second body feature vectors;
an inner product subunit, connected to the conversion subunit, for computing the inner product of the first feature vector with each second feature vector;
an inner product judgment subunit, connected to the inner product subunit, for judging whether any of the inner product values exceeds a preset threshold;
a maximum inner product acquisition unit, connected to the judgment subunit and the inner product subunit, for selecting, when values above the threshold exist, the largest of them;
and a feature matching subunit, connected to the maximum inner product acquisition unit, for taking the body features corresponding to the second feature vector that yields the largest inner product as the successfully matched features.
Based on neural network recognition, the invention can quickly and accurately identify specific persons among elevator passengers and issue abnormality warnings, reducing the difficulty of elevator operation and maintenance safety management; based on fingerprint recognition, it verifies the caller's authority to operate the elevator and automatically controls the elevator's running state according to the verification result, greatly enhancing the safety of elevator use.
Drawings
To illustrate the technical solutions of the embodiments more clearly, the drawings used in the embodiments are briefly described below. These drawings show only some embodiments of the invention; a person skilled in the art can derive other drawings from them without inventive effort.
Fig. 1 is a step diagram of the neural-network-based elevator passenger identification and elevator control method according to an embodiment of the invention;
Fig. 2 is a step diagram of the method for body feature detection on a scene image;
Fig. 3 is a step diagram of the method for judging from body features whether a passenger is a specific person;
Fig. 4 is a step diagram of the method for training the first neural network model;
Fig. 5 is a schematic structural diagram of the neural-network-based elevator passenger identification and elevator control device according to an embodiment of the invention;
Fig. 6 is a schematic diagram of the internal structure of the body feature detection module in the device;
Fig. 7 is a schematic diagram of the internal structure of the body feature matching unit in the body feature detection module.
Detailed Description
The technical scheme of the invention is further explained by the specific implementation mode in combination with the attached drawings.
The drawings are for illustration only; they do not depict actual form or scale and are not to be construed as limiting the patent. To better illustrate the embodiments, some parts of the drawings may be omitted, enlarged or reduced and do not represent the size of an actual product; certain well-known structures and their descriptions may be omitted, as will be understood by those skilled in the art.
The same or similar reference numerals in the drawings denote the same or similar components. Terms such as "upper", "lower", "left", "right", "inner" and "outer", where used to indicate an orientation or positional relationship, refer to the orientation shown in the drawings; they are used only for convenience and simplicity of description and do not indicate or imply that the referenced device or element must have a specific orientation or be constructed and operated in a specific orientation. Such terms are therefore illustrative rather than limiting, and their specific meanings can be understood by those skilled in the art according to the circumstances.
Unless otherwise explicitly specified, the term "connected" and the like, where it indicates a relationship between components, is to be understood broadly: the connection may be fixed, detachable or integral; mechanical or electrical; direct, or indirect through intervening media or components; or an interaction between two components. The specific meanings of these terms can be understood by those skilled in the art according to the circumstances.
The neural-network-based elevator passenger identification and elevator control method provided by an embodiment of the invention is shown in fig. 1 and specifically comprises the following steps:
step 1) collecting an elevator scene image and a fingerprint image of the elevator-calling person. The scene image is a video frame image; collection starts when passengers enter the elevator and continues until all passengers have left it. The elevator-calling person is the person operating the elevator's up/down controls; a fingerprint acquisition device is arranged near the lift keys, and the caller completes fingerprint acquisition following the prompt tone played by the elevator;
step 2) performing face detection on the scene image and fingerprint feature detection on the fingerprint image. Because face regions are small and an elevator car is often crowded, face detection is difficult when passengers stand close together. To improve the speed and accuracy of passenger identification, it is therefore preferable to first perform body feature detection on the scene image and judge from the detected body features whether a passenger is a specific person; only when this cannot be decided from the body features is face detection on the scene image started. Specifically, as shown in fig. 2, the body feature detection method provided by an embodiment of the invention comprises the following steps:
step 2.1) detecting human bodies in the scene image with human body detection boxes;
step 2.2) cropping each body region selected by a detection box into a body image and storing it;
step 2.3) inputting each cropped body image into a feature extraction network to extract each passenger's body features.
When the body features of a given passenger extracted from the current frame cannot meet the subsequent comparison condition (for example, that passenger's cropped body image is too blurry to yield valid features), the body feature detection method further comprises:
step 2.4) extracting the body features of all passengers from a preceding frame of the current frame. In general, the image information of the frame immediately before the current frame is closest to that of the current frame, so the body features of all passengers are preferably extracted from that immediately preceding frame;
step 2.5) matching the given passenger's current-frame body features against all passengers' body features extracted from the preceding frame, taking the successfully matched preceding-frame features as that passenger's current-frame features, and comparing them against the pedestrian features stored in the passenger ReID library. ReID refers to pedestrian re-identification, a computer vision technique for determining whether a specific pedestrian is present in an image or video sequence. This embodiment applies ReID to substitute a passenger's body features from the preceding frame when valid features cannot be extracted from the current frame.
More specifically, as shown in fig. 3, the feature matching of step 2.5) between the given passenger's current-frame body features and all passengers' body features from the preceding frame comprises:
step 2.51) converting the given passenger's current-frame body features into a first body feature vector, and converting the body features of all passengers extracted from the preceding frame into corresponding second body feature vectors;
step 2.52) computing the inner product of the first feature vector with each second feature vector;
step 2.53) judging whether any of the inner product values exceeds a preset threshold:
if so, taking the body features corresponding to the second feature vector with the largest inner product value as the successfully matched features;
if not, the matching fails and the passenger is directly judged to be a specific person.
Referring to fig. 1, when a given passenger's body features cannot be successfully matched, the neural-network-based elevator passenger identification and elevator control method of the embodiment further comprises:
step 3) performing face detection on the scene image and decomposing each passenger's detected face image into a low-frequency subband image, a horizontal edge subband image, a vertical edge subband image and a high-frequency subband image; in this embodiment, wavelet decomposition is preferably used for this purpose;
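A one-level 2D Haar transform makes the step-3) decomposition concrete. The patent does not name a specific wavelet, so Haar, the simplest, is used here; LL is the low-frequency subband, LH/HL the horizontal/vertical edge subbands, and HH the high-frequency subband.

```python
import numpy as np

# One-level 2D Haar wavelet decomposition (a stand-in for the wavelet
# transform applied to each face image). Row differences capture horizontal
# edges; column differences capture vertical edges.
def haar_dwt2(img: np.ndarray):
    """Split an even-sized grayscale image into LL, LH, HL, HH subbands."""
    a = img[0::2, :] + img[1::2, :]          # row sums (lowpass rows)
    d = img[0::2, :] - img[1::2, :]          # row differences (highpass rows)
    ll = (a[:, 0::2] + a[:, 1::2]) / 4.0     # low-frequency subband
    lh = (d[:, 0::2] + d[:, 1::2]) / 4.0     # horizontal edge subband
    hl = (a[:, 0::2] - a[:, 1::2]) / 4.0     # vertical edge subband
    hh = (d[:, 0::2] - d[:, 1::2]) / 4.0     # high-frequency subband
    return ll, lh, hl, hh

face = np.arange(16, dtype=float).reshape(4, 4)   # toy 4x4 "face image"
ll, lh, hl, hh = haar_dwt2(face)                  # each subband is 2x2
```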
step 4) inputting the decomposed low-frequency subband image into the first neural network model to output the passenger's identity features, and comparing the detected fingerprint features against the fingerprints stored in a fingerprint feature library to output the comparison result.
the following briefly describes the training method of the first neural network model:
as shown in fig. 4, the method for training the first neural network model specifically includes the steps of:
4.1) carrying out wavelet decomposition on the face images of different persons in the face image library to obtain a low-frequency sub-band image associated with each face image;
4.2) carrying out identity coding on each low-frequency subband image;
and 4.3) taking each low-frequency subband image with the identity code and the original face image associated with each low-frequency subband image as training samples, and training through a radial basis function neural network to form a first neural network model. The model training parameters, network structure, and the loss function employed, etc. with respect to the radial basis function neural network are not set forth herein.
Referring to fig. 1, the method for identifying elevator passengers and controlling an elevator based on a neural network according to the embodiment of the present invention further includes:
step 5) judging the elevator taking personnel and/or the elevator calling personnel as specific personnel when the identity characteristic identification and/or the fingerprint characteristic comparison fail, performing abnormal prompt alarm, and storing the face image and/or the fingerprint image of the specific personnel into a database;
after the fingerprint comparison succeeds, the elevator control system activates an elevator lifting key to change the elevator from a locked state to a usable state. It should be noted that, in order to ensure the sharpness of the face image, the face image stored in the database is preferably a face reconstruction image formed by reconstructing the feature maps output by the first neural network model, the second neural network model and the third neural network model. The input of the second neural network model is the output of the first neural network model and the horizontal edge sub-band image decomposed from the same face image, and its output is the pose features of the face (such as a front face, a left-leaning face, a right-leaning face and the like); the input of the third neural network model is the output of the first neural network model and the vertical edge sub-band image decomposed from the same face image, and its output is the expression features of the face (such as happy, sad and the like). The first, second and third neural network models are preferably obtained through radial basis function (RBF) neural network training.
The specific method for reconstructing the face image based on the feature map output by the model is not within the scope of the claimed invention, so the specific image reconstruction process is not described herein.
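The cascaded data flow among the three models described above can be sketched as follows, with `model1`, `model2` and `model3` as placeholders for the trained networks (the callables and their signatures are assumptions for illustration):

```python
def extract_face_features(subbands, model1, model2, model3):
    """Cascaded feature extraction: the first model consumes the low-frequency
    sub-band; its output is paired with the horizontal- and vertical-edge
    sub-bands as input to the second and third models respectively."""
    ll, lh, hl, _hh = subbands               # HH sub-band is not consumed here
    identity = model1(ll)                    # identity features
    pose = model2(identity, lh)              # face pose (front, leaning left/right, ...)
    expression = model3(identity, hl)        # facial expression (happy, sad, ...)
    return identity, pose, expression
```

The three outputs are what the image reconstruction step combines into the face reconstruction image stored for specific persons.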
An embodiment of the present invention further provides an elevator boarding person identification and elevator control device based on a neural network, as shown in fig. 5, the device includes:
the elevator scene image acquisition module is used for acquiring an elevator scene image; the scene image is a video frame image, and the video frame image starts to be collected when the elevator passengers enter the elevator and finishes the collection when all the elevator passengers leave the elevator;
the fingerprint image acquisition module is used for acquiring a fingerprint image of the calling person;
the face detection module is connected with the scene image acquisition module and is used for carrying out face detection on the scene image, intercepting the detected face area into a face image and storing the face image;
the human face image decomposition module is connected with the human face detection module and is used for decomposing the human face image associated with each elevator passenger into a low-frequency sub-band image, a horizontal edge sub-band image, a vertical edge sub-band image and a high-frequency sub-band image;
the human face feature extraction module is connected with the human face image decomposition module and used for taking the low-frequency sub-band image as the input of the first neural network model and extracting the identity features of the elevator passengers through the first neural network model; the horizontal edge sub-band image and the output of the first neural network model are used as the input of a second neural network model, and the human face posture characteristics of the elevator passengers are extracted through the second neural network model; the output of the first neural network model and the vertical edge sub-band image are used as the input of a third neural network model, and the facial expression characteristics of the elevator passengers are extracted through the third neural network model;
the fingerprint feature extraction module is connected with the fingerprint image acquisition module and is used for extracting fingerprint features on the fingerprint image;
the fingerprint feature comparison module is connected with the fingerprint feature extraction module and used for comparing the extracted fingerprint features with the fingerprints stored in the fingerprint feature library and outputting a fingerprint feature comparison result;
the specific personnel judging module is respectively connected with the face feature extracting module and the fingerprint feature comparing module and is used for judging the personnel taking the elevator and/or the personnel calling the elevator as specific personnel when the identity features of the personnel taking the elevator cannot be extracted and/or the fingerprint features cannot be successfully compared;
the specific person image reconstruction module is connected with the specific person judgment module and the face feature extraction module and is used for reconstructing feature images output by the first neural network model, the second neural network model and the third neural network model into face reconstruction images associated with specific persons;
the specific personnel data storage module is respectively connected with the specific personnel judgment module, the specific personnel image reconstruction module and the fingerprint image acquisition module and is used for storing the face reconstruction image and/or the fingerprint image which are judged as specific personnel into a database;
and the elevator control module is connected with the fingerprint characteristic comparison module and used for activating the elevator lifting key to change the elevator from the locking state to the usable state after the fingerprint characteristic comparison is successful.
Because the face region is smaller than the human body region, face detection takes longer than human body detection. Therefore, in order to speed up the identification of elevator passengers, human body detection and human body feature recognition are performed before face detection, and the face detection process is started only when the identity of an elevator passenger cannot be recognized from the human body features. Accordingly, as shown in fig. 5, the elevator boarding person recognition and elevator control device preferably further includes:
the human body detection module is connected with the elevator scene image acquisition module and is used for detecting human bodies of the scene images, intercepting the detected human body areas into human body images and storing the human body images;
the human body characteristic detection module is connected with the human body detection module and is used for detecting the human body characteristics of the human body images related to all elevator passengers;
the human body characteristic comparison module is connected with the human body characteristic detection module and a passenger ReID library and is used for respectively carrying out characteristic comparison on the detected human body characteristics related to each elevator passenger and pedestrian characteristics stored in the passenger ReID library;
the specific personnel judging module is also connected with the human body characteristic comparison module and is used for judging elevator taking personnel with human body characteristic comparison failure as specific personnel;
the specific personnel data storage module is also connected with the human body detection module and is used for storing the human body image judged as the specific personnel into the database.
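The body-first identification with face-detection fallback implemented by the modules above can be sketched as the following control flow, where every callable is a placeholder for the corresponding module in fig. 5 (names and signatures are assumptions):

```python
def identify_passenger(frame, detect_body, compare_reid, detect_face, identify_face):
    """Body-first identification with face fallback: try the cheap human body
    path first; run face detection only when the ReID comparison fails."""
    body = detect_body(frame)
    if body is not None:
        identity = compare_reid(body)        # match against the passenger ReID library
        if identity is not None:
            return identity                  # fast path: no face detection needed
    face = detect_face(frame)                # slower fallback path
    if face is not None:
        return identify_face(face)
    return None                              # unidentified -> treated as a specific person
```

A `None` result corresponds to the specific-person branch: alarm and store the captured image in the database.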
As shown in fig. 6, the human body feature detection module specifically includes:
the human body detection unit is used for carrying out human body detection on the scene image through a human body detection frame;
the human body image intercepting unit is connected with the human body detecting unit and is used for intercepting and storing the human body area selected by the human body detecting frame into a human body image;
the human body feature extraction unit is connected with the human body image intercepting unit and used for inputting each intercepted human body image into a feature extraction network to extract the human body features of each elevator taking person;
the human body feature comparison condition judging unit is connected with the human body feature extraction unit and is used for judging whether the human body features extracted from the current frame meet the human body feature comparison condition; each human body has a plurality of human body feature points, for example, the human face can be regarded as one feature point, the body shape can be regarded as another feature point, and the like; when such feature points cannot express the human body features because of blurring, severe distortion or the like, the extracted human body features are regarded as not meeting the human body feature comparison condition;
and the human body feature matching unit is connected with the human body feature comparison condition judging unit and the human body feature extraction unit, and is used for, when the human body features of an associated specific elevator passenger extracted from the current frame cannot meet the human body feature comparison condition, matching the human body features of the specific elevator passenger extracted from the current frame against the human body features of all elevator passengers extracted from the preceding frames of the current frame, and taking the successfully matched human body features from the preceding frames as the human body features of the specific elevator passenger.
As shown in fig. 7, the human body feature matching unit specifically includes:
the human body feature conversion subunit is used for converting the human body features related to the specific elevator passenger extracted from the current frame into a first human body feature vector, and respectively converting the human body features of all elevator passengers extracted from the preceding frames of the current frame into corresponding second human body feature vectors;
the inner product operation subunit is connected with the human body feature conversion subunit and is used for carrying out inner product operation on the first human body feature vector and each second human body feature vector respectively to obtain an inner product value of the first human body feature vector and each second human body feature vector;
the inner product value judging subunit is connected with the inner product operation subunit and is used for judging whether an inner product value larger than a preset threshold exists among the inner product values;
the maximum inner product value obtaining subunit is connected with the inner product value judging subunit and the inner product operation subunit and is used for, when an inner product value larger than the preset threshold is judged to exist, obtaining the inner product value with the maximum value from the inner product values larger than the preset threshold;
and the human body feature matching subunit is connected with the maximum inner product value obtaining subunit and is used for taking the human body features corresponding to the second human body feature vector that yields the maximum inner product value as the successfully matched human body features.
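The inner-product matching performed by these subunits can be sketched as follows (the threshold value and function names are illustrative assumptions; in practice the feature vectors are often L2-normalized first so the inner product behaves as a cosine similarity):

```python
def match_features(current_vec, previous_vecs, threshold=0.5):
    """Match the current frame's feature vector against vectors from preceding
    frames: compute all inner products, discard those at or below the preset
    threshold, and return the index of the largest remaining one (None on failure)."""
    def inner(a, b):
        return sum(x * y for x, y in zip(a, b))
    scores = [inner(current_vec, v) for v in previous_vecs]
    candidates = [(s, i) for i, s in enumerate(scores) if s > threshold]
    if not candidates:
        return None                          # matching failed -> specific person
    return max(candidates)[1]                # index of the best-matching passenger
```

A `None` return corresponds to the matching-failure branch in which the passenger is judged a specific person.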
It should be understood that the above-described embodiments are merely preferred embodiments of the invention and illustrate the technical principles applied. Those skilled in the art will appreciate that various modifications, equivalents and changes can be made to the invention; such variations remain within its scope as long as they do not depart from its spirit. In addition, certain terms used in the specification and claims of the present application are not limiting but are used merely for convenience of description.
Claims (10)
1. A method for identifying elevator taking personnel and controlling an elevator based on a neural network is characterized by comprising the following specific steps:
1) collecting an elevator scene image and a fingerprint image of a calling person;
2) respectively carrying out face detection and fingerprint feature detection on the scene image and the fingerprint image;
3) decomposing the detected face image associated with each elevator passenger into a low-frequency sub-band image, a horizontal edge sub-band image, a vertical edge sub-band image and a high-frequency sub-band image;
4) inputting the decomposed low-frequency sub-band image into a first neural network model, and outputting the identity characteristics of the elevator passengers; comparing the detected fingerprint features with fingerprints stored in a fingerprint feature library, and outputting a fingerprint feature comparison result;
5) when the identity characteristic identification and/or the fingerprint characteristic comparison fails, judging the elevator taking personnel and/or the elevator calling personnel as specific personnel, and storing the face image and/or the fingerprint image related to the specific personnel into a database;
after the fingerprint characteristics are compared successfully, the elevator control system activates an elevator lifting key to change the elevator from a locking state to a usable state.
2. The neural network-based elevator-taking personnel identification and elevator control method according to claim 1, wherein in step 2), human body feature detection is first performed on the scene image, whether an elevator passenger is the specific person is judged according to the detected human body features, and the face detection process on the scene image is started only when the passenger is not judged to be the specific person from the human body features; the step of detecting the human body features specifically comprises:
2.1) carrying out human body detection on the scene image through a human body detection frame;
2.2) intercepting the human body area selected by the human body detection frame into a human body image and storing the human body image;
2.3) inputting each intercepted human body image into a feature extraction network to extract the human body features of each elevator taking personnel;
2.4) when the human body features of an associated specific elevator passenger extracted from the current frame cannot meet the human body feature comparison condition, extracting the human body features of all elevator passengers from the preceding frames of the current frame;
2.5) matching the human body features of the specific elevator passenger extracted from the current frame against the human body features of all elevator passengers extracted from the preceding frames of the current frame, taking the successfully matched human body features from the preceding frames as the human body features of the specific elevator passenger in the current frame, and comparing them with the pedestrian features stored in a passenger ReID library.
3. The neural network-based elevator taking personnel identification and elevator control method according to claim 2, wherein the method for judging whether the elevator taking personnel is the specific personnel according to human body characteristics comprises the following steps:
2.51) converting the human body features related to the specific elevator taking personnel extracted from the current frame into a first human body feature vector, and respectively converting the human body features of all the elevator taking personnel extracted from the previous frame of the current frame into corresponding second human body feature vectors;
2.52) performing inner product operation on the first human body feature vector and each second human body feature vector respectively to obtain an inner product value of the first human body feature vector and each second human body feature vector;
2.53) judging whether an inner product value larger than a preset threshold value exists in each inner product value,
if so, taking the human body feature corresponding to the second human body feature vector with the maximum inner product value obtained by operation as the successfully matched human body feature;
if not, the human body feature matching fails, and the specific elevator taking personnel are directly judged as the specific personnel.
4. The neural network-based elevator passenger identification and elevator control method according to claim 1, wherein in step 3), the low-frequency subband image, the horizontal edge subband image, the vertical edge subband image, and the high-frequency subband image are obtained by performing wavelet decomposition on the face image.
5. The method for passenger identification and elevator control based on neural network as claimed in claim 1, wherein the method step of training the first neural network model specifically comprises:
4.1) carrying out wavelet decomposition on the face images of different persons in a face image library to obtain the low-frequency sub-band image associated with each face image;
4.2) identity coding is carried out on each low-frequency subband image;
4.3) taking each low-frequency subband image with identity codes and an original face image associated with each low-frequency subband image as training samples, and training through a radial basis function neural network to form the first neural network model.
6. The neural network-based elevator-taking person identification and elevator control method according to claim 1, wherein step 4) further comprises a feature extraction process for the horizontal edge sub-band image and a feature extraction process for the vertical edge sub-band image, and the feature extraction process for the horizontal edge sub-band image is:
inputting the output of the first neural network model and the horizontal edge sub-band image into a second neural network model, and outputting the face posture characteristics of the elevator passengers;
the characteristic extraction process of the vertical edge sub-band image comprises the following steps:
inputting the output of the first neural network model and the vertical edge sub-band image into a third neural network model, and outputting the facial expression features of the elevator passengers;
in step 5), the face image stored in the database is a face reconstruction image, and the face reconstruction image is formed by image reconstruction performed by the outputs of the first neural network model, the second neural network model and the third neural network model.
7. An elevator passenger identification and elevator control device based on a neural network, the device comprising:
the elevator scene image acquisition module is used for acquiring an elevator scene image;
the fingerprint image acquisition module is used for acquiring a fingerprint image of the calling person;
the face detection module is connected with the scene image acquisition module and is used for carrying out face detection on the scene image, intercepting a detected face area into a face image and storing the face image;
the human face image decomposition module is connected with the human face detection module and is used for decomposing the human face image associated with each elevator passenger into a low-frequency sub-band image, a horizontal edge sub-band image, a vertical edge sub-band image and a high-frequency sub-band image;
the human face feature extraction module is connected with the human face image decomposition module and used for taking the low-frequency sub-band image as the input of a first neural network model and extracting the identity features of the elevator passengers through the first neural network model; taking the horizontal edge sub-band image and the output of the first neural network model as the input of a second neural network model and extracting the face posture features of the elevator passengers through the second neural network model; and taking the output of the first neural network model and the vertical edge sub-band image as the input of a third neural network model and extracting the facial expression features of the elevator passengers through the third neural network model;
the fingerprint feature extraction module is connected with the fingerprint image acquisition module and is used for extracting fingerprint features on the fingerprint image;
the fingerprint feature comparison module is connected with the fingerprint feature extraction module and used for comparing the extracted fingerprint features with fingerprints stored in a fingerprint feature library and outputting a fingerprint feature comparison result;
the specific personnel judging module is respectively connected with the face feature extracting module and the fingerprint feature comparing module and is used for judging the elevator passengers and/or the calling passengers as specific personnel when the identity features of the elevator passengers cannot be extracted and/or the fingerprint features cannot be successfully compared;
the specific person image reconstruction module is connected with the specific person judgment module and the face feature extraction module and is used for reconstructing feature maps output by the first neural network model, the second neural network model and the third neural network model into a face reconstruction image associated with the specific person;
the specific personnel data storage module is respectively connected with the specific personnel judgment module, the specific personnel image reconstruction module and the fingerprint image acquisition module and is used for storing the face reconstruction image and/or the fingerprint image which are judged as the specific personnel into a database;
and the elevator control module is connected with the fingerprint characteristic comparison module and used for activating the elevator lifting key to change the elevator from a locking state to a usable state after the fingerprint characteristics are successfully compared.
8. The neural network-based elevator attendant recognition and elevator control apparatus as claimed in claim 7, wherein said apparatus further comprises:
the human body detection module is connected with the elevator scene image acquisition module and is used for detecting a human body of the scene image, and intercepting and storing a detected human body area as a human body image;
the human body feature detection module is connected with the human body detection module and is used for detecting the human body features of the human body images related to all the elevator passengers;
the human body feature comparison module is connected with the human body feature detection module and a passenger ReID library and is used for respectively carrying out feature comparison on the detected human body features related to each elevator passenger and the pedestrian features stored in the passenger ReID library;
the specific personnel judging module is also connected with the human body characteristic comparison module and is used for judging the elevator taking personnel with the human body characteristic comparison failure as specific personnel;
the specific personnel data storage module is also connected with the human body detection module and is used for storing the human body image judged as the specific personnel into the database.
9. The apparatus for identifying persons on an elevator and controlling an elevator according to claim 8, wherein the human body feature detection module comprises:
the human body detection unit is used for carrying out human body detection on the scene image through a human body detection frame;
the human body image intercepting unit is connected with the human body detecting unit and is used for intercepting and storing the human body area selected by the human body detecting frame into a human body image;
the human body feature extraction unit is connected with the human body image intercepting unit and is used for inputting each intercepted human body image into a feature extraction network to extract the human body features of each elevator taking person;
the human body feature comparison condition judging unit is connected with the human body feature extraction unit and is used for judging whether the human body features extracted from the current frame meet the human body feature comparison condition;
and the human body feature matching unit is connected with the human body feature comparison condition judging unit and the human body feature extraction unit, and is used for, when the human body features of an associated specific elevator passenger extracted from the current frame cannot meet the human body feature comparison condition, matching the human body features of the specific elevator passenger extracted from the current frame against the human body features of all elevator passengers extracted from the preceding frames of the current frame, and taking the successfully matched human body features from the preceding frames as the human body features of the specific elevator passenger.
10. The apparatus for identifying persons on an elevator and controlling an elevator according to claim 9, wherein the human body feature matching unit comprises:
the human body feature conversion subunit is used for converting the human body features related to the specific elevator passenger extracted from the current frame into a first human body feature vector, and respectively converting the human body features of all elevator passengers extracted from the preceding frames of the current frame into corresponding second human body feature vectors;
an inner product operation subunit, connected to the human body feature conversion subunit, and configured to perform an inner product operation on the first human body feature vector and each of the second human body feature vectors, respectively, to obtain an inner product value of the first human body feature vector and each of the second human body feature vectors;
an inner product value judging subunit, connected to the inner product operation subunit, for judging whether an inner product value greater than a preset threshold exists in each of the inner product values;
a maximum inner product value obtaining unit, connected to the inner product value judging subunit and the inner product operation subunit, configured to obtain, when it is judged that the inner product value greater than the preset threshold value exists, the inner product value with the largest value from each of the inner product values greater than the preset threshold value;
and the human body feature matching subunit is connected with the maximum inner product value acquisition unit and is used for taking the human body features corresponding to the second human body feature vector which is operated to obtain the maximum inner product value as the successfully matched human body features.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202110107093.8A CN113963378A (en) | 2021-01-26 | 2021-01-26 | Neural network-based elevator taking personnel identification and elevator control method and device |
Publications (1)
Publication Number | Publication Date |
---|---|
CN113963378A true CN113963378A (en) | 2022-01-21 |
Family
ID=79459382
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||