CN117037340B - Intelligent lock control system based on data identification - Google Patents


Info

Publication number
CN117037340B
CN117037340B (application CN202311145216.2A; earlier publication CN117037340A)
Authority
CN
China
Prior art keywords
user
model
intelligent lock
palm
lock control
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202311145216.2A
Other languages
Chinese (zh)
Other versions
CN117037340A (en)
Inventor
王文鸿
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Dongguan Anbounde Intelligent Lock Technology Co ltd
Original Assignee
Dongguan Anbounde Intelligent Lock Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Dongguan Anbounde Intelligent Lock Technology Co ltd filed Critical Dongguan Anbounde Intelligent Lock Technology Co ltd
Priority to CN202311145216.2A priority Critical patent/CN117037340B/en
Publication of CN117037340A publication Critical patent/CN117037340A/en
Application granted granted Critical
Publication of CN117037340B publication Critical patent/CN117037340B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G: PHYSICS
    • G07: CHECKING-DEVICES
    • G07C: TIME OR ATTENDANCE REGISTERS; REGISTERING OR INDICATING THE WORKING OF MACHINES; GENERATING RANDOM NUMBERS; VOTING OR LOTTERY APPARATUS; ARRANGEMENTS, SYSTEMS OR APPARATUS FOR CHECKING NOT PROVIDED FOR ELSEWHERE
    • G07C 9/00: Individual registration on entry or exit
    • G07C 9/00174: Electronically operated locks; Circuits therefor; Nonmechanical keys therefor, e.g. passive or active electrical keys or other data carriers without mechanical keys
    • G07C 9/00309: Electronically operated locks operated with bidirectional data transmission between data carrier and locks
    • G07C 9/00563: Electronically operated locks using personal physical data of the operator, e.g. fingerprints, retinal images, voice patterns
    • G07C 9/00571: Electronically operated locks operated by interacting with a central unit

Abstract

The invention discloses an intelligent lock control system based on data identification, and relates to the technical field of data identification. The system comprises an identity information management module, a biological recognition model module, an intelligent lock control module and an identity verification module. The identity information management module collects the body state information and palm print information of the user and establishes a user identity information base; the biological recognition model module establishes a body state recognition model and a motion track recognition model according to the body state information and a palm print recognition model according to the palm print information, thereby obtaining the biological recognition model corresponding to the user; the intelligent lock control module collects the real-time motion video and the real-time palm image of the user, sends them to the identity verification module, and controls the on-off state of the intelligent lock according to the verification result returned by the identity verification module; and the identity verification module judges the identity of the user according to the body state image, the real-time motion video and the real-time palm image from the intelligent lock control module.

Description

Intelligent lock control system based on data identification
Technical Field
The invention relates to the technical field of data identification, in particular to an intelligent lock control system based on data identification.
Background
Unlike the traditional mechanical lock, the intelligent lock is more intelligent in terms of user identification, security and manageability, and serves as the executing component of the door lock in an access control system. Intelligent lock control technology has evolved from passwords and the fingerprint identification technology in biological recognition, through cloud services and finger/palm vein recognition, to the 3D face recognition technology that has become extremely popular in recent years. With the development of smart home and smart security, the intelligent door lock industry has risen rapidly and grown into one of the most popular categories in the smart home field;
however, the existing intelligent lock control system is easily affected by ambient light and is inaccurate in fingerprint verification, so that it cannot accurately control the intelligent lock, which may prevent legitimate users from entering the room.
Disclosure of Invention
In order to solve the technical problems, the invention aims to provide an intelligent lock control system based on data identification.
In order to achieve the above object, the present invention provides the following technical solutions:
the intelligent lock control system based on data identification comprises a management and control center, wherein the management and control center is in communication connection with an identity information management module, a biological identification model module, an intelligent lock control module and an identity verification module;
the identity information management module is provided with a body state information acquisition unit, a palm print information acquisition unit and a database unit, and is used for acquiring body state information and palm print information of a user and establishing a user identity information base;
the biological recognition model module is used for establishing a body state recognition model, a motion track recognition model and a palm print recognition model of a corresponding user according to a user information set in the user identity information base so as to obtain a biological recognition model;
the intelligent lock control module is used for acquiring real-time motion video and real-time palm images of a user, sending the real-time motion video and the real-time palm images to the identity verification module, and controlling the on-off state of the intelligent lock according to the return verification result of the identity verification module;
the identity verification module is used for judging the identity of the user according to the body state image, the real-time motion video and the real-time palm image from the intelligent lock control module.
Further, the process for acquiring the posture information includes:
the posture information acquisition unit is provided with a camera and an infrared sensing device, and sends a posture image acquisition prompt and a walking posture acquisition prompt to a user;
the user enters a corresponding place according to the body type image acquisition prompt, and then the body type information acquisition unit acquires appearance images of the user from multiple angles through the camera and generates appearance body type image data in an integrated mode, and simultaneously acquires skeleton images of the user from multiple angles through the infrared sensing device and generates skeleton body type image data in an integrated mode;
setting an acquisition starting point and an acquisition end point, and recording and acquiring walking gesture video data by an infrared sensing device from the acquisition starting point after a user receives a walking gesture acquisition prompt;
the posture information acquisition unit integrates the appearance posture image data, the skeleton posture image data and the walking posture video data to obtain posture information, and marks the identity number of the user;
further, the collecting process of the palmprint information includes:
the palm print information acquisition unit is provided with a palm print information collector, which is equipped with a camera, an infrared sensor and a fingerprint collector. The user places the palm on the palm print information collector; the infrared sensor then projects a near-infrared light source onto the palm, and the camera captures multiple groups of palm vein lines and generates vein line images, which are integrated into vein image data, while the fingerprint collector acquires multiple groups of palm fingerprint images, which are integrated into fingerprint image data; the vein image data and the fingerprint image data are integrated to obtain the palm print information, which is marked with the identity number of the corresponding user.
Further, the biological recognition model is composed of a posture recognition model, a motion track recognition model and a palm print recognition model, wherein the posture recognition model is composed of a user appearance model and a user skeleton model.
Further, the establishing process of the user appearance model comprises the following steps:
the biological recognition model module extracts the body form information from the user information set, extracts the appearance body form image data from the body form information, and sets a number for the appearance body form image in the appearance body form image data;
dividing all appearance body type images in the appearance body type image data into a head image and a trunk image by using head identification points, and numbering the head image and the trunk image;
acquiring the gray value of each pixel point in the head image, if the gray value of the pixel point in the head image is larger than or equal to the gray value of all the adjacent pixel points, setting a texture value of 1 for the pixel point, otherwise, setting a texture value of 0 for the pixel point;
all pixel points with texture value 1 in the head image are sequentially connected end to end in a clockwise or anticlockwise manner, and two adjacent pixel points are set as a vector starting point and a vector ending point according to the connection direction, so that texture feature vectors are obtained;
obtaining texture feature vectors of all head images, fitting the totals of the texture feature vectors of the head images to a normal distribution, selecting the total corresponding to the middle of the distribution as the standard number Num, and then judging whether the total number of texture feature vectors of each head image falls within [Num-1, Num+1]; if so, the head image is retained, and if not, it is rejected;
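The texture-value labelling and the [Num-1, Num+1] screening of head images described in the steps above can be sketched as follows. This is an illustrative reading only: the patent does not define the pixel neighbourhood or how the "middle part" of the distribution is taken, so an 8-neighbourhood and the median of the totals are assumed here, and all names (`texture_values`, `keep_by_standard_number`) are hypothetical.

```python
def texture_values(gray):
    """Label each pixel 1 if its gray value is >= every adjacent
    pixel (8-neighbourhood assumed), else 0."""
    h, w = len(gray), len(gray[0])
    out = [[0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            neigh = [gray[ny][nx]
                     for ny in range(max(0, y - 1), min(h, y + 2))
                     for nx in range(max(0, x - 1), min(w, x + 2))
                     if (ny, nx) != (y, x)]
            out[y][x] = 1 if all(gray[y][x] >= g for g in neigh) else 0
    return out

def keep_by_standard_number(counts):
    """Keep indices of head images whose texture-vector total lies in
    [Num-1, Num+1], taking the median total as the standard number Num."""
    srt = sorted(counts)
    num = srt[len(srt) // 2]  # middle of the distribution (assumed median)
    return [i for i, c in enumerate(counts) if num - 1 <= c <= num + 1]
```

Counting suffices for the screening step on the reading that the end-to-end connection of N retained pixel points closes into a loop, which yields exactly N texture feature vectors.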
establishing a user face virtual model according to the reserved texture feature vectors of the head images, setting corresponding appearance identification points in the user face virtual model according to the pixel points of the texture values 1 in the head images, and mutually overlapping and mapping the user face virtual models corresponding to the reserved head images into a user face model;
the biological recognition model module processes the trunk image by adopting the same step of establishing a user face model, further establishes a user trunk model, and combines the user trunk model and the user face model to obtain a user appearance model;
further, the establishing process of the user skeleton model comprises the following steps:
extracting bone body type image data from the body type information, establishing bone feature vectors of all bone body type images for all bone body type images in the bone body type image data by adopting a method which is the same as a texture value of pixel points in a labeling head image, carrying out overlapping mapping on the bone feature vectors of all bone body type images to obtain a user bone model, and labeling bone identification points on the user bone model according to the bone feature vectors;
establishing a three-dimensional rectangular coordinate system, and mapping appearance identification points on the appearance model of the user and bone identification points on the bone model of the user in the three-dimensional rectangular coordinate system at the same time;
setting a mapping distance threshold value, further calculating a distance value between adjacent appearance identification points and bone identification points on a three-dimensional rectangular coordinate system, and comparing an absolute value of the distance value with the mapping distance threshold value;
connecting appearance identification points on the corresponding user appearance model and bone identification points on the user bone model according to the comparison result;
and sequentially connecting the appearance identification points on the user appearance model and the appearance identification points which are mapped with each other in the bone identification points on the user bone model with the bone identification points, so as to obtain the posture identification model.
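The mapping test between appearance identification points and bone identification points described above can be sketched as a simple pairwise distance check. The Euclidean metric and the function name are assumptions; the patent only specifies comparing the absolute distance value against the mapping distance threshold:

```python
import math

def match_identification_points(appearance_pts, bone_pts, threshold):
    """Connect each appearance identification point to every bone
    identification point whose distance in the three-dimensional
    rectangular coordinate system does not exceed the threshold."""
    pairs = []
    for i, a in enumerate(appearance_pts):
        for j, b in enumerate(bone_pts):
            if math.dist(a, b) <= threshold:  # mutually mapped: connect
                pairs.append((i, j))
    return pairs
```

Pairs whose distance exceeds the threshold are simply skipped, matching the "no operation" branch of the description.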
Further, the process for establishing the motion trail identification model comprises the following steps:
extracting walking gesture videos from the walking gesture video data, dividing each walking gesture video into a plurality of walking gesture images frame by frame, mapping the user body states in the walking gesture images to a body state recognition model, and simultaneously setting a plurality of walking recognition points, wherein the walking recognition points are respectively positioned on shoulders, footsteps, knees, palms and heads;
according to the distribution of each walking recognition point on each walking gesture image, a posture change track diagram corresponding to the walking gesture video is established, wherein the posture change track diagram consists of motion curves of each walking recognition point in the same time node;
and establishing a behavior decision tree, dividing a motion curve of a walking recognition point in each posture change track graph into a plurality of tree nodes, sequentially overlapping and mapping the tree nodes in the behavior decision tree by taking a time node as a class sequence, selecting the tree nodes with the largest overlapping number in each class sequence, and sequentially connecting the tree nodes with the largest overlapping number under each class sequence to obtain a motion track recognition model of a corresponding user.
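The behavior-decision-tree step above can be sketched as a per-time-node majority vote over the discretised motion curves. It assumes each posture change track graph has already been split into tree nodes aligned by time node; this alignment and the `Counter`-based selection are illustrative choices, not fixed by the patent:

```python
from collections import Counter

def trajectory_model(track_graphs):
    """track_graphs: one list of discretised tree nodes per walking
    gesture video, aligned by time node (one class sequence per index).
    For each class sequence, select the tree node with the largest
    overlap count, then connect the selections in time order."""
    model = []
    for class_sequence in zip(*track_graphs):
        node, _count = Counter(class_sequence).most_common(1)[0]
        model.append(node)
    return model
```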
Further, the establishing process of the palmprint recognition model comprises the following steps:
the biological recognition model module extracts the palm print information from the user information set, extracts the vein image data and fingerprint image data from the palm print information, sets palm contour feature points, extracts the corresponding palm contour from each vein image in the vein image data through the palm contour feature points, and maps the palm contours of all the vein images to one another to obtain the user palm contour;
meanwhile, the biological recognition model module splits the vein lines in all vein images in the vein image data into a plurality of line points, establishes a rectangular coordinate system, and distributes the line points of all vein lines in the rectangular coordinate system at the same time to obtain a line point cloud; it establishes the user palm print image according to the overall trend of the line point cloud, extracts the user fingerprint image from the fingerprint image data by the same method, and combines the user palm print image and the user fingerprint image into the user palm contour to obtain the palm print recognition model.
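A minimal sketch of the line-point-cloud step, under two simplifying assumptions the patent leaves open: corresponding line points across vein images can be paired by index, and the "overall trend" of the point cloud is taken as the per-index mean. Both assumptions, and the function name, are illustrative:

```python
def palm_print_trend(vein_lines):
    """vein_lines: one list of (x, y) line points per vein image, all in
    the same rectangular coordinate system. Group corresponding points
    across images and take the per-index mean as the overall trend."""
    cloud = list(zip(*vein_lines))  # one group of overlapping points per index
    return [(sum(p[0] for p in pts) / len(pts),
             sum(p[1] for p in pts) / len(pts)) for pts in cloud]
```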
Further, the process of controlling the intelligent lock by the intelligent lock control module comprises the following steps:
the intelligent lock control module is provided with a camera, an infrared sensing device and an intelligent lock, and sets an identification range according to the shooting distance of the camera. When a user enters the identification range, the intelligent lock control module acquires multiple groups of body state images, real-time motion videos and real-time palm images of the user through the camera and sends them to the identity verification module, and then judges whether to open the intelligent lock according to the verification result of the identity verification module.
Further, the process of generating the verification result by the identity verification module comprises the following steps:
after receiving the body state images from the intelligent lock control module, the identity verification module establishes a real-time body state image by the same method used to establish the user face virtual model, matches the real-time body state image against the body state recognition models in the biological recognition models of all users, generates a body state verification result according to the matching result, and sends it to the intelligent lock control module as a prompt;
when the identity verification module receives the real-time motion video from the intelligent lock control module, searching a corresponding motion trail identification model according to the identity number of the current user, establishing a real-time motion trail according to a method for establishing the motion trail identification model, mapping and matching the real-time motion trail with the motion trail identification model, generating a morphological verification according to a matching result, and sending the morphological verification to the intelligent lock control module through a prompt;
when the identity verification module receives the real-time palm image from the intelligent lock control module, a corresponding palm print recognition model is searched according to the identity number of the current user, a real-time palm print model is built according to a method for building the palm print recognition model, then the real-time palm print model and the palm print recognition model are mapped and matched, palm print verification is generated according to a matching result, and the palm print verification is sent to the intelligent lock control module through a prompt.
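The three verifications above (body state, motion track, palm print) can be combined into one pass/fail decision. The sketch below assumes the lock opens only when every stage matches; the stage names and the tuple interface are illustrative, not from the patent:

```python
def verify_user(checks):
    """checks: ordered (stage_name, passed) pairs for the body state,
    motion track, and palm print verifications. Stop at the first
    failure and report which stage failed; open only if all pass."""
    for name, passed in checks:
        if not passed:
            return (False, name)
    return (True, None)
```

Reporting the failing stage mirrors the per-stage prompts the identity verification module sends back to the intelligent lock control module.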
Compared with the prior art, the invention has the beneficial effects that:
the invention collects the body state information and palm print information of the user through the identity information management module and establishes a user identity information base; through the biological recognition model module, it establishes a body state recognition model and a motion track recognition model according to the body state information and a palm print recognition model according to the palm print information, thereby obtaining the biological recognition model of the corresponding user. By collecting multidimensional identity information of the user to establish the corresponding biological recognition model, the accuracy of verification and the fidelity of the data source used to establish the biological recognition model are ensured. The intelligent lock control module collects the real-time motion video and real-time palm image of the user and sends them to the identity verification module, and controls the on-off state of the intelligent lock according to the verification result sent back by the identity verification module; the identity verification module judges the identity of the user according to the body state image, the real-time motion video and the real-time palm image from the intelligent lock control module, so that the accuracy and safety of identification are improved.
Drawings
For a clearer description of the embodiments of the present application or of the solutions in the prior art, the drawings needed in the embodiments are briefly described below. It is obvious that the drawings in the following description are only some embodiments of the present invention, and that a person skilled in the art may obtain other drawings from these drawings without creative effort.
Fig. 1 is a schematic diagram of the present invention.
Detailed Description
In order to make the objects, technical solutions and advantages of the present invention more apparent, the technical solutions of the present invention will be described in detail below. It will be apparent that the described embodiments are only some, but not all, embodiments of the invention. All other embodiments obtained by a person of ordinary skill in the art based on the embodiments herein without creative effort fall within the scope of the invention as defined by the claims.
As shown in fig. 1, the intelligent lock control system based on data identification comprises a management and control center, wherein the management and control center is in communication connection with an identity information management module, a biological identification model module, an intelligent lock control module and an identity verification module;
the identity information management module is provided with a body state information acquisition unit, a palm print information acquisition unit and a database unit, and is used for acquiring body state information and palm print information of a user and establishing a user identity information base;
the user sends identity information to the identity information management module, wherein the identity information comprises the user name and living position, for example Building 9, Room 302. The identity information management module then adds an identity number, for example 23424, to the identity information, generates an identity information acquisition instruction and sends it to the body state information acquisition unit and the palm print information acquisition unit, whereupon the two units collect the body state information and palm print information of the user respectively;
the process for acquiring the physical state information of the user by the physical state information acquisition unit comprises the following steps:
the posture information acquisition unit is provided with a camera and an infrared sensing device, and when receiving an identity information acquisition instruction, the posture information acquisition unit sends a posture image acquisition prompt and a walking posture acquisition prompt to a user;
the user enters the corresponding place according to the body type image acquisition prompt; the body state information acquisition unit then first acquires appearance images of the user from multiple angles through the camera and integrates them into appearance body type image data, and simultaneously acquires skeleton images of the user from multiple angles through the infrared sensing device and integrates them into skeleton body type image data;
the posture information acquisition unit sets an acquisition starting point and an acquisition end point at the corresponding site; after the user receives the walking posture acquisition prompt, the user sets out from the acquisition starting point, the infrared sensing device records the walking process of the user, stops recording when the user reaches the acquisition end point, and generates the walking posture video data;
the body state information acquisition unit integrates the appearance body type image data, the skeleton body type image data and the walking gesture video data to obtain body state information, marks the identity number of the user, sends the body state information to the database unit, and simultaneously sends a palm print information acquisition prompt to the palm print information acquisition unit;
further, the process of collecting the palm print information of the user by the palm print information collecting unit comprises the following steps:
the palm print information acquisition unit is provided with a palm print information acquisition device, the palm print information acquisition device is provided with a camera, an infrared sensor and a fingerprint acquisition device, and when the palm print information acquisition unit receives an information acquisition instruction and a palm print information acquisition prompt in sequence, the palm print information acquisition prompt is sent to a user;
after the user receives the palm print information acquisition prompt, the palm is placed on the palm print information collector, and the infrared sensor projects a near-infrared light source onto the palm. Because haemoglobin in the blood of the palm absorbs near-infrared light, the subcutaneous veins of the palm illuminated by the near-infrared light source appear as darker lines; the camera shoots multiple groups of palm vein lines and generates vein line images, which are integrated into vein image data, while the fingerprint collector acquires multiple groups of palm fingerprint images, which are integrated into fingerprint image data;
the palm print information acquisition unit integrates the vein image data and the fingerprint image data to obtain palm print information, marks the palm print information with the identity number of the corresponding user, and then sends the palm print information to the database unit;
further, after receiving the body state information and the palm print information, the database unit obtains the name and the residence position of the user corresponding to the identity number from the identity information management module according to the identity number mark carried by the body state information and the palm print information;
the database unit combines the name, the identity number, the living position, the posture information and the palmprint information of the user to establish a user information set;
and establishing a user identity information base, and storing user information sets of all users in the user identity information base.
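A minimal sketch of the user information set and the user identity information base assembled above, with hypothetical field names; the body state and palm print payloads are abstracted as dictionaries, and lookup is keyed by identity number because the later verification steps search for models by it:

```python
from dataclasses import dataclass, field

@dataclass
class UserInfoSet:
    """One user information set: name, identity number, living position,
    body state information, and palm print information combined."""
    name: str
    identity_number: str
    living_position: str
    body_state_info: dict = field(default_factory=dict)
    palm_print_info: dict = field(default_factory=dict)

class IdentityInfoBase:
    """User identity information base storing the sets of all users."""
    def __init__(self):
        self._users = {}

    def add(self, u: UserInfoSet):
        self._users[u.identity_number] = u

    def get(self, identity_number: str) -> UserInfoSet:
        return self._users[identity_number]
```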
The biological recognition model module is used for establishing a biological recognition model of a corresponding user according to a user information set in the user identity information base, and the specific process comprises the following steps:
the biological recognition model module sends a request to the identity information management module for access to the user identity information base; after the identity information management module approves the request, the biological recognition model module extracts the user information set of each user from the user identity information base;
the biological recognition model module establishes a body state recognition model according to the body state information in the user information set, and establishes a palm print recognition model according to the palm print information;
the process of establishing the morphological recognition model by the biological recognition model module comprises the following steps:
the biological recognition model module extracts the body state information from the user information set, extracts the appearance body type image data from the body state information, and sets a number for each appearance body type image in the appearance body type image data, for example S1, S2, ..., Sn, where n is a natural number greater than 0;
dividing all appearance body type images in the appearance body type image data into head images and trunk images by head recognition points, and numbering the head images and the trunk images, for example, the head images are numbered t1, t2, ..., tn and the trunk images q1, q2, ..., qn;
The biological recognition model module further obtains the gray value of each pixel point in the head image, if the gray value of the pixel point in the head image is larger than or equal to the gray value of all the adjacent pixel points, the texture value 1 is set for the pixel point, otherwise, the texture value 0 is set for the pixel point;
all pixel points with texture value 1 in the head image are connected end to end in turn in the clockwise or anticlockwise direction, and two adjacent pixel points are set as a vector starting point and a vector ending point according to the connection direction, so that numi texture feature vectors are obtained, where numi denotes the total number of texture feature vectors of the head image numbered ti;
and so on, the texture feature vectors of all the head images are obtained; the totals of the texture feature vectors of the head images are fitted to a normal distribution, the total corresponding to the middle of the distribution is selected as the standard number Num, and it is then judged whether the total number of texture feature vectors of each head image falls within [Num-1, Num+1]; if so, the head image is retained, and if not, it is rejected;
further, a user face virtual model is established according to the reserved texture feature vector of the head image, and corresponding appearance identification points are set in the user face virtual model according to the pixel points with the texture value of 1 in the head image;
the virtual models of the user faces corresponding to the reserved head images are mutually overlapped and mapped into a user face model;
the biological recognition model module processes the trunk image by adopting the same step of establishing a user face model, further establishes a user trunk model, and combines the user trunk model and the user face model to obtain a user appearance model;
further, the biological recognition model module extracts bone body type image data from the body type information, establishes bone feature vectors of all bone body type images by adopting the same method as the texture value of pixel points in the labeling head image for all bone body type images in the bone body type image data, carries out overlapping mapping on the bone feature vectors of all bone body type images to obtain a user bone model, and labels bone identification points on the user bone model according to the bone feature vectors;
establishing a three-dimensional rectangular coordinate system, and mapping appearance identification points on the appearance model of the user and bone identification points on the bone model of the user in the three-dimensional rectangular coordinate system at the same time;
setting a mapping distance threshold value, further calculating a distance value between adjacent appearance identification points and bone identification points on a three-dimensional rectangular coordinate system, and comparing an absolute value of the distance value with the mapping distance threshold value;
if the absolute value of the distance value is greater than the mapping distance threshold value, it is judged that the corresponding appearance identification point and bone identification point are not mapped to each other, and no operation is performed;
if the absolute value of the distance value is smaller than or equal to the mapping distance threshold value, it is judged that the corresponding appearance identification point and bone identification point are mapped to each other, and the appearance identification point on the corresponding user appearance model is connected with the bone identification point on the user bone model;
and so on, each appearance identification point on the user appearance model is connected in turn with the bone identification point on the user bone model to which it is mutually mapped, so as to obtain the posture recognition model;
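The point-pairing step above can be sketched as follows; the point names, coordinates, and the concrete threshold value are illustrative, not fixed by the description:

```python
import math

def link_identification_points(appearance_pts, bone_pts, threshold):
    """Pair each appearance identification point with every bone
    identification point whose Euclidean distance in the shared 3-D
    rectangular coordinate system is <= the mapping distance threshold;
    pairs beyond the threshold are left unconnected."""
    links = []
    for a_name, a in appearance_pts.items():
        for b_name, b in bone_pts.items():
            if math.dist(a, b) <= threshold:  # mutually mapped
                links.append((a_name, b_name))
    return links
```

With a threshold of 1.0, a shoulder point at the origin links to a bone point 0.5 units away but not to one several units away.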
further, extracting the walking gesture videos from the walking gesture video data, dividing each walking gesture video frame by frame into a plurality of walking gesture images, mapping the user's body state in each walking gesture image onto the posture recognition model, and simultaneously setting a plurality of walking recognition points located respectively on the shoulders, feet, knees, palms and head;
according to the distribution of each walking recognition point on each walking gesture image, a posture change track diagram corresponding to the walking gesture video is established, wherein the posture change track diagram consists of the motion curve of each walking recognition point over the same sequence of time nodes;
establishing a behavior decision tree, dividing a motion curve of a walking recognition point in each posture change track graph into a plurality of tree nodes, sequentially overlapping and mapping the tree nodes in the behavior decision tree by taking a time node as a class sequence, selecting the tree nodes with the largest overlapping number in each class sequence, and sequentially connecting the tree nodes with the largest overlapping number in each class sequence to obtain a motion track recognition model of a corresponding user;
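The decision-tree selection above — at every time node, keep the tree node that overlaps most often across the recorded trajectories — can be sketched as follows. This is a simplification in which each tree node is a single quantised value rather than a curve segment:

```python
from collections import Counter

def build_trajectory_model(trajectories):
    """Build a motion track recognition model by selecting, at every
    time node (column), the value with the largest overlap count across
    the recorded posture-change trajectories, then chaining the winners
    in time order."""
    model = []
    for time_slice in zip(*trajectories):  # group node values by time node
        winner, _count = Counter(time_slice).most_common(1)[0]
        model.append(winner)
    return model
```

For three trajectories that agree at most time nodes, the model follows the majority value at each step.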
further, the biological recognition model module extracts palm print information from the user information set, and then firstly extracts vein image data and fingerprint image data from the palm print information;
setting palm contour feature points, extracting corresponding palm contours from vein images in vein image data through the palm contour feature points, and mutually mapping the palm contours of all the vein images to obtain palm contours of users;
meanwhile, the biological identification model module splits vein lines in all vein images in vein image data into a plurality of line points, establishes a rectangular coordinate system, and distributes the line points of all vein lines in the rectangular coordinate system at the same time so as to obtain line point clouds;
according to the overall trend of the line point cloud, a user palm print image is established, and meanwhile, the same method is adopted to extract the user fingerprint image from the fingerprint image in the fingerprint image data;
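The line-point-cloud construction above can be sketched as follows; the helper names are illustrative, and the "overall trend" is approximated here by averaging the y values observed at each x coordinate:

```python
def build_point_cloud(vein_line_sets):
    """Merge the split line points of every vein image into one point
    cloud in a shared rectangular coordinate system."""
    cloud = []
    for line_points in vein_line_sets:
        cloud.extend(line_points)
    return cloud

def overall_trend(cloud):
    """Reduce the cloud to one representative y per x coordinate — a
    stand-in for deriving the user palm print image from the cloud's
    overall trend."""
    by_x = {}
    for x, y in cloud:
        by_x.setdefault(x, []).append(y)
    return {x: sum(ys) / len(ys) for x, ys in sorted(by_x.items())}
```

Points from several vein images that disagree slightly at the same x are averaged into a single trend line.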
the user palm print image and the user fingerprint image are integrated into the user palm contour, so as to obtain the palm print recognition model;
and integrating the posture recognition model, the motion track recognition model and the palm print recognition model to obtain a biological recognition model of the corresponding user, setting the identity number of the user, and then sending the biological recognition model to the identity verification module.
The intelligent lock control module is used for collecting real-time motion video and real-time palm images of a user and sending the real-time motion video and the real-time palm images to the identity verification module, and controlling the on-off state of the intelligent lock according to the return verification result of the identity verification module, wherein the specific process comprises the following steps:
the intelligent lock control module is provided with a camera, an infrared sensing device and an intelligent lock, and the intelligent lock control module sets an identification range according to the shooting distance of the camera;
it should be noted that an intelligent lock is disposed at each floor or house gate, and the name information of the floor or house where the intelligent lock is located, for example Building 3, Room 201, is set in the intelligent lock; the intelligent lock control module is in communication connection with the user identity information base;
when a user enters the intelligent lock identification range, the intelligent lock control module acquires multiple groups of body state images of the user through the camera and sends them to the identity verification module;
according to the user identity information sent by the identity verification module, the intelligent lock control module searches the user identity information base for the user information set of the corresponding user, and further judges whether the user is a resident of the floor according to the user's information set and the information built into the intelligent lock;
if not, no operation is performed; if yes, acquiring a real-time motion video of the current user, and sending the real-time motion video to an identity verification module;
if the identity verification module sends a prompt that the body state verification is passed, the intelligent lock control module opens a corresponding intelligent lock;
if the identity verification module sends a prompt that the body state verification has not passed, the intelligent lock control module sends a verification abnormality prompt together with a palm print verification prompt to the current user, then collects a real-time palm image of the current user and sends it to the identity verification module;
if the identity verification module sends a palm print verification passing prompt, the intelligent lock control module opens a corresponding intelligent lock;
if the identity verification module sends a prompt that the palm print verification has not passed, the intelligent lock control module sends a refuse-entry prompt to the current user.
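The control flow above — residents pass via gait verification, with palm print collection as a fallback and refusal on a failed palm check — can be sketched as a small decision function. The return labels are illustrative:

```python
def smart_lock_decision(is_resident, body_ok, palm_ok):
    """Sketch of the intelligent lock control module's decision flow:
    non-residents trigger no operation; a passed body state (gait)
    verification opens the lock; on failure the palm print result is
    used as a fallback; failing both refuses entry."""
    if not is_resident:
        return "no_operation"
    if body_ok:
        return "open_lock"
    if palm_ok:  # fallback after the verification-abnormality prompt
        return "open_lock"
    return "refuse_entry"
```

A resident whose gait check fails but whose palm print matches is still admitted; a non-resident never triggers verification at all.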
The identity verification module is used for judging the identity of a user according to the body state image, the real-time motion video and the real-time palm image from the intelligent lock control module, and the specific process comprises the following steps:
after receiving the body state images from the intelligent lock control module, the identity verification module establishes a real-time body state image according to the body state images by adopting the same method for establishing the face virtual model of the user, and matches the real-time body state image with the body state recognition models in the biological recognition models of all users;
if no matching result exists, judging the corresponding user as an abnormal user, generating a prompt of no matching result and sending the prompt to the intelligent lock control module;
if the matching result is obtained, the identity number carried by the biological identification model of the matched body state identification model is sent to the intelligent lock control module;
when the identity verification module receives the real-time motion video from the intelligent lock control module, it searches for the corresponding motion trail identification model according to the identity number of the current user, establishes a real-time motion trail by the method used to establish the motion trail identification model, maps and matches the real-time motion trail against the motion trail identification model, and counts the number x of tree nodes matched between the two;
setting a matching degree threshold value, and calculating a matching degree p according to the total number X of tree nodes in the motion trail identification model, wherein p=x/X;
if the matching degree p is greater than or equal to the matching degree threshold, a prompt that the body state verification has passed is generated and sent to the intelligent lock control module;
if the matching degree p is smaller than the matching degree threshold, a prompt that the body state verification has not passed is generated and sent to the intelligent lock control module;
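The matching-degree computation p = x/X can be written out directly; the 0.8 default threshold is an assumed value, as the description does not fix one:

```python
def body_state_verification(matched_nodes, total_nodes, threshold=0.8):
    """Compute the matching degree p = x / X between the real-time motion
    trail and the motion trail identification model, and compare it with
    the matching degree threshold to decide the verification result."""
    p = matched_nodes / total_nodes
    return ("pass", p) if p >= threshold else ("fail", p)
```

Matching 9 of 10 tree nodes passes at the assumed threshold; matching 5 of 10 fails.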
further, when the identity verification module receives the real-time palm image from the intelligent lock control module, searching a corresponding palm print recognition model according to the identity number of the current user, and establishing a real-time palm print model according to a method for establishing the palm print recognition model, so that the real-time palm print model and the palm print recognition model are mapped and matched;
if the degree of difference between the two is between 0 and 10 percent, a prompt that the palm print verification has passed is generated and sent to the intelligent lock control module;
if the degree of difference between the two is not between 0 and 10 percent, a prompt that the palm print verification has not passed is generated and sent to the intelligent lock control module.
The above embodiments are only for illustrating the technical method of the present invention and not for limiting the same, and it should be understood by those skilled in the art that the technical method of the present invention may be modified or substituted without departing from the spirit and scope of the technical method of the present invention.

Claims (9)

1. The intelligent lock control system based on data identification comprises a management and control center, and is characterized in that the management and control center is in communication connection with an identity information management module, a biological identification model module, an intelligent lock control module and an identity verification module;
the identity information management module is provided with a body state information acquisition unit, a palm print information acquisition unit and a database unit, and is used for acquiring body state information and palm print information of a user and establishing a user identity information base;
the biological recognition model module is used for establishing a body state recognition model, a motion track recognition model and a palm print recognition model of a corresponding user according to a user information set in the user identity information base so as to obtain a biological recognition model;
the intelligent lock control module is used for acquiring real-time motion video and real-time palm images of a user, sending the real-time motion video and the real-time palm images to the identity verification module, and controlling the on-off state of the intelligent lock according to the return verification result of the identity verification module;
the identity verification module is used for judging the identity of the user according to the body state image, the real-time motion video and the real-time palm image from the intelligent lock control module;
the acquisition process of the posture information comprises the following steps:
the posture information acquisition unit is provided with a camera and an infrared sensing device, and sends a posture image acquisition prompt and a walking posture acquisition prompt to a user;
the user enters a corresponding place according to the body type image acquisition prompt, and then the body type information acquisition unit acquires appearance images of the user from multiple angles through the camera and generates appearance body type image data in an integrated mode, and simultaneously acquires skeleton images of the user from multiple angles through the infrared sensing device and generates skeleton body type image data in an integrated mode;
setting an acquisition starting point and an acquisition end point, and recording and acquiring walking gesture video data by an infrared sensing device from the acquisition starting point after a user receives a walking gesture acquisition prompt;
the posture information acquisition unit integrates the appearance posture image data, the skeleton posture image data and the walking posture video data to obtain posture information, and marks the identity number of the user.
2. The intelligent lock control system based on data identification of claim 1, wherein the collection process of palmprint information comprises:
the palm print information acquisition unit is provided with a palm print information collector, and the palm print information collector is provided with a camera, an infrared sensor and a fingerprint collector; the user places a palm at the palm print information collector, whereupon the infrared sensor projects a near-infrared light source onto the palm, the camera shoots multiple groups of vein lines of the palm to generate vein line images, which are collected to generate the vein image data, while the fingerprint collector acquires multiple groups of fingerprint images of the palm, which are collected to generate the fingerprint image data; the vein image data and the fingerprint image data are integrated to obtain the palm print information, which is marked with the identity number of the corresponding user.
3. The intelligent lock control system based on data recognition according to claim 2, wherein the biometric model is composed of a posture recognition model, a motion track recognition model and a palmprint recognition model, wherein the posture recognition model is composed of a user appearance model and a user skeleton model.
4. A data identification based smart lock control system as claimed in claim 3 wherein the user appearance model creation process includes:
the biological recognition model module extracts appearance body type image data from the body type information in the user information set, and sets a number for an appearance body type image in the appearance body type image data;
dividing the appearance body type image into a head image and a trunk image, numbering the head image and the trunk image, obtaining the gray value of each pixel point in the head image, setting a texture value of 1 for the pixel point if the gray value of the pixel point in the head image is greater than or equal to the gray value of all the adjacent pixel points, otherwise setting a texture value of 0 for the pixel point;
all pixel points with texture value 1 in the head image are sequentially connected end to end in a clockwise or anticlockwise manner, and two adjacent pixel points are set as a vector starting point and a vector ending point according to the connection direction, so that texture feature vectors are obtained;
obtaining the texture feature vectors of all head images, fitting the total numbers of texture feature vectors of the head images to a normal distribution, selecting the total number corresponding to the middle part of the distribution as a standard number Num, and further judging whether the total number of texture feature vectors of each head image falls within [Num-1, Num+1]; if so, retaining the head image, and if not, rejecting it;
establishing a user face virtual model according to the reserved texture feature vectors of the head images, setting corresponding appearance identification points in the user face virtual model according to the pixel points of the texture values 1 in the head images, and mutually overlapping and mapping the user face virtual models corresponding to the reserved head images into a user face model;
the biological recognition model module processes the trunk image by adopting the same process of establishing the user face model, further establishes the user trunk model, and combines the user trunk model and the user face model to obtain the user appearance model.
5. The intelligent lock control system based on data identification of claim 4, wherein the process of creating the user bone model comprises:
extracting bone body type image data from the body type information, establishing bone feature vectors of all bone body type images according to all bone body type images in the bone body type image data, performing overlapping mapping on the bone feature vectors of all bone body type images to obtain a user bone model, and marking bone identification points on the user bone model according to the bone feature vectors;
establishing a three-dimensional rectangular coordinate system, and mapping appearance identification points on the appearance model of the user and bone identification points on the bone model of the user in the three-dimensional rectangular coordinate system at the same time;
setting a mapping distance threshold value, further calculating a distance value between adjacent appearance identification points and bone identification points on a three-dimensional rectangular coordinate system, and comparing an absolute value of the distance value with the mapping distance threshold value;
connecting appearance identification points on the corresponding user appearance model and bone identification points on the user bone model according to the comparison result;
and sequentially connecting the appearance identification points on the user appearance model and the appearance identification points which are mapped with each other in the bone identification points on the user bone model with the bone identification points, so as to obtain the posture identification model.
6. The intelligent lock control system based on data identification according to claim 3, wherein the process of establishing the motion trail identification model comprises the following steps:
extracting the walking gesture videos from the walking gesture video data, dividing each walking gesture video frame by frame into a plurality of walking gesture images, mapping the user's body state in each walking gesture image onto the posture recognition model, and simultaneously setting a plurality of walking recognition points located respectively on the shoulders, feet, knees, palms and head;
according to the distribution of each walking recognition point on each walking gesture image, a posture change track diagram corresponding to the walking gesture video is established, wherein the posture change track diagram consists of motion curves of each walking recognition point in the same time node;
and establishing a behavior decision tree, dividing a motion curve of a walking recognition point in each posture change track graph into a plurality of tree nodes, sequentially overlapping and mapping the tree nodes in the behavior decision tree by taking a time node as a class sequence, selecting the tree nodes with the largest overlapping number in each class sequence, and sequentially connecting the tree nodes with the largest overlapping number under each class sequence to obtain a motion track recognition model of a corresponding user.
7. A data recognition-based intelligent lock control system according to claim 3, wherein the process of creating the palmprint recognition model comprises:
extracting vein image data and fingerprint image data from palm print information in a user information set, setting palm contour feature points, extracting corresponding palm contours from vein images in the vein image data through the palm contour feature points, and mutually mapping palm contours of all the vein images to obtain a user palm contour;
meanwhile, the biological recognition model module splits vein lines in all vein images in vein image data into a plurality of line points, establishes a rectangular coordinate system, distributes the line points of all vein lines in the rectangular coordinate system at the same time, further obtains line point clouds, establishes a user palm print image according to the overall trend of the line point clouds, extracts the user fingerprint image from the fingerprint image in the fingerprint image data, and combines the user palm print image and the user fingerprint image into a user palm outline to obtain the palm print recognition model.
8. A data identification based smart lock control system as claimed in claim 3 wherein the smart lock control module controls the smart lock process comprising:
the intelligent lock control module is provided with a camera, an infrared sensing device and an intelligent lock, and the intelligent lock control module sets an identification range according to the shooting distance of the camera, when a user enters the intelligent lock identification range, the intelligent lock control module acquires body state images, real-time motion videos and real-time palm images of multiple groups of users through the camera and sends the body state images, the real-time motion videos and the real-time palm images to the identity verification module, and then whether the intelligent lock is opened or not is judged according to the verification result of the identity verification module.
9. The intelligent lock control system based on data identification of claim 8, wherein the process of generating the verification result by the identity verification module comprises:
after receiving the body state image from the intelligent lock control module, the identity verification module establishes a real-time body state image from it, matches the real-time body state image with the body state recognition models in the biometric models of all users, generates a verification prompt according to the matching result and sends it to the intelligent lock control module;
when the identity verification module receives the real-time motion video from the intelligent lock control module, it searches for the corresponding motion trail identification model according to the identity number of the current user, establishes a real-time motion trail, maps and matches the real-time motion trail against the motion trail identification model, generates a body state verification prompt according to the matching result and sends it to the intelligent lock control module;
when the identity verification module receives the real-time palm image from the intelligent lock control module, it searches for the corresponding palm print recognition model according to the identity number of the current user, establishes a real-time palm print model, maps and matches the real-time palm print model against the palm print recognition model, generates a palm print verification prompt according to the matching result and sends it to the intelligent lock control module.
CN202311145216.2A 2023-09-06 2023-09-06 Intelligent lock control system based on data identification Active CN117037340B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202311145216.2A CN117037340B (en) 2023-09-06 2023-09-06 Intelligent lock control system based on data identification


Publications (2)

Publication Number Publication Date
CN117037340A CN117037340A (en) 2023-11-10
CN117037340B true CN117037340B (en) 2024-04-12

Family

ID=88626404


Country Status (1)

Country Link
CN (1) CN117037340B (en)

Citations (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN1591269A (en) * 2003-08-27 2005-03-09 索尼株式会社 Electronic device and checking method
CN105374098A (en) * 2015-12-14 2016-03-02 天津光电通信技术有限公司 Method used for unlocking using human body double-characteristic identification module
CN105787420A (en) * 2014-12-24 2016-07-20 北京三星通信技术研究有限公司 Method and device for biometric authentication, and biometric authentication system
CN205964974U (en) * 2016-07-23 2017-02-22 郑州大学 Emulation antagonism recreation system based on kinect body is felt
CN107818304A (en) * 2017-10-27 2018-03-20 佛山科学技术学院 A kind of intelligent safety and defence system personal identification method
CN109035120A (en) * 2018-10-30 2018-12-18 深圳市海能通信股份有限公司 A kind of big data security-protection management system
CN110288731A (en) * 2019-06-13 2019-09-27 珠海格力电器股份有限公司 A kind of unlocking method, device and electronic lock
WO2021208986A1 (en) * 2020-04-15 2021-10-21 深圳Tcl新技术有限公司 Door opening/closing state monitoring method, device and apparatus, and computer readable storage medium
KR20220153701A (en) * 2021-05-11 2022-11-21 주식회사 이랜텍 System that provide posture information
CN218912631U (en) * 2022-10-14 2023-04-25 山东皓维智能科技有限公司 Intelligent business library for biological identification and authentication
CN116543430A (en) * 2023-03-24 2023-08-04 国网河北省电力有限公司超高压分公司 Device and method for actively defending relay protection pressing plate

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US9916746B2 (en) * 2013-03-15 2018-03-13 August Home, Inc. Security system coupled to a door lock system




Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant