CN117173813A - Door lock motor assembly control method, intelligent door lock and computer readable medium

Info

Publication number: CN117173813A
Application number: CN202311048979.5A
Authority: CN (China)
Other languages: Chinese (zh)
Inventor: 孙福尧
Original and current assignee: Yunding Network Technology Beijing Co Ltd
Legal status: Pending


Abstract

Embodiments of the present disclosure disclose a door lock motor assembly control method, an intelligent door lock, and a computer readable medium. One embodiment of the method comprises the following steps: shooting a target environment to obtain a target environment image; performing palm recognition on the target environment image to obtain a palm recognition result; in response to the palm recognition result indicating that the target environment image meets a preset palm condition, collecting a palm vein image; matching the palm vein image; in response to the palm recognition result indicating that the target environment image does not meet the preset palm condition, or the palm vein matching result indicating that the matching failed, performing face recognition on the target environment image; cropping the face region out of the target environment image; matching the face image; and in response to the palm vein matching result or the face matching result indicating a successful match, controlling the door lock motor assembly to execute an unlocking operation. Embodiments of the present disclosure can increase the speed of identification and verification by the intelligent door lock, thereby improving the user experience.

Description

Door lock motor assembly control method, intelligent door lock and computer readable medium
Technical Field
Embodiments of the present disclosure relate to the field of computer technology, and in particular, to a door lock motor assembly control method, an intelligent door lock, and a computer readable medium.
Background
With the rapid development of security technologies, more and more households have begun to use intelligent door locks. Currently, related intelligent door locks mostly support at least one of the following identification modes: face recognition, fingerprint recognition, palm vein recognition, and the like. When an intelligent door lock supports at least two recognition modes, which mode is used is generally determined by the distance between the recognition object and the intelligent door lock. For example, when the distance between the recognition object and the intelligent door lock is detected to be less than or equal to 20 cm, palm vein recognition is used, and when the distance is detected to be greater than 20 cm and less than 50 cm, face recognition is used.
However, the inventors found that when users are authenticated through the above identification modes, the following technical problems often arise:
First, when an intelligent door lock supports at least two recognition modes, each recognition mode has its own distance restriction, and only one recognition mode can be used within a given set distance. The user therefore has to move several times to reach the set distance before identity verification can proceed, which makes identification and verification time-consuming; repeated verification failures can also cause the recognition modes of the intelligent door lock to be locked out so that identification cannot continue, inconveniencing the user and leading to a poor user experience.
Second, in locations where the light is dim or uneven, shadows appear in the collected face images, so the clarity of the collected face images is low and the accuracy of face recognition is low.
Third, an intelligent door lock using the above identification modes cannot determine, from the historical identification times, whether a person who fails identification is an abnormal or even dangerous person, and it does not warn the user when an abnormal or dangerous person attempts to unlock the door, so the user's safety is low.
The above information disclosed in this background section is only for enhancement of understanding of the background of the inventive concept and, therefore, may contain information that does not constitute prior art already known to a person of ordinary skill in the art in this country.
Disclosure of Invention
This part of the disclosure is intended to introduce concepts in a simplified form that are further described below in the detailed description. It is not intended to identify key features or essential features of the claimed subject matter, nor is it intended to be used to limit the scope of the claimed subject matter.
Some embodiments of the present disclosure propose a door lock motor assembly control method, an intelligent door lock, and a computer readable medium to solve one or more of the technical problems mentioned in the background section above.
In a first aspect, some embodiments of the present disclosure provide a door lock motor assembly control method applied to an intelligent door lock, the method comprising: in response to detecting, through an associated distance detection device, that an object exists within a preset recognition distance, shooting a target environment through an associated camera device to obtain a target environment image; performing palm recognition on the target environment image to obtain a palm recognition result; in response to determining that the palm recognition result indicates that the target environment image meets a preset palm condition, acquiring a palm vein image through an associated infrared camera device, wherein the preset palm condition is that the target environment image includes one and only one image representing a complete palm area; performing matching processing on the palm vein image to obtain a palm vein matching result; in response to determining that the palm recognition result indicates that the target environment image does not meet the preset palm condition, or that the palm vein matching result indicates that the matching failed, performing face recognition on the target environment image to obtain a face recognition result; in response to determining that the face recognition result indicates that the target environment image includes a face region, performing face region cropping processing on the target environment image to obtain a face image; performing matching processing on the face image to obtain a face matching result; and in response to determining that the palm vein matching result or the face matching result indicates a successful match, controlling a door lock motor assembly included in the intelligent door lock to execute an unlocking operation.
Optionally, in response to determining that the face matching result indicates that the matching failed, a first illumination intensity and a second illumination intensity corresponding to the target environment are respectively acquired through an associated first illuminance sensor and an associated second illuminance sensor, where the first illuminance sensor and the second illuminance sensor are respectively located at the two sides of the intelligent door lock; whether the first illumination intensity and the second illumination intensity meet a preset illumination intensity condition is determined; in response to determining that the first illumination intensity and the second illumination intensity do not meet the preset illumination intensity condition, a first target illuminance of a first light source and a second target illuminance of a second light source are determined according to the first illumination intensity and the second illumination intensity, where the first light source and the second light source are respectively located at the two sides of the intelligent door lock; the first light source and the second light source are controlled to perform a light emitting operation according to the first target illuminance and the second target illuminance; an illuminated face image under the first light source and the second light source is collected through the camera device; feature extraction processing is performed on the illuminated face image to obtain illuminated face feature information; matching processing is performed on the illuminated face feature information according to a pre-stored face feature information set to obtain a face feature matching result; and in response to determining that the face feature matching result indicates that the face matching succeeded, the door lock motor assembly included in the intelligent door lock is controlled to execute the unlocking operation.
Optionally, in response to determining that the face matching result indicates that the matching failed, feature extraction processing is performed on the face image to obtain face image feature information; matching processing is performed on the face image feature information according to a pre-stored intra-week abnormal face feature information set to obtain an intra-week abnormal matching result; in response to determining that the intra-week abnormal matching result indicates that the intra-week abnormal face matching succeeded, pre-stored intra-month abnormal face feature information corresponding to the face image feature information is screened out from a pre-stored intra-month abnormal face feature information set as target intra-month abnormal face feature information; the historical anomaly detection times corresponding to the target intra-month abnormal face feature information are determined; in response to determining that the historical anomaly detection times are greater than or equal to a preset historical anomaly detection times threshold, location information of the intelligent door lock is determined; a dangerous person face feature information set corresponding to the location information is acquired; for each piece of dangerous person face feature information in the dangerous person face feature information set, the face feature similarity between that dangerous person face feature information and the face image feature information is determined; in response to determining that a face feature similarity greater than or equal to a preset face feature similarity threshold exists among the determined face feature similarities, dangerous person alarm information is generated according to the face image; in response to determining that no face feature similarity greater than or equal to the preset face feature similarity threshold exists among the determined face feature similarities, abnormal person alarm information is generated according to the face image; and the dangerous person alarm information or the abnormal person alarm information is sent to an associated terminal device.
In a second aspect, some embodiments of the present disclosure provide an intelligent door lock, comprising: one or more processors; a distance detection device configured to detect whether an object exists within a preset recognition distance; a camera device configured to acquire an image; an infrared camera device configured to acquire a palm vein image; a door lock motor assembly configured to perform an unlocking operation; and a storage device having one or more programs stored thereon which, when executed by the one or more processors, cause the one or more processors to implement the method described in any of the implementations of the first aspect above.
In a third aspect, some embodiments of the present disclosure provide a computer readable medium having a computer program stored thereon, wherein the program, when executed by a processor, implements the method described in any of the implementations of the first aspect above.
The above embodiments of the present disclosure have the following advantageous effects: the door lock motor assembly control method can increase the speed of identification and verification by the intelligent door lock, thereby improving the user experience. Specifically, the reason the user experience is poor is the following: when an intelligent door lock supports at least two identification modes, each identification mode has its own distance restriction, only one identification mode can be used within a given set distance, and the user has to move several times to reach the set distance before identity verification can proceed; this makes identification and verification time-consuming, and repeated verification failures can cause the identification modes of the intelligent door lock to be locked out so that identification cannot continue, inconveniencing the user. Based on this, the door lock motor assembly control method of some embodiments of the present disclosure first, in response to detecting through an associated distance detection device that an object exists within a preset recognition distance, shoots the target environment through an associated camera device to obtain a target environment image. Thus, as soon as an object exists within the preset recognition distance, the target environment image can be acquired for identification and verification. Second, palm recognition is performed on the target environment image to obtain a palm recognition result. Thus, the palm state of the user within the preset recognition distance can be recognized. Then, in response to determining that the palm recognition result indicates that the target environment image meets the preset palm condition, a palm vein image is acquired through an associated infrared camera device. The preset palm condition is that the target environment image includes one and only one image representing a complete palm area. Thus, upon determining that there is one and only one complete palm within the target environment, a palm vein image can be acquired for palm vein identification. Next, matching processing is performed on the palm vein image to obtain a palm vein matching result. Thus, the palm vein image can be verified to determine whether verification succeeds. Then, in response to determining that the palm recognition result indicates that the target environment image does not meet the preset palm condition, or that the palm vein matching result indicates that the matching failed, face recognition is performed on the target environment image to obtain a face recognition result. Thus, it can be determined whether a face is present in the target environment when there is not one and only one complete palm within the target environment, or when palm vein identification is unsuccessful. Then, in response to determining that the face recognition result indicates that the target environment image includes a face region, face region cropping processing is performed on the target environment image to obtain a face image. Thus, when a face exists in the target environment, the face image can be obtained by cropping the target environment image, which reduces the size of the image used for the subsequent face matching, reduces processing steps, and improves efficiency. The face image is then matched to obtain a face matching result, so that it can be determined whether face verification succeeds.
Finally, in response to determining that the palm vein matching result or the face matching result indicates a successful match, the door lock motor assembly included in the intelligent door lock is controlled to execute the unlocking operation. Thus, unlocking can be performed when either palm vein recognition or face recognition succeeds. Moreover, because the image only needs to be acquired within the single preset recognition distance and the recognition mode is determined from the image itself, the user does not need to move and adjust to a mode-specific distance range. This removes the user's repeated movement and adjustment, reduces the time consumed by identification and verification, is convenient to use, and improves the user experience.
Drawings
The above and other features, advantages, and aspects of embodiments of the present disclosure will become more apparent by reference to the following detailed description when taken in conjunction with the accompanying drawings. The same or similar reference numbers will be used throughout the drawings to refer to the same or like elements. It should be understood that the figures are schematic and that elements and components are not necessarily drawn to scale.
FIG. 1 is a flow chart of some embodiments of a door lock motor assembly control method according to the present disclosure;
FIG. 2 is a schematic structural diagram of the electronic components included in an intelligent door lock suitable for implementing some embodiments of the present disclosure.
Detailed Description
Embodiments of the present disclosure will be described in more detail below with reference to the accompanying drawings. While certain embodiments of the present disclosure are shown in the drawings, it should be understood that the present disclosure may be embodied in various forms and should not be construed as limited to the embodiments set forth herein. Rather, these embodiments are provided so that this disclosure will be thorough and complete. It should be understood that the drawings and embodiments of the present disclosure are for illustration purposes only and are not intended to limit the scope of the present disclosure.
It should be noted that, for convenience of description, only the portions related to the present invention are shown in the drawings. Embodiments of the present disclosure and features of embodiments may be combined with each other without conflict.
It should be noted that the terms "first," "second," and the like in this disclosure are merely used to distinguish between different devices, modules, or units and are not used to define an order or interdependence of functions performed by the devices, modules, or units.
It should be noted that references to "one" and "a plurality" in this disclosure are illustrative rather than limiting, and those of ordinary skill in the art will appreciate that "one" is to be understood as "one or more" unless the context clearly indicates otherwise.
The names of messages or information interacted between the various devices in the embodiments of the present disclosure are for illustrative purposes only and are not intended to limit the scope of such messages or information.
Operations involved in the present disclosure such as the collection, storage, and use of a user's personal information (e.g., palm vein images and face images) include, before the corresponding operations are performed, carrying out a personal information security impact assessment, fulfilling the obligation to inform the personal information subject, and obtaining the authorized consent of the personal information subject in advance.
The present disclosure will be described in detail below with reference to the accompanying drawings in conjunction with embodiments.
Referring to FIG. 1, a flow 100 of some embodiments of a door lock motor assembly control method according to the present disclosure is shown. The door lock motor assembly control method comprises the following steps:
Step 101, in response to detecting that an object exists within a preset recognition distance through the associated distance detection device, shooting the target environment through the associated camera device to obtain a target environment image.
In some embodiments, in response to detecting the presence of an object within a preset recognition distance through an associated distance detection device, an execution body of the door lock motor assembly control method (e.g., an intelligent door lock) may shoot a target environment through an associated camera device to obtain a target environment image. The distance detection device may be a device for detecting whether an object exists within the preset recognition distance; for example, it may be a distance sensor. The preset recognition distance may be a preset distance; for example, it may be 20 cm. The camera device may be a device in communication connection with the intelligent door lock; for example, it may be a camera. The communication connection between the camera device and the intelligent door lock may be wired or wireless. It should be noted that the wireless connection may include, but is not limited to, 3G/4G, Wi-Fi, Bluetooth, WiMAX, ZigBee, UWB (ultra wideband), and other now known or later developed wireless connection means. The target environment may be the photographable area of the camera device.
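As an illustrative sketch only (not part of the claimed method), the trigger logic of step 101 could be expressed as follows in Python; the helpers read_distance_cm and capture_image are hypothetical stand-ins for the associated distance detection device and camera device:

```python
# Minimal sketch of step 101, assuming hypothetical device helpers.
PRESET_RECOGNITION_DISTANCE_CM = 20  # example value given in the text

def maybe_capture_target_environment(read_distance_cm, capture_image):
    """Return a target environment image when an object is within range, else None.

    read_distance_cm: callable wrapping the distance detection device.
    capture_image: callable wrapping the associated camera device.
    """
    if read_distance_cm() <= PRESET_RECOGNITION_DISTANCE_CM:
        return capture_image()
    return None
```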
Step 102, carrying out palm recognition on the target environment image to obtain a palm recognition result.
In some embodiments, the execution body may perform palm recognition on the target environment image to obtain a palm recognition result. The palm recognition result can represent whether the target environment image meets a preset palm condition. The preset palm condition may be that the target environment image includes one and only one image representing a complete palm area.
In some optional implementations of some embodiments, the executing body may perform palm recognition on the target environment image to obtain a palm recognition result through the following steps:
and firstly, performing skin color segmentation on the target environment image to obtain a skin portion image. The skin portion image may be an image corresponding to a skin portion of a human body in the target environment image. In practice, the executing body can segment the skin color of the target environment image through a YCbCr color space model to obtain a skin portion image. Specifically, the executing body may divide the region with the corresponding color of the target environment image as the preset skin color to obtain the skin portion image.
And secondly, extracting contour features of the skin part image to obtain contour feature information. The contour feature information may be information representing contour features of the skin portion image. For example, the contour feature information may be a contour feature vector. In practice, the executing body can extract contour features of the skin portion image through an edge detection algorithm to obtain contour feature information. The edge detection algorithm may include, but is not limited to, at least one of: sobel operator, canny operator, laplacian operator.
And thirdly, inputting the outline characteristic information into a pre-trained palm recognition result generation model to obtain a palm recognition result. The pre-trained palm recognition result generation model may be a machine learning model with outline feature information as input and a palm recognition result as output. For example, the pre-trained palm recognition result generation model may be a support vector machine.
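A minimal sketch of this palm recognition pipeline (YCbCr skin segmentation, Canny edge features, and an SVM classifier, all named above) might look like the following; the skin color thresholds, the 64x64 feature pooling, and the use of scikit-learn are illustrative assumptions rather than values from the patent:

```python
import cv2
import numpy as np
from sklearn.svm import SVC  # stands in for the palm recognition result generation model

def skin_segmentation(bgr_image):
    """Skin color segmentation in YCbCr space (threshold values are illustrative)."""
    ycrcb = cv2.cvtColor(bgr_image, cv2.COLOR_BGR2YCrCb)
    # Commonly cited Cr/Cb skin ranges; the patent only says "preset skin color".
    mask = cv2.inRange(ycrcb, (0, 133, 77), (255, 173, 127))
    return cv2.bitwise_and(bgr_image, bgr_image, mask=mask)

def contour_feature_vector(skin_image, size=(64, 64)):
    """Edge-based contour features: Canny edges pooled to a fixed-length vector."""
    gray = cv2.cvtColor(skin_image, cv2.COLOR_BGR2GRAY)
    edges = cv2.Canny(gray, 50, 150)
    return cv2.resize(edges, size).flatten().astype(np.float32) / 255.0

def palm_recognition_result(bgr_image, svm: SVC) -> bool:
    """True if the image contains one and only one complete palm, per the SVM."""
    features = contour_feature_vector(skin_segmentation(bgr_image))
    return bool(svm.predict(features.reshape(1, -1))[0])
```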
Step 103, in response to determining that the palm recognition result indicates that the target environment image meets the preset palm condition, acquiring a palm vein image through an associated infrared camera device.
In some embodiments, the execution body may acquire the palm vein image through an associated infrared camera device in response to determining that the palm recognition result indicates that the target environment image meets the preset palm condition. The preset palm condition may be that the target environment image includes one and only one image representing a complete palm area. The associated infrared camera device may be an infrared camera communicatively connected to the execution body.
Step 104, carrying out matching processing on the palm vein image to obtain a palm vein matching result.
In some embodiments, the execution body may perform matching processing on the palm vein image to obtain a palm vein matching result. The palm vein matching result can represent palm vein matching success or palm vein matching failure.
In some optional implementations of some embodiments, the executing body may perform a matching process on the palm vein image to obtain a palm vein matching result by:
and firstly, extracting an interested region from the palm vein image to obtain a palm vein effective region image. The palm vein effective area image may be an image portion with more palm veins. For example, the palm vein effective area image may be an image corresponding to a palm center portion.
And secondly, performing image enhancement processing on the palm vein effective area image to obtain an image enhanced palm vein effective area image. In practice, the execution subject may perform image enhancement processing on the palm vein effective area image through an image enhancement algorithm, so as to obtain an image enhanced palm vein effective area image. For example, the image enhancement algorithm described above may be histogram equalization.
And thirdly, extracting image characteristics of the palm vein effective area image after the image enhancement to obtain palm vein characteristic information. In practice, the execution subject may perform image feature extraction on the image of the palm vein effective area after image enhancement through an image feature extraction algorithm, so as to obtain palm vein feature information. Wherein, the image feature extraction algorithm can include, but is not limited to, at least one of the following: LBP (Local Binary Pattern ), wavelet transform and SIFT (Scale-invariant features transform, scale-invariant feature transform).
And step four, carrying out matching processing on the palm vein characteristic information according to a pre-stored palm vein characteristic information set to obtain a palm vein matching result. The pre-stored palm vein feature information set may be feature information of each palm vein image stored in advance in a storage device included in the execution body. In practice, first, the executing body may determine a similarity between each of the pre-stored palm vein feature information and the palm vein feature information in the pre-stored palm vein feature information set. As an example, the execution subject may determine the similarity of each pre-stored palm vein feature information in the pre-stored palm vein feature information set and the palm vein feature information through a cosine similarity algorithm. And then, in response to the fact that the similarity greater than or equal to a preset palm vein similarity threshold exists in the determined similarities, determining preset palm vein matching information representing successful matching as a palm vein matching result.
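A hedged sketch of the third and fourth steps (LBP features plus cosine similarity matching, both named above) is shown below; the LBP parameters and the 0.95 threshold are illustrative assumptions, and scikit-image supplies the LBP implementation:

```python
import numpy as np
from skimage.feature import local_binary_pattern  # one of the algorithms the text names

def palm_vein_features(gray_roi, points=8, radius=1):
    """LBP histogram as the palm vein feature vector (parameter choices illustrative)."""
    lbp = local_binary_pattern(gray_roi, points, radius, method="uniform")
    hist, _ = np.histogram(lbp, bins=points + 2, range=(0, points + 2))
    return hist.astype(np.float64) / max(hist.sum(), 1)

def cosine_similarity(a, b):
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))

def match_palm_vein(features, stored_feature_set, threshold=0.95):
    """Matching succeeds if any stored template is similar enough (threshold assumed)."""
    return any(cosine_similarity(features, stored) >= threshold
               for stored in stored_feature_set)
```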
In some optional implementations of some embodiments, the executing body may extract the region of interest from the palm vein image by:
and firstly, carrying out background segmentation processing on the palm vein image to obtain a palm vein segmentation image. In practice, the execution subject may perform skin portion processing on the palmar vein image through the YCbCr color space model, to obtain a palmar vein skin portion image as a palmar vein segmentation image.
And secondly, carrying out smoothing filtering treatment on the palm vein segmentation image to obtain a palm vein filtering image. In practice, the execution subject may perform smoothing filtering processing on the palmar vein segmentation image through an average filtering algorithm, so as to obtain a palmar vein filtering image.
And thirdly, carrying out gray level binarization processing on the palm vein filtered image to obtain a palm vein gray level image. In practice, the executing body can perform gray level binarization processing on the palm vein filtered image through a gray level binarization algorithm to obtain a palm vein gray level image.
And fourthly, performing rotation processing on the palm vein gray level image to obtain a rotated palm vein gray level image. The rotated palm vein grayscale image may be a forward palm vein grayscale image.
And fifthly, cutting the rotated palm vein gray level image to obtain a palm vein effective area image.
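A minimal sketch of the smoothing and binarization steps, assuming OpenCV; the kernel size and the use of Otsu's threshold are illustrative assumptions, since the text only names mean filtering and gray level binarization:

```python
import cv2

def preprocess_palm_vein(bgr_palm_vein_image, kernel=5):
    """Mean filtering, then grayscale binarization (steps two and three above)."""
    filtered = cv2.blur(bgr_palm_vein_image, (kernel, kernel))  # mean filtering
    gray = cv2.cvtColor(filtered, cv2.COLOR_BGR2GRAY)
    # Otsu's method is one assumed choice of "gray level binarization algorithm".
    _, binary = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    return binary
```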
In some optional implementations of some embodiments, the executing body may rotate the palm vein grayscale image to obtain a rotated palm vein grayscale image by:
and firstly, determining the midpoint position information of the bottom of the palm corresponding to the palm vein gray level image. The information of the midpoint position at the bottom of the palm may be information representing the position of the midpoint of the wrist. For example, the above-described palm bottom midpoint position information may be represented by coordinates. In practice, first, the execution subject may perform wrist recognition on the palm vein grayscale image to obtain a wrist recognition area. In practice, the execution subject may input the palm vein grayscale image into a wrist recognition model trained in advance to obtain a wrist recognition area. The pre-trained wrist recognition model may be a machine learning model with palm vein gray level images as input and wrist recognition areas as output. For example, the wrist identification model may be a RetinaNet model. Then, the execution body may determine the center point of the wrist recognition area as palm bottom midpoint position information.
And secondly, performing contour detection processing on the palm vein gray level image to obtain the contour boundary position information of each hand corresponding to the palm vein gray level image. The above-mentioned respective hand contour boundary position information may be information representing respective positions of the hand contour. The above-described respective hand contour boundary position information can be expressed by coordinates. In practice, the execution body may perform contour detection processing on the palm vein grayscale image through a contour tracking algorithm, so as to obtain the boundary position information of each hand contour corresponding to the palm vein grayscale image.
And thirdly, for each piece of hand contour boundary position information in the hand contour boundary position information, carrying out coordinate transformation processing on the hand contour boundary position information by taking the midpoint position information of the bottom of the palm as an origin to obtain hand contour transformation position information. The hand contour conversion position information may be information representing a position of any contour converted by using the palm bottom midpoint position information as an origin. In practice, according to the coordinate origin corresponding to the palm vein gray level image and the position corresponding to the palm bottom midpoint position information, the execution body may translate the original coordinate system to obtain a new coordinate system. Then, the execution subject may use the new coordinates corresponding to each hand contour boundary position information in the new coordinate system as the hand contour conversion position information. As an example, the coordinates of the above-described palm bottom midpoint position information characterization may be (10, 5). The coordinates represented by the hand contour boundary position information may be (7, 6). And carrying out coordinate transformation processing on the hand contour boundary position information by taking the palm bottom midpoint position information as an origin to obtain the hand contour transformation position information with the characteristic coordinates (-3, 1).
And fourthly, performing curve fitting processing on the determined hand contour conversion position information to obtain hand contour curve information. The hand contour curve information may be information representing a curve corresponding to each hand contour conversion position information. The hand contour curve information may be represented by a curve. In practice, the executing body can perform curve fitting processing on the determined hand contour conversion position information through a curve fitting method to obtain hand contour curve information.
And fifthly, determining each corresponding minimum value in the hand contour curve information as a finger seam valley point position, and obtaining a finger seam valley point position set. The finger seam valley point position may be information representing the position of the finger seam of the palm. For example, the positions of the finger joints can be represented by coordinates. In practice, the coordinates of each corresponding minimum value in the hand contour curve information are determined as the positions of the points of the finger joints, and obtaining a finger seam valley point position set.
And sixthly, screening out two finger seam valley point positions meeting a preset median condition from the finger seam valley point positions in a concentrated manner. The preset median condition may be: the two selected finger seam valley point positions are the two finger seam valley point positions with the largest position distance corresponding to the position information of the midpoint at the bottom of the palm in the finger seam valley point position set.
And seventh, determining finger joint tangent line information corresponding to the positions of the two finger joint valley points. The finger seam tangent line may be a tangent function corresponding to the positions of the two finger seam valley points.
And eighth, rotating the palm vein filtered image according to the slope corresponding to the finger seam tangent information to obtain a forward palm vein image. In practice, first, the executing body may determine, by an arctangent function, an inclination angle corresponding to a slope corresponding to the slit tangent information. Then, the execution subject may rotate the palm vein filtered image clockwise by the inclination angle to obtain a forward palm vein image.
And ninth, determining the positive palm vein image as a rotated palm vein gray scale image.
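Steps one through nine amount to estimating the palm orientation from the line through the two selected finger seam valley points and rotating the image upright. A compact sketch, under the assumption that the two valley points have already been located by the contour and curve fitting steps above:

```python
import math
import cv2

def rotate_palm_upright(gray_image, valley_a, valley_b):
    """Rotate by the inclination angle of the finger seam line (steps eight/nine).

    valley_a, valley_b: (x, y) finger seam valley point positions, assumed found
    by the upstream steps.
    """
    dx = valley_b[0] - valley_a[0]
    dy = valley_b[1] - valley_a[1]
    angle_deg = math.degrees(math.atan2(dy, dx))  # arctangent of the slope
    h, w = gray_image.shape[:2]
    # Rotate about the image center so the finger seam line becomes horizontal.
    rotation = cv2.getRotationMatrix2D((w / 2.0, h / 2.0), angle_deg, 1.0)
    return cv2.warpAffine(gray_image, rotation, (w, h))
```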
In some optional implementations of some embodiments, the executing body may perform a clipping process on the rotated palm vein grayscale image to obtain a palm vein effective area image by:
and in the first step, determining the minimum finger seam valley point position in the finger seam valley point position set as the finger seam valley point position in the palm. In practice, the execution body may determine the finger seam valley point position with the smallest distance from the finger seam valley point position set and the palm bottom midpoint position information as the palm middle finger seam valley point position.
And secondly, determining horizontal tangent line information corresponding to the middle finger seam point position in the palm. The horizontal tangent information may be a tangent function in a horizontal direction passing through the position of the middle finger seam point in the palm.
And thirdly, determining central line information corresponding to the palm vein gray level image according to the horizontal tangent line information and the finger seam tangent line information. In practice, the executing body may determine a distance between a tangent line corresponding to the horizontal tangent line information and a tangent line corresponding to the slit tangent line information. Then, the execution body may determine a horizontal line function at a distance of 1/4 from a tangent line corresponding to the horizontal tangent line information as the center line information.
And step four, determining the central position information corresponding to the palm vein gray level image according to the central line information. The central position information may be information indicating a midpoint position of the palm center. In practice, the execution subject may determine, as the center position information, a position of a midpoint of a line segment where the center line corresponding to the center line information coincides with the palm vein grayscale image.
Fifthly, determining cutting radial line information corresponding to the palm vein gray level image according to the palm vein gray level image. The cutting radial line information may be information indicating a range of the palm vein effective area. In practice, the execution subject may determine the 1/2 length of the line segment where the center line information overlaps the palm vein grayscale image as the trimming radial line information.
And sixthly, cutting the palm vein gray level image according to the cutting radial line information and the central position information to obtain a palm vein effective area image. In practice, first, the execution subject may determine, as the palm vein effective area, an area having a radius of a length corresponding to the trimming radial line information, with the coordinates corresponding to the center position information in the palm vein grayscale image as an origin. Then, the execution subject may cut the palm vein effective area in the palm vein grayscale image to obtain a palm vein effective area image.
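The sixth step is a circular crop around the palm center. A sketch, assuming the center coordinates and radius have been computed as described above:

```python
import numpy as np
import cv2

def crop_effective_area(gray_image, center, radius):
    """Cut the circular palm vein effective area and return its bounding square.

    center: integer (x, y) from the central position information; radius: integer
    length from the cutting radial line information (both assumed computed upstream).
    """
    mask = np.zeros_like(gray_image)
    cv2.circle(mask, center, radius, 255, thickness=-1)  # filled circle
    masked = cv2.bitwise_and(gray_image, mask)
    x, y = center
    return masked[max(y - radius, 0):y + radius, max(x - radius, 0):x + radius]
```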
In some optional implementations of some embodiments, the executing body may perform image enhancement processing on the palm vein effective area image to obtain an image-enhanced palm vein effective area image by:
and firstly, carrying out contrast stretching treatment on the palm vein effective area image to obtain a palm vein effective area stretching image. In practice, the execution body performs contrast stretching processing on the palm vein effective area image through a contrast stretching transformation function to obtain a palm vein effective area stretching image.
And secondly, carrying out palm print extraction processing on the palm vein effective area stretching image to obtain a palm print image. In practice, the executing body may perform palm print extraction processing on the palm vein effective area stretching image through a Gabor filter to obtain a palm print image. Specifically, first, the execution body may perform texture recognition on the stretched image of the palm vein effective area by using a Gabor filter to obtain a palm print recognition area. Then, the executing body may cut an image corresponding to the palm print recognition area in the palm vein effective area stretching image to obtain a palm print image.
And thirdly, superposing the palm print image and the palm vein effective area stretching image to obtain a superposed palm vein effective area stretching image. In practice, the execution subject may superimpose the palm print image and the palm vein effective area stretching image through a cv2.Add function, so as to obtain a superimposed palm vein effective area stretching image.
And fourthly, determining the superimposed palm vein effective area stretching image as an image-enhanced palm vein effective area image.
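A sketch of this enhancement flow follows; the Gabor kernel parameters and the min-max contrast stretch are illustrative assumptions, while the Gabor filter and cv2.add superposition are named in the text:

```python
import cv2

def enhance_palm_vein_roi(gray_roi):
    """Contrast stretch, Gabor palm print extraction, then superposition."""
    # Contrast stretching to the full 0-255 range (assumed stretch function).
    stretched = cv2.normalize(gray_roi, None, 0, 255, cv2.NORM_MINMAX)
    # Palm print texture via a Gabor filter (single orientation for brevity).
    kernel = cv2.getGaborKernel(ksize=(21, 21), sigma=4.0, theta=0.0,
                                lambd=10.0, gamma=0.5)
    palm_print = cv2.filter2D(stretched, cv2.CV_8U, kernel)
    # Superimpose the palm print on the stretched image (saturating cv2.add).
    return cv2.add(stretched, palm_print)
```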
Step 105, in response to determining that the palm recognition result indicates that the target environment image does not meet the preset palm condition, or that the palm vein matching result indicates that the matching failed, performing face recognition on the target environment image to obtain a face recognition result.
In some embodiments, in response to determining that the palm recognition result indicates that the target environment image does not meet the preset palm condition, or that the palm vein matching result indicates that the matching failed, the execution body may perform face recognition on the target environment image to obtain a face recognition result. The face recognition result may represent whether the target environment image includes a face region. In practice, the execution body may first perform skin color segmentation on the target environment image through the YCbCr color space model to obtain a skin portion image, then extract contour features from the skin portion image through an edge detection algorithm to obtain contour feature information, and finally input the contour feature information into a pre-trained face recognition result generation model to obtain the face recognition result. The pre-trained face recognition result generation model may be a machine learning model that takes contour feature information as input and a face recognition result as output; for example, it may be a support vector machine.
Step 106, in response to determining that the face recognition result indicates that the target environment image includes a face region, performing face region cropping processing on the target environment image to obtain a face image.
In some embodiments, in response to determining that the face recognition result indicates that the target environment image includes a face region, the execution body may perform face region cropping on the target environment image to obtain a face image. In practice, the execution body may cut the face region out of the target environment image to obtain the face image.
Step 107, carrying out matching processing on the face image to obtain a face matching result.
In some embodiments, the execution body may perform matching processing on the face image to obtain a face matching result. The method for obtaining the face matching result is the same as the above method for obtaining the palm vein matching result and will not be described here again.
Step 108, in response to determining that the palm vein matching result or the face matching result indicates a successful match, controlling a door lock motor assembly included in the intelligent door lock to execute an unlocking operation.
In some embodiments, in response to determining that the palm vein matching result or the face matching result indicates a successful match, the execution body may control the door lock motor assembly included in the intelligent door lock to perform the unlocking operation. The door lock motor assembly may be a motor that controls the opening of the door lock.
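Putting steps 101 through 108 together, the unlock decision follows the control flow sketched below. This is an illustrative sketch, not the patent's own code: it reuses the helper functions from the earlier sketches where available, and detect_and_crop_face and match_face are hypothetical placeholders for steps 105 through 107:

```python
def control_door_lock(env_image, palm_svm, palm_vein_templates, face_templates,
                      acquire_palm_vein_roi, detect_and_crop_face, match_face,
                      motor_unlock):
    """Unlock decision for steps 101-108 (illustrative sketch).

    acquire_palm_vein_roi: callable wrapping the infrared camera device plus the
        ROI/enhancement steps; motor_unlock drives the door lock motor assembly.
    """
    # Steps 102-104: try the palm vein path first.
    if palm_recognition_result(env_image, palm_svm):
        features = palm_vein_features(acquire_palm_vein_roi())
        if match_palm_vein(features, palm_vein_templates):
            motor_unlock()  # step 108
            return True
    # Steps 105-107: fall back to face recognition on the same image.
    face_roi = detect_and_crop_face(env_image)
    if face_roi is not None and match_face(face_roi, face_templates):
        motor_unlock()  # step 108
        return True
    return False
```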
Optionally, the above execution body may further execute the following steps:
in the first step, in response to determining that the face matching result represents the matching failure, the illumination intensity corresponding to the target environment is respectively acquired through the associated first illumination sensor and the associated second illumination sensor. The first illuminance sensor and the second illuminance sensor may be located at two sides of the intelligent door lock, respectively. The first illuminance sensor and the second illuminance sensor may be communicatively connected to the execution subject.
And a second step of determining whether the first illumination intensity and the second illumination intensity meet a preset illumination intensity condition. The preset illumination intensity condition may be that the first illumination intensity and the second illumination intensity are both less than or equal to a preset illumination intensity threshold, or that an illumination intensity difference value between the first illumination intensity and the second illumination intensity is greater than or equal to a preset illumination intensity difference value threshold.
And a third step of determining a first target illuminance of the first light source and a second target illuminance of the second light source according to the first illumination intensity and the second illumination intensity in response to determining that the illumination intensity does not satisfy the preset illumination intensity condition. The first light source and the second light source may be respectively located at two sides of the intelligent door lock. The first light source and the second light source may be devices for emitting light. For example, the first light source and the second light source may each be a flash lamp. In practice, the executing body may determine the first target illuminance of the first light source according to a preset first target illuminance configuration information set. Each preset first target illuminance configuration information in the preset first target illuminance configuration information set may include a preset first illuminance intensity and a preset first target illuminance. Specifically, first, the executing body may determine preset first target illuminance configuration information including the preset first illuminance configuration information set having the same preset first illuminance intensity as the target first target illuminance configuration information. Second, the execution subject may determine a preset first target illuminance included in the target first target illuminance configuration information as the first target illuminance. Here, the specific step of determining the second target illuminance is the same as the specific step of determining the first target illuminance, and will not be described in detail here.
And a fourth step of controlling the first light source and the second light source to perform a light emitting operation according to the first target illuminance and the second target illuminance. In practice, the executing body may control the first light source to emit light according to the first target illuminance. And controlling the second light source to emit light according to the second target illuminance.
And fifthly, acquiring illumination face images under the first light source and the second light source through the image pickup device. It is understood that the image capturing device captures an illuminated face image when the first light source and the second light source emit light.
And sixthly, carrying out feature extraction processing on the illumination face image to obtain illumination face feature information. In practice, the executing body can perform feature extraction processing on the illumination face image through an image feature extraction algorithm to obtain illumination face feature information.
And seventhly, carrying out matching processing on the face feature weighting information according to a pre-stored face feature information set to obtain a face feature matching result. The pre-stored face feature information set may be feature information of each face image stored in advance in a storage device included in the execution body. In practice, first, the executing body may determine a similarity between each piece of pre-stored face feature information in the pre-stored face feature information set and the face feature information. As an example, the executing body may determine the similarity between each pre-stored face feature information in the pre-stored face feature information set and the face feature information through a cosine similarity algorithm. And then, determining the preset face matching information representing successful matching to be a face feature matching result in response to the fact that the similarity greater than or equal to a preset face similarity threshold exists in the determined similarities.
And eighth step, in response to the fact that the face feature matching result is determined to represent that the face matching is successful, controlling a door lock motor assembly included in the intelligent door lock to execute unlocking operation.
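A sketch of the fill-light decision and lookup follows; all threshold values and the configuration table are illustrative assumptions, and the sketch follows the intent stated in the advantage discussion below (fill light is applied when both sides are dark or the two sides are uneven):

```python
ILLUMINANCE_THRESHOLD_LUX = 50.0   # assumed preset illumination intensity threshold
DIFFERENCE_THRESHOLD_LUX = 30.0    # assumed preset illumination intensity difference threshold
TARGET_ILLUMINANCE_TABLE = {       # assumed target illuminance configuration set:
    0.0: 200.0,                    # measured intensity (lux) -> target illuminance
    20.0: 120.0,
    40.0: 60.0,
}

def lighting_is_poor(first_intensity, second_intensity):
    """Both sides too dark, or the two sides differ too much (per the condition above)."""
    both_dark = (first_intensity <= ILLUMINANCE_THRESHOLD_LUX
                 and second_intensity <= ILLUMINANCE_THRESHOLD_LUX)
    uneven = abs(first_intensity - second_intensity) >= DIFFERENCE_THRESHOLD_LUX
    return both_dark or uneven

def target_illuminance(measured_intensity):
    """Look up the target illuminance for one light source. The text matches on
    equal configured intensities; nearest-key lookup is an illustrative relaxation."""
    key = min(TARGET_ILLUMINANCE_TABLE, key=lambda k: abs(k - measured_intensity))
    return TARGET_ILLUMINANCE_TABLE[key]
```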
The above-mentioned content is an invention point of the embodiments of the present disclosure and solves the second technical problem mentioned in the background section, namely, that shadows exist in face images collected where the light is dim or uneven, resulting in low clarity of the collected face images and low accuracy of face recognition. The factor leading to low face recognition accuracy is as follows: shadows exist in face images collected where the light is dim or uneven, so the clarity of the collected face images is low. If this factor is addressed, the accuracy of face recognition can be improved. To achieve this effect, the door lock motor assembly control method of the present disclosure further includes: in response to determining that the face matching result indicates that the matching failed, respectively acquiring a first illumination intensity and a second illumination intensity corresponding to the target environment through an associated first illuminance sensor and an associated second illuminance sensor, the two sensors being located at the two sides of the intelligent door lock; determining whether the first illumination intensity and the second illumination intensity meet a preset illumination intensity condition; in response to determining that they do not meet the preset illumination intensity condition, determining a first target illuminance of a first light source and a second target illuminance of a second light source according to the first illumination intensity and the second illumination intensity, the two light sources being located at the two sides of the intelligent door lock; controlling the first light source and the second light source to perform a light emitting operation according to the first target illuminance and the second target illuminance; collecting an illuminated face image under the first light source and the second light source through the camera device; performing feature extraction processing on the illuminated face image to obtain illuminated face feature information; performing matching processing on the illuminated face feature information according to a pre-stored face feature information set to obtain a face feature matching result; and in response to determining that the face feature matching result indicates that the face matching succeeded, controlling the door lock motor assembly included in the intelligent door lock to execute the unlocking operation. Because, when the illumination intensities on the two sides of the intelligent door lock are detected to be less than or equal to the preset illumination intensity threshold, or the deviation between the two sides is large, the illuminances of the light sources on the two sides can be determined from the detected illumination intensities and the light sources can be controlled to emit light accordingly, the same illumination intensity can be provided on both sides of the intelligent door lock. This reduces shadows on the face, improves the clarity of the collected face image, and thereby improves the accuracy of face recognition.
Optionally, the above execution body may further execute the following steps:
the first step, in response to determining that the face matching result represents the matching failure, the face image is subjected to feature extraction processing to obtain face image feature information. In practice, the executing body can perform feature extraction processing on the face image through an image feature extraction algorithm to obtain face image feature information.
And secondly, carrying out matching processing on the face image characteristic information according to a pre-stored Zhou Nayi constant face characteristic information set to obtain an intra-week abnormal matching result. The pre-stored intra-week abnormal face feature information set may be face feature information that is pre-stored in advance in a week of each recognition failure of the storage device included in the execution body. The intra-week abnormal face matching result can represent success of intra-week abnormal face matching or failure of intra-week abnormal face matching. The Zhou Nayi regular face matching success can be understood as a situation that face recognition of a person corresponding to the face image fails within one week. In practice, first, the executing body may determine the similarity between the abnormal face feature information in each pre-stored week and the face image feature information in the pre-stored week abnormal face feature information set. As an example, the executing body may determine the similarity between each pre-stored intra-week abnormal face feature information in the pre-stored intra-week abnormal face feature information set and the face image feature information through a cosine similarity algorithm. Then, in response to the determined similarity of the respective similarities being greater than or equal to a preset similarity threshold, preset intra-week abnormal matching information representing that Zhou Nayi constant face matching is successful is determined as an intra-week abnormal matching result.
And thirdly, responding to the fact that the weekly abnormal matching result represents Zhou Nayi that the regular face matching is successful, and screening out pre-stored intra-month abnormal face characteristic information corresponding to the face image characteristic information from a pre-stored intra-month abnormal face characteristic information set to serve as target intra-month abnormal face characteristic information. The pre-stored intra-month abnormal face feature information set may be face feature information that is pre-stored in the memory device included in the execution body and fails to identify each of the face feature information within one month. In practice, the executing body can determine the similarity between the abnormal face characteristic information in each pre-stored month and the face image characteristic information in the pre-stored month through a cosine similarity algorithm. And then, determining the corresponding intra-month abnormal face characteristic information with the highest similarity as target intra-month abnormal face characteristic information.
And step four, determining the historical anomaly detection times corresponding to the anomaly face characteristic information in the target month. In practice, the execution subject may determine the number of times of history recognition corresponding to the intra-target-month abnormal face feature information stored in the storage means as the number of times of history abnormality detection.
And fifthly, determining the location information of the intelligent door lock in response to determining that the historical anomaly detection times are greater than or equal to a preset historical anomaly detection times threshold value. The threshold value of the preset historical abnormality detection times may be a threshold value of a preset historical abnormality detection times. For example, the threshold value of the number of times of detection of the preset history abnormality may be 3 times. The location information may be province or urban area where the intelligent door lock is located. As an example, in response to determining that the historical anomaly detection count is greater than or equal to a preset historical anomaly detection count threshold, the executing entity may determine location information for the smart door lock via an associated locator.
And sixthly, acquiring a dangerous person face feature information set corresponding to the location information. The dangerous person face feature information set may be information representing the facial features of dangerous persons. In practice, the executing body may acquire the dangerous person face feature information set corresponding to the location information from an associated intelligent door lock server. The intelligent door lock server may be a server corresponding to the intelligent door lock. The intelligent door lock server may download face images of dangerous persons from various web pages, extract face feature information from the downloaded face images, and store the extracted face feature information.
And seventhly, for each piece of dangerous person face feature information in the dangerous person face feature information set, determining the face feature similarity between that dangerous person face feature information and the face image feature information. In practice, the executing body can determine the similarity between the dangerous person face feature information and the face image feature information through a cosine similarity algorithm, and use it as the face feature similarity.
And eighthly, generating dangerous person alarm information according to the face image in response to determining that a face feature similarity greater than or equal to a preset face feature similarity threshold exists among the determined face feature similarities. In practice, the executing body can splice the face image with a preset dangerous person alarm information template to obtain the dangerous person alarm information. The preset dangerous person alarm information template may be "The person in the image above is suspected to be a dangerous person; calling the police is recommended."
And ninthly, generating abnormal person alarm information according to the face image in response to determining that no face feature similarity greater than or equal to the preset face feature similarity threshold exists among the determined face feature similarities. In practice, the executing body can splice the face image with a preset abnormal person alarm information template to obtain the abnormal person alarm information. The preset abnormal person alarm information template may be "The person in the image is suspected to be an abnormal person; please pay attention to safety."
And tenthly, sending the dangerous person alarm information or the abnormal person alarm information to an associated terminal device. The associated terminal device may be a terminal in communication connection with the intelligent door lock. For example, the associated terminal device may be a mobile phone.
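Steps seven through ten can be summarized in the following minimal sketch (the message templates echo the examples above, while the threshold value and the send_to_terminal callable are illustrative assumptions):

```python
import numpy as np

def generate_and_send_alarm(face_image, face_feature, dangerous_set,
                            send_to_terminal, threshold=0.8):
    # Cosine similarity of the probe feature against each dangerous-person feature.
    def cos_sim(a, b):
        return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

    similarities = [cos_sim(face_feature, d) for d in dangerous_set]
    if similarities and max(similarities) >= threshold:
        # A dangerous person is suspected: recommend calling the police.
        text = ("The person in the image above is suspected to be a "
                "dangerous person; calling the police is recommended.")
    else:
        # Merely an abnormal person (repeated failures): safety reminder.
        text = ("The person in the image is suspected to be an abnormal "
                "person; please pay attention to safety.")
    # Splice the face image with the template and push to the bound terminal.
    send_to_terminal(image=face_image, message=text)
```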
The above related content serves as an invention point of the embodiments of the present disclosure, and solves the technical problem mentioned in the background art, namely that an intelligent door lock adopting the above identification modes cannot determine, according to the historical identification times, whether a person whose identification failed is an abnormal person or even a dangerous person, and does not warn the user when an abnormal or even dangerous person attempts to unlock, so that the safety of the user is lower. The factors leading to lower user safety are as follows: the intelligent door lock adopting the above identification modes cannot determine, according to the historical identification times, whether a person whose identification failed is an abnormal person or even a dangerous person, and does not warn the user when such a person attempts to unlock. If the above factors are addressed, the effect of improving user safety can be achieved. To achieve this effect, the door lock motor assembly control method of the present disclosure further includes: in response to determining that the face matching result represents a matching failure, performing feature extraction processing on the face image to obtain face image feature information; performing matching processing on the face image feature information according to a pre-stored intra-week abnormal face feature information set to obtain an intra-week abnormal matching result; in response to determining that the intra-week abnormal matching result represents that the intra-week abnormal face matching is successful, screening out the pre-stored intra-month abnormal face feature information corresponding to the face image feature information from the pre-stored intra-month abnormal face feature information set as target intra-month abnormal face feature information; determining the historical anomaly detection times corresponding to the target intra-month abnormal face feature information; determining the location information of the intelligent door lock in response to determining that the historical anomaly detection times are greater than or equal to a preset historical anomaly detection times threshold; acquiring a dangerous person face feature information set corresponding to the location information; for each piece of dangerous person face feature information in the dangerous person face feature information set, determining the face feature similarity between that dangerous person face feature information and the face image feature information; generating dangerous person alarm information according to the face image in response to determining that a face feature similarity greater than or equal to a preset face feature similarity threshold exists among the determined face feature similarities; generating abnormal person alarm information according to the face image in response to determining that no face feature similarity greater than or equal to the preset face feature similarity threshold exists among the determined face feature similarities; and sending the dangerous person alarm information or the abnormal person alarm information to an associated terminal device.
In this way, after face recognition fails, it is further determined whether the person corresponding to the face image has already failed face recognition within the past week, and if so, the number of recognition failures within the past month is determined. When the number of recognition failures is small, the user can be reminded to pay attention to safety. When the number of recognition failures is large, it is further determined whether the person corresponding to the face image is a dangerous person, and when the person is determined to be dangerous, the user is reminded to call the police. Thus, when an abnormal or even dangerous person attempts to unlock, the user can be warned or even reminded to call the police, improving the safety of the user.
The above embodiments of the present disclosure have the following beneficial effects: the door lock motor assembly control method of some embodiments of the present disclosure can improve the speed of intelligent door lock identification verification, thereby improving the user experience. Specifically, the reasons for a poor user experience are as follows: when an intelligent door lock has at least two identification modes, each identification mode has a certain distance restriction, and only one identification mode can be used within each set distance; when using the intelligent door lock, the user may need to move several times to reach the set distance for identification verification, so the verification takes a long time; moreover, repeated verification failures may lock the identification mode of the intelligent door lock so that identification cannot continue, causing inconvenience to the user. Based on this, the door lock motor assembly control method of some embodiments of the present disclosure first, in response to detecting through an associated distance detection device that an object exists within a preset identification distance, shoots the target environment through an associated camera device to obtain a target environment image. Thus, as soon as an object exists within the preset identification distance, the target environment image can be acquired for identification verification. Secondly, palm recognition is performed on the target environment image to obtain a palm recognition result. Thus, the palm state of the user within the preset identification distance can be recognized. Then, in response to determining that the palm recognition result represents that the target environment image meets the preset palm condition, a palm vein image is acquired through an associated infrared camera device. The preset palm condition is that the target environment image includes one and only one image representing a complete palm area. Thus, upon determining that there is one and only one complete palm within the target environment, a palm vein image can be acquired for palm vein identification. Then, matching processing is performed on the palm vein image to obtain a palm vein matching result. Thus, the palm vein image can be identified and verified to determine whether the verification succeeds. Then, in response to determining that the palm recognition result represents that the target environment image does not meet the preset palm condition, or that the palm vein matching result represents a matching failure, face recognition is performed on the target environment image to obtain a face recognition result. Thus, it is possible to determine whether a face is present in the target environment when there is not one and only one complete palm within the target environment, or when palm vein identification is unsuccessful. Then, in response to the face recognition result representing that the target environment image includes a face region, face region cropping processing is performed on the target environment image to obtain a face image. Thus, when a face exists in the target environment, the face image can be obtained by cropping the target environment image, which reduces the size of the image used for subsequent face matching, reduces processing steps, and improves efficiency.
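As a minimal sketch of the cropping step (OpenCV's bundled Haar cascade is used here purely as a stand-in detector; this disclosure does not specify a particular face detection algorithm):

```python
import cv2

def crop_face_region(target_environment_image):
    # Detect the face region, then crop the image down to it so that
    # subsequent matching operates on a smaller image.
    cascade = cv2.CascadeClassifier(
        cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
    gray = cv2.cvtColor(target_environment_image, cv2.COLOR_BGR2GRAY)
    faces = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    if len(faces) == 0:
        return None  # no face region in the target environment image
    x, y, w, h = faces[0]
    return target_environment_image[y:y + h, x:x + w]
```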
Then, in response to determining that the palm vein matching result or the face matching result represents a successful match, the door lock motor assembly included in the intelligent door lock is controlled to perform the unlocking operation. Thus, unlocking can be performed when either palm vein recognition or face recognition succeeds. Moreover, because the image only needs to be acquired within the preset identification distance and then recognized to determine the identification mode, the user does not need to move to a particular distance range for the identification mode to be determined. This eliminates the user's moving and adjusting steps, reduces the time spent on identification verification, is convenient to use, and improves the user experience.
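The overall decision flow described above can be outlined as follows (a simplified sketch; all device interfaces and helper functions are passed in as parameters and their names are illustrative assumptions):

```python
def control_door_lock(distance_sensor, camera, ir_camera, motor,
                      meets_palm_condition, match_palm_vein,
                      detect_face_region, match_face):
    # Only proceed when an object is detected within the preset distance.
    if not distance_sensor.object_within_preset_distance():
        return
    target_environment_image = camera.capture()

    # Prefer palm vein identification when exactly one complete palm is visible.
    if meets_palm_condition(target_environment_image):
        palm_vein_image = ir_camera.capture()
        if match_palm_vein(palm_vein_image):
            motor.unlock()
            return

    # Otherwise (or on palm vein mismatch) fall back to face identification.
    face_box = detect_face_region(target_environment_image)
    if face_box is not None:
        x, y, w, h = face_box
        face_image = target_environment_image[y:y + h, x:x + w]
        if match_face(face_image):
            motor.unlock()
```

Note that the single captured image drives the choice of identification mode, which is precisely why the user never has to reposition to a mode-specific distance range.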
Referring now to fig. 2, a schematic diagram of the various electronic components included in a smart door lock 200 suitable for use in implementing some embodiments of the present disclosure is shown. The smart door lock shown in fig. 2 is only one example and should not impose any limitations on the functionality and scope of use of the embodiments of the present disclosure.
As shown in fig. 2, the smart door lock 200 may include a processing device (e.g., a central processor, a graphics processor, etc.) 201 that may perform various appropriate actions and processes according to programs stored in a read-only memory (ROM) 202 or programs loaded from a storage device 208 into a Random Access Memory (RAM) 203. In the RAM 203, various programs and data required for the operation of the smart door lock 200 are also stored. The processing device 201, ROM 202, and RAM 203 are connected to each other through a bus 204. An input/output (I/O) interface 205 is also connected to bus 204.
In general, the following devices may be connected to the I/O interface 205: input devices 206 including, for example, a touch screen, a touch pad, a keyboard, a mouse, a camera, a microphone, an accelerometer, a gyroscope, a distance detection device, a camera device, an infrared camera device, and the like. The distance detection device may be a device for detecting whether an object exists within the preset identification distance; for example, it may be a distance sensor. The camera device may be a device in communication connection with the intelligent door lock; for example, it may be a camera. The associated infrared camera device may be an infrared camera in communication connection with the executing body. Also connectable are: an output device 207 including, for example, a liquid crystal display (LCD), a speaker, a vibrator, and the like; a communication device 209, which may allow the smart door lock 200 to communicate with other devices wirelessly or by wire to exchange data; and a door lock motor assembly 210 configured to perform the unlocking operation. The door lock motor assembly may be a motor that controls the opening of the door lock. While fig. 2 illustrates a smart door lock 200 having various devices, it should be understood that not all of the illustrated devices are required to be implemented or provided; more or fewer devices may alternatively be implemented or provided. Each block shown in fig. 2 may represent one device or a plurality of devices as needed.
In particular, according to some embodiments of the present disclosure, the processes described above with reference to flowcharts may be implemented as computer software programs. For example, some embodiments of the present disclosure include a computer program product comprising a computer program embodied on a computer readable medium, the computer program comprising program code for performing the method shown in the flow chart. In such embodiments, the computer program may be downloaded and installed from a network via the communication device 209, or from the storage device 208, or from the ROM 202. The above-described functions defined in the methods of some embodiments of the present disclosure are performed when the computer program is executed by the processing device 201.
It should be noted that, the computer readable medium described in some embodiments of the present disclosure may be a computer readable signal medium or a computer readable storage medium, or any combination of the two. The computer readable storage medium can be, for example, but not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or a combination of any of the foregoing. More specific examples of the computer-readable storage medium may include, but are not limited to: an electrical connection having one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing. In some embodiments of the present disclosure, a computer readable storage medium may be any tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device. In some embodiments of the present disclosure, however, the computer-readable signal medium may comprise a data signal propagated in baseband or as part of a carrier wave, with the computer-readable program code embodied therein. Such a propagated data signal may take any of a variety of forms, including, but not limited to, electro-magnetic, optical, or any suitable combination of the foregoing. A computer readable signal medium may also be any computer readable medium that is not a computer readable storage medium and that can communicate, propagate, or transport a program for use by or in connection with an instruction execution system, apparatus, or device. Program code embodied on a computer readable medium may be transmitted using any appropriate medium, including but not limited to: electrical wires, fiber optic cables, RF (radio frequency), and the like, or any suitable combination of the foregoing.
In some implementations, the clients and servers may communicate using any currently known or future-developed network protocol, such as HTTP (HyperText Transfer Protocol), and may be interconnected with any form or medium of digital data communication (e.g., a communication network). Examples of communication networks include a local area network ("LAN"), a wide area network ("WAN"), an internetwork (e.g., the Internet), and peer-to-peer networks (e.g., ad hoc peer-to-peer networks), as well as any currently known or future-developed network.
The computer readable medium may be contained in the smart door lock, or may exist alone without being assembled into the smart door lock. The computer readable medium carries one or more programs that, when executed by the smart door lock, cause the smart door lock to: in response to detecting through an associated distance detection device that an object exists within a preset identification distance, shoot the target environment through an associated camera device to obtain a target environment image; perform palm recognition on the target environment image to obtain a palm recognition result; in response to determining that the palm recognition result represents that the target environment image meets a preset palm condition, acquire a palm vein image through an associated infrared camera device, wherein the preset palm condition is that the target environment image includes one and only one image representing a complete palm area; perform matching processing on the palm vein image to obtain a palm vein matching result; in response to determining that the palm recognition result represents that the target environment image does not meet the preset palm condition, or that the palm vein matching result represents a matching failure, perform face recognition on the target environment image to obtain a face recognition result; in response to the face recognition result representing that the target environment image includes a face region, perform face region cropping processing on the target environment image to obtain a face image; perform matching processing on the face image to obtain a face matching result; and in response to determining that the palm vein matching result or the face matching result represents a successful match, control the door lock motor assembly included in the intelligent door lock to perform the unlocking operation.
Computer program code for carrying out operations of some embodiments of the present disclosure may be written in one or more programming languages or combinations thereof, including object-oriented programming languages such as Java, Smalltalk, and C++, as well as conventional procedural programming languages such as the "C" programming language or similar programming languages. The program code may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer, or entirely on a remote computer or server. In the case of a remote computer, the remote computer may be connected to the user's computer through any kind of network, including a local area network (LAN) or a wide area network (WAN), or may be connected to an external computer (for example, through the Internet using an Internet service provider).
The flowcharts and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of systems, methods and computer program products according to various embodiments of the present disclosure. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of code, which comprises one or more executable instructions for implementing the specified logical function(s). It should also be noted that, in some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams and/or flowchart illustration, and combinations of blocks in the block diagrams and/or flowchart illustration, can be implemented by special purpose hardware-based systems which perform the specified functions or acts, or combinations of special purpose hardware and computer instructions.
The units described in some embodiments of the present disclosure may be implemented by means of software, or may be implemented by means of hardware. The described units may also be provided in a processor, for example, described as: the processor comprises a shooting unit, a palm recognition unit, an acquisition unit, a first matching unit, a face recognition unit, a cutting unit, a second matching unit and a control unit. The names of these units do not constitute limitations on the units themselves in some cases, and for example, the control unit may also be described as "a unit that controls the door lock motor assembly included in the above-described smart door lock to perform an unlocking operation".
The functions described above herein may be performed, at least in part, by one or more hardware logic components. For example, without limitation, exemplary types of hardware logic components that may be used include: a Field Programmable Gate Array (FPGA), an Application Specific Integrated Circuit (ASIC), an Application Specific Standard Product (ASSP), a system on a chip (SOC), a Complex Programmable Logic Device (CPLD), and the like.
The foregoing description is only of the preferred embodiments of the present disclosure and an explanation of the technical principles employed. It will be appreciated by those skilled in the art that the scope of the invention in the embodiments of the present disclosure is not limited to the specific combinations of the above technical features, but also encompasses other technical solutions formed by any combination of the above technical features or their equivalents without departing from the inventive concept, for example, technical solutions formed by substituting the above features with (but not limited to) features having similar functions disclosed in the embodiments of the present disclosure.

Claims (9)

1. A door lock motor assembly control method, applied to an intelligent door lock, the method comprising the following steps:
in response to detecting, through an associated distance detection device, that an object exists within a preset identification distance, shooting a target environment through an associated camera device to obtain a target environment image;
performing palm recognition on the target environment image to obtain a palm recognition result;
in response to determining that the palm recognition result represents that the target environment image meets a preset palm condition, acquiring a palm vein image through an associated infrared camera device, wherein the preset palm condition is that the target environment image includes an image representing a complete palm area;
performing matching processing on the palm vein image to obtain a palm vein matching result;
in response to determining that the palm recognition result represents that the target environment image does not meet the preset palm condition or that the palm vein matching result represents a matching failure, performing face recognition on the target environment image to obtain a face recognition result;
in response to determining that the face recognition result represents that the target environment image includes a face region, performing face region cropping processing on the target environment image to obtain a face image;
performing matching processing on the face image to obtain a face matching result;
and in response to determining that the palm vein matching result or the face matching result represents a successful match, controlling a door lock motor assembly included in the intelligent door lock to execute an unlocking operation.
2. The method of claim 1, wherein the performing matching processing on the palm vein image to obtain a palm vein matching result comprises:
extracting a region of interest from the palm vein image to obtain a palm vein effective area image;
performing image enhancement processing on the palm vein effective area image to obtain an image-enhanced palm vein effective area image;
extracting image features from the image-enhanced palm vein effective area image to obtain palm vein feature information;
and performing matching processing on the palm vein feature information according to a pre-stored palm vein feature information set to obtain the palm vein matching result.
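A minimal sketch of this matching pipeline follows. The three stages are simple stand-ins for the ROI extraction, enhancement, and feature extraction detailed in the dependent claims, and the 0.9 threshold is an assumption; a grayscale input image is assumed throughout:

```python
import numpy as np

def extract_region_of_interest(img: np.ndarray) -> np.ndarray:
    # Stand-in for the ROI extraction of claim 3 (here: a center crop).
    h, w = img.shape[:2]
    return img[h // 4: 3 * h // 4, w // 4: 3 * w // 4]

def enhance_image(img: np.ndarray) -> np.ndarray:
    # Stand-in for the enhancement of claim 6 (here: a contrast stretch).
    lo, hi = img.min(), img.max()
    return ((img - lo) / max(hi - lo, 1) * 255).astype(np.uint8)

def extract_features(img: np.ndarray) -> np.ndarray:
    # Stand-in feature extractor: a normalized down-sampled intensity vector.
    small = img[::8, ::8].astype(np.float64).ravel()
    return small / (np.linalg.norm(small) + 1e-9)

def match_palm_vein(palm_vein_image, stored_features, threshold=0.9):
    feature = extract_features(enhance_image(
        extract_region_of_interest(palm_vein_image)))
    # Cosine similarity against the pre-stored palm vein feature set.
    return any(float(np.dot(feature, s)) >= threshold for s in stored_features)
```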
3. The method of claim 2, wherein the extracting a region of interest from the palm vein image to obtain a palm vein effective area image comprises:
performing background segmentation processing on the palm vein image to obtain a palm vein segmented image;
performing smoothing filtering processing on the palm vein segmented image to obtain a palm vein filtered image;
performing gray-level binarization processing on the palm vein filtered image to obtain a palm vein grayscale image;
performing rotation processing on the palm vein grayscale image to obtain a rotated palm vein grayscale image;
and cropping the rotated palm vein grayscale image to obtain the palm vein effective area image.
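This preprocessing chain might be sketched with OpenCV as follows (the Otsu thresholds and kernel size are illustrative; the rotation and final crop are placeholders for the steps detailed in claims 4 and 5):

```python
import cv2
import numpy as np

def extract_palm_vein_roi(palm_vein_image: np.ndarray) -> np.ndarray:
    # Background segmentation: Otsu threshold separates palm from background.
    gray = cv2.cvtColor(palm_vein_image, cv2.COLOR_BGR2GRAY)
    _, mask = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    segmented = cv2.bitwise_and(gray, gray, mask=mask)
    # Smoothing filtering: Gaussian blur suppresses sensor noise.
    filtered = cv2.GaussianBlur(segmented, (5, 5), 0)
    # Gray-level binarization.
    _, binary = cv2.threshold(filtered, 0, 255,
                              cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    # Rotation (claim 4) and cropping (claim 5) would follow here; as a
    # placeholder we rotate by 0 degrees and take a center crop.
    h, w = binary.shape
    M = cv2.getRotationMatrix2D((w / 2, h / 2), 0.0, 1.0)
    rotated = cv2.warpAffine(binary, M, (w, h))
    return rotated[h // 4: 3 * h // 4, w // 4: 3 * w // 4]
```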
4. The method according to claim 3, wherein the performing rotation processing on the palm vein grayscale image to obtain a rotated palm vein grayscale image comprises:
determining palm-bottom midpoint position information corresponding to the palm vein grayscale image;
performing contour detection processing on the palm vein grayscale image to obtain each piece of hand contour boundary position information corresponding to the palm vein grayscale image;
for each piece of hand contour boundary position information, performing coordinate transformation processing on the hand contour boundary position information with the palm-bottom midpoint position information as the origin to obtain hand contour transformed position information;
performing curve fitting processing on the determined hand contour transformed position information to obtain hand contour curve information;
determining each minimum value corresponding to the hand contour curve information as a finger-seam valley point position to obtain a finger-seam valley point position set;
screening out, from the finger-seam valley point position set, two finger-seam valley point positions meeting a preset median condition;
determining finger-seam tangent line information corresponding to the two finger-seam valley point positions;
rotating the palm vein filtered image according to the slope corresponding to the finger-seam tangent line information to obtain a forward palm vein image;
and determining the forward palm vein image as the rotated palm vein grayscale image.
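A sketch of the final rotation step (taking the two finger-seam valley points as given; the contour detection, coordinate transformation, and curve-fitting steps that locate them are elided, and all names are illustrative):

```python
import cv2
import numpy as np

def rotate_palm_upright(palm_image: np.ndarray,
                        valley_a: tuple, valley_b: tuple) -> np.ndarray:
    # Rotate the image so that the line through the two finger-seam valley
    # points -- the "finger seam tangent" -- becomes horizontal.
    (xa, ya), (xb, yb) = valley_a, valley_b
    angle = np.degrees(np.arctan2(yb - ya, xb - xa))
    h, w = palm_image.shape[:2]
    M = cv2.getRotationMatrix2D((w / 2, h / 2), angle, 1.0)
    return cv2.warpAffine(palm_image, M, (w, h))
```

Normalizing the palm orientation this way makes the subsequent crop (claim 5) axis-aligned, so the effective area can be cut with simple rectangular indexing.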
5. The method according to claim 4, wherein the cropping the rotated palm vein grayscale image to obtain a palm vein effective area image comprises:
determining the minimum finger-seam valley point position in the finger-seam valley point position set as the palm-middle finger-seam valley point position;
determining horizontal tangent line information corresponding to the palm-middle finger-seam valley point position;
determining center line information corresponding to the palm vein grayscale image according to the horizontal tangent line information and the finger-seam tangent line information;
determining center position information corresponding to the palm vein grayscale image according to the center line information;
determining cropping radial line information corresponding to the palm vein grayscale image;
and cropping the palm vein grayscale image according to the cropping radial line information and the center position information to obtain the palm vein effective area image.
6. The method according to claim 2, wherein the performing image enhancement processing on the palm vein effective area image to obtain an image-enhanced palm vein effective area image comprises:
performing contrast stretching processing on the palm vein effective area image to obtain a palm vein effective area stretched image;
performing palm print extraction processing on the palm vein effective area stretched image to obtain a palm print image;
superposing the palm print image and the palm vein effective area stretched image to obtain a superposed palm vein effective area stretched image;
and determining the superposed palm vein effective area stretched image as the image-enhanced palm vein effective area image.
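A minimal sketch of this enhancement chain, assuming a single-channel grayscale ROI (the Canny edge map is an illustrative stand-in for the palm print extraction operator, and the blend weights are assumptions):

```python
import cv2
import numpy as np

def enhance_palm_vein_roi(roi: np.ndarray) -> np.ndarray:
    # Contrast stretching: remap intensities to the full 0-255 range.
    lo, hi = float(roi.min()), float(roi.max())
    stretched = ((roi - lo) / max(hi - lo, 1.0) * 255).astype(np.uint8)
    # Palm print extraction; an edge map stands in for whatever palm print
    # operator an actual implementation uses.
    palm_print = cv2.Canny(stretched, 50, 150)
    # Superpose the palm print image onto the stretched image.
    return cv2.addWeighted(stretched, 0.8, palm_print, 0.2, 0)
```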
7. The method of claim 1, wherein the performing palm recognition on the target environment image to obtain a palm recognition result comprises:
performing skin color segmentation on the target environment image to obtain a skin portion image;
extracting contour features from the skin portion image to obtain contour feature information;
and inputting the contour feature information into a pre-trained palm recognition result generation model to obtain the palm recognition result.
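This palm recognition step could be sketched as follows (the YCrCb skin range, the Hu-moment contour features, and the palm_model interface are all illustrative assumptions, not specifics of this disclosure):

```python
import cv2
import numpy as np

def recognize_palm(target_environment_image: np.ndarray, palm_model) -> bool:
    # Skin color segmentation in YCrCb space; the Cr/Cb bounds below are a
    # commonly used heuristic range.
    ycrcb = cv2.cvtColor(target_environment_image, cv2.COLOR_BGR2YCrCb)
    mask = cv2.inRange(ycrcb, (0, 133, 77), (255, 173, 127))
    # Contour feature extraction: Hu moments of the largest skin contour.
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    if not contours:
        return False
    largest = max(contours, key=cv2.contourArea)
    features = cv2.HuMoments(cv2.moments(largest)).ravel()
    # Feed the contour features to a pre-trained palm recognition model
    # (here any object exposing a scikit-learn-style predict()).
    return bool(palm_model.predict(features.reshape(1, -1))[0])
```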
8. An intelligent door lock, comprising:
one or more processors;
distance detecting means configured to detect whether an object exists within a preset recognition distance;
an imaging device configured to acquire an image;
an infrared camera device configured to acquire a palm vein image;
a door lock motor assembly configured to perform an unlocking operation;
a storage device having one or more programs stored thereon,
wherein the one or more programs, when executed by the one or more processors, cause the one or more processors to implement the method of any one of claims 1-7.
9. A computer readable medium having stored thereon a computer program, wherein the computer program, when executed by a processor, implements the method of any of claims 1-7.
CN202311048979.5A 2023-08-18 2023-08-18 Door lock motor assembly control method, intelligent door lock and computer readable medium Pending CN117173813A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202311048979.5A CN117173813A (en) 2023-08-18 2023-08-18 Door lock motor assembly control method, intelligent door lock and computer readable medium

Publications (1)

Publication Number Publication Date
CN117173813A true CN117173813A (en) 2023-12-05

Family

ID=88934714

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202311048979.5A Pending CN117173813A (en) 2023-08-18 2023-08-18 Door lock motor assembly control method, intelligent door lock and computer readable medium

Country Status (1)

Country Link
CN (1) CN117173813A (en)

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR101439021B1 (en) * 2013-04-25 2014-09-05 가톨릭대학교 산학협력단 Vascular clamp or instrument for venous surgery
CN111435558A (en) * 2018-12-26 2020-07-21 杭州萤石软件有限公司 Identity authentication method and device based on biological characteristic multi-mode image
CN218568091U (en) * 2022-07-28 2023-03-03 珠海横琴光鉴科技有限公司 Door lock with face brushing and palm brushing functions
CN115761826A (en) * 2022-12-06 2023-03-07 上海银欣高新技术发展股份有限公司 Palm vein effective area extraction method, system, medium and electronic device
WO2023028947A1 (en) * 2021-09-02 2023-03-09 青岛奥美克生物信息科技有限公司 Palm vein non-contact three-dimensional modeling method and apparatus, and authentication method

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination